Add Batch 22d38657-6c8c-4a54-8f21-52f973adf53b
This view is limited to 50 files because it contains too many changes. See raw diff for the complete file list.
- .gitattributes +64 -0
- 2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_content_list.json +0 -0
- 2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_model.json +0 -0
- 2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_origin.pdf +3 -0
- 2201.10xxx/2201.10095/full.md +549 -0
- 2201.10xxx/2201.10095/images.zip +3 -0
- 2201.10xxx/2201.10095/layout.json +0 -0
- 2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_content_list.json +1297 -0
- 2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_model.json +2015 -0
- 2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_origin.pdf +3 -0
- 2201.10xxx/2201.10147/full.md +268 -0
- 2201.10xxx/2201.10147/images.zip +3 -0
- 2201.10xxx/2201.10147/layout.json +0 -0
- 2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_content_list.json +2224 -0
- 2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_model.json +0 -0
- 2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_origin.pdf +3 -0
- 2201.10xxx/2201.10252/full.md +466 -0
- 2201.10xxx/2201.10252/images.zip +3 -0
- 2201.10xxx/2201.10252/layout.json +0 -0
- 2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_content_list.json +0 -0
- 2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_model.json +0 -0
- 2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_origin.pdf +3 -0
- 2201.10xxx/2201.10276/full.md +421 -0
- 2201.10xxx/2201.10276/images.zip +3 -0
- 2201.10xxx/2201.10276/layout.json +0 -0
- 2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_content_list.json +0 -0
- 2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_model.json +0 -0
- 2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_origin.pdf +3 -0
- 2201.10xxx/2201.10295/full.md +509 -0
- 2201.10xxx/2201.10295/images.zip +3 -0
- 2201.10xxx/2201.10295/layout.json +0 -0
- 2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_content_list.json +0 -0
- 2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_model.json +0 -0
- 2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_origin.pdf +3 -0
- 2201.10xxx/2201.10326/full.md +680 -0
- 2201.10xxx/2201.10326/images.zip +3 -0
- 2201.10xxx/2201.10326/layout.json +0 -0
- 2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_content_list.json +0 -0
- 2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_model.json +0 -0
- 2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_origin.pdf +3 -0
- 2201.10xxx/2201.10469/full.md +779 -0
- 2201.10xxx/2201.10469/images.zip +3 -0
- 2201.10xxx/2201.10469/layout.json +0 -0
- 2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_content_list.json +0 -0
- 2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_model.json +0 -0
- 2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_origin.pdf +3 -0
- 2201.10xxx/2201.10474/full.md +433 -0
- 2201.10xxx/2201.10474/images.zip +3 -0
- 2201.10xxx/2201.10474/layout.json +0 -0
- 2201.10xxx/2201.10488/1e872e11-1f22-46ed-ad12-c2cf99c7dcb3_content_list.json +0 -0
.gitattributes
CHANGED
@@ -8247,3 +8247,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2202.03xxx/2202.03866/ae4fb52f-169c-4e6e-8194-2ae56c89e086_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2202.03xxx/2202.03896/193a36c5-4117-4f97-a608-5881bf8cf42c_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2202.08xxx/2202.08959/50784fd7-3850-4a6c-971f-ac596cec3463_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10488/1e872e11-1f22-46ed-ad12-c2cf99c7dcb3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10494/5a432a64-fde0-4f11-a44e-17a555300bf6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10500/7f86644e-c154-456a-95bb-01ed40138b98_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10528/2affe443-0cf4-4d31-a2d8-6ef52bc72df8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10582/a2e029ea-f550-4650-94b7-986cbbe831e3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10700/750a4f3b-61ed-494a-9095-fe99034255f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10703/a2781d64-e87e-41ba-b81a-f5ebc06b3932_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10728/25f058e0-8136-43d7-82ce-653a808461ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10737/f60b2cd3-2c24-4f02-8088-a6a2091a9c7f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10766/e4aeee4b-37fb-4918-a641-4150bfeb916c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10776/74cf99b0-1ac0-420b-b5a8-9570e7baafd0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10787/fe5846d5-e540-4bda-b29c-dce93c4b40eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10801/56476f02-3115-4961-9b16-8ef170e042e1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10830/7c3f66ec-5011-4338-bf8c-7024c6494420_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10883/2ae37e49-1d28-4934-b252-3190b255287d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10953/8c1d9a1d-81fb-4b9d-936c-fab09d2e3865_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10963/a222f95f-6dae-4f2b-9422-5f1fdd111218_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.10xxx/2201.10990/71da1da5-fe5d-484c-86d3-48157263cab0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11006/f20987ea-f8f1-4ab0-ba16-15699b5c9729_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11037/65cf9122-75ac-451a-b908-43c3a3ec48ae_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11063/9e225651-a718-47b8-b873-d007ab366944_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11095/1556e56d-71d2-4ee5-b656-7dc85b107c9e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11114/baa8b4da-6da4-44c4-981a-4318e7e34fff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11167/22f3f090-90df-4813-b959-4dc3e5e4f5c4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11188/d4b8f771-64bf-4209-b40a-142a2b1384f4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11206/6c95a9cf-6ca1-4f5f-80dc-3d87e28fe56e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11227/d277389c-8c26-4559-acd8-6031d2b91d85_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.11xxx/2201.11662/5639e47a-c539-4512-8419-9e201b721348_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00126/a4c87c5e-92c4-4f82-9000-eae59c99dbff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00443/9c15bd3b-01a3-4138-b07f-58d303bc2b8c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01344/1331c45e-4224-4580-b9e6-82059e02a3d3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01356/3676c034-ae9f-4c70-b05d-51cb58ba4fe5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01361/30c2e42e-57d0-4202-889a-198e29692996_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01374/3ed9161a-6e4e-41e7-932f-20f7d6159f8e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01381/e4abf9f7-6bab-4e0b-94dc-7a3b76269506_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01415/316abac6-bce8-4ad3-84e1-fbc7b9a61431_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01422/c64d0dcb-8333-49e9-9525-ede7556b27ef_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01440/74643283-8f7f-4a4c-a1ff-b3bf929bc1d0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01459/91d1ae47-f415-49e7-b1f7-bb5188de89a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01479/b7d8de8c-5103-482c-9249-e73042d7dd6e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01493/633b5579-ed6a-46bb-a11c-c4a4ca2ef151_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01512/c79c9e0c-0d81-4956-9146-d8c7e79d7e77_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01562/82071340-50bc-49a0-bc96-9aa352205c94_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01575/69f10e20-be11-46b3-ae0f-ab5dfeddb988_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01602/371842aa-09aa-4c5f-a8ee-bc70bd4861ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01606/e015d165-308a-4715-b20d-0f5049047ee9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01624/2aa3e9bf-3cc8-4653-9e1d-33c870cb3a01_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01651/4aaa8d24-c0ac-42e1-81b5-608f035fe87e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01653/5a883e0a-ed74-4d26-ac78-e6d33278bae0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01699/e31e7778-14ad-472b-a7d8-69c782b0526b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01711/e8e6fb45-8a1d-472c-b3dd-a9c8d622aa68_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01712/478ba5c2-2bf3-477f-ae25-d09618a55168_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.02xxx/2202.02140/12f9dda3-c283-43fe-b6a9-2898b05a1d29_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.02xxx/2202.02452/9a8856bb-f5fc-4086-b66e-bfad1cb5dd32_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.02xxx/2202.02459/62a377bf-8302-4064-8024-b043d1a3eb6c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.03xxx/2202.03575/fdb1ec7a-1a17-46c8-9bff-c6ae737ca0b8_origin.pdf filter=lfs diff=lfs merge=lfs -text
2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2201.10xxx/2201.10095/5fb6f5d5-6456-451c-9799-640ab76a86af_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc076d465e768b93ba97be6cdc7e38c4762c67b870a637b69ae78a0eb700c76d
+size 955694
2201.10xxx/2201.10095/full.md
ADDED
@@ -0,0 +1,549 @@
# RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation
Geet Sethi
Stanford University and Meta
Stanford, California, USA
geet@cs.stanford.edu

Christos Kozyrakis
Stanford University
Stanford, California, USA
kozyraki@stanford.edu

Bilge Acun
Meta
Menlo Park, California, USA
acun@fb.com

Caroline Trippel
Stanford University
Stanford, California, USA
trippel@stanford.edu

Niket Agarwal
Meta
Menlo Park, California, USA
niketa@fb.com

Carole-Jean Wu
Meta
Cambridge, Massachusetts, USA
carolejeanwu@fb.com
# ABSTRACT
We propose RecShard, a fine-grained embedding table (EMB) partitioning and placement technique for deep learning recommendation models (DLRMs). RecShard is designed based on two key observations. First, not all EMBs are equal, nor all rows within an EMB are equal in terms of access patterns. EMBs exhibit distinct memory characteristics, providing performance optimization opportunities for intelligent EMB partitioning and placement across a tiered memory hierarchy. Second, in modern DLRMs, EMBs function as hash tables. As a result, EMBs display interesting phenomena, such as the birthday paradox, leaving EMBs severely under-utilized. RecShard determines an optimal EMB sharding strategy for a set of EMBs based on training data distributions and model characteristics, along with the bandwidth characteristics of the underlying tiered memory hierarchy. In doing so, RecShard achieves over 6 times higher EMB training throughput on average for capacity constrained DLRMs. The throughput increase comes from improved EMB load balance by over 12 times and from the reduced access to the slower memory by over 87 times.
# CCS CONCEPTS

- Information systems $\rightarrow$ Recommender systems
- Computer systems organization $\rightarrow$ Neural networks
# KEYWORDS
Deep learning recommendation models, AI training systems, Memory optimization, Neural networks
# ACM Reference Format:
Geet Sethi, Bilge Acun, Niket Agarwal, Christos Kozyrakis, Caroline Trippel, and Carole-Jean Wu. 2022. RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '22), February 28 - March 4, 2022, Lausanne, Switzerland. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3503222.3507777
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ASPLOS '22, February 28 - March 4, 2022, Lausanne, Switzerland
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9205-1/22/02...$15.00
https://doi.org/10.1145/3503222.3507777
# 1 INTRODUCTION
Deep learning (DL) is pervasive, supporting a wide variety of application domains [5, 6, 14, 15, 21, 32, 37, 40]. A significant fraction of deep learning compute cycles in industry-scale data centers can be attributed to deep learning recommendation models (DLRMs) [3, 4, 9, 11, 20, 34, 43, 47, 48]. For example, at Facebook, DLRMs account for more than $50\%$ of training demand [32] and more than $80\%$ of inference demand [11]. Moreover, Google's search engine relies on recommender systems, such as RankBrain, for search query processing [36].
DLRMs exhibit distinct systems implications compared to more traditional neural network architectures [10, 16, 23, 24, 38]. This is due to their use of embedding layers, which demand orders-of-magnitude higher memory capacity and exhibit significantly lower compute-intensity [11, 26, 33]. Embedding layers, comprised of embedding tables (EMBs), support the transformation of categorical (i.e., sparse) features into dense representations. Categorical features are typically represented as one-hot or multi-hot binary vectors, where entries represent feature categories. Activated categories (binary value of 1) in a feature vector then induce a set of look-ups to the feature's corresponding EMB to extract dense latent vectors.
System Requirement Characteristics for DLRMs The large feature space for industry-scale DLRMs demands significant compute throughput (PF/s), memory capacity (10s of TBs), and memory bandwidth (100s of TB/s) [31]. Figure 1 illustrates that the memory capacity and bandwidth demands for DLRMs have been growing super-linearly, exceeding the memory capacities available on training hardware. Figure 1a shows that between 2017-2021, the memory capacity requirements of DLRMs have grown by 16 times. EMB memory footprints are on the order of terabytes (TB) [26, 46] and account for over $99\%$ of the total model capacity [11]. The growth in the number and sizes of EMBs stems from the increase in the number of features and feature categories represented, in order to improve the overall DLRM prediction quality. Figure 1b shows that, within the same four-year period, per-sample DLRM memory bandwidth demand, determined by the amount of EMB rows accessed in a single training data sample, has increased by almost 30 times,
(a) DLRM memory requirements have grown by $16\mathrm{x}$, while memory capacity on GPU accelerators has improved by less than $6\mathrm{x}$.

(b) DLRM bandwidth demands have grown by $30\mathrm{x}$, far outpacing the bandwidth growth of accelerator memories and interconnects.
outpacing the growth and availability of memory bandwidth on state-of-the-art training hardware.
Hierarchical Memory in Training Systems The widening gap between the DLRM memory needs and the memory specifications of modern training system hardware motivates new memory optimization techniques to effectively scale training throughput. While the exact training system architectures differ, hierarchical memory systems, e.g. tiered hierarchies composed of GPU HBM, CPU DRAM, and SSD [46], are becoming increasingly common for DLRM training. Since not all EMBs can fit entirely in GPU HBMs, this scenario gives rise to optimization strategies to address the first challenge – deciding where EMBs should be placed in the hierarchical memory system to maximize training throughput. Strategically placing EMBs brings up the second challenge – ensuring efficient utilization of all available memory capacity and bandwidth.
Characterizing EMB Access Patterns for DLRMs In this paper, we make two key observations regarding the memory access behaviors of EMBs that motivate more performant and efficient EMB partitioning and placement schemes.
First, not all EMBs are equal, nor are all rows within an EMB equal in terms of access behaviors. For example, the frequency distribution of a sparse feature's categorical values often follows a power law distribution. Therefore, a relatively small fraction of EMB rows will source the majority of all EMB accesses. Furthermore, as illustrated in Figure 3, sparse features, and thus EMBs, exhibit varying bandwidth demands due to varying pooling factors – the number of activated categories on average in a particular sparse feature sample – and coverage – the fraction of training samples in which a particular feature appears. Second, in modern DLRMs, EMBs function as hash tables. As a result, EMBs display interesting phenomena, such as the birthday paradox, which leaves a significant portion of EMBs unused due to hash collisions. Unused EMB space is further increased with increasing hash sizes.
Building on the in-depth sparse feature characterization of production scale DLRMs (Section 3), we propose RecShard - a new approach to improve DLRM training throughput using a data-driven and system-aware EMB partitioning and placement strategy. RecShard's EMB sharding strategy is informed by per-feature training data distributions—categorical value frequency distributions (Figure 5), pooling factor statistics (Figure 6a) as well as coverage distributions of all sparse features (Figure 6b). RecShard also considers
Figure 1: DLRM system requirement growth trend.

Figure 2: Generalized hybrid-parallel DLRM architecture. Data parallel modules (MLPs) are shaded blue while model parallel EMBs are shaded orange.
EMB design settings—hash functions and table sizes (Figure 7) as well as characteristics of the underlying tiered memory. RecShard considers the training system design parameters simultaneously through the use of a mixed integer linear program (MILP) to produce an optimal EMB sharding strategy. Overall, the key contributions of this paper are as follows:
- Fine-grained, data-driven EMB sharding: We demonstrate that EMB access patterns during DLRM training vary within and across EMBs. As a result, DLRM training throughput stands to improve with fine-grained EMB sharding. Further, EMB access patterns can be estimated by deriving statistics from less than $1\%$ of training data (categorical value frequency distribution, pooling factor, and coverage) and the target DLRM architecture (hash function and hash size). Thus, intelligent EMB sharding schemes can be instituted prior to training time.
- RecShard: We propose RecShard - a new approach for fine-grained sharding of EMBs with respect to a multi-level memory hierarchy consisting of GPU HBM and CPU DRAM. RecShard optimizes EMB partitioning and placement globally based on the estimated sparse feature characteristics and DLRM architecture.
- Real system evaluation: To demonstrate its efficacy, we implement and evaluate RecShard in the context of a production scale DLRM. We demonstrate that RecShard can on average improve the performance and load balance of DLRM EMB training by over 5x and over 11x, respectively, compared to the state-of-the-art industry sharding strategies [1, 26, 31].
# 2 BACKGROUND

Figure 2 gives an overview of the canonical Deep Learning Recommendation Model (DLRM) architecture [33]. In this section, we provide background on DLRMs and their training systems.
DLRMs process user-content pairs to predict the probability that a user will interact with a particular piece of content, commonly referred to as the click-through-rate (CTR). To produce such a
Figure 3: Example illustrating the pooling factor and coverage statistics, along with the embedding lookup and (sum) pooling operation. In this example there are two sparse features, $A$ and $B$ , with two corresponding embedding tables, and a training dataset composed of three training data samples. The average pooling factors of sparse features $A$ and $B$ over the dataset are 3.66 and 3, respectively, while the coverages are 1.0 and .33, respectively. The example shows the embedding lookup and pooling operation for the second training data sample (highlighted in bold). For sparse feature $A$ , the raw input IDs are hashed with an output size of 100 (which corresponds to the number of rows in $A$ 's EMB), generating the corresponding embedding lookup indices. These embedding rows, each containing embedding dimension number of values, are then read and combined, i.e. pooled, via element-wise summation to produce the output vector of the lookup operation. For sparse feature $B$ , the second training data sample is NULL, signifying that $B$ contains no feature data for that particular data sample. This results in the stages which sparse feature $A$ went through being bypassed and a 0-vector being produced as the output.
prediction, DLRMs consume two types of features: dense and sparse. Dense features represent continuous data, such as a user's age or the time of day, while sparse features represent categorical data, such as domain names or recent web pages viewed by a user. To encode
Figure 4: Sparse feature cardinality (categorical space; x-axis) versus chosen feature hash size (EMB size; y-axis) for 200 sparse features used in a large production-scale model. Hash size equal to cardinality is shown by the red-dotted line.
this categorical data, sparse features are represented as one-hot or multi-hot binary vectors which are only activated for a small subset of relevant categories (hence the term *sparse*). Sparse features used in DLRMs can have cardinalities in the billions [22, 46].
At a high level, the primary components of DLRMs are Multi-Layer Perceptrons (MLPs) and Embedding Tables (EMBs). EMBs are commonly used to transform sparse features from the high-dimensional, sparse input space to low-dimensional, dense embedding vectors. EMBs perform this operation by functioning as large lookup tables, where, in theory, each row acts as a latent vector encoding of a particular sparse feature value (i.e., category). The activated, or hot, indices of the sparse inputs then act as indices into the EMBs, gathering one or more embedding vectors.
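To make the lookup-and-pooling step concrete, here is a minimal PyTorch sketch in the spirit of the Figure 3 example: raw categorical IDs are hashed into a fixed-size row space, and the selected rows are gathered and sum-pooled with `nn.EmbeddingBag`. The hash size, embedding dimension, and sample IDs are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

HASH_SIZE = 100   # number of rows in the EMB (the hash output space)
EMB_DIM = 16      # embedding dimension

# One EMB with sum pooling, as used for a single sparse feature.
emb = nn.EmbeddingBag(num_embeddings=HASH_SIZE, embedding_dim=EMB_DIM, mode="sum")

def hash_ids(raw_ids, hash_size=HASH_SIZE):
    # Stand-in for the random hash applied to raw categorical values.
    return [hash(x) % hash_size for x in raw_ids]

# A multi-hot sample for one sparse feature (pooling factor = 3).
raw_sample = ["page_123", "page_987", "page_555"]
indices = torch.tensor(hash_ids(raw_sample), dtype=torch.long)
offsets = torch.tensor([0], dtype=torch.long)   # one bag == one training sample

pooled = emb(indices, offsets)   # shape (1, EMB_DIM): the sum-pooled output vector
print(pooled.shape)
```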
In practice, however, the binary-encoded sparse feature inputs are hashed prior to EMB look-up. Hashing serves two purposes. First, hashing allows the bounding of a sparse feature's EMB to a pre-determined, fixed size. Second, hashing permits the handling of unseen inputs at runtime [1, 22]. Once gathered, the embedding vectors are aggregated on a per-EMB basis using a feature pooling operation, such as summation or concatenation. The pooled vectors, along with the outputs of the bottom MLP layers (which process dense inputs), are then combined using a feature interaction layer, before proceeding through the top MLP layers and producing a prediction of the estimated engagement for the user-content pair.

Training Systems for DLRMs. DLRMs present significant infrastructure challenges. While the MLP layers are compute-intensive and exhibit (relatively) small memory footprints, single EMBs of production-scale DLRMs can be on the order of 100s of gigabytes each, with the total memory capacity on the multi-TB scale [22, 31, 46]. Furthermore, EMBs exhibit irregular memory access patterns [41], and the concurrent vector accesses per-EMB and across EMBs require substantial memory bandwidth [1, 23]. This has led to a hybrid data- and model-parallel training approach (Figure 2). MLP layers (both top and bottom) are replicated across all trainers (GPUs in figure) in a data-parallel manner, while EMBs are sharded across trainers to exploit model-parallelism [17, 18, 31, 44].
The ever-increasing memory capacity and bandwidth demands of DLRM training has also led to the emergence of training systems with tiered hierarchical memories (such as hierarchies with HBM, DRAM, and SSD tiers). The large collection of EMBs are partitioned and/or cached across the various tiers [31, 46]. One class of partitioning approaches leverages unified virtual memory (UVM) [13]. This places both host DRAM and accelerator HBM in a shared virtual address space, allowing transparent access of host DRAM on a GPU accelerator without explicit host-device transfers [27, 30]. UVM can greatly expand the usable memory capacity of a GPU node with ease. For example, a server with 8x 32GB HBM GPUs can have 2TB of DRAM [1].
However, for memory-bound workloads, such as DLRMs, using UVM naively can come with significant performance cost. While the latest GPUs contain HBMs with bandwidth capacity approaching 2TB/s, the interconnects used can have bandwidth capacity an order of magnitude less. Single direction throughput of PCIe $4.0 \times 16$ , for example, is just $32 \mathrm{~GB} / \mathrm{s}$ . This places particular importance on the DLRM EMB sharding scheme—hundreds of EMBs with heterogeneous memory characteristics have to be placed across potentially hundreds of trainers.
To address the performance needs of production-scale DLRM training in the presence of rapidly-growing memory capacity and bandwidth demands, this paper focuses on the partitioning and placement problem—determining the optimal placement of EMBs on a tiered memory system with fixed memory capacity and bandwidth constraints.
# 3 CHARACTERIZATION OF DLRM SPARSE FEATURES
The goal of a DLRM sharder is to partition a model's EMBs across a training system's hardware topology, in order to fully exploit model parallelism and thereby maximize training throughput. This requires an EMB placement across an increasingly tiered memory hierarchy that balances training load across all trainers (GPUs). To achieve such load balancing, an effective EMB sharder must be able to accurately estimate the memory demands of each EMB. RecShard addresses this problem through a data-driven approach.
This section presents our in-depth memory characterization of sparse features used in industry-scale DLRMs. The characterization study captures the statistical nature of recommendation training data, and sheds light on five key characteristics of DLRM sparse features which RecShard exploits to improve the EMB training throughput performance. Notably, we find that a sparse feature's value distribution enables us to determine the portion of an EMB that will exhibit high temporal locality during training, the feature's average pooling factor provides a proxy for its memory bandwidth cost, and the feature's coverage allows us to rank the placement priorities across EMBs. Furthermore, these statistics are distinct and vary over time for each sparse feature.
# 3.1 Skewed Categorical Distribution Presents Unique EMB Locality Characteristics

A small subset of categories can constitute the majority of accesses to an EMB.
Figure 5: Hashed Value Frequency CDFs of 200 sparse features used in a production DLRM. The CDFs are generated from over two billion randomly-selected training samples over ten days of data, post-cache.
Sparse features represent categorical data, with each sparse feature's data sample containing a variable length list of categorical values from its sparse feature space. As the size of this categorical feature space can be arbitrarily large, it is natural to ask if a subset of values appear more often than others, and in fact they do [8, 19, 25, 42]. For example, the country a user is located in is a common feature for recommendation use cases. If we were to measure the distribution of this feature, we would see the feature follows a skewed power-law distribution, as the world population by country itself follows a power-law distribution with a long tail. Production-scale DLRMs often consist of hundreds of features that exhibit similar categorical frequency distributions [1, 31].
Figure 5 illustrates the cumulative distribution function (CDF) of 200 representative categorical features of a production DLRM. While a handful of features exhibit more uniform value distributions, the vast majority display a power-law distribution over the categorical values. In other words, for the majority of features, a small subset of categories appear much more frequently than the rest. This implies that a small set of EMB rows comprise the majority of EMB accesses. It is important to also highlight that the strength of the distribution varies from one feature to another, requiring consideration of the distribution on a per-feature basis.
Overall, the locality characteristics unique to each feature give rise to an optimization opportunity - EMB entries within a table can be placed across a tiered memory hierarchy based on expected access patterns. We refer to this optimization as fine-grained EMB partitioning.
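As a rough illustration of that partitioning opportunity, the sketch below sorts an EMB's rows by observed access frequency and computes how many of the hottest rows are needed to cover a target fraction of accesses; those rows would be candidates for the fast memory tier and the remainder for a slower tier. The helper name, the 90% target, and the toy counts are assumptions for illustration only.

```python
import numpy as np

def hot_row_split(row_access_counts, target_coverage=0.90):
    """Return (num_hot_rows, hot_row_ids) covering `target_coverage` of accesses."""
    counts = np.asarray(row_access_counts, dtype=np.float64)
    order = np.argsort(counts)[::-1]            # hottest rows first
    cdf = np.cumsum(counts[order]) / counts.sum()
    n_hot = int(np.searchsorted(cdf, target_coverage) + 1)
    return n_hot, order[:n_hot]

# Toy power-law-like access counts for a 10-row EMB.
counts = [500, 220, 90, 40, 20, 10, 5, 3, 1, 1]
n_hot, hot_rows = hot_row_split(counts, target_coverage=0.90)
print(n_hot, hot_rows)   # a handful of rows already cover 90% of accesses
```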
# 3.2 Pooling Factors Determine Memory Bandwidth Demand

Within a training data sample, each EMB exhibits its own bandwidth demand due to varying pooling factor distributions.
Activated indices in a sparse feature's input effectively correspond to the rows in the feature's EMB that should be accessed to acquire latent vector representations of the categories. This results in a scatter-gather memory access pattern, where one embedding vector is accessed for each activated index. The $n$ EMB rows
(a) Average pooling factor: the number of 'hot' indices in an average sparse feature's input sample.

(b) Coverage percentage: the probability a sparse feature is present in a random training data sample.
accessed by a sparse feature's input is its sample pooling factor, whereas the interaction of the corresponding $n$ latent embedding vectors via pooling determines the feature sample's representation. The distribution of the pooling factors, $n$, of a sparse feature across the training data models the feature's memory bandwidth consumption.
Furthermore, the pooling factor distribution can vary from feature to feature, resulting in memory bandwidth needs that are feature-specific (i.e., EMB-specific). This is due to variability in the information each feature represents. While the feature representing the location of a user may always be of length one, a feature representing the pages recently viewed by a user will likely have length greater than one. Figure 6a depicts the average pooling factor distribution for hundreds of sparse features which varies widely. Some sparse features exhibit high pooling factors of approximately two hundred on average, while the average pooling factors of others are on the order of a few tens; the result is an order of magnitude difference in the memory bandwidth demand.
As with sparse feature value distributions, the pooling distributions for sparse features are also skewed with a long tail; however, unlike the value distributions, they cannot be broadly classified as power-laws of varying strengths. We experimented with an assortment of summary statistics, such as the median and mean, to determine which provides the best estimate for the 'average' case across all features, and chose the mean as the estimate for the average pooling factor of a sparse feature. This choice was made because we observed that the mean generally tends to over-estimate an EMB's bandwidth demand, which we find preferable to under-estimating and potentially producing a sub-optimal EMB placement.
In summary, pooling factor diversity across features motivates optimizations that consider per-feature average pooling factors to approximate the unique memory bandwidth consumption characteristics for EMBs.
# 3.3 Varying Degrees of Coverage for Sparse Features Determines EMB Placement Priority

Sparse features exhibit varying degrees of coverage, with some EMBs being used much more often than others.
Figure 6: Average pooling factor and coverage vary widely from feature to feature. Collectively, they serve as a proxy for the per-sample bandwidth demand of a feature.
Figure 7: The impact of hashing on the feature value frequency distribution. Even using a hash size greater than the number of unique values, hashing causes the compression of the raw value distribution, leaving considerable EMB under-utilization.
Not all sparse features of a DLRM are referenced in each training data sample. There are a variety of reasons for this, such as a particular feature being phased in or out, or a user simply not having the content interaction or metadata necessary for the feature to be instantiated. Regardless of the reason, there is variability in the presence of sparse features across training inputs, which provides us with additional empirical information for system performance optimizations.
Figure 6b depicts the feature access probabilities (y-axis) across hundreds of sparse features sampled from a number of industry-scale DLRMs (x-axis). The probability that a sparse feature is present in a training sample is referred to as its coverage. Similar to the pooling factor distribution (Section 3.2), the coverage of individual sparse features varies widely from feature to feature - ranging from less than $1\%$ on the low-end to $100\%$ on the high-end. This observation demonstrates the importance of considering per-feature coverage characteristics in EMB placement decisions. Thus, a feature's coverage gives rise to system optimizations based on the prioritization of EMBs according to their frequency of use.
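Taken together, the average pooling factor and the coverage act as a simple per-sample bandwidth proxy for each EMB, which is also how the cost model in Section 4.2 weighs tables. A minimal sketch with invented statistics for two hypothetical features:

```python
def bytes_per_sample(avg_pool, coverage, emb_dim, dtype_bytes=4):
    """Expected bytes read from one EMB for one training sample (forward pass)."""
    return coverage * avg_pool * emb_dim * dtype_bytes

# Hypothetical features: a high-pooling, always-present feature vs. a rarely-present one.
pages_viewed = bytes_per_sample(avg_pool=200, coverage=1.00, emb_dim=64)
rare_metadata = bytes_per_sample(avg_pool=2, coverage=0.01, emb_dim=64)

print(pages_viewed)    # 51200.0 bytes per sample
print(rare_metadata)   # 5.12 bytes per sample
```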
# 3.4 Embedding Hashing Leads to Sub-optimal System Memory Utilization

While a simple technique, embedding hashing is inefficient from the perspective of system memory utilization.
The cardinality of a given sparse feature can be on the order of billions. Thus, constructing an EMB representing the entirety of such a sparse feature would be prohibitively expensive in terms of the memory capacity requirement. Furthermore, it would not generalize to unseen feature values when new categories emerge. Thus, it is unrealistic to construct a one-to-one mapping between every sparse feature value and EMB rows. Instead, the EMBs of industry-scale DLRMs typically employ hashing [1, 22, 39], using a random hash function to map arbitrary feature values to output values constrained by a specified hash size. The hash size therefore dictates the size of the EMB.
Figure 8: Increasing the hash size to accommodate the tail leaves an increasing percentage of the hash space unused, which RecShard can reclaim. The blue dot denotes the point at which hash size is equal to input cardinality.
A consequence of using random hashing to map a feature's inputs to corresponding EMB entries is hash collisions—where the hash function maps two unique input values to the same output value. The existence of hash collisions can be demonstrated via the pigeonhole principle, as mapping $H + 1$ unique values with a hash size of $H$ requires at least two input values to overlap. What is less obvious, however, is whether, and to what degree, collisions occur when the hash size is equal to or even slightly greater than the number of unique input values seen. By the well-known birthday paradox, when hashing $N$ unique input values with a hash size of $H = N$, approximately a $\frac{1}{e}$ fraction of the input values will collide with a previously hashed value. And, as $N = H$, this results in roughly a $\frac{1}{e}$ fraction of the hash entries being unused.
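The $\frac{1}{e}$ figure follows from a standard balls-into-bins argument: hashing $N$ keys uniformly into $H = N$ slots leaves an expected fraction $(1 - 1/N)^{N} \approx e^{-1} \approx 0.368$ of the slots empty. A quick empirical check of that limit (the uniform hash and the table size are arbitrary choices for illustration):

```python
import math
import random

def unused_fraction(num_values, hash_size, seed=0):
    """Empirical fraction of hash-table rows never hit by any input value."""
    rng = random.Random(seed)
    used = {rng.randrange(hash_size) for _ in range(num_values)}
    return 1.0 - len(used) / hash_size

N = 1_000_000
print(unused_fraction(N, N))   # ~0.368 of the rows are never used
print(math.exp(-1))            # 0.3678..., the analytical limit
```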
Figure 7 depicts the birthday paradox phenomenon by illustrating the pre- and post hash distributions for a specific feature of a production DLRM. The pre hash distribution (dark blue line) depicts the input feature value space, whereas the post hash distribution (light blue line) depicts the distribution of accesses to the corresponding EMB. The red-dotted vertical line denotes the specified hash size and therefore the number of unique embedding vectors that can be captured by this EMB. Although the hash size is greater than the number of unique pre hash values observed (the red dotted line is to the right of the dark blue line), the post hash embedding space compresses the pre hash categorical feature space (the light blue line terminates to the left of the dark blue line). Furthermore, Figure 7 highlights the under-utilization of EMBs due to training data sparsity by $26\%$ and hash collisions by another $22\%$ .
Increasing the hash size to accommodate the tail of the power-law distribution - a technique which can improve model performance [46] - leaves an increasing percentage of the hash space under-utilized, which RecShard can reclaim. Figure 8 illustrates that, as the hash size is increased to accommodate the tail of the input sparse feature distribution (Section 3.1), an increasing percentage of the hash space is unused by training samples (sparsity increases).
Given the observations above, hashing gives additional insight into designing an intelligent partitioning strategy for EMBs. Due
Figure 9: Sparse features are grouped into two general categories, users and content. Both feature types exhibit dynamic memory demand over time. We show memory demand for a large production model (~400 features) over a 20-month period. Data represent averages over all relevant features.
to the birthday paradox and the desire to choose a hash size which can retain as much of the tail as possible, a non-trivial percentage of embedding rows will not be accessed at all during training. This enables us to move the under-utilized portions of EMBs to a slower memory tier (or potentially avoid allocation altogether) without visible impact on the training time performance.
# 3.5 Sparse Feature Memory Patterns Evolve over Time

Sparse features exhibit distinct, dynamic memory demands over time.
Sections 3.1-3.4 provide insights into how memory characteristics specific to DLRM sparse features and EMB design can be used to optimize the EMB performance of DLRMs through an intelligent data-driven sharding strategy. It is, however, also important to know how often EMB sharding should be performed. Once deployed, industry-scale production models may be continuously retrained on new data for potentially many weeks [14] at a time.
Figure 9 illustrates how average feature lengths evolve over a 20-month time period for two distinct types of sparse features: content features and user features. Based on the time-varying nature of sparse feature statistics, ideally the benefit of re-sharding would be evaluated regularly throughout training as new data arrives, due to the potentially large impact that data distribution shifts can have on training throughput. Although this benefit can be approximated quickly by RecShard (Section 4), it must be dynamically weighed against the cost of carrying out the re-sharding on the given training stack and topology.
# 4 RECSHARD
Building on the EMB memory access characterization results in Section 3, we design, implement, and evaluate an intelligent EMB
Figure 10: Overview flow diagram of the RecShard pipeline.
sharding strategy - RecShard. RecShard is a data-driven EMB sharding framework that optimizes embedding table partitioning and placement across a tiered memory hierarchy. Figure 10 provides the design overview for RecShard, which is comprised of three primary phases: Training Data Profiling (Section 4.1), Embedding Table Partitioning and Placement (Section 4.2), and Remapping (Section 4.3). RecShard leverages a MILP along with the latest training data distributions and EMB design characteristics to produce an optimal EMB sharding strategy each time a given DLRM is trained.
# 4.1 Training Data Profiling
The first stage of the RecShard pipeline is model-based training data profiling, which approximates the aforementioned memory characteristics in Section 3. In this stage, RecShard first samples and hashes a random subset of the input training dataset based on the DLRM architecture specification. The purpose of this sampling is to estimate three per-EMB statistics: (1) the value frequency CDF over the EMB entries, (2) the average pooling factor of accesses for each EMB, and (3) each EMB's coverage over the training dataset.
We observe empirically that sampling $1\%$ or less of large training data stores achieves statistical significance to accurately facilitate high-performance EMB partitioning decisions. This is largely because increasing the sampling rate primarily serves to capture more of the tail of a sparse feature's skewed distribution. With respect to the value frequency CDF, these extra "tail values," when hashed, will either map to their own EMB entry with minimal access count, or will collide with other previously-seen feature values. And viewing more of the tail has little to no impact on the average pooling factor and coverage of an EMB. In all cases, not capturing the full tail is sufficient from the perspective of memory pattern profiling.
In the training data profiling phase, RecShard constructs the value frequency and pooling factor statistics as well as the coverage of each sparse feature for use in sharding.
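A condensed sketch of what such a profiling pass could look like is shown below; the sample format (one ID list per training sample, with None when the feature is absent) and the hash function are assumptions made for illustration, not RecShard's production implementation.

```python
from collections import Counter
import numpy as np

def profile_feature(samples, hash_size):
    """Estimate (value-frequency CDF, average pooling factor, coverage) for one sparse feature.

    `samples` is an iterable of per-sample ID lists, with None when the feature is absent.
    """
    row_counts = Counter()
    pooling_factors = []
    present, total = 0, 0
    for ids in samples:
        total += 1
        if not ids:
            continue
        present += 1
        hashed = [hash(x) % hash_size for x in ids]   # stand-in for the EMB hash
        row_counts.update(hashed)
        pooling_factors.append(len(hashed))

    freqs = np.sort(np.array(list(row_counts.values()), dtype=np.float64))[::-1]
    cdf = np.cumsum(freqs) / freqs.sum()               # access CDF over the used EMB rows
    avg_pool = float(np.mean(pooling_factors)) if pooling_factors else 0.0
    coverage = present / total if total else 0.0
    return cdf, avg_pool, coverage

# Toy sample: three training rows for one feature (cf. Figure 3).
samples = [["a", "b", "c", "d"], None, ["a", "c", "e"]]
cdf, avg_pool, coverage = profile_feature(samples, hash_size=100)
print(avg_pool, coverage)   # 3.5, 0.666...
```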
# 4.2 Embedding Table Partitioning and Placement
RecShard uses the generated per-feature statistics to produce an efficient, load-balanced EMB partitioning decision. In order to perform partitioning and sharding across multiple compute nodes with a tiered memory hierarchy, RecShard formulates the partitioning problem as a mixed integer linear program (MILP). By solving the MILP [12], RecShard can globally minimize per-GPU cost, a proxy
| Parameter | Description |
| --- | --- |
| $M$ | Number of GPUs |
| $J$ | Number of EMBs |
| $B$ | Batch size |
| $Cap_{D}$ | Per-GPU HBM capacity |
| $Cap_{H}$ | Per-GPU host DRAM capacity |
| $BW_{HBM}$ | GPU HBM bandwidth |
| $BW_{UVM}$ | UVM transfer bandwidth |
| $ICDF_{j}$ | Inverse value frequency CDF of EMB $j$ |
| $avg\_pool_{j}$ | Average pooling factor of EMB $j$ |
| $coverage_{j}$ | Coverage of EMB $j$ |
| $hash\_size_{j}$ | Hash size of EMB $j$ |
| $dim_{j}$ | Embedding dimension of EMB $j$ |
| $bytes_{j}$ | Size of the data type of EMB $j$ |
Table 1: Description of Parameters used in the RecShard MILP.
for EMB training latency, simultaneously, while ensuring that neither GPU on-device nor per-node host memory limits are violated. The remainder of this section outlines our MILP formulation, which considers the problem of sharding EMBs across a two-tier memory hierarchy consisting of GPU HBM and host DRAM accessed via UVM. We refer to the latter as UVM for the rest of this paper. Table 1 summarizes parameters used by the MILP formulation.
MILP Formulation. As the training throughput is determined by the embedding operator performance of the slowest trainer, we formulate the MILP as a minimization problem to:
$$
\text{minimize} \quad C
$$

$$
\text{subject to} \quad c_{m} \leq C \quad \forall m \in M \tag{1}
$$
$M$ is the set of GPUs available for training (each GPU is represented by an integer ID $m$ ranging from 0 to $M - 1$ ), $c_{m}$ is the total EMB cost for GPU $m$ , and $C$ is the maximum single GPU cost to minimize, subject to Constraint 1.
In order to estimate the total EMB cost per GPU, RecShard incrementally incorporates the per-EMB memory statistics to construct constraints which effectively describe the space of all possible EMB partition and placement combinations for the underlying tiered memory hierarchy.
To construct a search space of candidate placements, the first constraint specified by RecShard is the mapping of each EMB to a single GPU. An EMB can either be located fully in a GPU's HBM, fully in UVM, or split across both in a fine-grained manner. If an EMB is placed entirely in HBM, the corresponding GPU will be the sole accessor of the entire EMB. If an EMB is placed entirely in UVM, it must be assigned a GPU that will issue memory accesses to it. When an EMB is located in both HBM and UVM, we map both partitions to the GPU whose HBM is utilized. This constraint is formulated as follows:
$$
\sum_{m} p_{mj} = 1 \quad \forall j \in J \tag{2}
$$

$$
p_{mj} \in \{0, 1\} \quad \forall m \in M, \ \forall j \in J \tag{3}
$$
$p_{mj}$ is a binary variable indicating whether EMB $j$ is assigned to GPU $m$ , and Constraint 2 ensures that each EMB is assigned to exactly one GPU.
When determining the EMB-to-GPU mappings, RecShard must also decide how many, or if any at all, of each EMB's rows should be placed in HBM. To do so, RecShard uses each EMB's post hash value frequency CDF to estimate the trade-off between the number of rows placed in HBM and the corresponding percentage of EMB accesses covered. To use the CDF within the MILP, RecShard first converts the CDF to its inverse, or ICDF, so that it can map the percentage of accesses covered to the corresponding number of EMB rows. RecShard then produces a piece-wise linear approximation of the ICDF, as the ICDF is a non-linear function and cannot be used directly within the MILP. To do so, 100 steps are uniformly selected with respect to the ICDF's $x$ values, where each step $i$ corresponds to a cumulative access percentage between 0 and $100\%$. To capture both the $x$ and $y$ values of the ICDF, the constraints are formulated as follows:
$$
\sum_{i} x_{ij} \cdot ICDF_{j}(i) \cdot dim_{j} \cdot bytes_{j} = mem_{j} \quad \forall j \in J \tag{4}
$$

$$
\sum_{i} x_{ij} \cdot \frac{i}{100} = pct_{j} \quad \forall j \in J \tag{5}
$$

$$
\sum_{i} x_{ij} = 1 \quad \forall j \in J \tag{6}
$$

$$
x_{ij} \in \{0, 1\} \quad i = 0, \dots, 100, \ \forall j \in J \tag{7}
$$
$x_{ij}$ is a binary variable indicating whether step $i$ was chosen for EMB $j$ . Constraint 6 ensures that one and only one step from the ICDF can be selected per EMB (i.e. there is a single split point separating the EMB rows mapped to HBM from those mapped to UVM). Constraint 5 converts the chosen step value for each EMB into the corresponding percentage - the ICDF's corresponding $x$ value. For each EMB, this percentage represents the cumulative percentage of accesses covered by the chosen split, and its value is stored as $pct_j$ . Finally, Constraint 4 translates each EMB's chosen split into the number of bytes needed to store its rows, $mem_j$ - the per-EMB HBM usage.
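Read together, Constraints 4-7 say that the solver picks exactly one of 101 pre-tabulated split points per EMB, where step $i$ records how many of the hottest rows are needed to cover $i\%$ of accesses. The sketch below shows one way such an ICDF table could be tabulated from the per-row access CDF produced during profiling; it illustrates the idea rather than RecShard's exact code.

```python
import numpy as np

def tabulate_icdf(cdf, num_steps=100):
    """icdf[i] = number of hottest rows needed to cover i percent of accesses.

    `cdf` is the cumulative access fraction over rows sorted hottest-first.
    """
    icdf = np.zeros(num_steps + 1, dtype=np.int64)
    for i in range(num_steps + 1):
        target = i / num_steps
        # first row index at which cumulative coverage reaches the target
        icdf[i] = 0 if target == 0 else int(np.searchsorted(cdf, target) + 1)
    return icdf

# Example with a skewed 10-row CDF: most accesses hit the first few rows.
cdf = np.array([0.55, 0.80, 0.90, 0.95, 0.97, 0.98, 0.99, 0.995, 0.999, 1.0])
icdf = tabulate_icdf(cdf)
print(icdf[50], icdf[90], icdf[100])   # rows needed for 50%, 90%, 100% coverage
```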
Given the constraints for encoding per-EMB HBM usage, constraints are added to guarantee that per-GPU memory capacity limits are not violated.
|
| 294 |
+
|
| 295 |
+
$$
|
| 296 |
+
h a s h \_ s i z e _ {j} * d i m _ {j} * b y t e s _ {j} = E M B _ {j} \quad \forall j \in J \tag {8}
|
| 297 |
+
$$
|
| 298 |
+
|
| 299 |
+
$$
|
| 300 |
+
\sum_ {j} p _ {m j} * \operatorname {m e m} _ {j} \leq C a p _ {D} \quad \forall m \in M \tag {9}
|
| 301 |
+
$$
|
| 302 |
+
|
| 303 |
+
$$
|
| 304 |
+
\sum_{j} p_{mj} * (EMB_{j} - mem_{j}) \leq Cap_{H} \quad \forall m \in M \tag{10}
|
| 305 |
+
$$
|
| 306 |
+
|
| 307 |
+
Constraint 9 accomplishes this for per-GPU HBM by summing the memory capacity requirements of all EMB portions assigned to each GPU $m$ and ensuring that no GPU exceeds its HBM capacity of $Cap_{D}$. Constraint 10 accomplishes this similarly for the per-GPU host DRAM capacity limit, $Cap_{H}$.
|
| 308 |
+
|
| 309 |
+
With the EMB partitioning and placement assignments properly constrained, RecShard can formulate the estimated per-GPU EMB cost.
|
| 310 |
+
|
| 311 |
+
$$
\left(avg_{j} * dim_{j} * bytes_{j} * B\right) * \left(\left(pct_{j} * \frac{1}{BW_{HBM}}\right) + \left(\left(1 - pct_{j}\right) * \frac{1}{BW_{UVM}}\right)\right) = c_{j} \quad \forall j \in J \tag{11}
$$
|
| 322 |
+
|
| 323 |
+
$$
|
| 324 |
+
\sum_{j} p_{mj} * coverage_{j} * c_{j} = c_{m} \quad \forall m \in M \tag{12}
|
| 325 |
+
$$
|
| 326 |
+
|
| 327 |
+
Constraint 11 estimates the cost of each EMB during a single forward pass of DLRM training. This is achieved by first calculating each EMB's approximate per-training-step memory demand using the EMB's average pooling factor, embedding vector dimension, size (in bytes) of its embedding vector entries, and batch size. Per-step demand is then multiplied by: (1) the percentage of EMB rows that are estimated to be sourced from HBM $(pct_{j})$ along with a bandwidth-based scaling factor $\left(\frac{1}{BW_{HBM}}\right)$; and (2) the percentage that are estimated to be sourced from UVM $(1 - pct_{j})$ along with its scaling factor $\left(\frac{1}{BW_{UVM}}\right)$. The two products are summed to form an estimate of an EMB's average cost to perform its lookups for a single step.
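Constraint 11 can be read as bytes demanded per training step, scaled by the reciprocal bandwidth of whichever memory tier serves each fraction of the lookups. The following sketch computes that estimate in plain Python; the bandwidth figures are illustrative placeholders, not measured values from the paper.

```python
def estimate_emb_cost(avg_pooling, dim, bytes_per_elem, batch_size,
                      pct_hbm, bw_hbm, bw_uvm):
    """Constraint-11-style estimate of one EMB's per-step lookup cost c_j.

    avg_pooling    : average pooling factor (lookups per sample)
    dim            : embedding vector dimension
    bytes_per_elem : bytes per embedding entry (e.g. 4 for fp32)
    batch_size     : B, samples per training step
    pct_hbm        : fraction of accesses expected to be served from HBM (pct_j)
    bw_hbm, bw_uvm : effective bandwidths of HBM and UVM, in bytes/s
    """
    per_step_bytes = avg_pooling * dim * bytes_per_elem * batch_size
    return per_step_bytes * (pct_hbm / bw_hbm + (1.0 - pct_hbm) / bw_uvm)

# Placeholder numbers (not measurements): ~1.5 TB/s HBM vs ~12 GB/s UVM over PCIe.
c_j = estimate_emb_cost(avg_pooling=20, dim=64, bytes_per_elem=4,
                        batch_size=16384, pct_hbm=0.95,
                        bw_hbm=1.5e12, bw_uvm=12e9)
print(f"estimated per-step EMB cost: {c_j * 1e3:.3f} ms")
```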
|
| 328 |
+
|
| 329 |
+
Constraint 12 formulates the per-GPU cost for the MILP's objective function. Instead of simply summing the per-EMB costs assigned to a GPU, we sum the product of the per-EMB cost and the corresponding EMB's coverage. This is because the per-EMB CDF presents a normalized view of accesses over a particular EMB, and the average pooling factor estimates the EMB's memory performance requirement over the samples it is present in. Therefore, to provide a global view of bandwidth requirements across all EMBs, RecShard weights each EMB's cost by its coverage.
|
| 330 |
+
|
| 331 |
+
With the constraints in place to formulate the per-GPU EMB cost, $c_{m}$ , the MILP solver considers all possible combinations of EMB partitioning and placement decisions based on RecShard's EMB statistics and the bandwidth characteristics of the underlying memory hierarchy (supplied via the $BW$ parameters in Constraint 11). In doing so, the MILP solver can compute an optimal sharding strategy that minimizes the model's largest single GPU EMB cost.
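For readers who want to see the shape of such a formulation, below is a minimal gurobipy sketch of the partitioning-and-placement MILP. It folds the paper's $p_{mj}$ and $x_{ij}$ variables into a single binary $y_{m,i,j}$ ("EMB $j$ on GPU $m$ with split step $i$") so that all of Constraints 2-12 stay linear, and it adds an auxiliary variable to express the min-max objective; this is an illustrative simplification, not RecShard's production model, and the input arrays are assumed to come from the statistics pass.

```python
# Illustrative gurobipy sketch (not RecShard's production formulation): a single
# binary y[m, i, j] = 1 means "EMB j is placed on GPU m with ICDF split step i",
# which keeps Constraints 2-12 linear; z linearizes the min-max objective.
import gurobipy as gp
from gurobipy import GRB

def solve_sharding(M, J, icdf, dim, nbytes, hash_size, avg, coverage,
                   B, bw_hbm, bw_uvm, cap_hbm, cap_dram, steps=100):
    model = gp.Model("recshard_sketch")
    y = model.addVars(M, steps + 1, J, vtype=GRB.BINARY, name="y")
    z = model.addVar(lb=0.0, name="max_gpu_cost")

    # Each EMB gets exactly one (GPU, split-step) pair -> Constraints 2, 3, 6, 7.
    for j in range(J):
        model.addConstr(y.sum("*", "*", j) == 1)

    def mem(i, j):      # HBM bytes if EMB j is split at step i            (4)
        return icdf[j][i] * dim[j] * nbytes[j]

    def cost(i, j):     # per-step cost of EMB j if split at step i        (5, 11)
        pct = i / steps
        return (avg[j] * dim[j] * nbytes[j] * B) * (pct / bw_hbm + (1 - pct) / bw_uvm)

    emb_bytes = [hash_size[j] * dim[j] * nbytes[j] for j in range(J)]    #  (8)

    for m in range(M):
        model.addConstr(gp.quicksum(y[m, i, j] * mem(i, j)
                        for i in range(steps + 1) for j in range(J)) <= cap_hbm)   # (9)
        model.addConstr(gp.quicksum(y[m, i, j] * (emb_bytes[j] - mem(i, j))
                        for i in range(steps + 1) for j in range(J)) <= cap_dram)  # (10)
        model.addConstr(gp.quicksum(y[m, i, j] * coverage[j] * cost(i, j)
                        for i in range(steps + 1) for j in range(J)) <= z)         # (12)

    model.setObjective(z, GRB.MINIMIZE)   # minimize the slowest GPU's EMB cost
    model.optimize()
    return model, y
```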
|
| 332 |
+
|
| 333 |
+
Key Properties of RecShard's MILP: We address a number of key properties of RecShard's MILP formulation. First, when constructing its placements, RecShard's MILP begins by assigning each EMB to a single GPU. This design decision, and others that follow from it (such as the per-GPU host DRAM capacity limit,
|
| 334 |
+
|
| 335 |
+
$Cap_{H}$ ), is done to simplify the handling of sharding across many GPUs and nodes. By splitting resources uniformly and constructing assignments on an abstract per-GPU basis, the resulting sharding assignment will not be tied to a specific system GPU (i.e. GPU 3 in the MILP can be mapped to any GPU in the system).
|
| 336 |
+
|
| 337 |
+
Second, RecShard's MILP sums the expected HBM and UVM lookup times to form a cost. Another operator, such as max, may be used, depending on the target system architecture. We use summation in our implementation of RecShard because, when accessed within the same kernel, the memory latency of performing mixed reads from HBM and UVM on current GPUs is approximately equal to the sum of the times to perform each read separately. However, if the target system architecture supports fully concurrent reads from the mixed memories, the estimated EMB cost can be approximated using max.
|
| 338 |
+
|
| 339 |
+
Third, while RecShard performs partitioning and placement for DLRM training, the MILP only estimates embedding operation latencies of the forward pass. This is because the timing performance of the backward pass is roughly proportional to that of the forward pass. Doing so simplifies the MILP formulation and lowers the solver time.
|
| 340 |
+
|
| 341 |
+
In our experiments in Section 6, RecShard's MILP features 47,276 variables and is solved with a state-of-the-art solver (Gurobi [12]) in 21 seconds when UVM is not needed (RM1), and in 42 seconds when it is used (RM2/RM3). It is important to note that solving time is not impacted by model size, but rather by the number of trainers (e.g. GPUs) and the number of steps used to approximate the ICDF. In our experiments, solving time tended to scale approximately linearly with the number of trainers and steps.
|
| 342 |
+
|
| 343 |
+
# 4.3 Remapping Layer
|
| 344 |
+
|
| 345 |
+
Once the MILP solver produces a sharding strategy, RecShard determines the number of rows to be placed in HBM for each EMB via the activated $x_{ij}$ variable and the corresponding location on the EMB's ICDF, $ICDF_{j}(i)$ .
|
| 346 |
+
|
| 347 |
+
These selected rows cannot be placed directly in HBM and must go through a remapping stage. This step is necessary as EMBs are typically allocated contiguously in memory, with an EMB index also serving as the memory offset to access the underlying storage directly. As the EMB rows selected by the MILP to be placed in HBM are chosen based on their access frequency, they can be located at arbitrary positions within the EMB and thus be non-contiguous. To address this, RecShard creates a per-EMB remapping table, which maps each EMB index to its corresponding location in either HBM or UVM.
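A minimal sketch of what such a per-EMB remapping table could look like is shown below: the rows selected for HBM are packed contiguously into one partition and the remainder into the other, indexed by the original EMB index. The layout and names are assumptions for illustration; RecShard's actual remapping layer is a custom C++ operator (Section 5.1).

```python
import numpy as np

def build_remap_table(row_access_counts, num_hbm_rows):
    """Per-EMB remapping sketch: original index -> (partition, offset), where the
    num_hbm_rows hottest rows are packed contiguously into the HBM partition and
    the remainder into the UVM partition."""
    order = np.argsort(-np.asarray(row_access_counts))   # hottest original indices first
    remap = np.empty((len(order), 2), dtype=np.int64)    # [partition (0=HBM, 1=UVM), offset]
    remap[order[:num_hbm_rows], 0] = 0
    remap[order[:num_hbm_rows], 1] = np.arange(num_hbm_rows)
    remap[order[num_hbm_rows:], 0] = 1
    remap[order[num_hbm_rows:], 1] = np.arange(len(order) - num_hbm_rows)
    return remap

# A lookup of original row idx is then served from partition remap[idx, 0]
# at offset remap[idx, 1] of that partition's dense table.
```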
|
| 348 |
+
|
| 349 |
+
# 4.4 Expansion Beyond Two-Tiers
|
| 350 |
+
|
| 351 |
+
While the RecShard implementation is modeled after a two-tier memory hierarchy consisting of GPU HBM and host DRAM accessed via UVM, RecShard can be easily expanded to support a multi-tier memory hierarchy. At its core, each additional tier represents a new point on each EMB's CDF, potentially producing an additional split of EMB rows to be placed on the new memory tier. As each memory tier has its own bandwidth specifications from the view of the executing device (e.g. the GPU), the RecShard MILP
|
| 352 |
+
|
| 353 |
+
<table><tr><td>Model</td><td># Sparse Features</td><td>Total Hash Size</td><td>Emb. Dim.</td><td>Size</td></tr><tr><td>RM1</td><td>397</td><td>1,331,656,544</td><td>64</td><td>318 GB</td></tr><tr><td>RM2</td><td>397</td><td>2,661,369,917</td><td>64</td><td>635 GB</td></tr><tr><td>RM3</td><td>397</td><td>5,320,796,628</td><td>64</td><td>1270 GB</td></tr></table>
|
| 354 |
+
|
| 355 |
+
Table 2: DLRM Specifications
|
| 356 |
+
|
| 357 |
+
solver will automatically order the memory tiers via the bandwidth scaling factors.
|
| 358 |
+
|
| 359 |
+
# 5 EXPERIMENTAL METHODOLOGY
|
| 360 |
+
|
| 361 |
+
Baselines: To evaluate the efficacy of RecShard, we compare the performance of EMB operators under RecShard's throughput optimized sharding strategy with sharding schemes from prior work on production DLRM training systems [1, 26, 31]. State-of-the-art sharding schemes typically follow a two-step approach. First, they assign a fixed cost to each EMB based on a specific cost function. Second, they apply a heuristic algorithm to incrementally assign EMBs to GPUs while attempting to minimize the maximum cost across all GPUs (a measure of load balancing).
|
| 362 |
+
|
| 363 |
+
Step I-Cost Functions: We implement the following three cost functions - two representing the state-of-the-art and a third derived from the first two - and compare their impact on EMB training throughput with RecShard:
|
| 364 |
+
|
| 365 |
+
- Size [1, 26]: An EMB's cost is the product of its hash size and its embedding dimension (latent vector length).
|
| 366 |
+
- Lookup [1, 26]: An EMB's cost is the product of its average pooling factor and its embedding dimension.
|
| 367 |
+
- Size-and-Lookup: An EMB's cost is the product of its lookup based cost (above) and the log of its hash size - $\log_{10}(\text{hash\_size}_{EMB})$ - adding a non-linear function that attempts to capture potential caching effects of smaller EMBs.
|
| 368 |
+
|
| 369 |
+
In comparison, RecShard considers EMB access distributions, average pooling factor, coverage, hash function, hash size, and the memory bandwidth characteristics of the target system.
|
| 370 |
+
|
| 371 |
+
Step II-Heuristic Sharding Algorithms: To shard EMBs once assigned a cost, we implement a greedy heuristic algorithm [31] that works as follows. After receiving the list of EMBs to shard along with their associated costs, the greedy heuristic first sorts EMBs in descending cost order. It then descends the list, starting with the highest-cost EMB, and iteratively assigns EMBs to GPUs one-by-one. The heuristic continues down the sorted list of EMBs, placing each successive EMB on the GPU with the current lowest sum of costs. When GPU HBM has been saturated, the heuristic then allocates the remaining EMBs in UVM. In comparison, RecShard considers cost on a per-EMB entry basis and optimizes the placement of all EMB rows simultaneously, in one shot.
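For reference, a minimal sketch of this greedy baseline is shown below; capacity tracking and the UVM spill-over step are omitted for brevity, and the example cost values are hypothetical.

```python
import heapq

def greedy_shard(emb_costs, num_gpus):
    """Greedy baseline: visit EMBs in descending cost order and place each one on
    the GPU with the smallest running cost sum."""
    order = sorted(range(len(emb_costs)), key=lambda j: -emb_costs[j])
    heap = [(0.0, gpu) for gpu in range(num_gpus)]      # (current cost sum, gpu id)
    heapq.heapify(heap)
    assignment = {}
    for j in order:
        load, gpu = heapq.heappop(heap)                  # least-loaded GPU so far
        assignment[j] = gpu
        heapq.heappush(heap, (load + emb_costs[j], gpu))
    return assignment

# Example with a Size-style cost (hash_size * dim) and made-up hash sizes:
costs = [h * 64 for h in (40_000_000, 5_000_000, 1_200_000, 900_000)]
print(greedy_shard(costs, num_gpus=2))   # {0: 0, 1: 1, 2: 1, 3: 1}
```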
|
| 372 |
+
|
| 373 |
+
# 5.1 DLRM Specification
|
| 374 |
+
|
| 375 |
+
We evaluate the performance of the different sharding strategies on a system running a modified version of open-source DLRM [7, 33]. The implementation is modified to support the use of multi-hot
|
| 376 |
+
|
| 377 |
+
encoded training data samples and the open-source implementation of the high-performance embedding operator in the PyTorch FBGEMM library<sup>1</sup>.
|
| 378 |
+
|
| 379 |
+
We implement the RecShard remapping layer as a custom PyTorch $\mathrm{C + + }$ operator which is executed as a transform during the data loading stage. This allows remapping to be performed in parallel with training iterations, thus removing it from the critical path of model execution.
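A rough Python approximation of such a data-loading transform is sketched below; the real operator is implemented in C++, and the dictionary-of-tensors batch format here is an assumption for illustration.

```python
import torch

class RemapTransform:
    """Python approximation of the remapping transform: rewrite each EMB's raw
    sparse indices into HBM/UVM-partition indices during data loading, so the
    work overlaps with the ongoing training iteration."""
    def __init__(self, remap_tables):
        # remap_tables: dict of emb_id -> LongTensor mapping old index -> new index
        self.remap_tables = remap_tables

    def __call__(self, sparse_indices):
        # sparse_indices: dict of emb_id -> LongTensor of raw lookup indices
        return {emb_id: self.remap_tables[emb_id][idx]
                for emb_id, idx in sparse_indices.items()}

# Tiny self-contained example for a single 10-row EMB:
remap = RemapTransform({0: torch.randperm(10)})
print(remap({0: torch.tensor([0, 3, 3, 7])}))
```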
|
| 380 |
+
|
| 381 |
+
We evaluate RecShard using three production-scale DLRMs: RM1, RM2, and RM3, summarized in Table 2. All three RMs feature the same underlying DLRM architecture, implementing a large number of sparse features (397) and spanning a breadth of feature characteristics: categorical value distributions, pooling factors, and coverage, which collectively determine the locality characteristics of embedding accesses (Section 3). The difference between the RMs is the approximate doubling of the hash size of each EMB from RM1 to RM2, and again from RM2 to RM3.
|
| 382 |
+
|
| 383 |
+
We generate different workloads by having a large, constant number of features and scaling the hash sizes for two key reasons. First, the complexity of the sharding problem directly scales with the number of features to be sharded and their characteristics; thus, a large number of features maximizes sharding complexity. Second, as has been observed internally at our company, and in prior evaluations of industry-scale DLRMs [1, 22, 46], increasing the hash size of an embedding table and thereby reducing collisions between sparse feature values is a simple, yet effective method of realizing accuracy improvements.
|
| 384 |
+
|
| 385 |
+
Based on the system specification discussed in the next section, RM1 requires 14 GPUs to completely fit all EMBs in reserved HBM, while RM2 requires 27 GPUs, and RM3 requires 53 GPUs.
|
| 386 |
+
|
| 387 |
+
# 5.2 Training System Specification
|
| 388 |
+
|
| 389 |
+
We evaluate the timing performance for all three sharding strategies on a two-socket server-node. Each socket features an Intel Xeon Platinum 8339HC CPU, 376GB of DDR4 DRAM, and 8x NVIDIA A100 (40GB) GPUs, connected to host DRAM via PCIe 3.0x16 for UVM support. As the scale of the RMs exceeds that of the memory capacity of the training nodes, during benchmarking we run each model-parallel section separately and extract the EMB performance metrics.
|
| 390 |
+
|
| 391 |
+
When implementing the training sharding strategies from prior work [1, 26] (our baselines for comparison), we use a batch size of 16,384 and limit each sharding strategy to use at most: (1) 24GB of HBM per GPU as the reserved memory for EMBs; and (2) 128GB of host DRAM per GPU for UVM-allocated EMBs. The remaining HBM/DRAM capacity is reserved for other model parameters, computation, and training overheads.
|
| 392 |
+
|
| 393 |
+
Performance Profiling: As the goal of RecShard is to improve per-iteration EMB latency, due to the large percentage of total run-time that embedding operations constitute for many types of DLRMs [1, 11, 46], we evaluate execution time by tracing each GPU's execution and extracting all kernel run times related to the embedding operator. We do this using the integrated PyTorch profiler, torch.profiler, which allows tracing to begin after a specified waiting and
|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
Figure 11: EMB training performance improvement of different sharding strategies normalized to slowest strategy in group. RM1, RM2, and RM3 evaluated using 16 GPUs.
|
| 397 |
+
|
| 398 |
+
warm-up period. We specify a waiting period of 10 iterations, a warm-up period of 5 iterations, and trace for 20 iterations.
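A sketch of this profiling setup with torch.profiler is shown below; the toy EmbeddingBag model and batch shape are stand-ins so the example is self-contained, not the evaluated DLRM.

```python
import torch
from torch.profiler import profile, schedule, ProfilerActivity

# Sketch of the tracing setup described above: skip 10 steps, warm up for 5,
# then record 20. The toy EmbeddingBag stands in for the real DLRM embedding
# stage purely so the example runs end to end.
emb = torch.nn.EmbeddingBag(10_000, 64, mode="sum")
opt = torch.optim.SGD(emb.parameters(), lr=0.01)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             schedule=schedule(wait=10, warmup=5, active=20)) as prof:
    for _ in range(10 + 5 + 20):
        idx = torch.randint(0, 10_000, (1024, 20))   # batch of multi-hot lookups
        loss = emb(idx).sum()
        loss.backward()
        opt.step()
        opt.zero_grad()
        prof.step()                                  # advance the profiler schedule

# Embedding-related kernel times can then be pulled out of prof.key_averages()
# (or an exported trace) by filtering on the embedding operator's kernel names.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=5))
```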
|
| 399 |
+
|
| 400 |
+
# 6 EVALUATION RESULTS AND ANALYSIS
|
| 401 |
+
|
| 402 |
+
Overall, RecShard achieves an average of 5x EMB training iteration time improvement across RM1, RM2, and RM3 in the 16-GPU setting, covering a wide range of memory demands. Figure 11 illustrates that RecShard improves the EMB training iteration time by 2.58x, 5.26x, and 7.41x for RM1, RM2, and RM3, respectively, over the next fastest sharding strategy. Table 3 summarizes the timing results for RecShard and the state-of-the-art schemes.
|
| 403 |
+
|
| 404 |
+
# 6.1 RecShard Workload Balance Analysis
|
| 405 |
+
|
| 406 |
+
A major factor behind RecShard's significant performance improvement is its ability to achieve better load balance across the GPU trainers. In particular, the EMB memory footprint of RM1 is approximately 318GB, allowing all EMBs to fit entirely in HBM when distributed among the 16 available GPUs. RecShard improves EMB training throughput for RM1 by over $2.5\mathrm{x}$ with respect to the next fastest sharding strategy (Size). It does so with an almost 9 times improvement in the standard deviation of the iteration time across all GPUs, providing a much more uniform distribution of work (Table 3).
|
| 407 |
+
|
| 408 |
+
RecShard's ability to better load balance comes from two key aspects of its design. First, RecShard's hash analysis allows it to effectively determine which portion of each EMB is unused or sparsely used during training. The sparse regions are effectively assigned a cost of zero and thus can be allocated last. Second, RecShard's formulation of the EMB sharding problem as a MILP allows it to globally balance EMB operations across all GPUs simultaneously, in one shot. Since RM1 does not require UVM for EMB placement, the sharding cost formulation reduces to a function that is similar to the Lookup cost function of Section 5.1. However, when used with the greedy heuristic, the Lookup sharding strategy performs $46\%$ worse than the Size strategy (the best baseline RM1 strategy). This result highlights the performance improvements that stem from RecShard's fine-grained, data-driven MILP approach to embedding vector placement.
|
| 409 |
+
|
| 410 |
+
<table><tr><td>Model</td><td>Size-Based</td><td>Lookup-Based</td><td>Size-Based-Lookup</td><td>RecShard</td></tr><tr><td>RM1</td><td>7.12/21.23/13.06/4.01</td><td>5.08/30.97/12.99/5.59</td><td>5.55/26.03/12.91/4.72</td><td>6.53/8.21/7.48/0.45</td></tr><tr><td>RM2</td><td>20.52/49.65/33.82/7.37</td><td>10.40/55.85/32.47/9.87</td><td>7.47/56.66/32.95/10.26</td><td>6.52/9.44/7.75/0.78</td></tr><tr><td>RM3</td><td>40.43/76.15/56.45/10.86</td><td>3.37/73.30/55.27/18.53</td><td>5.10/85.01/56.04/20.39</td><td>6.83/9.90/8.31/0.69</td></tr></table>
|
| 411 |
+
|
| 412 |
+
Table 3: Min/Max/Mean/StdDev EMB training iteration time (in ms) across all GPUs, based on per-GPU averages, for all sharding strategies on 16 GPUs. Training performance is bound by the slowest (i.e. max) EMB time, so a lower max iteration time is better. Load balance is reflected by the standard deviation, with a lower standard deviation signifying more balanced execution.
|
| 413 |
+
|
| 414 |
+
<table><tr><td>Model</td><td>Disparity</td><td>SB</td><td>LB</td><td>SBL</td></tr><tr><td rowspan="2">RM1</td><td>UVM->HBM</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>HBM->UVM</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td rowspan="2">RM2</td><td>UVM->HBM</td><td>28.67%</td><td>28.26%</td><td>28.26%</td></tr><tr><td>HBM->UVM</td><td>39.93%</td><td>39.99%</td><td>39.99%</td></tr><tr><td rowspan="2">RM3</td><td>UVM->HBM</td><td>23.29%</td><td>23.21%</td><td>23.21%</td></tr><tr><td>HBM->UVM</td><td>58.34%</td><td>59.36%</td><td>59.36%</td></tr></table>
|
| 415 |
+
|
| 416 |
+
Table 4: Percent of EMB rows allocated in UVM (resp. HBM) by each baseline strategy which RecShard places in HBM (resp. UVM). RM2 and RM3 require UVM for training on 16 GPUs, whereas RM1 does not. LB and SBL assign the same EMBs to HBM and UVM, but their exact GPU assignments differ. SB, LB, and SBL stand for Size-Based, Lookup-Based, and Size-Based-Lookup, respectively.
|
| 417 |
+
|
| 418 |
+
# 6.2 RecShard Embedding Placement Analysis
|
| 419 |
+
|
| 420 |
+
As DLRM sizes grow beyond the capacity of available GPU HBM, as is the case for RM2 and RM3, sharding pressure moves beyond simply load balancing across HBM and into load balancing across HBM and UVM. With orders of magnitude difference in the memory performance of HBM and UVM, incorrect EMB placements on UVM come with severe performance penalties. In this scenario, the state-of-the-art sharding strategies can significantly under-perform RecShard. This is exemplified with RM2's and RM3's results.
|
| 421 |
+
|
| 422 |
+
RecShard uses feature and EMB statistics to dynamically estimate EMB cost at the row granularity, enabling it to intelligently break apart an EMB into non-contiguous memory blocks and place each block independently across different tiers of the memory hierarchy. By doing so, RecShard determines and places the least performance-critical embedding portions of large DLRMs (i.e. RM2 and RM3) onto UVM.
|
| 423 |
+
|
| 424 |
+
For RM2, RecShard places an average of $53.4\%$ of rows per EMB and a total of $61\%$ of all EMB rows on UVM. For RM3, it places an average of $64.8\%$ of rows per EMB and a total of $61\%$ of all EMB rows on UVM. Figure 12 illustrates the partitioning decisions for RM2 using RecShard.
|
| 425 |
+
|
| 426 |
+
To further understand the difference in decision making between the baseline strategies and RecShard, we compare the EMB assignments and expected access counts for all strategies across RM2 and RM3. First, we explore how the individual EMB assignments made by the Size, Lookup, and Size-and-Lookup strategies differ from
|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
Figure 12: Partitions and Placements made by RecShard for RM2 on 16 GPUs. Each bar represents a single EMB. Bar height is the percentage of a specific EMB that RecShard placed on UVM. EMBs are grouped by the GPU they were assigned to (shown as colors). The number of EMBs assigned to each GPU is shown in parentheses. As expected, the number of EMBs assigned to each GPU is variable and the height of each bar is unique to each EMB.
|
| 430 |
+
|
| 431 |
+
RecShard's placement. That is, if an EMB was assigned to HBM, we examine the degree of overlap for the rows placed on UVM between the state-of-the-art strategy and RecShard. Table 4 summarizes this analysis. The rows labeled 'UVM->HBM' quantify the difference in the percentage of EMB rows placed in UVM for RM2 and RM3 by the state-of-the-art strategies versus RecShard. RecShard's ability to place more performance-critical, frequently-accessed embedding vectors onto HBM across all EMBs is the primary reason for its significantly higher performance.
|
| 432 |
+
|
| 433 |
+
# 6.3 RecShard Scalability Analysis
|
| 434 |
+
|
| 435 |
+
As model sizes increase, as expected, RecShard sees little performance degradation. This comes from the asymmetric impact on memory access statistics and memory usage that hash size scaling causes. The state-of-the-art strategies experience an average of 3.07 times performance slowdown in the EMB training iteration time between the largest DLRM (RM3) and RM1. However, RecShard only observes a $20.6\%$ increase in the EMB training iteration time over the same model size growth (Figure 13).
|
| 436 |
+
|
| 437 |
+
<table><tr><td>Model</td><td>Location</td><td>SB</td><td>LB</td><td>SBL</td><td>RecShard</td></tr><tr><td rowspan="2">RM1</td><td>HBM</td><td>88.74M</td><td>88.74M</td><td>88.74M</td><td>88.74M</td></tr><tr><td>UVM</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td rowspan="2">RM2</td><td>HBM</td><td>70.32M</td><td>70.90M</td><td>70.90M</td><td>88.48M</td></tr><tr><td>UVM</td><td>18.42M</td><td>17.84M</td><td>17.84M</td><td>259K</td></tr><tr><td rowspan="2">RM3</td><td>HBM</td><td>55.82M</td><td>56.85M</td><td>56.85M</td><td>88.29M</td></tr><tr><td>UVM</td><td>32.92M</td><td>31.89M</td><td>31.89M</td><td>450K</td></tr></table>
|
| 438 |
+
|
| 439 |
+
We explored this sparsity in access count by analysing the number of HBM and UVM accesses made by the EMBs in each of the sharding strategies in our training traces. We found (Table 5) that when doubling the hash size from RM1 to RM2, the baseline sharding strategies sourced on average $20.3\%$ of their accesses per-GPU per-iteration from UVM, while RecShard only sourced $0.2\%$ - over a $100x$ reduction. When hash size is quadrupled from RM1 to RM3 and sharding pressure doubles from RM2, the baseline sharding strategies sourced on average $36.3\%$ of their accesses per-GPU per-iteration from UVM, while RecShard only sourced $0.5\%$ . As HBM capacity is already exceeded in RM2, the additional model capacity (in bytes) of RM3 must be allocated in UVM. While the percentage of accesses sourced from UVM for RecShard more than doubles from RM2 to RM3, this value is still only $0.5\%$ of the total accesses (and over $70x$ less than the baseline strategies). This result highlights the sparsity of memory access to the new memory regions allocated by increased hash size.
|
| 440 |
+
|
| 441 |
+
# 6.4 End-to-End Training Time Improvement
|
| 442 |
+
|
| 443 |
+
While embedding operations can represent a significant portion of many industry-scale DLRMs [11, 46], the actual percentage of runtime varies based on model composition. RecShard improves end-to-end training performance in proportion to the time spent on embedding operations in the critical path of model execution (which in the canonical DLRM architecture consists of all embedding operations).
|
| 444 |
+
|
| 445 |
+
Knowing the runtime breakdown, the expected end-to-end DLRM training performance improvement can be approximated using Amdahl's law. With $p$ being the percentage of total execution time spent on critical path embedding operations, and $s$ being the speedup in embedding operation latency via improved sharding, the estimated end-to-end speedup is $\frac{1}{(1 - p) + \frac{p}{s}}$ .
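As a quick sanity check of this estimate, the snippet below reproduces the $1.27\mathrm{x}$ to $1.82\mathrm{x}$ range quoted in the following paragraph for a $2.5\mathrm{x}$ embedding speedup.

```python
def end_to_end_speedup(p, s):
    """Amdahl's-law estimate: p is the fraction of iteration time spent on
    critical-path embedding operations, s is their speedup from sharding."""
    return 1.0 / ((1.0 - p) + p / s)

# Reproduces the 1.27x-1.82x range quoted below for a 2.5x embedding speedup.
print(round(end_to_end_speedup(0.35, 2.5), 2))   # 1.27
print(round(end_to_end_speedup(0.75, 2.5), 2))   # 1.82
```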
|
| 446 |
+
|
| 447 |
+
As a concrete example, for memory-intensive models whose timing composition consists of $35 - 75\%$ embedding operations [11, 23] (with the remaining time being largely dominated by dense MLP
|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
Figure 13: Slowdown of each sharding strategy on max EMB iteration time as model sizes scale $2\mathrm{x}$ and $4\mathrm{x}$ from RM1 to RM2, and RM1 to RM3, respectively. While heuristic based fixed-cost strategies suffer over a $3\mathrm{x}$ slowdown on average from RM1 to RM3, RecShard is less sensitive to performance degradation from model size scaling and only experiences a $1.2\mathrm{x}$ slowdown.
|
| 451 |
+
|
| 452 |
+
Table 5: Average number of HBM and UVM accesses per-GPU, per-iteration for each sharding strategy on RM2 and RM3 (batch size of 16,384 on 16 GPUs). RM1 does not require UVM. Baseline strategies source on average $20.3\%$ (RM2) and $36.3\%$ (RM3) of EMB accesses from UVM. RecShard sources $0.2\%$ (RM2) and $0.5\%$ (RM3) of EMB accesses from UVM. LB and SBL assign the same EMBs to HBM and UVM, but their exact GPU assignments differ. SB, LB, and SBL stand for Size-Based, Lookup-Based, and Size-Based-Lookup, respectively.
|
| 453 |
+
|
| 454 |
+
<table><tr><td>Formulation</td><td>HBM</td><td>UVM</td></tr><tr><td>RecShard (Full)</td><td>69.07B</td><td>353M</td></tr><tr><td>CDF + Pooling</td><td>68.82B</td><td>604M</td></tr><tr><td>CDF + Coverage</td><td>68.54B</td><td>881M</td></tr><tr><td>CDF Only</td><td>67.79B</td><td>1.63B</td></tr></table>
|
| 455 |
+
|
| 456 |
+
Table 6: RecShard Ablation. Average number of HBM and UVM accesses per-GPU on RM3 (across 16 GPUs) over more than 200 million training data samples for different RecShard formulations. CDF Only uses only the per-sparse feature value CDF in the MILP (i.e. the average pooling factor and coverage are set to 1). $CDF + Coverage$ uses both the CDF and coverage in the MILP, while $CDF + Pooling$ uses both the CDF and the average pooling factor. RecShard (Full) reports the access counts when all per-EMB statistics are used simultaneously in the MILP.
|
| 457 |
+
|
| 458 |
+
layers and communication), and for which RecShard improves embedding performance by $2.5\mathrm{x}$ , the expected end-to-end performance benefit of RecShard is $1.27\mathrm{x}$ to $1.82\mathrm{x}$ . While the performance improvements afforded by RecShard are less pronounced for more MLP-dominated DLRMs, the position of embedding operations on the critical path of model execution and the scale of industry-DLRM training time (on the order of days [1]) indicates the importance of their acceleration.
|
| 459 |
+
|
| 460 |
+
# 6.5 RecShard Ablation
|
| 461 |
+
|
| 462 |
+
To better understand the impact that the various sparse feature characteristics used within RecShard have on the performance of the generated sharding, we performed an ablation study measuring their effect on the number of HBM and UVM accesses made by each GPU. We evaluate four different formulations of RecShard, each differing by which per-EMB statistics are enabled for use within the MILP. The results of this ablation on RM3 (with 16 GPUs) over
|
| 463 |
+
|
| 464 |
+
more than 200 million training data samples are shown in Table 6. The four formulations of RecShard evaluated are:
|
| 465 |
+
|
| 466 |
+
- CDF only: Only the sparse feature value CDF is used in the MILP and the average pooling factor and coverage for each EMB are set to 1.
|
| 467 |
+
- $CDF +$ Coverage: Both the CDF and the per-EMB coverage are used in the MILP.
|
| 468 |
+
- $CDF +$ Pooling: Both the CDF and the per-EMB average pooling factor are used in the MILP.
|
| 469 |
+
- Full: All of the per-EMB statistics are used in the MILP simultaneously.
|
| 470 |
+
|
| 471 |
+
Similar to the results in Section 6.3, we observe that approximately $0.5\%$ of accesses on average in the full formulation of RecShard are sourced from UVM, while the simplest RecShard formulation, CDF only, sources approximately $2.4\%$ of its accesses from UVM. While this is still significantly less than the baseline sharding strategies, this nearly $5\mathrm{x}$ increase over the full formulation is due to the CDF providing no information about how often each EMB will be accessed in a training data sample. Thus, when evaluating different potential partitioning and placement decisions, the MILP in the CDF only formulation has no information that it can exploit to accurately load balance EMBs across the GPUs based on their expected number of accesses. Adding one piece of per-sample EMB access information via the coverage almost halves the average UVM-sourced access percentage to approximately $1.3\%$, while using the average pooling factor instead provides an even greater reduction to approximately $0.9\%$.
|
| 472 |
+
|
| 473 |
+
# 6.6 RecShard Overhead
|
| 474 |
+
|
| 475 |
+
For all models studied in this work, the Gurobi solver [12] was able to solve the placement and partitioning MILP in under 1 minute. Afterwards, generating the remapping tables takes approximately 20 seconds per GPU and has a storage cost of 4 bytes per remapped row (as the sign of the remapped index can be used to denote whether the row maps to the HBM or UVM partition). For the largest DLRM, RM3, this is a total storage overhead of $\sim 20\mathrm{GB}$ for over 5 billion remapped rows. In the scope of model training time (many hours to potentially days depending on model and data size) and model size (hundreds of GBs to multiple TBs), this overhead is minimal, especially given the performance improvements RecShard provides.
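A small sketch of this 4-byte, sign-encoded remap entry (with an assumed sign convention) and of the resulting storage estimate for RM3 is shown below.

```python
import numpy as np

# Sketch of a 4-byte remap entry whose sign selects the partition (assumed
# convention: non-negative -> HBM offset, negative -> UVM offset).
def encode(is_uvm, offset):
    return np.int32(-(offset + 1)) if is_uvm else np.int32(offset)

def decode(value):
    return ("UVM", -int(value) - 1) if value < 0 else ("HBM", int(value))

assert decode(encode(False, 7)) == ("HBM", 7)
assert decode(encode(True, 7)) == ("UVM", 7)

# Storage estimate for RM3: 5,320,796,628 remapped rows * 4 bytes ~= 20 GB.
print(5_320_796_628 * 4 / 2**30, "GiB")   # ~19.8
```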
|
| 476 |
+
|
| 477 |
+
Additionally, RecShard incurs some overhead from training data profiling due to the consumption of feature-level statistics. However, only a small portion ($\sim 1\%$) of large training data stores needs to be sampled to achieve statistical significance. Moreover, because the statistics are based on raw training data values and the corresponding hash sizes (which are generally constant across models within a size tier), they can be shared across models and updated in real time as training data arrives, amortizing the cost.
|
| 478 |
+
|
| 479 |
+
# 7 RELATED WORK
|
| 480 |
+
|
| 481 |
+
Power-law distributions are a well-known phenomenon of features related to recommender systems [2, 8, 22, 42, 45]. This sparsity characteristic is an important feature for a variety of DL system performance optimizations. However, maintaining the long tail
|
| 482 |
+
|
| 483 |
+
is important because of the statistically significant accuracy impact [46]. This has led to recent works attempting to balance the trade-off between EMB sizes and model accuracy. One such category of work explores scaling the dimension of an EMB, that is the number of parameters used to encode an EMB row, based on the frequency of accesses to individual rows—more frequently accessed rows are given more space through increased embedding vector dimensions [8]. Another work explores the impact of hashing, ranging from the use of multiple hash functions alongside a 1:1 mapping for frequent categorical values [45], to entirely replacing the hashing plus embedding table structure with its own neural network [22]. In addition, other prior work proposes to prioritize frequently-accessed embedding rows for model parameter checkpointing [28], in order to improve failure tolerance of DLRM training. While prior work also tackles the problem of ever-increasing EMB sizes, their primary focus is the size of EMB itself, rather than on training throughput improvement.
|
| 484 |
+
|
| 485 |
+
Recent work has also explored the performance of splitting EMBs based on their frequency characteristics [2]. While similar in motivation, the type of training data and the scale of DLRMs explored in this paper are fundamentally different from the open-source datasets used in the related work. Our DLRMs read multi-hot encoded sparse features resulting in order-of-magnitude higher memory bandwidth needs, and EMB sizes demanding model-parallel training. In Criteo Terabyte (the largest of the open-source datasets), all of the features are 1-hot encoded (meaning their pooling factor is always 1), the number of features present is 26, and the total number of un-encoded embedding table rows is approximately 266 million. Thus, for each of these properties, the scale of open-source datasets/DLRMs [29, 35, 42] is an order of magnitude (or more) less than our evaluated datasets/DLRMs. Furthermore, all open-source datasets that we are aware of can fit entirely within a single GPU, making sharding and model-parallel training unnecessary.
|
| 486 |
+
|
| 487 |
+
# 8 CONCLUSION
|
| 488 |
+
|
| 489 |
+
Deep learning recommendation systems are the backbone of a wide variety of cloud services and products. Unlike other neural networks with primarily convolution or fully-connected layers, recommendation model embedding tables demand orders-of-magnitude higher memory capacity ( $>99\%$ of the model capacity) and bandwidth, and exhibit significantly lower compute-intensity. In this paper, we perform an in-depth memory characterization analysis and we identify five important memory characteristics for sparse features of DLRMs. Building on the analysis, we propose RecShard, which formulates the embedding table partitioning and placement problem for training systems with tiered memories. RecShard uses a MILP to reach a partitioning and placement decision that minimizes embedding access time under constrained memory capacities. We implement and evaluate RecShard by training a modified version of open-source DLRM with production data. RecShard can achieve an average of over 5 times speedup for the embedding kernels of three representative industry-scale recommendation models. We hope our findings will lead to further memory optimization insights in this important category of deep learning use cases.
|
| 490 |
+
|
| 491 |
+
# ACKNOWLEDGMENTS
|
| 492 |
+
|
| 493 |
+
We would like to thank Jade Nie, Jianyu Huang, Jongsoo Park, Andrew Tulloch, Xing Liu, Benny Chen, Ying Liu, Liu Ke, Udit Gupta, Newsha Ardalani, Hsien-Hsin S. Lee, and Kim Hazelwood at Meta for their valuable feedback and various discussions on this work, as well as Fan Yang and the anonymous reviewers for their constructive feedback. This work was supported in part by the Stanford Platform Lab and its affiliates for Geet Sethi and Christos Kozyrakis.
|
| 494 |
+
|
| 495 |
+
# REFERENCES
|
| 496 |
+
|
| 497 |
+
[1] Bilge Acun, Matthew Murphy, Xiaodong Wang, Jade Nie, Carole-Jean Wu, and Kim Hazelwood. 2021. Understanding Training Efficiency of Deep Learning Recommendation Models at Scale. In 2021 IEEE International Symposium on High Performance Computer Architecture (HPCA).
|
| 498 |
+
[2] Muhammad Adnan, Yassaman Ebrahimzadeh Maboud, Divya Mahajan, and Prashant J. Nair. 2021. High-Performance Training by Exploiting Hot-Embeddings in Recommendation Systems. CoRR (2021). https://arxiv.org/abs/2103.00686
|
| 499 |
+
[3] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. In Workshop on Deep Learning for Recommender Systems.
|
| 500 |
+
[4] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In ACM Recommender Systems Conference.
|
| 501 |
+
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. 248-255. https://doi.org/10.1109/CVPR.2009.5206848
|
| 502 |
+
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171-4186. https://doi.org/10.18653/v1/N19-1423
|
| 503 |
+
[7] Facebook Research. 2021. An implementation of a deep learning recommendation model (DLRM). https://github.com/facebookresearch/dlrm.
|
| 504 |
+
[8] Antonio Ginart, Maxim Naumov, Dheevatsa Mudigere, Jiyan Yang, and James Zou. 2019. Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems. arXiv preprint arXiv:1909.11810 (2019).
|
| 505 |
+
[9] Carlos A. Gomez-Uribe and Neil Hunt. 2016. The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Trans. Manage. Inf. Syst. 6, 4, Article 13 (Dec. 2016), 19 pages. https://doi.org/10.1145/2843948
|
| 506 |
+
[10] Udit Gupta, Samuel Hsia, Vikram Saraph, Xiaodong Wang, Brandon Reagen, Gu-Yeon Wei, Hsien-Hsin S. Lee, David Brooks, and Carole-Jean Wu. 2020. DeepRecSys: A System for Optimizing End-To-End At-Scale Neural Recommendation Inference. In Proceedings of the ACM/IEEE Annual International Symposium on Computer Architecture.
|
| 507 |
+
[11] Udit Gupta, Carole-Jean Wu, Xiaodong Wang, Maxim Naumov, Brandon Reagen, David Brooks, Bradford Cottel, Kim Hazelwood, Mark Hempstead, Bill Jia, Hsien-Hsin S. Lee, Andrey Malevich, Dheevatsa Mudigere, Mikhail Smelyanskiy, Liang Xiong, and Xuan Zhang. 2020. The Architectural Implications of Facebook's DNN-Based Personalized Recommendation. In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA).
|
| 508 |
+
[12] Gurobi Optimization, LLC. 2021. Gurobi Optimizer Reference Manual. (2021). https://www.gurobi.com
|
| 509 |
+
[13] Mark Harris. 2013. Unified Memory in CUDA 6. https://developer.nvidia.com/blog/unified-memory-in-cuda-6/.
|
| 510 |
+
[14] Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, James Law, Kevin Lee, Jason Lu, Pieter Noordhuis, Misha Smelyanskiy, Liang Xiong, and Xiaodong Wang. 2018. Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA).
|
| 511 |
+
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770-778. https://doi.org/10.1109/CVPR.2016.90
|
| 512 |
+
[16] S. Hsia, U. Gupta, M. Wilkening, C. Wu, G. Wei, and D. Brooks. 2020. Cross-Stack Workload Characterization of Deep Recommendation Systems. In IEEE International Symposium on Workload Characterization (IISWC). IEEE Computer Society.
|
| 513 |
+
[17] Biye Jiang, Chao Deng, Huimin Yi, Zelin Hu, Guorui Zhou, Yang Zheng, Sui Huang, Xinyang Guo, Dongyue Wang, Yue Song, Liqin Zhao, Zhi Wang, Peng Sun, Yu Zhang, Di Zhang, Jinhui Li, Jian Xu, Xiaogiang Zhu, and Kun Gai. 2019.
|
| 514 |
+
|
| 515 |
+
XDL: An Industrial Deep Learning Framework for High-Dimensional Sparse Data. In Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data (Anchorage, Alaska) (DLP-KDD '19). Association for Computing Machinery, New York, NY, USA, Article 6, 9 pages. https://doi.org/10.1145/3326937.3341255
|
| 516 |
+
[18] Yimin Jiang, Yibo Zhu, Chang Lan, Bairen Yi, Yong Cui, and Chuanxiong Guo. 2020. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). USENIX Association, 463-479. https://www.usenix.org/conference/osdi20/presentation/jiang
|
| 517 |
+
[19] Manas R. Joglekar, Cong Li, Mei Chen, Taibai Xu, Xiaoming Wang, Jay K. Adams, Pranav Khaitan, Jiahui Liu, and Quoc V. Le. 2020. Neural Input Search for Large Scale Recommendation Models. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (Virtual Event, CA, USA) (KDD '20). Association for Computing Machinery, New York, NY, USA, 2387-2397. https://doi.org/10.1145/3394486.3403288
|
| 518 |
+
[20] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagemann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. 2017. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the ACM/IEEE 44th Annual International Symposium on Computer Architecture.
|
| 519 |
+
[21] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romero-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. 2021. Highly accurate protein structure prediction with AlphaFold. Nature (2021).
|
| 520 |
+
[22] Wang-Cheng Kang, Derek Zhiyuan Cheng, Tiansheng Yao, Xinyang Yi, Ting Chen, Lichan Hong, and Ed H. Chi. 2021. Learning to Embed Categorical Features without Embedding Tables for Recommendation. CoRR (2021). https://arxiv.org/abs/2010.10784
|
| 521 |
+
[23] Liu Ke, Udit Gupta, Benjamin Youngjae Cho, David Brooks, Vikas Chandra, Utku Diril, Amin Firoozshahian, Kim M. Hazelwood, Bill Jia, Hsien-Hsin S. Lee, Meng Li, Bert Maher, Dheevatsa Mudigere, Maxim Naumov, Martin Schatz, Mikhail Smelyanskiy, Xiaodong Wang, Brandon Reagen, Carole-Jean Wu, Mark Hempstead, and Xuan Zhang. 2020. RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing. In 47th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2020, Valencia, Spain, May 30 - June 3, 2020. IEEE, 790-803. https://doi.org/10.1109/ISCA45697.2020.00070
|
| 522 |
+
[24] Sameer Kumar, James Bradbury, Cliff Young, Yu Emma Wang, Anselm Levskaya, Blake Hechtman, Dehao Chen, HyoukJoong Lee, Mehmet Deveci, Naveen Kumar, Pankaj Kanwar, Shibo Wang, Skye Wanderman-Milne, Steve Lacy, Tao Wang, Tayo Oguntebi, Yazhou Zu, Yuanzhong Xu, and Andy Swing. 2021. Exploring the limits of Concurrence in ML Training on Google TPUs. arXiv:2011.03641 [cs.LG]
|
| 523 |
+
[25] Haochen Liu, Xiangyu Zhao, Chong Wang, Xiaobing Liu, and Jiliang Tang. 2020. Automated Embedding Size Search in Deep Recommender Systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, China) (SIGIR '20). Association for Computing Machinery, New York, NY, USA, 2307-2316. https://doi.org/10.1145/3397271.3401436
|
| 524 |
+
[26] Michael Lui, Yavuz Yetim, Ozgur Ozkan, Zhuoran Zhao, Shin-Yeh Tsai, Carole-Jean Wu, and Mark Hempstead. 2021. Understanding Capacity-Driven Scale-Out Neural Recommendation Inference. In 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS).
|
| 525 |
+
[27] Clemens Lutz, Sebastian Bref, Steffen Zeuch, Tilmann Rabl, and Volker Markl. 2020. Pump Up the Volume: Processing Large Data on GPUs with Fast Interconnects. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data (Portland, OR, USA) (SIGMOD '20). Association for Computing Machinery, New York, NY, USA, 1633-1649. https://doi.org/10.1145/3318464.3389705
|
| 526 |
+
[28] Kiwan Maeng, Shivam Bharuka, Isabel Gao, Mark Jeffrey, Vikram Saraph, Bor-Yiing Su, Caroline Trippel, Jiyan Yang, Mike Rabbat, Brandon Lucia, and Carole-Jean Wu. 2021. Understanding and Improving Failure Tolerant Training for Deep
|
| 527 |
+
|
| 528 |
+
Learning Recommendation with Partial Recovery. In Proceedings of Machine Learning and Systems.
|
| 529 |
+
[29] Peter Mattson, Christine Cheng, Gregory Diamos, Cody Coleman, Paulius Miciekvicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debo Dutta, Udit Gupta, Kim Hazelwood, Andy Hock, Xinyuan Huang, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St John, Carole-Jean Wu, Lingjie Xu, Cliff Young, and Matei Zaharia. 2020. MLPerf Training Benchmark. In Proceedings of Machine Learning and Systems.
|
| 530 |
+
[30] Seung Won Min, Vikram Sharma Mailthody, Zaid Qureshi, Jinjun Xiong, Eiman Ebrahimi, and Wen-mei Hwu. 2020. EMOGI: Efficient Memory-Access for out-ofMemory Graph-Traversal in GPUs. Proc. VLDB Endow. 14, 2 (Oct. 2020), 114-127. https://doi.org/10.14778/3425879.3425883
|
| 531 |
+
[31] Dheevatsa Mudigere, Yuchen Hao, Jianyu Huang, Andrew Tulloch, Srinivas Sridharan, Xing Liu, Mustafa Ozdal, Jade Nie, Jongsoo Park, Liang Luo, Jie Amy Yang, Leon Gao, Dmytro Ivchenko, Aarti Basant, Yuxi Hu, Jiyan Yang, Ehsan K. Ardestani, Xiaodong Wang, Rakesh Komuravelli, Ching-Hsiang Chu, Serhat Yilmaz, Huayu Li, Jiyuan Qian, Zhuobo Feng, Yinbin Ma, Junjie Yang, Ellie Wen, Hong Li, Lin Yang, Chonglin Sun, Whitney Zhao, Dimitry Melts, Krishna Dhulipala, KR Kishore, Tyler Graf, Assaf Eisenman, Kiran Kumar Matam, Adi Gangidi, Guoqiang Jerry Chen, Manoj Krishnan, Avinash Nayak, Krishnakumar Nair, Bharath Muthiah, Mahmoud khorashadi, Pallab Bhattacharya, Petr Lapukhov, Maxim Naumov, Lin Qiao, Mikhail Smelyanskiy, Bill Jia, and Vijay Rao. 2021. High-performance, Distributed Training of Large-scale Deep Learning Recommendation Models. CoRR (2021). arXiv:2104.05158 [cs.DC]
|
| 532 |
+
[32] Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiaodong Wang, Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair, Isabel Gao, Bor-Yiing Su, Jiyan Yang, and Mikhail Smelyanskiy. 2020. Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems. CoRR (2020). arXiv:2003.09518 [cs.DC]
|
| 533 |
+
[33] Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G. Azzolini, Dmytro Dzhulgakov, Andrey Mallevich, Ilia Cherniavskii, Yinghai Lu, Raghuraman Krishnamoorthi, Ansha Yu, Volodymyr Kondratenko, Stephanie Pereira, Xianjie Chen, Wenlin Chen, Vijay Rao, Bill Jia, Liang Xiong, and Misha Smelyanskiy. 2019. Deep Learning Recommendation Model for Personalization and Recommendation Systems. (2019). arXiv:1906.00091 [cs.IR]
|
| 534 |
+
[34] Yves Raimond. 2018. Deep Learning for Recommender Systems. https://www.slideshare.net/moustaki/deep-learning-for-recommender-systems-86752234.
|
| 535 |
+
[35] Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Mickevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, and Yuchen Zhou. 2020. MLPerf Inference Benchmark. In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA).
|
| 536 |
+
[36] Danny Sullivan. 2016. Google uses RankBrain for every search, impacts rankings of "lots" of them. https://searchengineland.com/google-loves-rankbrain-uses-for-every-search-252526.
|
| 537 |
+
|
| 538 |
+
[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 6000-6010.
|
| 539 |
+
[38] Yu Emma Wang, Carole-Jean Wu, Xiaodong Wang, Kim Hazelwood, and David Brooks. 2021. Exploiting Parallelism Opportunities with Deep Learning Frameworks. ACM Transactions on Architecture and Code Optimization 18, 1 (2021).
|
| 540 |
+
[39] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature Hashing for Large Scale Multitask Learning. In Proceedings of the 26th Annual International Conference on Machine Learning (Montreal, Quebec, Canada) (ICML '09). Association for Computing Machinery, New York, NY, USA, 1113-1120. https://doi.org/10.1145/1553374.1553516
|
| 541 |
+
[40] Jonathan A. Weyn, Dale R. Durran, and Rich Caruana. 2020. Improving Data-Driven Global Weather Prediction Using Deep Convolutional Neural Networks on a Cubed Sphere. Journal of Advances in Modeling Earth Systems 12, 9 (Sep 2020). https://doi.org/10.1029/2020ms002109
|
| 542 |
+
[41] Mark Wilkening, Udit Gupta, Samuel Hsia, Caroline Trippel, Carole-Jean Wu, David Brooks, and Gu-Yeon Wei. 2021. RecSSD: Near Data Processing for Solid State Drive Based Recommendation Inference. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
|
| 543 |
+
[42] Carole-Jean Wu, Robin Burke, Ed Chi, Joseph A. Konstan, Julian J. McAuley, Yves Raimond, and Hao Zhang. 2020. Developing a Recommendation Benchmark for MLPerf Training and Inference. CoRR abs/2003.07336 (2020). https://arxiv.org/abs/2003.07336
|
| 544 |
+
[43] Carole-Jean Wu, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, Fiona Aga Behram, James Huang, Charles Bai, Michael Gschwind, Anurag Gupta, Myle Ott, Anastasia Melnikov, Salvatore Candido, David Brooks, Geeta Chauhan, Benjamin Lee, Hsien-Hsin S. Lee, Bugra Akyildiz, Maximilian Balandat, Joe Spisak, Ravi Jain, Mike Rabbat, and Kim Hazelwood. 2021. Sustainable AI: Environmental Implications, Challenges and Opportunities. CoRR abs/2111.00364 (2021).
|
| 545 |
+
[44] Chunxing Yin, Bilge Acun, Xing Liu, and Carole-Jean Wu. 2021. TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models. CoRR abs/2101.11714 (2021). https://arxiv.org/abs/2101.11714
|
| 546 |
+
[45] Caojin Zhang, Yicun Liu, Yuanpu Xie, Sofia Ira Ktena, Alykhan Tejani, Akshay Gupta, Pranay Kumar Myana, Deepak Dilipkumar, Suvadip Paul, Ikuhiro Ihara, Prasang Upadhyaya, Ferenc Huszar, and Wenzhe Shi. 2020. Model Size Reduction Using Frequency Based Double Hashing for Recommender Systems. In Fourteenth ACM Conference on Recommender Systems (Virtual Event, Brazil) (RecSys '20). Association for Computing Machinery, New York, NY, USA, 521-526. https://doi.org/10.1145/3383313.3412227
|
| 547 |
+
[46] Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Ruiquan Ding, Mingming Sun, and Ping Li. 2020. Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems. In Proceedings of Machine Learning and Systems.
|
| 548 |
+
[47] Zhe Zhao, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, and Ed Chi. 2019. Recommending What Video to Watch next: A Multitask Ranking System. In Proceedings of the 13th ACM Conference on Recommender Systems (Copenhagen, Denmark) (RecSys '19). Association for Computing Machinery, New York, NY, USA, 43-51. https://doi.org/10.1145/3298689.3346997
|
| 549 |
+
[48] Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaogiang Zhu, and Kun Gai. 2019. Deep interest evolution network for click-through rate prediction. In AAAI conference on artificial intelligence, Vol. 33. 5941-5948.
|
2201.10xxx/2201.10095/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ed5a3208bc0c81b11016fad9887a0ffcd7725bae942ee92d268bde7d8e1f0f0f
|
| 3 |
+
size 677709
|
2201.10xxx/2201.10095/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_content_list.json
ADDED
|
@@ -0,0 +1,1297 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
  {
    "type": "text",
    "text": "TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network",
    "text_level": 1,
    "bbox": [109, 70, 888, 170],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "Dongyu Rao, Xiao-Jun Wu, Tianyang Xu",
    "bbox": [323, 184, 665, 202],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "Abstract—The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, impeding balancing the entire image-level perception for complex scenario fusion. In this paper, therefore, we propose an infrared and visible image fusion algorithm based on a lightweight transformer module and adversarial learning. Inspired by the global interaction power, we use the transformer technique to learn the effective global fusion relations. In particular, shallow features extracted by CNN are interacted in the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. Besides, adversarial learning is designed in the training process to improve the output discrimination via imposing competitive consistency from the inputs, reflecting the specific characteristics in infrared and visible images. The experimental performance demonstrates the effectiveness of the proposed modules, with superior improvement against the state-of-the-art, generalising a novel paradigm via transformer and adversarial learning in the fusion task.",
    "bbox": [73, 255, 491, 603],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "I. INTRODUCTION",
    "text_level": 1,
    "bbox": [209, 633, 356, 647],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "With the development of imaging equipment and analysis approaches, multi-modal visual data is emerging rapidly with many practical applications. In general, image fusion has played an important role in helping human vision to perceive information association between multimodal data. Among them, the fusion of infrared and visible images has important applications in military, security, detection [1] and visual tracking [2], [3], [4], [5], [6], [7] etc., becoming an important part of image fusion tasks.",
    "bbox": [73, 656, 490, 823],
    "page_idx": 0
  },
  {
    "type": "list",
    "sub_type": "text",
    "list_items": [
      "D. Rao and X.-J. Wu (Corresponding author) are with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China. (e-mail: raodongyu@163.com, wu_xiaojun@jiangnan.edu.cn).",
      "T. Xu is with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, P.R. China and the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK. (e-mail: tianyang_xu@163.com)"
    ],
    "bbox": [73, 835, 491, 944],
    "page_idx": 0
  },
  {
    "type": "image",
    "img_path": "images/29d43f43df99971a0230825ed9e459d754a6739ef77ff8c93905395a1f66b071.jpg",
    "image_caption": ["Fig. 1. Infrared image (a), visible image (b) and fused image generated by the proposed method (c)."],
    "image_footnote": [],
    "bbox": [524, 255, 908, 411],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "In order to design a natural and efficient image fusion algorithm, researchers have developed many fusion algorithms on the basis of traditional image processing. Firstly, the fusion algorithms based on multiscale transformation are proposed [8], [9], [10], [11], which applied traditional image processing methods to image fusion. Subsequently, fusion algorithms based on sparse / low-rank representation were applied [12], [13], [14]. These algorithms use specific image processing methods to obtain image representations, and obtain the output images by fusing the image representations. However, the image features obtained by these methods are relatively less salient. Most of the fusion methods also require complex designs, so that the fusion results usually introduce a large amount of noise. With the development of deep learning, image fusion methods based on convolutional neural networks have become the mainstream of the topic [15], [16]. However, since most image fusion tasks are unsupervised, the supervised end-to-end training framework is not suitable for training fusion tasks. Drawing on this, some fusion algorithms [17] used large-scale pre-trained networks to extract image features. However, the pre-trained network is mostly used for classification tasks, and the extracted features cannot meet the requirements of the fusion task. Subsequently, Li et al. [18], [19] proposed a fusion algorithm based on an encoder-decoder network, using",
    "bbox": [501, 483, 921, 941],
    "page_idx": 0
  },
  {
    "type": "page_number",
    "text": "1",
    "bbox": [911, 30, 919, 40],
    "page_idx": 0
  },
  {
    "type": "aside_text",
    "text": "arXiv:2201.10147v2 [cs.CV] 4 Feb 2022",
    "bbox": [22, 268, 58, 700],
    "page_idx": 0
  },
  {
    "type": "image",
    "img_path": "images/3ef31c6a152bcc80a17894686f6b73e7009dd21d18ca72e12eabc8eec5d4d4d6.jpg",
    "image_caption": ["Fig. 2. The framework of ViT (Vision Transformer). \"B C H W\" respectively represent the batch size, channels, height and width. \"p\" means patch size. \"h w\" is the number of patches in height and width. \"E\" is the reduced dimension."],
    "image_footnote": [],
    "bbox": [173, 73, 826, 193],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "ordinary data sets for encoder-decoder training. This method makes the fusion task get rid of the dependence on multi-modal data sets. But this also makes it unable to effectively learn specific tasks. In order to obtain better performance for specific fusion tasks, the end-to-end image fusion methods [20], [21], [22] are proposed to learn more targeted network parameters through a specific network structure and loss function. This method is dedicated to training fusion tasks, which can usually achieve better fusion results. However, this puts forward higher requirements for the representative ability of the network and the effectiveness of the fusion method. At present, the end-to-end fusion algorithm mainly uses a convolutional neural network for feature extraction and achieves the fusion effect. However, due to the characteristics of CNN, this process usually ignores the global dependency infusion.",
    "bbox": [73, 268, 491, 553],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "In order to solve the problem of global dependence and effective integration, we propose an infrared and visible image fusion algorithm based on the lightweight transformer and adversarial learning. Our method uses a general visual transformer for image spatial relationship learning. In particular, we propose a novel cross-channel transformer model to learn the channel relationship. The composite transformer fusion module has learned the global fusion relationship with space and channels. In addition, adversarial learning is introduced in the training process. We use two discriminators (infrared and fused image, visible and fused image) for adversarial training respectively. This allows the fused image to obtain higher-quality infrared and visible image characteristics.",
    "bbox": [73, 555, 491, 789],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "The proposed method mainly has the following three innovations:",
    "bbox": [73, 790, 491, 821],
    "page_idx": 1
  },
  {
    "type": "list",
    "sub_type": "text",
    "list_items": [
      "- A channel-token transformer is proposed to explore the channel relationships, which is effectively applied in the fusion method.",
      "- A transformer module is designed to achieve global fusion relationship learning in complex scenarios.",
      "- Adversarial learning is introduced into the training process. The discriminator of the two modalities"
    ],
    "bbox": [91, 827, 491, 944],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "introduces the characteristics of different modalities to the fused image to improve the fusion effect.",
    "bbox": [539, 268, 921, 301],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "II. RELATED WORK",
    "text_level": 1,
    "bbox": [633, 319, 790, 333],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "Although traditional methods are well investigated [?], [?], deep learning based methods are mainly discussed in this paper.",
    "bbox": [503, 339, 921, 390],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "A. Image Fusion Method Based on Deep Learning",
    "text_level": 1,
    "bbox": [503, 411, 885, 428],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "The fusion algorithm based on deep learning has shown excellent performance in infrared and visible image fusion, multi-focus image fusion and medical image fusion, etc. Li et al. [23], [17] used a pretrained neural network to extract image features and used them for image fusion weight calculation. This is a preliminary combination of neural network and image fusion tasks. In order to obtain the depth features suitable for reconstructing images, Li et al. [18] first proposed an algorithm based on an auto-encoder network. In the absence of specific data, the algorithm can also achieve a good fusion effect. With the advancement of visual data collection equipment, some large-scale multi-mode data sets have appeared, so end-to-end fusion algorithms [24], [25] have received more attention and applications. This end-to-end fusion algorithm based on convolutional neural networks achieves better performance on a single task. But it still has some limitations, such as the spatial limitation of the fusion method based on a convolutional neural network. In this paper, the proposed method is an end-to-end image fusion algorithm. But compared to the CNN-based fusion network, we expand the network structure of the end-to-end algorithm and introduce the transformer that focuses on building global relationships into the fusion module. Our algorithm opens up new ideas in the design of fusion methods.",
    "bbox": [501, 431, 921, 868],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "B. Generative Adversarial Network",
    "text_level": 1,
    "bbox": [504, 888, 772, 904],
    "page_idx": 1
  },
  {
    "type": "text",
    "text": "A generative adversarial network (GAN) is an algorithm that obtains high-quality generated images by",
    "bbox": [503, 911, 921, 945],
    "page_idx": 1
  },
  {
    "type": "page_number",
    "text": "2",
    "bbox": [911, 30, 919, 40],
    "page_idx": 1
  },
  {
    "type": "image",
    "img_path": "images/c3d806b6959b0b044811eb5415bab605e8776d529ed44aa50eec284864ce3b73.jpg",
    "image_caption": ["Fig. 3. The framework of our method."],
    "image_footnote": [],
    "bbox": [161, 75, 802, 369],
    "page_idx": 2
  },
  {
    "type": "image",
    "img_path": "images/fb1cbb5fe5ecd04cc9f23eda6e08d7cd2c8aab2682d3e740d54aabf4d7ee529a.jpg",
    "image_caption": ["Fig. 4. The framework of discriminator."],
    "image_footnote": [],
    "bbox": [133, 433, 437, 575],
    "page_idx": 2
  },
  {
    "type": "text",
    "text": "training two networks against each other. Goodfellow et al. [26] first proposed the idea of a generative adversarial network. The generator generates an image, and the discriminator determines whether the input image is a real image (True) or a generated image (False). Subsequently, many improvements based on the original GAN focused on speeding up the training of the network and improving the quality of the generated images [27], [28], [29]. These improvements also help GAN gain a wider range of applications [30], [31], [32]. Methods based on GAN are also widely used in image generation tasks [33], [34]. There are already some image fusion methods based on GAN [20], [22]. Adversarial learning is an important part of our approach. It improves the infrared and visible image characteristics in the fusion result by obtaining competitive consistency from the inputs. However, we abandon the discriminator of the classification mode and use the difference in the feature level to promote the fused image to have more infrared",
    "bbox": [73, 619, 493, 940],
    "page_idx": 2
  },
  {
    "type": "image",
    "img_path": "images/02ce56680542a4402761d196ae15348dbd79d4f06d3fa4aadfbec1aefa43cec1.jpg",
    "image_caption": ["Fig. 5. The framework of transformer fusion module."],
    "image_footnote": [],
    "bbox": [555, 438, 879, 569],
    "page_idx": 2
  },
  {
    "type": "text",
    "text": "or visible image information.",
    "bbox": [504, 616, 723, 632],
    "page_idx": 2
  },
  {
    "type": "text",
    "text": "C. Visual Transformer",
    "text_level": 1,
    "bbox": [504, 654, 676, 669],
    "page_idx": 2
  },
  {
    "type": "text",
    "text": "The transformer is a model based on a pure attention mechanism [35]. Its success in natural language processing inspires its application in computer vision. Due to the long-range dependence of the transformer in processing input, the visual transformer also has the ability to pay attention to the global relationship in image tasks. As a pioneering work of visual transformer, Dosovitskiy et al. [36] proposed ViT (Vision Transformer) for image classification tasks (Figure.2). This is a simple and effective application of transformer in visual tasks. Subsequently, Chen et al. [37] proposed a multi-task model based on the transformer, which achieved good results on multiple low-level visual tasks. The global spatial dependence of transformers has gained many applications in the field of computer vision. Inspired by the characteristics of the transformer, we pay at",
    "bbox": [501, 674, 921, 946],
    "page_idx": 2
  },
  {
    "type": "page_number",
    "text": "3",
    "bbox": [911, 30, 919, 40],
    "page_idx": 2
  },
  {
    "type": "text",
    "text": "tention to the global correlation of images space and channels during the fusion process. We propose a new transformer model that focuses on channel relationships and applies it in the field of image fusion. Compared with the general transformer, our transformer fusion module is a lightweight model. This is a new exploration of transformer applications.",
    "bbox": [78, 69, 488, 184],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "III. PROPOSED METHOD",
    "text_level": 1,
    "bbox": [187, 196, 379, 209],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "A. The Framework of Network",
    "text_level": 1,
    "bbox": [78, 217, 305, 231],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "As shown in Figure. 3, our model is mainly composed of two parts: one transformer-based generator and two discriminators. Typically, the fused image is obtained by the generator. Then, the output is refined during the adversarial learning between the generator and the discriminator.",
    "bbox": [78, 238, 488, 335],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "Generator. The generator is used for the generation of the fused image. After the source images are merged in the channel dimension, the initial feature extraction is performed through the convolutional neural network. The mixed CNN features are input to the transformer fusion module to learn global fusion relations. Taking into account the consumption of computing resources and representation of features, three downsampling operators are added before the transformer fusion module. The fusion relationship learned in this process is up-sampled to different scales and multiplied by the corresponding features to achieve the preliminary result. The fusion features of different scales are up-sampled to the original image size and then superimposed to obtain the final fusion result.",
    "bbox": [78, 339, 488, 588],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "Discriminator. The discriminator is used to refine the perception quality of the fused image. We set up two discriminators: fused image and infrared image (\"Dis-IR\"), fused image and visible image (\"Dis-VIS\"). These two discriminators provide high-resolution details of the visible image and a significant part of the infrared image for the fused image. The pre-trained VGG-16 network is used as the discriminator, which can be further finetuned during training. The network is shown in Figure.4. Taking the visible image discriminator (\"Dis-VIS\") as an example, the fused image and the visible image are separately input into the VGG-16 network to extract features. We calculate the L1 loss between the two features so that the fused image approximates the visible image from the context perspective. According to the number of downsampling, VGG-16 is divided into 4 layers. Different layers have different feature depths and different feature shapes. Inspired by Johnson et al. [38], we use the features of different depths extracted by VGG-16 to distinguish between infrared and visible features. The infrared discriminator uses the features",
    "bbox": [78, 590, 488, 941],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "of the fourth layer of VGG-16 to retain more saliency information. While the visible discriminator uses the features of the first layer of VGG-16 to retain more detailed information.",
    "bbox": [508, 69, 919, 133],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "In the training stage, source images are input to the generator to obtain the preliminary fused image. The preliminary fused image then passes through two discriminators with the effect of the fused image being fed back through the loss function. The above two steps are performed alternately to realize the confrontation training between the generator and the discriminator. Finally, we get a generator with an ideal generation effect to achieve the purpose of image fusion.",
    "bbox": [508, 137, 919, 287],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "B. The Transformer Fusion Module",
    "text_level": 1,
    "bbox": [508, 315, 772, 330],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "As shown in Figure. 5, the transformer fusion module consists of two parts: general transformer (\"spatial transformer\") and cross-channel transformer (\"channel transformer\"). This helps us to obtain a more comprehensive global integration relationship.",
    "bbox": [508, 337, 919, 420],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "Spatial Transformer As shown in Figure. 2, the image is divided into blocks and stretched into vectors, where \"p\" means patch size, \"w\" and \"h\" respectively represent the number of image blocks in the width and height dimensions of the image, \"E\" is the reduced dimension. Then, the vector group enters the transformer model for relation learning. The number of image blocks is used to learn the global relationship of the image. Therefore, we consider that the general transformer mainly learns the global spatial relationship between image patches. Inspired by the transformer-based low-level image task, we build a spatial transformer for the fusion task. As shown in Figure. 6, the spatial transformer is basically the same as the first half of ViT (Figure. 2). The difference is that we cancelled the addition of position embedding, and subsequent experiments also proved the rationality and effectiveness of this operation. In addition, when restoring from the vector group to the image, we compress the channel dimension, so that we get a relationship map with a channel number of 1. This corresponds to the spatial relationship of the image we obtained, avoiding the interference of other dimensional relationships.",
    "bbox": [508, 422, 919, 808],
    "page_idx": 3
  },
  {
    "type": "text",
    "text": "Channel Transformer For image fusion tasks, we believe that the cross-channel relationship of images also plays an important role in fusion. Therefore, we propose a new cross-channel transformer model, which learns the correlation of information across the channel dimension. In the new transformer module, the number of tokens input to the encoder has changed from the number of image blocks to the number of image channels. Since",
    "bbox": [508, 810, 919, 941],
    "page_idx": 3
  },
  {
    "type": "page_number",
    "text": "4",
    "bbox": [911, 30, 919, 40],
    "page_idx": 3
  },
  {
    "type": "image",
    "img_path": "images/7cd3dbc112a4a57dd7197c0ee4e090953a056b24861aae38dbeae4e0645d2f52.jpg",
    "image_caption": ["Fig. 6. The framework of spatial transformer."],
    "image_footnote": [],
    "bbox": [130, 71, 867, 169],
    "page_idx": 4
  },
  {
    "type": "image",
    "img_path": "images/00010bd43f05b34fbcf57698d21bcf216d9c744dac4dc06ea74ff87467039cc1.jpg",
    "image_caption": ["Fig. 7. The framework of channel transformer."],
    "image_footnote": [],
    "bbox": [125, 227, 870, 318],
    "page_idx": 4
  },
  {
    "type": "image",
    "img_path": "images/a70201fe421e6145c4b774e9d0e6d1551bcdec1d8a2b52a0c734bcfa28212176.jpg",
    "image_caption": ["Fig. 8. Infrared and visible image fusion experiment on \"human\" images."],
    "image_footnote": [],
    "bbox": [168, 381, 830, 614],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "position embedding is not required to provide category information in the image generation task, we have removed position embedding, which also makes the size of the input image more flexible. The channel transformer is also a structure similar to the spatial transformer. The main difference is that we change the object modelled by the transformer from the spatial relationship of the image block to the channel relationship. In this specific implementation, we use the number of channels as the token number, which is a simple but effective operation. Through two kinds of the transformer, we can get the relation mapping for the image fusion task.",
    "bbox": [73, 679, 491, 881],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "Composite Transformer The transformer of the two modes is combined into a transformer fusion module, which enables our fusion model to simultaneously learn",
    "bbox": [73, 893, 490, 944],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "spatial and channel relationships with global correlation. Through experiments, we find that using a channel transformer first and then using a spatial transformer can achieve better results. This shows that the combination of these two fusion modules is used to learn the coefficients that are more suitable for the fusion of infrared and visible images.",
    "bbox": [503, 679, 921, 797],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "C. Loss Function",
    "text_level": 1,
    "bbox": [504, 821, 638, 835],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "Previous image fusion algorithms based on deep learning usually use multiple loss functions to optimize the fused image from different perspectives during training. But this causes mutual conflict among loss functions. Inspired by [39], we make improvements on the basis of the SSIM loss. A single loss function achieves a good",
    "bbox": [501, 843, 921, 944],
    "page_idx": 4
  },
  {
    "type": "page_number",
    "text": "5",
    "bbox": [911, 30, 919, 40],
    "page_idx": 4
  },
  {
    "type": "text",
    "text": "fusion effect and avoids the problem of entanglement of multiple loss functions.",
    "bbox": [73, 68, 491, 101],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "SSIM [40] is a measure of structural similarity between images. As shown in Eq. (1), X, Y represent two images respectively. $\\mu$ and $\\sigma$ stand for mean and standard deviation respectively. $\\sigma_{XY}$ means the covariance between X and Y. $C_1$ and $C_2$ are stability coefficients.",
    "bbox": [73, 102, 490, 186],
    "page_idx": 5
  },
  {
    "type": "equation",
    "text": "\n$$\nS S I M (X, Y) = \\frac {\\left(2 \\mu_ {X} \\mu_ {Y} + C _ {1}\\right) \\left(2 \\sigma_ {X Y} + C _ {2}\\right)}{\\left(\\mu_ {X} ^ {2} + \\mu_ {Y} ^ {2} + C _ {1}\\right) \\left(\\sigma_ {X} ^ {2} + \\sigma_ {Y} ^ {2} + C _ {2}\\right)} \\tag {1}\n$$\n",
    "text_format": "latex",
    "bbox": [89, 193, 490, 237],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "Variance reflects the contrast of the image, and an image with high contrast is more helpful for the human visual system to capture information. As shown in Eq. (2), $M$ and $N$ are the image size in the horizontal and vertical directions respectively. $\\mu$ represents the mean of the image. We use variance as the standard and choose one as the reference image from infrared and visible images. The structural similarity between the fused image and the reference image is calculated, so that the fused image gradually approaches the reference image during the optimization process. This operation allows the fusion result to better obtain the important information from the infrared or visible image.",
    "bbox": [73, 238, 491, 457],
    "page_idx": 5
  },
  {
    "type": "equation",
    "text": "\n$$\n\\sigma^ {2} (X) = \\frac {\\sum_ {i = 0} ^ {M - 1} \\sum_ {j = 0} ^ {N - 1} [ X (i , j) - \\mu ] ^ {2}}{M N} \\tag {2}\n$$\n",
    "text_format": "latex",
    "bbox": [156, 464, 490, 516],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "In Eq. (3), $Var\\_SSIM$ calculates the structural similarity of the divided image. $\\sigma^2$ is the variance of the image. $I_X$ and $I_Y$ represent two source images respectively. $I_F$ means a fused image. $W$ is the number of image blocks after division, and the size of each image block is set to $11 \\times 11$ . Image segmentation is achieved through sliding windows. Through the sliding window, the fused image can well coordinate the consistency between different image blocks. The calculation of the loss function is shown in Eq. (4).",
    "bbox": [73, 521, 491, 690],
    "page_idx": 5
  },
  {
    "type": "equation",
    "text": "\n$$\nV a r _ {-} S S I M \\left(I _ {X}, I _ {Y}, I _ {F} \\mid W\\right) = \\left\\{ \\begin{array}{l} S S I M \\left(I _ {X}, I _ {F}\\right), \\\\ i f \\sigma^ {2} (X) > \\sigma^ {2} (Y) \\\\ S S I M \\left(I _ {Y}, I _ {F}\\right), \\\\ i f \\sigma^ {2} (Y) > = \\sigma^ {2} (X) \\end{array} \\right. \\tag {3}\n$$\n",
    "text_format": "latex",
    "bbox": [84, 710, 488, 768],
    "page_idx": 5
  },
  {
    "type": "equation",
    "text": "\n$$\nL _ {v a r - S S I M} = 1 - \\frac {1}{N} \\sum_ {W = 1} ^ {N} V a r _ {-} S S I M \\left(I _ {X}, I _ {Y}, I _ {F} \\mid W\\right) \\tag {4}\n$$\n",
    "text_format": "latex",
    "bbox": [83, 785, 488, 816],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "IV. EXPERIMENTS",
    "text_level": 1,
    "bbox": [207, 833, 357, 848],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "A. Setup",
    "text_level": 1,
    "bbox": [73, 854, 145, 871],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "Datasets. In the training phase, 40,000 pairs of corresponding infrared and visible images are selected as the training data from the KAIST [41] data set. KAIST data set is a pedestrian data set containing various general",
    "bbox": [73, 877, 490, 946],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "scenes of campus, street and countryside. Each picture contains a visible image and a corresponding infrared image. At present, some end-to-end image fusion algorithms [16] use it as training data. The training image size is set to $256 \\times 256$ pixels. In the testing phase, we use 10 pairs of images from the test image of [18] as the test set. The size of the test data is arbitrary (generally not more than $2048 \\times 2048$ pixels).",
    "bbox": [501, 68, 921, 203],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "Hyper-Parameters. In the training phase, we choose Adam as the optimizer and the learning rate is set to a constant of 0.0001. Training data includes 40,000 pairs of images and batch size is set to 16. Complete training requires 20 epochs. Inspired by [36], [37], we chose fixed values for some parameters in the transformer fusion module. The patch size of the spatial transformer and channel transformer is set to 4 and 16 respectively. Taking into account the different dimensions of the data processed by a spatial transformer and channel transformer, the embedding dimensions are set to 2048 and 128 respectively. Our model is implemented with NVIDIA TITAN Xp and Pytorch.",
    "bbox": [501, 204, 921, 422],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "Compared Methods. The proposed method is compared with 15 methods in subjective and objective evaluation, including classic and latest methods. These are: Ratio of Low-pass Pyramid (RP) [42], Wavelet [43], Dual-Tree Complex Wavelet Transform (DTCWT) [44], Curvelet Transform (CVT) [45], Multi-resolution Singular Value Decomposition (MSVD) [46], gradient transfer and total variation minimization (GTF) [47], DenseFuse [18], DeepFuse [48], a general end-to-end fusion network(IFCNN) [21], FusionGAN [20], NestFuse [19], PMGI [49], U2Fusion [24], RFN-Nest [16], and MEFGAN [50], respectively.",
    "bbox": [503, 422, 921, 625],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "B. Results Analysis",
    "text_level": 1,
    "bbox": [504, 651, 653, 667],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "We use subjective evaluation and objective evaluation to measure the performance of the fusion algorithm. Subjective evaluation judges whether the fusion result conforms to human visual perception, such as clarity, salient information, etc. Therefore, the subjective evaluation method puts the fused images obtained by different algorithms together for intuitive visual comparison.",
    "bbox": [501, 674, 919, 792],
    "page_idx": 5
  },
  {
    "type": "text",
    "text": "In Figure. 8, the fusion results of all methods are put together for subjective judgment. Although some methods can achieve a certain fusion effect, it introduces more artificial noise, which affects the acquisition of visual information, such as (c), (d), (e), (f), (g). In contrast, the fusion result produced by the deep learning method is more in line with human vision. Most methods based on deep learning can maintain the basic environmental information of the visible image and the salient",
    "bbox": [501, 792, 921, 944],
    "page_idx": 5
  },
  {
    "type": "page_number",
    "text": "6",
    "bbox": [911, 30, 919, 40],
    "page_idx": 5
  },
  {
    "type": "table",
    "img_path": "images/cf3537d720104b17c0092e5f99424d59d7b500502efd3e78bd1ce9c759019f36.jpg",
    "table_caption": ["TABLE I QUANTITATIVE EVALUATION RESULTS OF INFRARED AND VISIBLE IMAGE FUSION TASKS. THE BEST THREE RESULTS ARE HIGHLIGHTED IN RED, BROWN AND BLUE FONTS."],
    "table_footnote": [],
    "table_body": "<table><tr><td>Method</td><td>SF</td><td>EN</td><td>\\(Q_{abf}\\)</td><td>\\(FMI_w\\)</td><td>MS-SSIM</td><td>\\(FMI_{pixel}\\)</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>RP [42]</td><td>12.7249</td><td>6.5397</td><td>0.4341</td><td>0.3831</td><td>0.8404</td><td>0.8929</td><td>13.0794</td><td>63.2427</td><td>0.6420</td></tr><tr><td>Wavelet [43]</td><td>6.2567</td><td>6.2454</td><td>0.3214</td><td>0.4183</td><td>0.8598</td><td>0.9096</td><td>12.4907</td><td>52.2292</td><td>0.2921</td></tr><tr><td>DTCWT [44]</td><td>11.1296</td><td>6.4791</td><td>0.5258</td><td>0.4419</td><td>0.9053</td><td>0.9186</td><td>12.9583</td><td>60.1138</td><td>0.5986</td></tr><tr><td>CVT [45]</td><td>11.1129</td><td>6.4989</td><td>0.4936</td><td>0.4240</td><td>0.8963</td><td>0.9156</td><td>12.9979</td><td>60.4005</td><td>0.5930</td></tr><tr><td>MSVD [46]</td><td>8.5538</td><td>6.2807</td><td>0.3328</td><td>0.2828</td><td>0.8652</td><td>0.9036</td><td>12.5613</td><td>52.9853</td><td>0.3031</td></tr><tr><td>GTF [47]</td><td>9.5022</td><td>6.5781</td><td>0.4400</td><td>0.4494</td><td>0.8169</td><td>0.9056</td><td>13.1562</td><td>66.0773</td><td>0.4071</td></tr><tr><td>DenseFuse [18]</td><td>9.3238</td><td>6.8526</td><td>0.4735</td><td>0.4389</td><td>0.8692</td><td>0.9061</td><td>13.7053</td><td>81.7283</td><td>0.6875</td></tr><tr><td>DeepFuse [48]</td><td>8.3500</td><td>6.6102</td><td>0.3847</td><td>0.4214</td><td>0.9138</td><td>0.9041</td><td>13.2205</td><td>66.8872</td><td>0.5752</td></tr><tr><td>IFCNN [21]</td><td>11.8590</td><td>6.6454</td><td>0.4962</td><td>0.4052</td><td>0.9129</td><td>0.9007</td><td>13.2909</td><td>73.7053</td><td>0.6090</td></tr><tr><td>FusionGAN [20]</td><td>8.0476</td><td>6.5409</td><td>0.2682</td><td>0.4083</td><td>0.6135</td><td>0.8875</td><td>13.0817</td><td>61.6339</td><td>0.4928</td></tr><tr><td>NestFuse [19]</td><td>9.7807</td><td>6.8745</td><td>0.5011</td><td>0.4483</td><td>0.8817</td><td>0.9025</td><td>13.7491</td><td>83.0530</td><td>0.7195</td></tr><tr><td>PMGI [49]</td><td>8.7195</td><td>6.8688</td><td>0.3787</td><td>0.4018</td><td>0.8684</td><td>0.9001</td><td>13.7376</td><td>69.2364</td><td>0.6904</td></tr><tr><td>U2Fusion [24]</td><td>11.0368</td><td>6.7227</td><td>0.3934</td><td>0.3594</td><td>0.9147</td><td>0.8942</td><td>13.4453</td><td>66.5035</td><td>0.7680</td></tr><tr><td>RFN-Nest [16]</td><td>5.8457</td><td>6.7274</td><td>0.3292</td><td>0.3052</td><td>0.8959</td><td>0.9063</td><td>13.4547</td><td>67.8765</td><td>0.5404</td></tr><tr><td>MEFGAN [50]</td><td>7.8481</td><td>6.9727</td><td>0.2076</td><td>0.1826</td><td>0.6709</td><td>0.8844</td><td>13.9454</td><td>43.7332</td><td>0.7330</td></tr><tr><td>TGFuse(ours)</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>",
    "bbox": [109, 122, 887, 383],
    "page_idx": 6
  },
  {
    "type": "table",
    "img_path": "images/d90a40945e10a18364d356b34b563bc9460b41479846ceb26f5084e1ab72e3d7.jpg",
    "table_caption": ["TABLE II THE OBJECTIVE EVALUATION ON WHETHER TO USE GAN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."],
    "table_footnote": [],
    "table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o GAN</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>GAN</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>",
    "bbox": [133, 439, 864, 489],
    "page_idx": 6
  },
  {
    "type": "table",
    "img_path": "images/c0f18fc56fe13ae302e93fc45acfef41b9f5cb8a188e80bd9175f7a7066572bb.jpg",
    "table_caption": ["TABLE III THE OBJECTIVE EVALUATION ON DIFFERENT TRANSFORMER FUSION METHOD. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."],
    "table_footnote": [],
    "table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>Spatial</td><td>10.8364</td><td>6.8665</td><td>0.5491</td><td>0.4281</td><td>0.9337</td><td>0.9173</td><td>13.7330</td><td>86.2626</td><td>0.7247</td></tr><tr><td>Channel</td><td>11.1283</td><td>6.9520</td><td>0.5622</td><td>0.4328</td><td>0.9107</td><td>0.9169</td><td>13.9040</td><td>91.2356</td><td>0.7417</td></tr><tr><td>Spatial+Channel</td><td>10.8808</td><td>6.9161</td><td>0.5304</td><td>0.4139</td><td>0.9172</td><td>0.9089</td><td>13.8323</td><td>94.6343</td><td>0.7565</td></tr><tr><td>Channel+Spatial</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr></table>",
    "bbox": [112, 545, 885, 627],
    "page_idx": 6
  },
  {
    "type": "table",
    "img_path": "images/59e218dffeeadc8b769f18422b0ca81d7800c95c9feb9996ca7410cd9b0b0165.jpg",
    "table_caption": ["TABLE IV THE OBJECTIVE EVALUATION ON WHETHER TO USE POSITION EMBEDDING. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."],
    "table_footnote": [],
    "table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o PE</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>PE</td><td>10.8748</td><td>6.9332</td><td>0.5522</td><td>0.4186</td><td>0.9340</td><td>0.9174</td><td>13.8664</td><td>90.5422</td><td>0.7654</td></tr></table>",
    "bbox": [142, 681, 857, 734],
    "page_idx": 6
  },
  {
    "type": "table",
    "img_path": "images/6ab17f918c14b8dbea84dd0c4afb50af216e3fcb8c6bb5186d20e64202fba0d1.jpg",
    "table_caption": ["TABLE V THE OBJECTIVE EVALUATION ON DIFFERENT ENCODER LAYERS OF TRANSFORMER. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS. (\"/\" MEANS TRAINING FAILURE)"],
    "table_footnote": [],
    "table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>3-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td>11.1740</td><td>6.8722</td><td>0.5623</td><td>0.4209</td><td>0.9404</td><td>0.9198</td><td>13.7443</td><td>86.7715</td><td>0.7539</td></tr></table>",
    "bbox": [140, 803, 857, 869],
    "page_idx": 6
  },
  {
    "type": "text",
    "text": "human of the infrared image at the same time. Compared with other methods, our method not only highlights the",
    "bbox": [73, 898, 491, 931],
    "page_idx": 6
  },
  {
    "type": "text",
    "text": "infrared information of the person in the red frame but also maintains the visible details of the door. The sky as",
    "bbox": [503, 898, 921, 931],
    "page_idx": 6
  },
  {
    "type": "page_number",
    "text": "7",
    "bbox": [911, 30, 919, 40],
    "page_idx": 6
  },
|
| 954 |
+
{
|
| 955 |
+
"type": "table",
|
| 956 |
+
"img_path": "images/dd57ac0370db7e70f73091c0c193d81e18940ca266c0a4d0b29dc471a58f9f60.jpg",
|
| 957 |
+
"table_caption": [
|
| 958 |
+
"TABLE VI THE OBJECTIVE EVALUATION ON DIFFERENT LAYERS OF CNN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.(\"/\") MEANS TRAINING FAILURE)"
|
| 959 |
+
],
|
| 960 |
+
"table_footnote": [],
|
| 961 |
+
"table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>2-layers</td><td>10.3438</td><td>6.7281</td><td>0.5560</td><td>0.4314</td><td>0.9006</td><td>0.9097</td><td>13.4562</td><td>94.2280</td><td>0.6862</td></tr><tr><td>3-layers</td><td>11.0769</td><td>6.8959</td><td>0.5497</td><td>0.4272</td><td>0.9298</td><td>0.9157</td><td>13.7919</td><td>92.5518</td><td>0.7517</td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr></table>",
|
| 962 |
+
"bbox": [
|
| 963 |
+
140,
|
| 964 |
+
122,
|
| 965 |
+
856,
|
| 966 |
+
204
|
| 967 |
+
],
|
| 968 |
+
"page_idx": 7
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "table",
|
| 972 |
+
"img_path": "images/b5e548b51fc6ef122440d9c4a710f7b09051d02389d87b2b5841b98434cb743e.jpg",
|
| 973 |
+
"table_caption": [
|
| 974 |
+
"TABLE VII THE OBJECTIVE EVALUATION ON DIFFERENT CHANNELS. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."
|
| 975 |
+
],
|
| 976 |
+
"table_footnote": [],
|
| 977 |
+
"table_body": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>32-channels</td><td>10.6360</td><td>6.9228</td><td>0.5715</td><td>0.4370</td><td>0.9276</td><td>0.9206</td><td>13.8456</td><td>90.1796</td><td>0.7061</td></tr><tr><td>64-channels</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>128-channels</td><td>11.1181</td><td>6.9388</td><td>0.5545</td><td>0.4142</td><td>0.9368</td><td>0.9163</td><td>13.8776</td><td>88.5524</td><td>0.8069</td></tr></table>",
|
| 978 |
+
"bbox": [
|
| 979 |
+
122,
|
| 980 |
+
258,
|
| 981 |
+
874,
|
| 982 |
+
325
|
| 983 |
+
],
|
| 984 |
+
"page_idx": 7
|
| 985 |
+
},
|
| 986 |
+
{
|
| 987 |
+
"type": "text",
|
| 988 |
+
"text": "the background also retains the high-resolution visible scene. Such a fused image is friendly and easy to accept information for human vision.",
|
| 989 |
+
"bbox": [
|
| 990 |
+
73,
|
| 991 |
+
353,
|
| 992 |
+
491,
|
| 993 |
+
402
|
| 994 |
+
],
|
| 995 |
+
"page_idx": 7
|
| 996 |
+
},
|
| 997 |
+
{
|
| 998 |
+
"type": "text",
|
| 999 |
+
"text": "There are many different evaluation indicators for objective evaluation. We have selected nine common evaluation indicators for the quality of fused images. These are: Spatial Frequency (SF) [51], Entropy (EN) [52], quality of images $(\\mathrm{Q}_{abf})$ [53], feature mutual information with wavelet transform(FMIw) [54], multiscale SSIM (MS-SSIM) [55], feature mutual information with pixel(FMIpixel) [54] Standard Deviation of Image (SD) [56], Visual Information Fidelity (VIF) [57], and mutual information (MI) [58], respectively. In Table.I, We compared the performance of all methods on 9 evaluation indicators. The best three results are highlighted in red, brown and blue fonts. Our method performed best on 7 indicators and also achieved third place on the remaining two indicators. Through subjective and objective evaluation, our method is proved to have obvious advantages in performance.",
|
| 1000 |
+
"bbox": [
|
| 1001 |
+
73,
|
| 1002 |
+
404,
|
| 1003 |
+
491,
|
| 1004 |
+
691
|
| 1005 |
+
],
|
| 1006 |
+
"page_idx": 7
|
| 1007 |
+
},
|
| 1008 |
+
{
|
| 1009 |
+
"type": "text",
|
| 1010 |
+
"text": "C. Ablation Study",
|
| 1011 |
+
"text_level": 1,
|
| 1012 |
+
"bbox": [
|
| 1013 |
+
75,
|
| 1014 |
+
719,
|
| 1015 |
+
215,
|
| 1016 |
+
734
|
| 1017 |
+
],
|
| 1018 |
+
"page_idx": 7
|
| 1019 |
+
},
|
| 1020 |
+
{
|
| 1021 |
+
"type": "text",
|
| 1022 |
+
"text": "GAN. Adversarial learning during training is very effective in image generation tasks, but how to combine it with fusion tasks is a problem in its application. Our original method only has the generation part of the fused image and does not include two discriminators. In this case, our method has surpassed the previous method in most objective evaluation indicators. In order to enhance the characteristics of the fused image: the high resolution of the visible image and the highlighted part of the infrared image, we introduce adversarial learning into the training process. We use the pre-trained VGG-16 network as a discriminator to enhance the characteristics",
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
73,
|
| 1025 |
+
741,
|
| 1026 |
+
491,
|
| 1027 |
+
944
|
| 1028 |
+
],
|
| 1029 |
+
"page_idx": 7
|
| 1030 |
+
},
|
| 1031 |
+
{
|
| 1032 |
+
"type": "text",
|
| 1033 |
+
"text": "of different modalities at the feature level. The objective evaluation results are shown in the Table. II. Compared with the method that does not use adversarial training, the new method with GAN has improved on seven indicators. This also proves the effectiveness of introducing generative confrontation methods.",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
501,
|
| 1036 |
+
353,
|
| 1037 |
+
919,
|
| 1038 |
+
453
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 7
|
| 1041 |
+
},
|
| 1042 |
+
{
|
| 1043 |
+
"type": "text",
|
| 1044 |
+
"text": "Transformer Fusion Module. We propose two transformer fusion methods: spatial transformer and channel transformer. They can work alone or in combination with each other. In Table. III, we separately verify the results of using the two transformer fusion modules alone and in combination. The effect of passing through the channel transformer first and then passing through the space transformer will be better. We believe that it is more beneficial for fusion to first pay attention to the channel relationship between corresponding blocks in the process of modelling.",
|
| 1045 |
+
"bbox": [
|
| 1046 |
+
501,
|
| 1047 |
+
455,
|
| 1048 |
+
921,
|
| 1049 |
+
640
|
| 1050 |
+
],
|
| 1051 |
+
"page_idx": 7
|
| 1052 |
+
},
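
The ablation entry above compares using the spatial and channel transformers alone and in combination, with the channel-first ordering working best. A minimal sketch of that composition (hypothetical module names; pixel-level spatial tokens are used here for brevity, whereas the paper's spatial transformer works on image patches) might look like:

```python
import torch
import torch.nn as nn

class TokenAttention(nn.Module):
    """Self-attention over a sequence of tokens (batch_first)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, N, dim)
        out, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + out)

class ChannelThenSpatialFusion(nn.Module):
    """Channel-token attention first, spatial-token attention second."""
    def __init__(self, channels: int, h: int, w: int):
        super().__init__()
        self.channel_attn = TokenAttention(dim=h * w)      # tokens = channels
        self.spatial_attn = TokenAttention(dim=channels)   # tokens = positions

    def forward(self, feat: torch.Tensor) -> torch.Tensor:  # (B, C, H, W)
        b, c, h, w = feat.shape
        x = feat.flatten(2)                 # (B, C, H*W): C channel tokens
        x = self.channel_attn(x)
        x = x.transpose(1, 2)               # (B, H*W, C): H*W spatial tokens
        x = self.spatial_attn(x)
        return x.transpose(1, 2).reshape(b, c, h, w)

# feats = torch.randn(1, 64, 32, 32)
# refined = ChannelThenSpatialFusion(64, 32, 32)(feats)
```
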
|
| 1053 |
+
{
|
| 1054 |
+
"type": "text",
|
| 1055 |
+
"text": "Position Embedding. In our transformer fusion method, position embedding is removed because the category information provided by position embedding is not needed in the fusion task. However, whether the direct removal of position embedding has an effect on the training of the transformer has not been verified. Therefore, we train the TGFuse model with and without position embedding respectively. Comparing the indicators of the fusion results in Table IV, we find that removing position embedding has a positive effect on the results.",
|
| 1056 |
+
"bbox": [
|
| 1057 |
+
501,
|
| 1058 |
+
641,
|
| 1059 |
+
921,
|
| 1060 |
+
824
|
| 1061 |
+
],
|
| 1062 |
+
"page_idx": 7
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "text",
|
| 1066 |
+
"text": "Transformer Module Layers. The transformer model we use is a multi-layer encoder model based on ViT. The number of encoder layers also has a great impact on performance. Unlike classification tasks, fusion tasks are less complex and require fewer layers. But too few layers may also lead to failure of fusion relationship learning. Therefore, we set different values for experiments to find",
|
| 1067 |
+
"bbox": [
|
| 1068 |
+
501,
|
| 1069 |
+
825,
|
| 1070 |
+
921,
|
| 1071 |
+
944
|
| 1072 |
+
],
|
| 1073 |
+
"page_idx": 7
|
| 1074 |
+
},
|
| 1075 |
+
{
|
| 1076 |
+
"type": "page_number",
|
| 1077 |
+
"text": "8",
|
| 1078 |
+
"bbox": [
|
| 1079 |
+
911,
|
| 1080 |
+
30,
|
| 1081 |
+
919,
|
| 1082 |
+
40
|
| 1083 |
+
],
|
| 1084 |
+
"page_idx": 7
|
| 1085 |
+
},
|
| 1086 |
+
{
|
| 1087 |
+
"type": "text",
|
| 1088 |
+
"text": "the number of layers most suitable for the fusion task. The comparative results of the experiment are shown in the Table. V. When the number of layers is three, the test result is a meaningless black image. It may be that too few layers cause the transformer fusion module can not learn the available fusion relationship. When the number of layers is five, the test result becomes worse. This may be because the fusion relationship learned by the deep transformer fusion module is redundant. We select the most suitable number of layers (4 layers) based on the experimental results.",
|
| 1089 |
+
"bbox": [
|
| 1090 |
+
73,
|
| 1091 |
+
69,
|
| 1092 |
+
491,
|
| 1093 |
+
252
|
| 1094 |
+
],
|
| 1095 |
+
"page_idx": 8
|
| 1096 |
+
},
|
| 1097 |
+
{
|
| 1098 |
+
"type": "text",
|
| 1099 |
+
"text": "CNN Layers. Firstly, multi-layer CNN is used to extract features from the input image, which can help the transformer module to converge faster. The number of layers of CNN (that is, the number of \"Res-Block\") affects the granularity and depth of the extracted features. We set different values to experiment to find the most suitable number of CNN layers. The more layers, the more times the image is downsampled. When the image block is too small, the model cannot learn an effective fusion relationship. As shown in Table. VI, when the depth is 4 layers, the model learns the best fusion relationship. When the layer is deeper, the resulting image is meaningless black blocks. This means that if the feature block is too small, the fusion module cannot fuse information effectively.",
|
| 1100 |
+
"bbox": [
|
| 1101 |
+
75,
|
| 1102 |
+
255,
|
| 1103 |
+
491,
|
| 1104 |
+
507
|
| 1105 |
+
],
|
| 1106 |
+
"page_idx": 8
|
| 1107 |
+
},
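
As a rough aid to the depth/downsampling trade-off described above (hypothetical layer names, not the paper's exact architecture), each stride-2 "Res-Block" halves the spatial size, so four blocks shrink a 256x256 input to 16x16 feature maps, and deeper stacks quickly leave patches too small to fuse:

```python
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block that downsamples by 2 (stride-2 conv on both paths)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=2)

    def forward(self, x):
        return F.relu(self.body(x) + self.skip(x))

def make_encoder(depth: int = 4, channels: int = 64) -> nn.Sequential:
    """depth res-blocks -> spatial size divided by 2**depth."""
    blocks = [ResBlock(2, channels)]   # 2 = concatenated infrared + visible inputs
    blocks += [ResBlock(channels, channels) for _ in range(depth - 1)]
    return nn.Sequential(*blocks)
```
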
|
| 1108 |
+
{
|
| 1109 |
+
"type": "text",
|
| 1110 |
+
"text": "CNN Channels. As an important dimension of image features, the number of feature channels is also an important factor influencing algorithm performance. In the process of feature extraction, we get four image features with the same dimensions but different scales. The difference in the number of channels means that the distribution of channel dimension information is different. In the ablation experiment, we choose a few typical values as the number of channels. After comparison in Table. VII, we select the number of channels (64 channels) with the best performance.",
|
| 1111 |
+
"bbox": [
|
| 1112 |
+
73,
|
| 1113 |
+
507,
|
| 1114 |
+
493,
|
| 1115 |
+
694
|
| 1116 |
+
],
|
| 1117 |
+
"page_idx": 8
|
| 1118 |
+
},
|
| 1119 |
+
{
|
| 1120 |
+
"type": "text",
|
| 1121 |
+
"text": "V. CONCLUSION",
|
| 1122 |
+
"text_level": 1,
|
| 1123 |
+
"bbox": [
|
| 1124 |
+
215,
|
| 1125 |
+
717,
|
| 1126 |
+
351,
|
| 1127 |
+
732
|
| 1128 |
+
],
|
| 1129 |
+
"page_idx": 8
|
| 1130 |
+
},
|
| 1131 |
+
{
|
| 1132 |
+
"type": "text",
|
| 1133 |
+
"text": "In this paper, we proposed an infrared and visible image fusion method based on a lightweight transformer module and generative adversarial learning. The proposed transformer is deeply involved in the fusion task as a fusion relation learning module. Adversarial learning provides generators with different modal characteristics during the training process at the feature level. This is the first attempt of deep combination and application of transformer and adversarial learning in the image fusion task. Our method has also achieved outstanding performance in subjective and objective evaluation, which proves the effectiveness and advancement of our method.",
|
| 1134 |
+
"bbox": [
|
| 1135 |
+
73,
|
| 1136 |
+
742,
|
| 1137 |
+
495,
|
| 1138 |
+
946
|
| 1139 |
+
],
|
| 1140 |
+
"page_idx": 8
|
| 1141 |
+
},
|
| 1142 |
+
{
|
| 1143 |
+
"type": "text",
|
| 1144 |
+
"text": "REFERENCES",
|
| 1145 |
+
"text_level": 1,
|
| 1146 |
+
"bbox": [
|
| 1147 |
+
661,
|
| 1148 |
+
69,
|
| 1149 |
+
767,
|
| 1150 |
+
84
|
| 1151 |
+
],
|
| 1152 |
+
"page_idx": 8
|
| 1153 |
+
},
|
| 1154 |
+
{
|
| 1155 |
+
"type": "list",
|
| 1156 |
+
"sub_type": "ref_text",
|
| 1157 |
+
"list_items": [
|
| 1158 |
+
"[1] J. Sun, C. Li, X.-J. Wu, V. Palade, and W. Fang, \"An effective method of weld defect detection and classification based on machine vision,\" IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6322-6333, 2019. 1",
|
| 1159 |
+
"[2] X. Luo, Z. Zhang, and X. Wu, \"A novel algorithm of remote sensing image fusion based on shift-invariant shearlet transform and regional selection,\" AEU-International Journal of Electronics and Communications, vol. 70, no. 2, pp. 186-197, 2016. 1",
|
| 1160 |
+
"[3] X. Luo, Z. Zhang, B. Zhang, and X.-J. Wu, \"Image fusion with contextual statistical similarity and nonsubsampled shearlet transform,\" IEEE Sensors Journal, vol. 17, no. 6, pp. 1760-1771, 2017. 1",
|
| 1161 |
+
"[4] H. Li, X.-J. Wu, and J. Kittler, \"Mdlatrr: A novel decomposition method for infrared and visible image fusion,\" IEEE Transactions on Image Processing, vol. 29, pp. 4733-4746, 2020. 1",
|
| 1162 |
+
"[5] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, \"Learning low-rank and sparse discriminative correlation filters for coarse-to-fine visual object tracking,\" IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 10, pp. 3727-3739, 2019. 1",
|
| 1163 |
+
"[6] T. Xu, Z. Feng, X.-J. Wu, and J. Kittler, \"Adaptive channel selection for robust visual object tracking with discriminative correlation filters,\" International Journal of Computer Vision, vol. 129, no. 5, pp. 1359-1375, 2021. 1",
|
| 1164 |
+
"[7] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, \"An accelerated correlation filter tracker,\" Pattern Recognition, vol. 102, p. 107172, 2020. 1",
|
| 1165 |
+
"[8] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion,” in 15th Pacific Conference on Computer Graphics and Applications (PG'07). IEEE, 2007, pp. 382–390. 1",
|
| 1166 |
+
"[9] Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proceedings of the IEEE, vol. 87, no. 8, pp. 1315–1326, 1999. 1",
|
| 1167 |
+
"[10] S.-G. Chen and X.-J. Wu, “A new fuzzy twin support vector machine for pattern classification,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 9, pp. 1553–1564, 2018. 1",
|
| 1168 |
+
"[11] C. Li, W. Yuan, A. Bovik, and X. Wu, \"No-reference blur index using blur comparisons,\" *Electronics letters*, vol. 47, no. 17, pp. 962-963, 2011. 1",
|
| 1169 |
+
"[12] C. Chen, Y. Li, W. Liu, and J. Huang, \"Image fusion with local spectral consistency and dynamic gradient sparsity,\" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 2760-2765. 1",
|
| 1170 |
+
"[13] M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Information Fusion, vol. 25, pp. 72–84, 2015. 1",
|
| 1171 |
+
"[14] Y.-J. Zheng, J.-Y. Yang, J. Yang, X.-J. Wu, and Z. Jin, “Nearest neighbour line nonparametric discriminant analysis for feature extraction,” *Electronics Letters*, vol. 42, no. 12, pp. 679–680, 2006. 1",
|
| 1172 |
+
"[15] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network,” Information Fusion, vol. 36, pp. 191–207, 2017. 1",
|
| 1173 |
+
"[16] H. Li, X.-J. Wu, and J. Kittler, \"Rfn-nest: An end-to-end residual fusion network for infrared and visible images,\" Information Fusion, 2021. 1, 6, 7",
|
| 1174 |
+
"[17] H. Li, X.-j. Wu, and T. S. Durrani, \"Infrared and visible image fusion with resnet and zero-phase component analysis,\" Infrared Physics & Technology, vol. 102, p. 103039, 2019. 1, 2"
|
| 1175 |
+
],
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
506,
|
| 1178 |
+
104,
|
| 1179 |
+
921,
|
| 1180 |
+
944
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 8
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "page_number",
|
| 1186 |
+
"text": "9",
|
| 1187 |
+
"bbox": [
|
| 1188 |
+
910,
|
| 1189 |
+
30,
|
| 1190 |
+
919,
|
| 1191 |
+
40
|
| 1192 |
+
],
|
| 1193 |
+
"page_idx": 8
|
| 1194 |
+
},
|
| 1195 |
+
{
|
| 1196 |
+
"type": "list",
|
| 1197 |
+
"sub_type": "ref_text",
|
| 1198 |
+
"list_items": [
|
| 1199 |
+
"[18] H. Li and X.-J. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614–2623, 2018. 1, 2, 6, 7",
|
| 1200 |
+
"[19] H. Li, X.-J. Wu, and T. Durrani, “Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 12, pp. 9645–9656, 2020. 1, 6, 7",
|
| 1201 |
+
"[20] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “Fusiongan: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol. 48, pp. 11–26, 2019. 2, 3, 6, 7",
|
| 1202 |
+
"[21] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, \"Ifcnn: A general image fusion framework based on convolutional neural network,\" Information Fusion, vol. 54, pp. 99-118, 2020. 2, 6, 7",
|
| 1203 |
+
"[22] Y. Fu, X.-J. Wu, and T. Durrani, \"Image fusion based on generative adversarial network consistent with perception,\" Information Fusion, 2021. 2, 3",
|
| 1204 |
+
"[23] H. Li, X.-J. Wu, and J. Kittler, \"Infrared and visible image fusion using a deep learning framework,\" in 2018 24th international conference on pattern recognition (ICPR). IEEE, 2018, pp. 2705-2710. 2",
|
| 1205 |
+
"[24] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, \"U2fusion: A unified unsupervised image fusion network,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2, 6, 7",
|
| 1206 |
+
"[25] J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang, \"Infrared and visible image fusion via detail preserving adversarial learning,\" Information Fusion, vol. 54, pp. 85-98, 2020. 2",
|
| 1207 |
+
"[26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" Advances in neural information processing systems, vol. 27, 2014. 3",
|
| 1208 |
+
"[27] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, \"Least squares generative adversarial networks,\" in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2794-2802. 3",
|
| 1209 |
+
"[28] J. Zhao, M. Mathieu, and Y. LeCun, \"Energy-based generative adversarial networks,\" in 5th International Conference on Learning Representations, ICLR 2017, 2017. 3",
|
| 1210 |
+
"[29] D. Berthelot, T. Schumm, and L. Metz, “Began: Boundary equilibrium generative adversarial networks,” arXiv preprint arXiv:1703.10717, 2017. 3",
|
| 1211 |
+
"[30] J. Liang, H. Zeng, and L. Zhang, \"High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9392-9400. 3",
|
| 1212 |
+
"[31] H. Liu, Z. Wan, W. Huang, Y. Song, X. Han, and J. Liao, \"Pd-gan: Probabilistic diverse gan for image inpainting,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9371-9381. 3",
|
| 1213 |
+
"[32] W. Xia, Y. Yang, J.-H. Xue, and B. Wu, “Tedigan: Text-guided diverse face image generation and manipulation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2256–2265. 3",
|
| 1214 |
+
"[33] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410. 3",
|
| 1215 |
+
"[34] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, \"Unpaired image-to-image translation using cycle-consistent adversarial networks,\" in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223-2232. 3"
|
| 1216 |
+
],
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
76,
|
| 1219 |
+
70,
|
| 1220 |
+
491,
|
| 1221 |
+
943
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 9
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "list",
|
| 1227 |
+
"sub_type": "ref_text",
|
| 1228 |
+
"list_items": [
|
| 1229 |
+
"[35] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008. 3",
|
| 1230 |
+
"[36] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in International Conference on Learning Representations, 2020. 3, 6",
|
| 1231 |
+
"[37] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12299-12310. 3, 6",
|
| 1232 |
+
"[38] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European conference on computer vision. Springer, 2016, pp. 694–711. 4",
|
| 1233 |
+
"[39] R. Hou, D. Zhou, R. Nie, D. Liu, L. Xiong, Y. Guo, and C. Yu, \"Vif-net: an unsupervised framework for infrared and visible image fusion,\" IEEE Transactions on Computational Imaging, vol. 6, pp. 640-651, 2020. 5",
|
| 1234 |
+
"[40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, \"Image quality assessment: from error visibility to structural similarity,\" IEEE transactions on image processing, vol. 13, no. 4, pp. 600-612, 2004. 6",
|
| 1235 |
+
"[41] S. Hwang, J. Park, N. Kim, Y. Choi, and I. So Kweon, \"Multispectral pedestrian detection: Benchmark dataset and baseline,\" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1037-1045. 6",
|
| 1236 |
+
"[42] A. Toet, \"Image fusion by a ratio of low-pass pyramid,\" Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989. 6, 7",
|
| 1237 |
+
"[43] L. J. Chipman, T. M. Orr, and L. N. Graham, \"Wavelets and image fusion,\" in Proceedings., International Conference on Image Processing, vol. 3. IEEE, 1995, pp. 248-251. 6, 7",
|
| 1238 |
+
"[44] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, \"Pixel-and region-based image fusion with complex wavelets,\" Information fusion, vol. 8, no. 2, pp. 119-130, 2007. 6, 7",
|
| 1239 |
+
"[45] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Information fusion, vol. 8, no. 2, pp. 143–156, 2007. 6, 7",
|
| 1240 |
+
"[46] V. Naidu, \"Image fusion technique using multi-resolution singular value decomposition,\" Defence Science Journal, vol. 61, no. 5, p. 479, 2011. 6, 7",
|
| 1241 |
+
"[47] J. Ma, C. Chen, C. Li, and J. Huang, \"Infrared and visible image fusion via gradient transfer and total variation minimization,\" Information Fusion, vol. 31, pp. 100-109, 2016. 6, 7",
|
| 1242 |
+
"[48] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, \"Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs,\" in ICCV, vol. 1, no. 2, 2017, p. 3. 6, 7",
|
| 1243 |
+
"[49] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma, “Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 12797-12804. 6, 7",
|
| 1244 |
+
"[50] H. Xu, J. Ma, and X.-P. Zhang, “Mef-gan: Multi-exposure image fusion via generative adversarial networks,” IEEE Transactions on Image Processing, vol. 29, pp. 7203-7216, 2020. 6, 7",
|
| 1245 |
+
"[51] A. M. Eskicioglu and P. S. Fisher, \"Image quality measures and their performance,\" IEEE Transactions on communications, vol. 43, no. 12, pp. 2959-2965, 1995. 8",
|
| 1246 |
+
"[52] J. W. Roberts, J. A. Van Aardt, and F. B. Ahmed, \"Assessment of image fusion procedures using entropy, image quality, and multispectral classification,\" Journal of Applied Remote Sensing, vol. 2, no. 1, p. 023522, 2008. 8"
|
| 1247 |
+
],
|
| 1248 |
+
"bbox": [
|
| 1249 |
+
506,
|
| 1250 |
+
70,
|
| 1251 |
+
919,
|
| 1252 |
+
943
|
| 1253 |
+
],
|
| 1254 |
+
"page_idx": 9
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "page_number",
|
| 1258 |
+
"text": "10",
|
| 1259 |
+
"bbox": [
|
| 1260 |
+
903,
|
| 1261 |
+
30,
|
| 1262 |
+
919,
|
| 1263 |
+
40
|
| 1264 |
+
],
|
| 1265 |
+
"page_idx": 9
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "list",
|
| 1269 |
+
"sub_type": "ref_text",
|
| 1270 |
+
"list_items": [
|
| 1271 |
+
"[53] C. Xydeas, , and V. Petrovic, “Objective image fusion performance measure,” *Electronics letters*, vol. 36, no. 4, pp. 308–309, 2000. 8",
|
| 1272 |
+
"[54] M. Haghighat and M. A. Razian, \"Fast-fmi: Non-reference image fusion metric,\" in 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2014, pp. 1-3. 8",
|
| 1273 |
+
"[55] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, 2015. 8",
|
| 1274 |
+
"[56] Y.-J. Rao, \"In-fibre bragg grating sensors,\" Measurement science and technology, vol. 8, no. 4, p. 355, 1997. 8",
|
| 1275 |
+
"[57] H. R. Sheikh and A. C. Bovik, \"Image information and visual quality,\" IEEE Transactions on image processing, vol. 15, no. 2, pp. 430-444, 2006. 8",
|
| 1276 |
+
"[58] G. Qu, D. Zhang, and P. Yan, \"Information measure for performance of image fusion,\" *Electronics letters*, vol. 38, no. 7, pp. 313-315, 2002. 8"
|
| 1277 |
+
],
|
| 1278 |
+
"bbox": [
|
| 1279 |
+
76,
|
| 1280 |
+
70,
|
| 1281 |
+
491,
|
| 1282 |
+
308
|
| 1283 |
+
],
|
| 1284 |
+
"page_idx": 10
|
| 1285 |
+
},
|
| 1286 |
+
{
|
| 1287 |
+
"type": "page_number",
|
| 1288 |
+
"text": "11",
|
| 1289 |
+
"bbox": [
|
| 1290 |
+
903,
|
| 1291 |
+
30,
|
| 1292 |
+
919,
|
| 1293 |
+
40
|
| 1294 |
+
],
|
| 1295 |
+
"page_idx": 10
|
| 1296 |
+
}
|
| 1297 |
+
]
|
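
The content_list.json entries added above follow a small layout schema (type, text or table_body, bbox, page_idx). A minimal sketch of loading one of these files and collecting the plain-text blocks of a given page (generic path; field names as they appear in the diff) could be:

```python
import json
from pathlib import Path

def load_page_text(path: Path, page_idx: int) -> list[str]:
    """Return the 'text' fields of all text-type blocks on one page."""
    entries = json.loads(path.read_text(encoding="utf-8"))
    return [
        e["text"]
        for e in entries
        if e.get("type") == "text" and e.get("page_idx") == page_idx
    ]

# blocks = load_page_text(Path("content_list.json"), 7)  # any *_content_list.json in this batch
```
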
2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_model.json
ADDED
|
@@ -0,0 +1,2015 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "page_number",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.912,
|
| 7 |
+
0.031,
|
| 8 |
+
0.921,
|
| 9 |
+
0.041
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "1"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.11,
|
| 18 |
+
0.071,
|
| 19 |
+
0.89,
|
| 20 |
+
0.171
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.325,
|
| 29 |
+
0.185,
|
| 30 |
+
0.666,
|
| 31 |
+
0.203
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Dongyu Rao, Xiao-Jun Wu, Tianyang Xu"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.075,
|
| 40 |
+
0.256,
|
| 41 |
+
0.493,
|
| 42 |
+
0.604
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Abstract—The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, impeding balancing the entire image-level perception for complex scenario fusion. In this paper, therefore, we propose an infrared and visible image fusion algorithm based on a lightweight transformer module and adversarial learning. Inspired by the global interaction power, we use the transformer technique to learn the effective global fusion relations. In particular, shallow features extracted by CNN are interacted in the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. Besides, adversarial learning is designed in the training process to improve the output discrimination via imposing competitive consistency from the inputs, reflecting the specific characteristics in infrared and visible images. The experimental performance demonstrates the effectiveness of the proposed modules, with superior improvement against the state-of-the-art, generalising a novel paradigm via transformer and adversarial learning in the fusion task."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "title",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.21,
|
| 51 |
+
0.634,
|
| 52 |
+
0.357,
|
| 53 |
+
0.648
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "I. INTRODUCTION"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.075,
|
| 62 |
+
0.657,
|
| 63 |
+
0.491,
|
| 64 |
+
0.824
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "With the development of imaging equipment and analysis approaches, multi-modal visual data is emerging rapidly with many practical applications. In general, image fusion has played an important role in helping human vision to perceive information association between multimodal data. Among them, the fusion of infrared and visible images has important applications in military, security, detection [1] and visual tracking [2], [3], [4], [5], [6], [7] etc., becoming an important part of image fusion tasks."
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.075,
|
| 73 |
+
0.837,
|
| 74 |
+
0.49,
|
| 75 |
+
0.89
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "D. Rao and X.-J. Wu (Corresponding author) are with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China. (e-mail: raodongyu@163.com, wu_xiaojun@jiangnan.edu.cn)."
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.075,
|
| 84 |
+
0.89,
|
| 85 |
+
0.492,
|
| 86 |
+
0.945
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "T. Xu is with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, P.R. China and the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK. (e-mail: tianyang_xu@163.com)"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "list",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.075,
|
| 95 |
+
0.837,
|
| 96 |
+
0.492,
|
| 97 |
+
0.945
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": null
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "image",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.526,
|
| 106 |
+
0.256,
|
| 107 |
+
0.91,
|
| 108 |
+
0.412
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": null
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "image_caption",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.504,
|
| 117 |
+
0.428,
|
| 118 |
+
0.922,
|
| 119 |
+
0.457
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "Fig. 1. Infrared image (a), visible image (b) and fused image generated by the proposed method (c)."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.503,
|
| 128 |
+
0.484,
|
| 129 |
+
0.923,
|
| 130 |
+
0.942
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "In order to design a natural and efficient image fusion algorithm, researchers have developed many fusion algorithms on the basis of traditional image processing. Firstly, the fusion algorithms based on multiscale transformation are proposed [8], [9], [10], [11], which applied traditional image processing methods to image fusion. Subsequently, fusion algorithms based on sparse / low-rank representation were applied [12], [13], [14]. These algorithms use specific image processing methods to obtain image representations, and obtain the output images by fusing the image representations. However, the image features obtained by these methods are relatively less salient. Most of the fusion methods also require complex designs, so that the fusion results usually introduce a large amount of noise. With the development of deep learning, image fusion methods based on convolutional neural networks have become the mainstream of the topic [15], [16]. However, since most image fusion tasks are unsupervised, the supervised end-to-end training framework is not suitable for training fusion tasks. Drawing on this, some fusion algorithms [17] used large-scale pre-trained networks to extract image features. However, the pre-trained network is mostly used for classification tasks, and the extracted features cannot meet the requirements of the fusion task. Subsequently, Li et al. [18], [19] proposed a fusion algorithm based on an encoder-decoder network, using"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "aside_text",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.023,
|
| 139 |
+
0.27,
|
| 140 |
+
0.059,
|
| 141 |
+
0.701
|
| 142 |
+
],
|
| 143 |
+
"angle": 270,
|
| 144 |
+
"content": "arXiv:2201.10147v2 [cs.CV] 4 Feb 2022"
|
| 145 |
+
}
|
| 146 |
+
],
|
| 147 |
+
[
|
| 148 |
+
{
|
| 149 |
+
"type": "page_number",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.912,
|
| 152 |
+
0.031,
|
| 153 |
+
0.92,
|
| 154 |
+
0.041
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": "2"
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "image",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.174,
|
| 163 |
+
0.074,
|
| 164 |
+
0.828,
|
| 165 |
+
0.194
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": null
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "image_caption",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.074,
|
| 174 |
+
0.212,
|
| 175 |
+
0.924,
|
| 176 |
+
0.24
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "Fig. 2. The framework of ViT (Vision Transformer). \"B C H W\" respectively represent the batch size, channels, height and width. \"p\" means patch size. \"h w\" is the number of patches in height and width. \"E\" is the reduced dimension."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.074,
|
| 185 |
+
0.269,
|
| 186 |
+
0.493,
|
| 187 |
+
0.554
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "ordinary data sets for encoder-decoder training. This method makes the fusion task get rid of the dependence on multi-modal data sets. But this also makes it unable to effectively learn specific tasks. In order to obtain better performance for specific fusion tasks, the end-to-end image fusion methods [20], [21], [22] are proposed to learn more targeted network parameters through a specific network structure and loss function. This method is dedicated to training fusion tasks, which can usually achieve better fusion results. However, this puts forward higher requirements for the representative ability of the network and the effectiveness of the fusion method. At present, the end-to-end fusion algorithm mainly uses a convolutional neural network for feature extraction and achieves the fusion effect. However, due to the characteristics of CNN, this process usually ignores the global dependency infusion."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.074,
|
| 196 |
+
0.556,
|
| 197 |
+
0.493,
|
| 198 |
+
0.79
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "In order to solve the problem of global dependence and effective integration, we propose an infrared and visible image fusion algorithm based on the lightweight transformer and adversarial learning. Our method uses a general visual transformer for image spatial relationship learning. In particular, we propose a novel cross-channel transformer model to learn the channel relationship. The composite transformer fusion module has learned the global fusion relationship with space and channels. In addition, adversarial learning is introduced in the training process. We use two discriminators (infrared and fused image, visible and fused image) for adversarial training respectively. This allows the fused image to obtain higher-quality infrared and visible image characteristics."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.075,
|
| 207 |
+
0.791,
|
| 208 |
+
0.493,
|
| 209 |
+
0.822
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "The proposed method mainly has the following three innovations:"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.093,
|
| 218 |
+
0.828,
|
| 219 |
+
0.49,
|
| 220 |
+
0.875
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "- A channel-token transformer is proposed to explore the channel relationships, which is effectively applied in the fusion method."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.093,
|
| 229 |
+
0.878,
|
| 230 |
+
0.49,
|
| 231 |
+
0.91
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "- A transformer module is designed to achieve global fusion relationship learning in complex scenarios."
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.093,
|
| 240 |
+
0.912,
|
| 241 |
+
0.492,
|
| 242 |
+
0.945
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "- Adversarial learning is introduced into the training process. The discriminator of the two modalities"
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "list",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.093,
|
| 251 |
+
0.828,
|
| 252 |
+
0.492,
|
| 253 |
+
0.945
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": null
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.54,
|
| 262 |
+
0.269,
|
| 263 |
+
0.922,
|
| 264 |
+
0.303
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "introduces the characteristics of different modalities to the fused image to improve the fusion effect."
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "title",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.635,
|
| 273 |
+
0.32,
|
| 274 |
+
0.792,
|
| 275 |
+
0.334
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "II. RELATED WORK"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.504,
|
| 284 |
+
0.34,
|
| 285 |
+
0.922,
|
| 286 |
+
0.391
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "Although traditional methods are well investigated [?], [?], deep learning based methods are mainly discussed in this paper."
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "title",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.504,
|
| 295 |
+
0.412,
|
| 296 |
+
0.887,
|
| 297 |
+
0.429
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "A. Image Fusion Method Based on Deep Learning"
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.503,
|
| 306 |
+
0.433,
|
| 307 |
+
0.922,
|
| 308 |
+
0.869
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "The fusion algorithm based on deep learning has shown excellent performance in infrared and visible image fusion, multi-focus image fusion and medical image fusion, etc. Li et al. [23], [17] used a pretrained neural network to extract image features and used them for image fusion weight calculation. This is a preliminary combination of neural network and image fusion tasks. In order to obtain the depth features suitable for reconstructing images, Li et al. [18] first proposed an algorithm based on an auto-encoder network. In the absence of specific data, the algorithm can also achieve a good fusion effect. With the advancement of visual data collection equipment, some large-scale multi-mode data sets have appeared, so end-to-end fusion algorithms [24], [25] have received more attention and applications. This end-to-end fusion algorithm based on convolutional neural networks achieves better performance on a single task. But it still has some limitations, such as the spatial limitation of the fusion method based on a convolutional neural network. In this paper, the proposed method is an end-to-end image fusion algorithm. But compared to the CNN-based fusion network, we expand the network structure of the end-to-end algorithm and introduce the transformer that focuses on building global relationships into the fusion module. Our algorithm opens up new ideas in the design of fusion methods."
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "title",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.505,
|
| 317 |
+
0.89,
|
| 318 |
+
0.773,
|
| 319 |
+
0.905
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "B. Generative Adversarial Network"
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "text",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.504,
|
| 328 |
+
0.912,
|
| 329 |
+
0.922,
|
| 330 |
+
0.946
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "A generative adversarial network (GAN) is an algorithm that obtains high-quality generated images by"
|
| 334 |
+
}
|
| 335 |
+
],
|
| 336 |
+
[
|
| 337 |
+
{
|
| 338 |
+
"type": "page_number",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.912,
|
| 341 |
+
0.031,
|
| 342 |
+
0.921,
|
| 343 |
+
0.041
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "3"
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "image",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.162,
|
| 352 |
+
0.076,
|
| 353 |
+
0.803,
|
| 354 |
+
0.37
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": null
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "image_caption",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.075,
|
| 363 |
+
0.392,
|
| 364 |
+
0.319,
|
| 365 |
+
0.406
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "Fig. 3. The framework of our method."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "image",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.134,
|
| 374 |
+
0.434,
|
| 375 |
+
0.439,
|
| 376 |
+
0.577
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": null
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "image_caption",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.075,
|
| 385 |
+
0.59,
|
| 386 |
+
0.327,
|
| 387 |
+
0.605
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "Fig. 4. The framework of discriminator."
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.074,
|
| 396 |
+
0.62,
|
| 397 |
+
0.494,
|
| 398 |
+
0.941
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "training two networks against each other. Goodfellow et al. [26] first proposed the idea of a generative adversarial network. The generator generates an image, and the discriminator determines whether the input image is a real image (True) or a generated image (False). Subsequently, many improvements based on the original GAN focused on speeding up the training of the network and improving the quality of the generated images [27], [28], [29]. These improvements also help GAN gain a wider range of applications [30], [31], [32]. Methods based on GAN are also widely used in image generation tasks [33], [34]. There are already some image fusion methods based on GAN [20], [22]. Adversarial learning is an important part of our approach. It improves the infrared and visible image characteristics in the fusion result by obtaining competitive consistency from the inputs. However, we abandon the discriminator of the classification mode and use the difference in the feature level to promote the fused image to have more infrared"
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "image",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.557,
|
| 407 |
+
0.439,
|
| 408 |
+
0.88,
|
| 409 |
+
0.57
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": null
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "image_caption",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.504,
|
| 418 |
+
0.587,
|
| 419 |
+
0.838,
|
| 420 |
+
0.601
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "Fig. 5. The framework of transformer fusion module."
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.505,
|
| 429 |
+
0.617,
|
| 430 |
+
0.725,
|
| 431 |
+
0.633
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "or visible image information."
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "title",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.505,
|
| 440 |
+
0.655,
|
| 441 |
+
0.677,
|
| 442 |
+
0.67
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "C. Visual Transformer"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "text",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.503,
|
| 451 |
+
0.675,
|
| 452 |
+
0.923,
|
| 453 |
+
0.947
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "The transformer is a model based on a pure attention mechanism [35]. Its success in natural language processing inspires its application in computer vision. Due to the long-range dependence of the transformer in processing input, the visual transformer also has the ability to pay attention to the global relationship in image tasks. As a pioneering work of visual transformer, Dosovitskiy et al. [36] proposed ViT (Vision Transformer) for image classification tasks (Figure.2). This is a simple and effective application of transformer in visual tasks. Subsequently, Chen et al. [37] proposed a multi-task model based on the transformer, which achieved good results on multiple low-level visual tasks. The global spatial dependence of transformers has gained many applications in the field of computer vision. Inspired by the characteristics of the transformer, we pay at"
|
| 457 |
+
}
|
| 458 |
+
],
|
| 459 |
+
[
|
| 460 |
+
{
|
| 461 |
+
"type": "page_number",
|
| 462 |
+
"bbox": [
|
| 463 |
+
0.912,
|
| 464 |
+
0.031,
|
| 465 |
+
0.921,
|
| 466 |
+
0.041
|
| 467 |
+
],
|
| 468 |
+
"angle": 0,
|
| 469 |
+
"content": "4"
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"bbox": [
|
| 474 |
+
0.079,
|
| 475 |
+
0.07,
|
| 476 |
+
0.49,
|
| 477 |
+
0.185
|
| 478 |
+
],
|
| 479 |
+
"angle": 0,
|
| 480 |
+
"content": "tention to the global correlation of images space and channels during the fusion process. We propose a new transformer model that focuses on channel relationships and applies it in the field of image fusion. Compared with the general transformer, our transformer fusion module is a lightweight model. This is a new exploration of transformer applications."
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"type": "title",
|
| 484 |
+
"bbox": [
|
| 485 |
+
0.189,
|
| 486 |
+
0.197,
|
| 487 |
+
0.38,
|
| 488 |
+
0.21
|
| 489 |
+
],
|
| 490 |
+
"angle": 0,
|
| 491 |
+
"content": "III. PROPOSED METHOD"
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "title",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.079,
|
| 497 |
+
0.218,
|
| 498 |
+
0.307,
|
| 499 |
+
0.232
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": "A. The Framework of Network"
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "text",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.079,
|
| 508 |
+
0.239,
|
| 509 |
+
0.49,
|
| 510 |
+
0.337
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "As shown in Figure. 3, our model is mainly composed of two parts: one transformer-based generator and two discriminators. Typically, the fused image is obtained by the generator. Then, the output is refined during the adversarial learning between the generator and the discriminator."
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "text",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.079,
|
| 519 |
+
0.34,
|
| 520 |
+
0.49,
|
| 521 |
+
0.589
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": "Generator. The generator is used for the generation of the fused image. After the source images are merged in the channel dimension, the initial feature extraction is performed through the convolutional neural network. The mixed CNN features are input to the transformer fusion module to learn global fusion relations. Taking into account the consumption of computing resources and representation of features, three downsampling operators are added before the transformer fusion module. The fusion relationship learned in this process is up-sampled to different scales and multiplied by the corresponding features to achieve the preliminary result. The fusion features of different scales are up-sampled to the original image size and then superimposed to obtain the final fusion result."
|
| 525 |
+
},
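
Purely as a reading aid for the generator description above (a schematic sketch with hypothetical module names, not the released implementation), the forward pass it describes is roughly:

```python
import torch
import torch.nn.functional as F

def generator_forward(ir, vis, cnn_stages, fusion_transformer, head):
    """Schematic fusion-generator pass.

    ir, vis            : (B, 1, H, W) source images
    cnn_stages         : list of conv stages, each halving the spatial size
    fusion_transformer : maps the deepest feature to a fusion relation map
    head               : 1x1 conv producing the single-channel fused image
    """
    x = torch.cat([ir, vis], dim=1)           # merge along the channel dimension
    feats = []
    for stage in cnn_stages:                  # multi-scale shallow features
        x = stage(x)
        feats.append(x)

    relation = fusion_transformer(feats[-1])  # global fusion relation map

    fused = 0
    h, w = ir.shape[-2:]
    for f in feats:                           # weight each scale, then upsample
        r = F.interpolate(relation, size=f.shape[-2:], mode="bilinear",
                          align_corners=False)
        fused = fused + F.interpolate(f * r, size=(h, w), mode="bilinear",
                                      align_corners=False)
    return head(fused)
```
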
|
| 526 |
+
{
|
| 527 |
+
"type": "text",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.079,
|
| 530 |
+
0.592,
|
| 531 |
+
0.49,
|
| 532 |
+
0.943
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "Discriminator. The discriminator is used to refine the perception quality of the fused image. We set up two discriminators: fused image and infrared image (\"Dis-IR\"), fused image and visible image (\"Dis-VIS\"). These two discriminators provide high-resolution details of the visible image and a significant part of the infrared image for the fused image. The pre-trained VGG-16 network is used as the discriminator, which can be further finetuned during training. The network is shown in Figure.4. Taking the visible image discriminator (\"Dis-VIS\") as an example, the fused image and the visible image are separately input into the VGG-16 network to extract features. We calculate the L1 loss between the two features so that the fused image approximates the visible image from the context perspective. According to the number of downsampling, VGG-16 is divided into 4 layers. Different layers have different feature depths and different feature shapes. Inspired by Johnson et al. [38], we use the features of different depths extracted by VGG-16 to distinguish between infrared and visible features. The infrared discriminator uses the features"
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.509,
|
| 541 |
+
0.07,
|
| 542 |
+
0.92,
|
| 543 |
+
0.135
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": "of the fourth layer of VGG-16 to retain more saliency information. While the visible discriminator uses the features of the first layer of VGG-16 to retain more detailed information."
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "text",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.509,
|
| 552 |
+
0.138,
|
| 553 |
+
0.92,
|
| 554 |
+
0.288
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "In the training stage, source images are input to the generator to obtain the preliminary fused image. The preliminary fused image then passes through two discriminators with the effect of the fused image being fed back through the loss function. The above two steps are performed alternately to realize the confrontation training between the generator and the discriminator. Finally, we get a generator with an ideal generation effect to achieve the purpose of image fusion."
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "title",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.509,
|
| 563 |
+
0.316,
|
| 564 |
+
0.773,
|
| 565 |
+
0.331
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "B. The Transformer Fusion Module"
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "text",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.509,
|
| 574 |
+
0.338,
|
| 575 |
+
0.92,
|
| 576 |
+
0.421
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "As shown in Figure. 5, the transformer fusion module consists of two parts: general transformer (\"spatial transformer\") and cross-channel transformer (\"channel transformer\"). This helps us to obtain a more comprehensive global integration relationship."
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "text",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.509,
|
| 585 |
+
0.424,
|
| 586 |
+
0.92,
|
| 587 |
+
0.809
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "Spatial Transformer As shown in Figure. 2, the image is divided into blocks and stretched into vectors, where \"p\" means patch size, \"w\" and \"h\" respectively represent the number of image blocks in the width and height dimensions of the image, \"E\" is the reduced dimension. Then, the vector group enters the transformer model for relation learning. The number of image blocks is used to learn the global relationship of the image. Therefore, we consider that the general transformer mainly learns the global spatial relationship between image patches. Inspired by the transformer-based low-level image task, we build a spatial transformer for the fusion task. As shown in Figure. 6, the spatial transformer is basically the same as the first half of ViT (Figure. 2). The difference is that we cancelled the addition of position embedding, and subsequent experiments also proved the rationality and effectiveness of this operation. In addition, when restoring from the vector group to the image, we compress the channel dimension, so that we get a relationship map with a channel number of 1. This corresponds to the spatial relationship of the image we obtained, avoiding the interference of other dimensional relationships."
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "text",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.509,
|
| 596 |
+
0.811,
|
| 597 |
+
0.92,
|
| 598 |
+
0.943
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "Channel Transformer For image fusion tasks, we believe that the cross-channel relationship of images also plays an important role in fusion. Therefore, we propose a new cross-channel transformer model, which learns the correlation of information across the channel dimension. In the new transformer module, the number of tokens input to the encoder has changed from the number of image blocks to the number of image channels. Since"
|
| 602 |
+
}
|
| 603 |
+
],
|
| 604 |
+
[
|
| 605 |
+
{
|
| 606 |
+
"type": "page_number",
|
| 607 |
+
"bbox": [
|
| 608 |
+
0.912,
|
| 609 |
+
0.031,
|
| 610 |
+
0.92,
|
| 611 |
+
0.041
|
| 612 |
+
],
|
| 613 |
+
"angle": 0,
|
| 614 |
+
"content": "5"
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "image",
|
| 618 |
+
"bbox": [
|
| 619 |
+
0.131,
|
| 620 |
+
0.073,
|
| 621 |
+
0.868,
|
| 622 |
+
0.17
|
| 623 |
+
],
|
| 624 |
+
"angle": 0,
|
| 625 |
+
"content": null
|
| 626 |
+
},
|
| 627 |
+
{
|
| 628 |
+
"type": "image_caption",
|
| 629 |
+
"bbox": [
|
| 630 |
+
0.075,
|
| 631 |
+
0.188,
|
| 632 |
+
0.36,
|
| 633 |
+
0.203
|
| 634 |
+
],
|
| 635 |
+
"angle": 0,
|
| 636 |
+
"content": "Fig. 6. The framework of spatial transformer."
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "image",
|
| 640 |
+
"bbox": [
|
| 641 |
+
0.127,
|
| 642 |
+
0.228,
|
| 643 |
+
0.872,
|
| 644 |
+
0.319
|
| 645 |
+
],
|
| 646 |
+
"angle": 0,
|
| 647 |
+
"content": null
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"type": "image_caption",
|
| 651 |
+
"bbox": [
|
| 652 |
+
0.075,
|
| 653 |
+
0.339,
|
| 654 |
+
0.367,
|
| 655 |
+
0.354
|
| 656 |
+
],
|
| 657 |
+
"angle": 0,
|
| 658 |
+
"content": "Fig. 7. The framework of channel transformer."
|
| 659 |
+
},
|
| 660 |
+
{
|
| 661 |
+
"type": "image",
|
| 662 |
+
"bbox": [
|
| 663 |
+
0.169,
|
| 664 |
+
0.382,
|
| 665 |
+
0.831,
|
| 666 |
+
0.616
|
| 667 |
+
],
|
| 668 |
+
"angle": 0,
|
| 669 |
+
"content": null
|
| 670 |
+
},
|
| 671 |
+
{
|
| 672 |
+
"type": "image_caption",
|
| 673 |
+
"bbox": [
|
| 674 |
+
0.075,
|
| 675 |
+
0.636,
|
| 676 |
+
0.53,
|
| 677 |
+
0.651
|
| 678 |
+
],
|
| 679 |
+
"angle": 0,
|
| 680 |
+
"content": "Fig. 8. Infrared and visible image fusion experiment on \"human\" images."
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"type": "text",
|
| 684 |
+
"bbox": [
|
| 685 |
+
0.074,
|
| 686 |
+
0.68,
|
| 687 |
+
0.492,
|
| 688 |
+
0.882
|
| 689 |
+
],
|
| 690 |
+
"angle": 0,
|
| 691 |
+
"content": "position embedding is not required to provide category information in the image generation task, we have removed position embedding, which also makes the size of the input image more flexible. The channel transformer is also a structure similar to the spatial transformer. The main difference is that we change the object modelled by the transformer from the spatial relationship of the image block to the channel relationship. In this specific implementation, we use the number of channels as the token number, which is a simple but effective operation. Through two kinds of the transformer, we can get the relation mapping for the image fusion task."
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "text",
|
| 695 |
+
"bbox": [
|
| 696 |
+
0.075,
|
| 697 |
+
0.895,
|
| 698 |
+
0.491,
|
| 699 |
+
0.945
|
| 700 |
+
],
|
| 701 |
+
"angle": 0,
|
| 702 |
+
"content": "Composite Transformer The transformer of the two modes is combined into a transformer fusion module, which enables our fusion model to simultaneously learn"
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "text",
|
| 706 |
+
"bbox": [
|
| 707 |
+
0.504,
|
| 708 |
+
0.68,
|
| 709 |
+
0.922,
|
| 710 |
+
0.798
|
| 711 |
+
],
|
| 712 |
+
"angle": 0,
|
| 713 |
+
"content": "spatial and channel relationships with global correlation. Through experiments, we find that using a channel transformer first and then using a spatial transformer can achieve better results. This shows that the combination of these two fusion modules is used to learn the coefficients that are more suitable for the fusion of infrared and visible images."
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "title",
|
| 717 |
+
"bbox": [
|
| 718 |
+
0.505,
|
| 719 |
+
0.822,
|
| 720 |
+
0.64,
|
| 721 |
+
0.837
|
| 722 |
+
],
|
| 723 |
+
"angle": 0,
|
| 724 |
+
"content": "C. Loss Function"
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "text",
|
| 728 |
+
"bbox": [
|
| 729 |
+
0.503,
|
| 730 |
+
0.844,
|
| 731 |
+
0.922,
|
| 732 |
+
0.945
|
| 733 |
+
],
|
| 734 |
+
"angle": 0,
|
| 735 |
+
"content": "Previous image fusion algorithms based on deep learning usually use multiple loss functions to optimize the fused image from different perspectives during training. But this causes mutual conflict among loss functions. Inspired by [39], we make improvements on the basis of the SSIM loss. A single loss function achieves a good"
|
| 736 |
+
}
|
| 737 |
+
],
|
| 738 |
+
[
|
| 739 |
+
{
|
| 740 |
+
"type": "page_number",
|
| 741 |
+
"bbox": [
|
| 742 |
+
0.912,
|
| 743 |
+
0.031,
|
| 744 |
+
0.92,
|
| 745 |
+
0.041
|
| 746 |
+
],
|
| 747 |
+
"angle": 0,
|
| 748 |
+
"content": "6"
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"bbox": [
|
| 753 |
+
0.074,
|
| 754 |
+
0.069,
|
| 755 |
+
0.492,
|
| 756 |
+
0.102
|
| 757 |
+
],
|
| 758 |
+
"angle": 0,
|
| 759 |
+
"content": "fusion effect and avoids the problem of entanglement of multiple loss functions."
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"bbox": [
|
| 764 |
+
0.074,
|
| 765 |
+
0.103,
|
| 766 |
+
0.491,
|
| 767 |
+
0.188
|
| 768 |
+
],
|
| 769 |
+
"angle": 0,
|
| 770 |
+
"content": "SSIM [40] is a measure of structural similarity between images. As shown in Eq. (1), X, Y represent two images respectively. \\(\\mu\\) and \\(\\sigma\\) stand for mean and standard deviation respectively. \\(\\sigma_{XY}\\) means the covariance between X and Y. \\(C_1\\) and \\(C_2\\) are stability coefficients."
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "equation",
|
| 774 |
+
"bbox": [
|
| 775 |
+
0.091,
|
| 776 |
+
0.194,
|
| 777 |
+
0.491,
|
| 778 |
+
0.238
|
| 779 |
+
],
|
| 780 |
+
"angle": 0,
|
| 781 |
+
"content": "\\[\nS S I M (X, Y) = \\frac {\\left(2 \\mu_ {X} \\mu_ {Y} + C _ {1}\\right) \\left(2 \\sigma_ {X Y} + C _ {2}\\right)}{\\left(\\mu_ {X} ^ {2} + \\mu_ {Y} ^ {2} + C _ {1}\\right) \\left(\\sigma_ {X} ^ {2} + \\sigma_ {Y} ^ {2} + C _ {2}\\right)} \\tag {1}\n\\]"
|
| 782 |
+
},
|
| 783 |
+
{
|
| 784 |
+
"type": "text",
|
| 785 |
+
"bbox": [
|
| 786 |
+
0.074,
|
| 787 |
+
0.239,
|
| 788 |
+
0.492,
|
| 789 |
+
0.458
|
| 790 |
+
],
|
| 791 |
+
"angle": 0,
|
| 792 |
+
"content": "Variance reflects the contrast of the image, and an image with high contrast is more helpful for the human visual system to capture information. As shown in Eq. (2), \\( M \\) and \\( N \\) are the image size in the horizontal and vertical directions respectively. \\( \\mu \\) represents the mean of the image. We use variance as the standard and choose one as the reference image from infrared and visible images. The structural similarity between the fused image and the reference image is calculated, so that the fused image gradually approaches the reference image during the optimization process. This operation allows the fusion result to better obtain the important information from the infrared or visible image."
|
| 793 |
+
},
|
| 794 |
+
{
|
| 795 |
+
"type": "equation",
|
| 796 |
+
"bbox": [
|
| 797 |
+
0.157,
|
| 798 |
+
0.465,
|
| 799 |
+
0.491,
|
| 800 |
+
0.517
|
| 801 |
+
],
|
| 802 |
+
"angle": 0,
|
| 803 |
+
"content": "\\[\n\\sigma^ {2} (X) = \\frac {\\sum_ {i = 0} ^ {M - 1} \\sum_ {j = 0} ^ {N - 1} [ X (i , j) - \\mu ] ^ {2}}{M N} \\tag {2}\n\\]"
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "text",
|
| 807 |
+
"bbox": [
|
| 808 |
+
0.074,
|
| 809 |
+
0.522,
|
| 810 |
+
0.493,
|
| 811 |
+
0.691
|
| 812 |
+
],
|
| 813 |
+
"angle": 0,
|
| 814 |
+
"content": "In Eq. (3), \\( Var\\_SSIM \\) calculates the structural similarity of the divided image. \\( \\sigma^2 \\) is the variance of the image. \\( I_X \\) and \\( I_Y \\) represent two source images respectively. \\( I_F \\) means a fused image. \\( W \\) is the number of image blocks after division, and the size of each image block is set to \\( 11 \\times 11 \\). Image segmentation is achieved through sliding windows. Through the sliding window, the fused image can well coordinate the consistency between different image blocks. The calculation of the loss function is shown in Eq. (4)."
|
| 815 |
+
},
|
| 816 |
+
{
|
| 817 |
+
"type": "equation",
|
| 818 |
+
"bbox": [
|
| 819 |
+
0.085,
|
| 820 |
+
0.711,
|
| 821 |
+
0.49,
|
| 822 |
+
0.769
|
| 823 |
+
],
|
| 824 |
+
"angle": 0,
|
| 825 |
+
"content": "\\[\nV a r _ {-} S S I M \\left(I _ {X}, I _ {Y}, I _ {F} \\mid W\\right) = \\left\\{ \\begin{array}{l} S S I M \\left(I _ {X}, I _ {F}\\right), \\\\ i f \\sigma^ {2} (X) > \\sigma^ {2} (Y) \\\\ S S I M \\left(I _ {Y}, I _ {F}\\right), \\\\ i f \\sigma^ {2} (Y) > = \\sigma^ {2} (X) \\end{array} \\right. \\tag {3}\n\\]"
|
| 826 |
+
},
|
| 827 |
+
{
|
| 828 |
+
"type": "equation",
|
| 829 |
+
"bbox": [
|
| 830 |
+
0.084,
|
| 831 |
+
0.786,
|
| 832 |
+
0.49,
|
| 833 |
+
0.817
|
| 834 |
+
],
|
| 835 |
+
"angle": 0,
|
| 836 |
+
"content": "\\[\nL _ {v a r - S S I M} = 1 - \\frac {1}{N} \\sum_ {W = 1} ^ {N} V a r _ {-} S S I M \\left(I _ {X}, I _ {Y}, I _ {F} \\mid W\\right) \\tag {4}\n\\]"
|
| 837 |
+
},
|
| 838 |
+
{
|
| 839 |
+
"type": "title",
|
| 840 |
+
"bbox": [
|
| 841 |
+
0.209,
|
| 842 |
+
0.834,
|
| 843 |
+
0.358,
|
| 844 |
+
0.849
|
| 845 |
+
],
|
| 846 |
+
"angle": 0,
|
| 847 |
+
"content": "IV. EXPERIMENTS"
|
| 848 |
+
},
|
| 849 |
+
{
|
| 850 |
+
"type": "title",
|
| 851 |
+
"bbox": [
|
| 852 |
+
0.075,
|
| 853 |
+
0.856,
|
| 854 |
+
0.147,
|
| 855 |
+
0.872
|
| 856 |
+
],
|
| 857 |
+
"angle": 0,
|
| 858 |
+
"content": "A. Setup"
|
| 859 |
+
},
|
| 860 |
+
{
|
| 861 |
+
"type": "text",
|
| 862 |
+
"bbox": [
|
| 863 |
+
0.075,
|
| 864 |
+
0.878,
|
| 865 |
+
0.491,
|
| 866 |
+
0.947
|
| 867 |
+
],
|
| 868 |
+
"angle": 0,
|
| 869 |
+
"content": "Datasets. In the training phase, 40,000 pairs of corresponding infrared and visible images are selected as the training data from the KAIST [41] data set. KAIST data set is a pedestrian data set containing various general"
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "text",
|
| 873 |
+
"bbox": [
|
| 874 |
+
0.503,
|
| 875 |
+
0.069,
|
| 876 |
+
0.923,
|
| 877 |
+
0.204
|
| 878 |
+
],
|
| 879 |
+
"angle": 0,
|
| 880 |
+
"content": "scenes of campus, street and countryside. Each picture contains a visible image and a corresponding infrared image. At present, some end-to-end image fusion algorithms [16] use it as training data. The training image size is set to \\(256 \\times 256\\) pixels. In the testing phase, we use 10 pairs of images from the test image of [18] as the test set. The size of the test data is arbitrary (generally not more than \\(2048 \\times 2048\\) pixels)."
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "text",
|
| 884 |
+
"bbox": [
|
| 885 |
+
0.503,
|
| 886 |
+
0.205,
|
| 887 |
+
0.923,
|
| 888 |
+
0.423
|
| 889 |
+
],
|
| 890 |
+
"angle": 0,
|
| 891 |
+
"content": "Hyper-Parameters. In the training phase, we choose Adam as the optimizer and the learning rate is set to a constant of 0.0001. Training data includes 40,000 pairs of images and batch size is set to 16. Complete training requires 20 epochs. Inspired by [36], [37], we chose fixed values for some parameters in the transformer fusion module. The patch size of the spatial transformer and channel transformer is set to 4 and 16 respectively. Taking into account the different dimensions of the data processed by a spatial transformer and channel transformer, the embedding dimensions are set to 2048 and 128 respectively. Our model is implemented with NVIDIA TITAN Xp and Pytorch."
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "text",
|
| 895 |
+
"bbox": [
|
| 896 |
+
0.504,
|
| 897 |
+
0.424,
|
| 898 |
+
0.923,
|
| 899 |
+
0.626
|
| 900 |
+
],
|
| 901 |
+
"angle": 0,
|
| 902 |
+
"content": "Compared Methods. The proposed method is compared with 15 methods in subjective and objective evaluation, including classic and latest methods. These are: Ratio of Low-pass Pyramid (RP) [42], Wavelet [43], Dual-Tree Complex Wavelet Transform (DTCWT) [44], Curvelet Transform (CVT) [45], Multi-resolution Singular Value Decomposition (MSVD) [46], gradient transfer and total variation minimization (GTF) [47], DenseFuse [18], DeepFuse [48], a general end-to-end fusion network(IFCNN) [21], FusionGAN [20], NestFuse [19], PMGI [49], U2Fusion [24], RFN-Nest [16], and MEFGAN [50], respectively."
|
| 903 |
+
},
|
| 904 |
+
{
|
| 905 |
+
"type": "title",
|
| 906 |
+
"bbox": [
|
| 907 |
+
0.505,
|
| 908 |
+
0.652,
|
| 909 |
+
0.655,
|
| 910 |
+
0.669
|
| 911 |
+
],
|
| 912 |
+
"angle": 0,
|
| 913 |
+
"content": "B. Results Analysis"
|
| 914 |
+
},
|
| 915 |
+
{
|
| 916 |
+
"type": "text",
|
| 917 |
+
"bbox": [
|
| 918 |
+
0.503,
|
| 919 |
+
0.675,
|
| 920 |
+
0.921,
|
| 921 |
+
0.793
|
| 922 |
+
],
|
| 923 |
+
"angle": 0,
|
| 924 |
+
"content": "We use subjective evaluation and objective evaluation to measure the performance of the fusion algorithm. Subjective evaluation judges whether the fusion result conforms to human visual perception, such as clarity, salient information, etc. Therefore, the subjective evaluation method puts the fused images obtained by different algorithms together for intuitive visual comparison."
|
| 925 |
+
},
|
| 926 |
+
{
|
| 927 |
+
"type": "text",
|
| 928 |
+
"bbox": [
|
| 929 |
+
0.503,
|
| 930 |
+
0.794,
|
| 931 |
+
0.922,
|
| 932 |
+
0.945
|
| 933 |
+
],
|
| 934 |
+
"angle": 0,
|
| 935 |
+
"content": "In Figure. 8, the fusion results of all methods are put together for subjective judgment. Although some methods can achieve a certain fusion effect, it introduces more artificial noise, which affects the acquisition of visual information, such as (c), (d), (e), (f), (g). In contrast, the fusion result produced by the deep learning method is more in line with human vision. Most methods based on deep learning can maintain the basic environmental information of the visible image and the salient"
|
| 936 |
+
}
|
| 937 |
+
],
|
| 938 |
+
[
|
| 939 |
+
{
|
| 940 |
+
"type": "page_number",
|
| 941 |
+
"bbox": [
|
| 942 |
+
0.912,
|
| 943 |
+
0.031,
|
| 944 |
+
0.92,
|
| 945 |
+
0.041
|
| 946 |
+
],
|
| 947 |
+
"angle": 0,
|
| 948 |
+
"content": "7"
|
| 949 |
+
},
|
| 950 |
+
{
|
| 951 |
+
"type": "table_caption",
|
| 952 |
+
"bbox": [
|
| 953 |
+
0.12,
|
| 954 |
+
0.072,
|
| 955 |
+
0.877,
|
| 956 |
+
0.111
|
| 957 |
+
],
|
| 958 |
+
"angle": 0,
|
| 959 |
+
"content": "TABLEI QUANTITATIVE EVALUATION RESULTS OF INFRARED AND VISIBLE IMAGE FUSION TASKS. THE BEST THREE RESULTS ARE HIGHLIGHTED IN RED, BROWN AND BLUE FONTS."
|
| 960 |
+
},
|
| 961 |
+
{
|
| 962 |
+
"type": "table",
|
| 963 |
+
"bbox": [
|
| 964 |
+
0.111,
|
| 965 |
+
0.123,
|
| 966 |
+
0.888,
|
| 967 |
+
0.384
|
| 968 |
+
],
|
| 969 |
+
"angle": 0,
|
| 970 |
+
"content": "<table><tr><td>Method</td><td>SF</td><td>EN</td><td>\\(Q_{abf}\\)</td><td>\\(FMI_w\\)</td><td>MS-SSIM</td><td>\\(FMI_{pixel}\\)</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>RP [42]</td><td>12.7249</td><td>6.5397</td><td>0.4341</td><td>0.3831</td><td>0.8404</td><td>0.8929</td><td>13.0794</td><td>63.2427</td><td>0.6420</td></tr><tr><td>Wavelet [43]</td><td>6.2567</td><td>6.2454</td><td>0.3214</td><td>0.4183</td><td>0.8598</td><td>0.9096</td><td>12.4907</td><td>52.2292</td><td>0.2921</td></tr><tr><td>DTCWT [44]</td><td>11.1296</td><td>6.4791</td><td>0.5258</td><td>0.4419</td><td>0.9053</td><td>0.9186</td><td>12.9583</td><td>60.1138</td><td>0.5986</td></tr><tr><td>CVT [45]</td><td>11.1129</td><td>6.4989</td><td>0.4936</td><td>0.4240</td><td>0.8963</td><td>0.9156</td><td>12.9979</td><td>60.4005</td><td>0.5930</td></tr><tr><td>MSVD [46]</td><td>8.5538</td><td>6.2807</td><td>0.3328</td><td>0.2828</td><td>0.8652</td><td>0.9036</td><td>12.5613</td><td>52.9853</td><td>0.3031</td></tr><tr><td>GTF [47]</td><td>9.5022</td><td>6.5781</td><td>0.4400</td><td>0.4494</td><td>0.8169</td><td>0.9056</td><td>13.1562</td><td>66.0773</td><td>0.4071</td></tr><tr><td>DenseFuse [18]</td><td>9.3238</td><td>6.8526</td><td>0.4735</td><td>0.4389</td><td>0.8692</td><td>0.9061</td><td>13.7053</td><td>81.7283</td><td>0.6875</td></tr><tr><td>DeepFuse [48]</td><td>8.3500</td><td>6.6102</td><td>0.3847</td><td>0.4214</td><td>0.9138</td><td>0.9041</td><td>13.2205</td><td>66.8872</td><td>0.5752</td></tr><tr><td>IFCNN [21]</td><td>11.8590</td><td>6.6454</td><td>0.4962</td><td>0.4052</td><td>0.9129</td><td>0.9007</td><td>13.2909</td><td>73.7053</td><td>0.6090</td></tr><tr><td>FusionGAN [20]</td><td>8.0476</td><td>6.5409</td><td>0.2682</td><td>0.4083</td><td>0.6135</td><td>0.8875</td><td>13.0817</td><td>61.6339</td><td>0.4928</td></tr><tr><td>NestFuse [19]</td><td>9.7807</td><td>6.8745</td><td>0.5011</td><td>0.4483</td><td>0.8817</td><td>0.9025</td><td>13.7491</td><td>83.0530</td><td>0.7195</td></tr><tr><td>PMGI [49]</td><td>8.7195</td><td>6.8688</td><td>0.3787</td><td>0.4018</td><td>0.8684</td><td>0.9001</td><td>13.7376</td><td>69.2364</td><td>0.6904</td></tr><tr><td>U2Fusion [24]</td><td>11.0368</td><td>6.7227</td><td>0.3934</td><td>0.3594</td><td>0.9147</td><td>0.8942</td><td>13.4453</td><td>66.5035</td><td>0.7680</td></tr><tr><td>RFN-Nest [16]</td><td>5.8457</td><td>6.7274</td><td>0.3292</td><td>0.3052</td><td>0.8959</td><td>0.9063</td><td>13.4547</td><td>67.8765</td><td>0.5404</td></tr><tr><td>MEFGAN [50]</td><td>7.8481</td><td>6.9727</td><td>0.2076</td><td>0.1826</td><td>0.6709</td><td>0.8844</td><td>13.9454</td><td>43.7332</td><td>0.7330</td></tr><tr><td>TGFuse(ours)</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>"
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "table_caption",
|
| 974 |
+
"bbox": [
|
| 975 |
+
0.14,
|
| 976 |
+
0.404,
|
| 977 |
+
0.856,
|
| 978 |
+
0.43
|
| 979 |
+
],
|
| 980 |
+
"angle": 0,
|
| 981 |
+
"content": "TABLE II THE OBJECTIVE EVALUATION ON WHETHER TO USE GAN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "table",
|
| 985 |
+
"bbox": [
|
| 986 |
+
0.135,
|
| 987 |
+
0.44,
|
| 988 |
+
0.865,
|
| 989 |
+
0.491
|
| 990 |
+
],
|
| 991 |
+
"angle": 0,
|
| 992 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o GAN</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>GAN</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>"
|
| 993 |
+
},
|
| 994 |
+
{
|
| 995 |
+
"type": "table_caption",
|
| 996 |
+
"bbox": [
|
| 997 |
+
0.075,
|
| 998 |
+
0.51,
|
| 999 |
+
0.92,
|
| 1000 |
+
0.537
|
| 1001 |
+
],
|
| 1002 |
+
"angle": 0,
|
| 1003 |
+
"content": "TABLE III THE OBJECTIVE EVALUATION ON DIFFERENT TRANSFORMER FUSION METHOD. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "table",
|
| 1007 |
+
"bbox": [
|
| 1008 |
+
0.113,
|
| 1009 |
+
0.546,
|
| 1010 |
+
0.886,
|
| 1011 |
+
0.628
|
| 1012 |
+
],
|
| 1013 |
+
"angle": 0,
|
| 1014 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>Spatial</td><td>10.8364</td><td>6.8665</td><td>0.5491</td><td>0.4281</td><td>0.9337</td><td>0.9173</td><td>13.7330</td><td>86.2626</td><td>0.7247</td></tr><tr><td>Channel</td><td>11.1283</td><td>6.9520</td><td>0.5622</td><td>0.4328</td><td>0.9107</td><td>0.9169</td><td>13.9040</td><td>91.2356</td><td>0.7417</td></tr><tr><td>Spatial+Channel</td><td>10.8808</td><td>6.9161</td><td>0.5304</td><td>0.4139</td><td>0.9172</td><td>0.9089</td><td>13.8323</td><td>94.6343</td><td>0.7565</td></tr><tr><td>Channel+Spatial</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr></table>"
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "table_caption",
|
| 1018 |
+
"bbox": [
|
| 1019 |
+
0.086,
|
| 1020 |
+
0.646,
|
| 1021 |
+
0.91,
|
| 1022 |
+
0.673
|
| 1023 |
+
],
|
| 1024 |
+
"angle": 0,
|
| 1025 |
+
"content": "TABLE IV THE OBJECTIVE EVALUATION ON WHETHER TO USE POSITION EMBEDDING. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."
|
| 1026 |
+
},
|
| 1027 |
+
{
|
| 1028 |
+
"type": "table",
|
| 1029 |
+
"bbox": [
|
| 1030 |
+
0.143,
|
| 1031 |
+
0.683,
|
| 1032 |
+
0.858,
|
| 1033 |
+
0.735
|
| 1034 |
+
],
|
| 1035 |
+
"angle": 0,
|
| 1036 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o PE</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>PE</td><td>10.8748</td><td>6.9332</td><td>0.5522</td><td>0.4186</td><td>0.9340</td><td>0.9174</td><td>13.8664</td><td>90.5422</td><td>0.7654</td></tr></table>"
|
| 1037 |
+
},
|
| 1038 |
+
{
|
| 1039 |
+
"type": "table_caption",
|
| 1040 |
+
"bbox": [
|
| 1041 |
+
0.086,
|
| 1042 |
+
0.753,
|
| 1043 |
+
0.91,
|
| 1044 |
+
0.793
|
| 1045 |
+
],
|
| 1046 |
+
"angle": 0,
|
| 1047 |
+
"content": "TABLEV THE OBJECTIVE EVALUATION ON DIFFERENT ENCODER LAYERS OF TRANSFORMER. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.(\"/” MEANS TRAINING FAILURE)"
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "table",
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
0.141,
|
| 1053 |
+
0.804,
|
| 1054 |
+
0.859,
|
| 1055 |
+
0.871
|
| 1056 |
+
],
|
| 1057 |
+
"angle": 0,
|
| 1058 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>3-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td>11.1740</td><td>6.8722</td><td>0.5623</td><td>0.4209</td><td>0.9404</td><td>0.9198</td><td>13.7443</td><td>86.7715</td><td>0.7539</td></tr></table>"
|
| 1059 |
+
},
|
| 1060 |
+
{
|
| 1061 |
+
"type": "text",
|
| 1062 |
+
"bbox": [
|
| 1063 |
+
0.074,
|
| 1064 |
+
0.899,
|
| 1065 |
+
0.492,
|
| 1066 |
+
0.933
|
| 1067 |
+
],
|
| 1068 |
+
"angle": 0,
|
| 1069 |
+
"content": "human of the infrared image at the same time. Compared with other methods, our method not only highlights the"
|
| 1070 |
+
},
|
| 1071 |
+
{
|
| 1072 |
+
"type": "text",
|
| 1073 |
+
"bbox": [
|
| 1074 |
+
0.504,
|
| 1075 |
+
0.899,
|
| 1076 |
+
0.922,
|
| 1077 |
+
0.933
|
| 1078 |
+
],
|
| 1079 |
+
"angle": 0,
|
| 1080 |
+
"content": "infrared information of the person in the red frame but also maintains the visible details of the door. The sky as"
|
| 1081 |
+
}
|
| 1082 |
+
],
|
| 1083 |
+
[
|
| 1084 |
+
{
|
| 1085 |
+
"type": "page_number",
|
| 1086 |
+
"bbox": [
|
| 1087 |
+
0.912,
|
| 1088 |
+
0.031,
|
| 1089 |
+
0.92,
|
| 1090 |
+
0.041
|
| 1091 |
+
],
|
| 1092 |
+
"angle": 0,
|
| 1093 |
+
"content": "8"
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "table_caption",
|
| 1097 |
+
"bbox": [
|
| 1098 |
+
0.089,
|
| 1099 |
+
0.071,
|
| 1100 |
+
0.909,
|
| 1101 |
+
0.112
|
| 1102 |
+
],
|
| 1103 |
+
"angle": 0,
|
| 1104 |
+
"content": "TABLE VI THE OBJECTIVE EVALUATION ON DIFFERENT LAYERS OF CNN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.(\"/\") MEANS TRAINING FAILURE)"
|
| 1105 |
+
},
|
| 1106 |
+
{
|
| 1107 |
+
"type": "table",
|
| 1108 |
+
"bbox": [
|
| 1109 |
+
0.141,
|
| 1110 |
+
0.123,
|
| 1111 |
+
0.857,
|
| 1112 |
+
0.205
|
| 1113 |
+
],
|
| 1114 |
+
"angle": 0,
|
| 1115 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>2-layers</td><td>10.3438</td><td>6.7281</td><td>0.5560</td><td>0.4314</td><td>0.9006</td><td>0.9097</td><td>13.4562</td><td>94.2280</td><td>0.6862</td></tr><tr><td>3-layers</td><td>11.0769</td><td>6.8959</td><td>0.5497</td><td>0.4272</td><td>0.9298</td><td>0.9157</td><td>13.7919</td><td>92.5518</td><td>0.7517</td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr></table>"
|
| 1116 |
+
},
|
| 1117 |
+
{
|
| 1118 |
+
"type": "table_caption",
|
| 1119 |
+
"bbox": [
|
| 1120 |
+
0.143,
|
| 1121 |
+
0.223,
|
| 1122 |
+
0.852,
|
| 1123 |
+
0.25
|
| 1124 |
+
],
|
| 1125 |
+
"angle": 0,
|
| 1126 |
+
"content": "TABLE VII THE OBJECTIVE EVALUATION ON DIFFERENT CHANNELS. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS."
|
| 1127 |
+
},
|
| 1128 |
+
{
|
| 1129 |
+
"type": "table",
|
| 1130 |
+
"bbox": [
|
| 1131 |
+
0.124,
|
| 1132 |
+
0.259,
|
| 1133 |
+
0.875,
|
| 1134 |
+
0.326
|
| 1135 |
+
],
|
| 1136 |
+
"angle": 0,
|
| 1137 |
+
"content": "<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>32-channels</td><td>10.6360</td><td>6.9228</td><td>0.5715</td><td>0.4370</td><td>0.9276</td><td>0.9206</td><td>13.8456</td><td>90.1796</td><td>0.7061</td></tr><tr><td>64-channels</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>128-channels</td><td>11.1181</td><td>6.9388</td><td>0.5545</td><td>0.4142</td><td>0.9368</td><td>0.9163</td><td>13.8776</td><td>88.5524</td><td>0.8069</td></tr></table>"
|
| 1138 |
+
},
|
| 1139 |
+
{
|
| 1140 |
+
"type": "text",
|
| 1141 |
+
"bbox": [
|
| 1142 |
+
0.074,
|
| 1143 |
+
0.354,
|
| 1144 |
+
0.492,
|
| 1145 |
+
0.403
|
| 1146 |
+
],
|
| 1147 |
+
"angle": 0,
|
| 1148 |
+
"content": "the background also retains the high-resolution visible scene. Such a fused image is friendly and easy to accept information for human vision."
|
| 1149 |
+
},
|
| 1150 |
+
{
|
| 1151 |
+
"type": "text",
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
0.074,
|
| 1154 |
+
0.405,
|
| 1155 |
+
0.493,
|
| 1156 |
+
0.692
|
| 1157 |
+
],
|
| 1158 |
+
"angle": 0,
|
| 1159 |
+
"content": "There are many different evaluation indicators for objective evaluation. We have selected nine common evaluation indicators for the quality of fused images. These are: Spatial Frequency (SF) [51], Entropy (EN) [52], quality of images \\((\\mathrm{Q}_{abf})\\) [53], feature mutual information with wavelet transform(FMIw) [54], multiscale SSIM (MS-SSIM) [55], feature mutual information with pixel(FMIpixel) [54] Standard Deviation of Image (SD) [56], Visual Information Fidelity (VIF) [57], and mutual information (MI) [58], respectively. In Table.I, We compared the performance of all methods on 9 evaluation indicators. The best three results are highlighted in red, brown and blue fonts. Our method performed best on 7 indicators and also achieved third place on the remaining two indicators. Through subjective and objective evaluation, our method is proved to have obvious advantages in performance."
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "title",
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
0.076,
|
| 1165 |
+
0.72,
|
| 1166 |
+
0.216,
|
| 1167 |
+
0.736
|
| 1168 |
+
],
|
| 1169 |
+
"angle": 0,
|
| 1170 |
+
"content": "C. Ablation Study"
|
| 1171 |
+
},
|
| 1172 |
+
{
|
| 1173 |
+
"type": "text",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
0.074,
|
| 1176 |
+
0.742,
|
| 1177 |
+
0.492,
|
| 1178 |
+
0.945
|
| 1179 |
+
],
|
| 1180 |
+
"angle": 0,
|
| 1181 |
+
"content": "GAN. Adversarial learning during training is very effective in image generation tasks, but how to combine it with fusion tasks is a problem in its application. Our original method only has the generation part of the fused image and does not include two discriminators. In this case, our method has surpassed the previous method in most objective evaluation indicators. In order to enhance the characteristics of the fused image: the high resolution of the visible image and the highlighted part of the infrared image, we introduce adversarial learning into the training process. We use the pre-trained VGG-16 network as a discriminator to enhance the characteristics"
|
| 1182 |
+
},
|
| 1183 |
+
{
|
| 1184 |
+
"type": "text",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
0.503,
|
| 1187 |
+
0.354,
|
| 1188 |
+
0.921,
|
| 1189 |
+
0.454
|
| 1190 |
+
],
|
| 1191 |
+
"angle": 0,
|
| 1192 |
+
"content": "of different modalities at the feature level. The objective evaluation results are shown in the Table. II. Compared with the method that does not use adversarial training, the new method with GAN has improved on seven indicators. This also proves the effectiveness of introducing generative confrontation methods."
|
| 1193 |
+
},
|
| 1194 |
+
{
|
| 1195 |
+
"type": "text",
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
0.503,
|
| 1198 |
+
0.456,
|
| 1199 |
+
0.922,
|
| 1200 |
+
0.641
|
| 1201 |
+
],
|
| 1202 |
+
"angle": 0,
|
| 1203 |
+
"content": "Transformer Fusion Module. We propose two transformer fusion methods: spatial transformer and channel transformer. They can work alone or in combination with each other. In Table. III, we separately verify the results of using the two transformer fusion modules alone and in combination. The effect of passing through the channel transformer first and then passing through the space transformer will be better. We believe that it is more beneficial for fusion to first pay attention to the channel relationship between corresponding blocks in the process of modelling."
|
| 1204 |
+
},
|
| 1205 |
+
{
|
| 1206 |
+
"type": "text",
|
| 1207 |
+
"bbox": [
|
| 1208 |
+
0.503,
|
| 1209 |
+
0.642,
|
| 1210 |
+
0.922,
|
| 1211 |
+
0.825
|
| 1212 |
+
],
|
| 1213 |
+
"angle": 0,
|
| 1214 |
+
"content": "Position Embedding. In our transformer fusion method, position embedding is removed because the category information provided by position embedding is not needed in the fusion task. However, whether the direct removal of position embedding has an effect on the training of the transformer has not been verified. Therefore, we train the TGFuse model with and without position embedding respectively. Comparing the indicators of the fusion results in Table IV, we find that removing position embedding has a positive effect on the results."
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "text",
|
| 1218 |
+
"bbox": [
|
| 1219 |
+
0.503,
|
| 1220 |
+
0.827,
|
| 1221 |
+
0.922,
|
| 1222 |
+
0.945
|
| 1223 |
+
],
|
| 1224 |
+
"angle": 0,
|
| 1225 |
+
"content": "Transformer Module Layers. The transformer model we use is a multi-layer encoder model based on ViT. The number of encoder layers also has a great impact on performance. Unlike classification tasks, fusion tasks are less complex and require fewer layers. But too few layers may also lead to failure of fusion relationship learning. Therefore, we set different values for experiments to find"
|
| 1226 |
+
}
|
| 1227 |
+
],
|
| 1228 |
+
[
|
| 1229 |
+
{
|
| 1230 |
+
"type": "page_number",
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
0.911,
|
| 1233 |
+
0.031,
|
| 1234 |
+
0.921,
|
| 1235 |
+
0.041
|
| 1236 |
+
],
|
| 1237 |
+
"angle": 0,
|
| 1238 |
+
"content": "9"
|
| 1239 |
+
},
|
| 1240 |
+
{
|
| 1241 |
+
"type": "text",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
0.074,
|
| 1244 |
+
0.07,
|
| 1245 |
+
0.493,
|
| 1246 |
+
0.253
|
| 1247 |
+
],
|
| 1248 |
+
"angle": 0,
|
| 1249 |
+
"content": "the number of layers most suitable for the fusion task. The comparative results of the experiment are shown in the Table. V. When the number of layers is three, the test result is a meaningless black image. It may be that too few layers cause the transformer fusion module can not learn the available fusion relationship. When the number of layers is five, the test result becomes worse. This may be because the fusion relationship learned by the deep transformer fusion module is redundant. We select the most suitable number of layers (4 layers) based on the experimental results."
|
| 1250 |
+
},
|
| 1251 |
+
{
|
| 1252 |
+
"type": "text",
|
| 1253 |
+
"bbox": [
|
| 1254 |
+
0.076,
|
| 1255 |
+
0.256,
|
| 1256 |
+
0.493,
|
| 1257 |
+
0.508
|
| 1258 |
+
],
|
| 1259 |
+
"angle": 0,
|
| 1260 |
+
"content": "CNN Layers. Firstly, multi-layer CNN is used to extract features from the input image, which can help the transformer module to converge faster. The number of layers of CNN (that is, the number of \"Res-Block\") affects the granularity and depth of the extracted features. We set different values to experiment to find the most suitable number of CNN layers. The more layers, the more times the image is downsampled. When the image block is too small, the model cannot learn an effective fusion relationship. As shown in Table. VI, when the depth is 4 layers, the model learns the best fusion relationship. When the layer is deeper, the resulting image is meaningless black blocks. This means that if the feature block is too small, the fusion module cannot fuse information effectively."
|
| 1261 |
+
},
|
| 1262 |
+
{
|
| 1263 |
+
"type": "text",
|
| 1264 |
+
"bbox": [
|
| 1265 |
+
0.074,
|
| 1266 |
+
0.508,
|
| 1267 |
+
0.495,
|
| 1268 |
+
0.695
|
| 1269 |
+
],
|
| 1270 |
+
"angle": 0,
|
| 1271 |
+
"content": "CNN Channels. As an important dimension of image features, the number of feature channels is also an important factor influencing algorithm performance. In the process of feature extraction, we get four image features with the same dimensions but different scales. The difference in the number of channels means that the distribution of channel dimension information is different. In the ablation experiment, we choose a few typical values as the number of channels. After comparison in Table. VII, we select the number of channels (64 channels) with the best performance."
|
| 1272 |
+
},
|
| 1273 |
+
{
|
| 1274 |
+
"type": "title",
|
| 1275 |
+
"bbox": [
|
| 1276 |
+
0.217,
|
| 1277 |
+
0.718,
|
| 1278 |
+
0.352,
|
| 1279 |
+
0.733
|
| 1280 |
+
],
|
| 1281 |
+
"angle": 0,
|
| 1282 |
+
"content": "V. CONCLUSION"
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "text",
|
| 1286 |
+
"bbox": [
|
| 1287 |
+
0.074,
|
| 1288 |
+
0.743,
|
| 1289 |
+
0.496,
|
| 1290 |
+
0.947
|
| 1291 |
+
],
|
| 1292 |
+
"angle": 0,
|
| 1293 |
+
"content": "In this paper, we proposed an infrared and visible image fusion method based on a lightweight transformer module and generative adversarial learning. The proposed transformer is deeply involved in the fusion task as a fusion relation learning module. Adversarial learning provides generators with different modal characteristics during the training process at the feature level. This is the first attempt of deep combination and application of transformer and adversarial learning in the image fusion task. Our method has also achieved outstanding performance in subjective and objective evaluation, which proves the effectiveness and advancement of our method."
|
| 1294 |
+
},
|
| 1295 |
+
{
|
| 1296 |
+
"type": "title",
|
| 1297 |
+
"bbox": [
|
| 1298 |
+
0.662,
|
| 1299 |
+
0.07,
|
| 1300 |
+
0.768,
|
| 1301 |
+
0.085
|
| 1302 |
+
],
|
| 1303 |
+
"angle": 0,
|
| 1304 |
+
"content": "REFERENCES"
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "ref_text",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
0.515,
|
| 1310 |
+
0.106,
|
| 1311 |
+
0.922,
|
| 1312 |
+
0.16
|
| 1313 |
+
],
|
| 1314 |
+
"angle": 0,
|
| 1315 |
+
"content": "[1] J. Sun, C. Li, X.-J. Wu, V. Palade, and W. Fang, \"An effective method of weld defect detection and classification based on machine vision,\" IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6322-6333, 2019. 1"
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "ref_text",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
0.516,
|
| 1321 |
+
0.161,
|
| 1322 |
+
0.922,
|
| 1323 |
+
0.214
|
| 1324 |
+
],
|
| 1325 |
+
"angle": 0,
|
| 1326 |
+
"content": "[2] X. Luo, Z. Zhang, and X. Wu, \"A novel algorithm of remote sensing image fusion based on shift-invariant shearlet transform and regional selection,\" AEU-International Journal of Electronics and Communications, vol. 70, no. 2, pp. 186-197, 2016. 1"
|
| 1327 |
+
},
|
| 1328 |
+
{
|
| 1329 |
+
"type": "ref_text",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
0.516,
|
| 1332 |
+
0.215,
|
| 1333 |
+
0.922,
|
| 1334 |
+
0.267
|
| 1335 |
+
],
|
| 1336 |
+
"angle": 0,
|
| 1337 |
+
"content": "[3] X. Luo, Z. Zhang, B. Zhang, and X.-J. Wu, \"Image fusion with contextual statistical similarity and nonsubsampled shearlet transform,\" IEEE Sensors Journal, vol. 17, no. 6, pp. 1760-1771, 2017. 1"
|
| 1338 |
+
},
|
| 1339 |
+
{
|
| 1340 |
+
"type": "ref_text",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
0.516,
|
| 1343 |
+
0.268,
|
| 1344 |
+
0.922,
|
| 1345 |
+
0.32
|
| 1346 |
+
],
|
| 1347 |
+
"angle": 0,
|
| 1348 |
+
"content": "[4] H. Li, X.-J. Wu, and J. Kittler, \"Mdlatrr: A novel decomposition method for infrared and visible image fusion,\" IEEE Transactions on Image Processing, vol. 29, pp. 4733-4746, 2020. 1"
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "ref_text",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
0.516,
|
| 1354 |
+
0.322,
|
| 1355 |
+
0.922,
|
| 1356 |
+
0.387
|
| 1357 |
+
],
|
| 1358 |
+
"angle": 0,
|
| 1359 |
+
"content": "[5] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, \"Learning low-rank and sparse discriminative correlation filters for coarse-to-fine visual object tracking,\" IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 10, pp. 3727-3739, 2019. 1"
|
| 1360 |
+
},
|
| 1361 |
+
{
|
| 1362 |
+
"type": "ref_text",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
0.516,
|
| 1365 |
+
0.389,
|
| 1366 |
+
0.922,
|
| 1367 |
+
0.443
|
| 1368 |
+
],
|
| 1369 |
+
"angle": 0,
|
| 1370 |
+
"content": "[6] T. Xu, Z. Feng, X.-J. Wu, and J. Kittler, \"Adaptive channel selection for robust visual object tracking with discriminative correlation filters,\" International Journal of Computer Vision, vol. 129, no. 5, pp. 1359-1375, 2021. 1"
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "ref_text",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
0.516,
|
| 1376 |
+
0.444,
|
| 1377 |
+
0.922,
|
| 1378 |
+
0.483
|
| 1379 |
+
],
|
| 1380 |
+
"angle": 0,
|
| 1381 |
+
"content": "[7] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, \"An accelerated correlation filter tracker,\" Pattern Recognition, vol. 102, p. 107172, 2020. 1"
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "ref_text",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
0.516,
|
| 1387 |
+
0.484,
|
| 1388 |
+
0.922,
|
| 1389 |
+
0.525
|
| 1390 |
+
],
|
| 1391 |
+
"angle": 0,
|
| 1392 |
+
"content": "[8] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion,” in 15th Pacific Conference on Computer Graphics and Applications (PG'07). IEEE, 2007, pp. 382–390. 1"
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "ref_text",
|
| 1396 |
+
"bbox": [
|
| 1397 |
+
0.516,
|
| 1398 |
+
0.526,
|
| 1399 |
+
0.922,
|
| 1400 |
+
0.579
|
| 1401 |
+
],
|
| 1402 |
+
"angle": 0,
|
| 1403 |
+
"content": "[9] Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proceedings of the IEEE, vol. 87, no. 8, pp. 1315–1326, 1999. 1"
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "ref_text",
|
| 1407 |
+
"bbox": [
|
| 1408 |
+
0.508,
|
| 1409 |
+
0.58,
|
| 1410 |
+
0.922,
|
| 1411 |
+
0.631
|
| 1412 |
+
],
|
| 1413 |
+
"angle": 0,
|
| 1414 |
+
"content": "[10] S.-G. Chen and X.-J. Wu, “A new fuzzy twin support vector machine for pattern classification,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 9, pp. 1553–1564, 2018. 1"
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "ref_text",
|
| 1418 |
+
"bbox": [
|
| 1419 |
+
0.508,
|
| 1420 |
+
0.633,
|
| 1421 |
+
0.922,
|
| 1422 |
+
0.672
|
| 1423 |
+
],
|
| 1424 |
+
"angle": 0,
|
| 1425 |
+
"content": "[11] C. Li, W. Yuan, A. Bovik, and X. Wu, \"No-reference blur index using blur comparisons,\" *Electronics letters*, vol. 47, no. 17, pp. 962-963, 2011. 1"
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "ref_text",
|
| 1429 |
+
"bbox": [
|
| 1430 |
+
0.508,
|
| 1431 |
+
0.673,
|
| 1432 |
+
0.922,
|
| 1433 |
+
0.728
|
| 1434 |
+
],
|
| 1435 |
+
"angle": 0,
|
| 1436 |
+
"content": "[12] C. Chen, Y. Li, W. Liu, and J. Huang, \"Image fusion with local spectral consistency and dynamic gradient sparsity,\" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 2760-2765. 1"
|
| 1437 |
+
},
|
| 1438 |
+
{
|
| 1439 |
+
"type": "ref_text",
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
0.508,
|
| 1442 |
+
0.729,
|
| 1443 |
+
0.922,
|
| 1444 |
+
0.768
|
| 1445 |
+
],
|
| 1446 |
+
"angle": 0,
|
| 1447 |
+
"content": "[13] M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Information Fusion, vol. 25, pp. 72–84, 2015. 1"
|
| 1448 |
+
},
|
| 1449 |
+
{
|
| 1450 |
+
"type": "ref_text",
|
| 1451 |
+
"bbox": [
|
| 1452 |
+
0.508,
|
| 1453 |
+
0.769,
|
| 1454 |
+
0.922,
|
| 1455 |
+
0.821
|
| 1456 |
+
],
|
| 1457 |
+
"angle": 0,
|
| 1458 |
+
"content": "[14] Y.-J. Zheng, J.-Y. Yang, J. Yang, X.-J. Wu, and Z. Jin, “Nearest neighbour line nonparametric discriminant analysis for feature extraction,” *Electronics Letters*, vol. 42, no. 12, pp. 679–680, 2006. 1"
|
| 1459 |
+
},
|
| 1460 |
+
{
|
| 1461 |
+
"type": "ref_text",
|
| 1462 |
+
"bbox": [
|
| 1463 |
+
0.508,
|
| 1464 |
+
0.822,
|
| 1465 |
+
0.922,
|
| 1466 |
+
0.863
|
| 1467 |
+
],
|
| 1468 |
+
"angle": 0,
|
| 1469 |
+
"content": "[15] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network,” Information Fusion, vol. 36, pp. 191–207, 2017. 1"
|
| 1470 |
+
},
|
| 1471 |
+
{
|
| 1472 |
+
"type": "ref_text",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
0.508,
|
| 1475 |
+
0.864,
|
| 1476 |
+
0.922,
|
| 1477 |
+
0.903
|
| 1478 |
+
],
|
| 1479 |
+
"angle": 0,
|
| 1480 |
+
"content": "[16] H. Li, X.-J. Wu, and J. Kittler, \"Rfn-nest: An end-to-end residual fusion network for infrared and visible images,\" Information Fusion, 2021. 1, 6, 7"
|
| 1481 |
+
},
|
| 1482 |
+
{
|
| 1483 |
+
"type": "ref_text",
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
0.508,
|
| 1486 |
+
0.904,
|
| 1487 |
+
0.922,
|
| 1488 |
+
0.945
|
| 1489 |
+
],
|
| 1490 |
+
"angle": 0,
|
| 1491 |
+
"content": "[17] H. Li, X.-j. Wu, and T. S. Durrani, \"Infrared and visible image fusion with resnet and zero-phase component analysis,\" Infrared Physics & Technology, vol. 102, p. 103039, 2019. 1, 2"
|
| 1492 |
+
},
|
| 1493 |
+
{
|
| 1494 |
+
"type": "list",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
0.508,
|
| 1497 |
+
0.106,
|
| 1498 |
+
0.922,
|
| 1499 |
+
0.945
|
| 1500 |
+
],
|
| 1501 |
+
"angle": 0,
|
| 1502 |
+
"content": null
|
| 1503 |
+
}
|
| 1504 |
+
],
|
| 1505 |
+
[
|
| 1506 |
+
{
|
| 1507 |
+
"type": "page_number",
|
| 1508 |
+
"bbox": [
|
| 1509 |
+
0.905,
|
| 1510 |
+
0.031,
|
| 1511 |
+
0.921,
|
| 1512 |
+
0.041
|
| 1513 |
+
],
|
| 1514 |
+
"angle": 0,
|
| 1515 |
+
"content": "10"
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "ref_text",
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
0.077,
|
| 1521 |
+
0.071,
|
| 1522 |
+
0.492,
|
| 1523 |
+
0.111
|
| 1524 |
+
],
|
| 1525 |
+
"angle": 0,
|
| 1526 |
+
"content": "[18] H. Li and X.-J. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614–2623, 2018. 1, 2, 6, 7"
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "ref_text",
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
0.077,
|
| 1532 |
+
0.112,
|
| 1533 |
+
0.492,
|
| 1534 |
+
0.178
|
| 1535 |
+
],
|
| 1536 |
+
"angle": 0,
|
| 1537 |
+
"content": "[19] H. Li, X.-J. Wu, and T. Durrani, “Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 12, pp. 9645–9656, 2020. 1, 6, 7"
|
| 1538 |
+
},
|
| 1539 |
+
{
|
| 1540 |
+
"type": "ref_text",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
0.078,
|
| 1543 |
+
0.179,
|
| 1544 |
+
0.492,
|
| 1545 |
+
0.232
|
| 1546 |
+
],
|
| 1547 |
+
"angle": 0,
|
| 1548 |
+
"content": "[20] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “Fusiongan: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol. 48, pp. 11–26, 2019. 2, 3, 6, 7"
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "ref_text",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
0.078,
|
| 1554 |
+
0.233,
|
| 1555 |
+
0.492,
|
| 1556 |
+
0.285
|
| 1557 |
+
],
|
| 1558 |
+
"angle": 0,
|
| 1559 |
+
"content": "[21] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, \"Ifcnn: A general image fusion framework based on convolutional neural network,\" Information Fusion, vol. 54, pp. 99-118, 2020. 2, 6, 7"
|
| 1560 |
+
},
|
| 1561 |
+
{
|
| 1562 |
+
"type": "ref_text",
|
| 1563 |
+
"bbox": [
|
| 1564 |
+
0.078,
|
| 1565 |
+
0.287,
|
| 1566 |
+
0.492,
|
| 1567 |
+
0.327
|
| 1568 |
+
],
|
| 1569 |
+
"angle": 0,
|
| 1570 |
+
"content": "[22] Y. Fu, X.-J. Wu, and T. Durrani, \"Image fusion based on generative adversarial network consistent with perception,\" Information Fusion, 2021. 2, 3"
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "ref_text",
|
| 1574 |
+
"bbox": [
|
| 1575 |
+
0.078,
|
| 1576 |
+
0.327,
|
| 1577 |
+
0.492,
|
| 1578 |
+
0.38
|
| 1579 |
+
],
|
| 1580 |
+
"angle": 0,
|
| 1581 |
+
"content": "[23] H. Li, X.-J. Wu, and J. Kittler, \"Infrared and visible image fusion using a deep learning framework,\" in 2018 24th international conference on pattern recognition (ICPR). IEEE, 2018, pp. 2705-2710. 2"
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "ref_text",
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
0.078,
|
| 1587 |
+
0.381,
|
| 1588 |
+
0.492,
|
| 1589 |
+
0.42
|
| 1590 |
+
],
|
| 1591 |
+
"angle": 0,
|
| 1592 |
+
"content": "[24] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, \"U2fusion: A unified unsupervised image fusion network,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2, 6, 7"
|
| 1593 |
+
},
|
| 1594 |
+
{
|
| 1595 |
+
"type": "ref_text",
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
0.078,
|
| 1598 |
+
0.421,
|
| 1599 |
+
0.492,
|
| 1600 |
+
0.472
|
| 1601 |
+
],
|
| 1602 |
+
"angle": 0,
|
| 1603 |
+
"content": "[25] J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang, \"Infrared and visible image fusion via detail preserving adversarial learning,\" Information Fusion, vol. 54, pp. 85-98, 2020. 2"
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "ref_text",
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
0.078,
|
| 1609 |
+
0.474,
|
| 1610 |
+
0.492,
|
| 1611 |
+
0.527
|
| 1612 |
+
],
|
| 1613 |
+
"angle": 0,
|
| 1614 |
+
"content": "[26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" Advances in neural information processing systems, vol. 27, 2014. 3"
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "ref_text",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
0.078,
|
| 1620 |
+
0.528,
|
| 1621 |
+
0.492,
|
| 1622 |
+
0.58
|
| 1623 |
+
],
|
| 1624 |
+
"angle": 0,
|
| 1625 |
+
"content": "[27] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, \"Least squares generative adversarial networks,\" in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2794-2802. 3"
|
| 1626 |
+
},
|
| 1627 |
+
{
|
| 1628 |
+
"type": "ref_text",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
0.078,
|
| 1631 |
+
0.581,
|
| 1632 |
+
0.492,
|
| 1633 |
+
0.622
|
| 1634 |
+
],
|
| 1635 |
+
"angle": 0,
|
| 1636 |
+
"content": "[28] J. Zhao, M. Mathieu, and Y. LeCun, \"Energy-based generative adversarial networks,\" in 5th International Conference on Learning Representations, ICLR 2017, 2017. 3"
|
| 1637 |
+
},
|
| 1638 |
+
{
|
| 1639 |
+
"type": "ref_text",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
0.078,
|
| 1642 |
+
0.622,
|
| 1643 |
+
0.492,
|
| 1644 |
+
0.662
|
| 1645 |
+
],
|
| 1646 |
+
"angle": 0,
|
| 1647 |
+
"content": "[29] D. Berthelot, T. Schumm, and L. Metz, “Began: Boundary equilibrium generative adversarial networks,” arXiv preprint arXiv:1703.10717, 2017. 3"
|
| 1648 |
+
},
|
| 1649 |
+
{
|
| 1650 |
+
"type": "ref_text",
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
0.078,
|
| 1653 |
+
0.663,
|
| 1654 |
+
0.492,
|
| 1655 |
+
0.728
|
| 1656 |
+
],
|
| 1657 |
+
"angle": 0,
|
| 1658 |
+
"content": "[30] J. Liang, H. Zeng, and L. Zhang, \"High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9392-9400. 3"
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "ref_text",
|
| 1662 |
+
"bbox": [
|
| 1663 |
+
0.078,
|
| 1664 |
+
0.729,
|
| 1665 |
+
0.492,
|
| 1666 |
+
0.783
|
| 1667 |
+
],
|
| 1668 |
+
"angle": 0,
|
| 1669 |
+
"content": "[31] H. Liu, Z. Wan, W. Huang, Y. Song, X. Han, and J. Liao, \"Pd-gan: Probabilistic diverse gan for image inpainting,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9371-9381. 3"
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "ref_text",
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
0.078,
|
| 1675 |
+
0.784,
|
| 1676 |
+
0.492,
|
| 1677 |
+
0.836
|
| 1678 |
+
],
|
| 1679 |
+
"angle": 0,
|
| 1680 |
+
"content": "[32] W. Xia, Y. Yang, J.-H. Xue, and B. Wu, “Tedigan: Text-guided diverse face image generation and manipulation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2256–2265. 3"
|
| 1681 |
+
},
|
| 1682 |
+
{
|
| 1683 |
+
"type": "ref_text",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
0.078,
|
| 1686 |
+
0.837,
|
| 1687 |
+
0.492,
|
| 1688 |
+
0.89
|
| 1689 |
+
],
|
| 1690 |
+
"angle": 0,
|
| 1691 |
+
"content": "[33] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410. 3"
|
| 1692 |
+
},
|
| 1693 |
+
{
|
| 1694 |
+
"type": "ref_text",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
0.078,
|
| 1697 |
+
0.891,
|
| 1698 |
+
0.492,
|
| 1699 |
+
0.944
|
| 1700 |
+
],
|
| 1701 |
+
"angle": 0,
|
| 1702 |
+
"content": "[34] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, \"Unpaired image-to-image translation using cycle-consistent adversarial networks,\" in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223-2232. 3"
|
| 1703 |
+
},
|
| 1704 |
+
{
|
| 1705 |
+
"type": "list",
|
| 1706 |
+
"bbox": [
|
| 1707 |
+
0.077,
|
| 1708 |
+
0.071,
|
| 1709 |
+
0.492,
|
| 1710 |
+
0.944
|
| 1711 |
+
],
|
| 1712 |
+
"angle": 0,
|
| 1713 |
+
"content": null
|
| 1714 |
+
},
|
| 1715 |
+
{
|
| 1716 |
+
"type": "ref_text",
|
| 1717 |
+
"bbox": [
|
| 1718 |
+
0.508,
|
| 1719 |
+
0.071,
|
| 1720 |
+
0.921,
|
| 1721 |
+
0.124
|
| 1722 |
+
],
|
| 1723 |
+
"angle": 0,
|
| 1724 |
+
"content": "[35] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008. 3"
|
| 1725 |
+
},
|
| 1726 |
+
{
|
| 1727 |
+
"type": "ref_text",
|
| 1728 |
+
"bbox": [
|
| 1729 |
+
0.508,
|
| 1730 |
+
0.125,
|
| 1731 |
+
0.921,
|
| 1732 |
+
0.19
|
| 1733 |
+
],
|
| 1734 |
+
"angle": 0,
|
| 1735 |
+
"content": "[36] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in International Conference on Learning Representations, 2020. 3, 6"
|
| 1736 |
+
},
|
| 1737 |
+
{
|
| 1738 |
+
"type": "ref_text",
|
| 1739 |
+
"bbox": [
|
| 1740 |
+
0.508,
|
| 1741 |
+
0.191,
|
| 1742 |
+
0.921,
|
| 1743 |
+
0.256
|
| 1744 |
+
],
|
| 1745 |
+
"angle": 0,
|
| 1746 |
+
"content": "[37] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12299-12310. 3, 6"
|
| 1747 |
+
},
|
| 1748 |
+
{
|
| 1749 |
+
"type": "ref_text",
|
| 1750 |
+
"bbox": [
|
| 1751 |
+
0.508,
|
| 1752 |
+
0.257,
|
| 1753 |
+
0.921,
|
| 1754 |
+
0.297
|
| 1755 |
+
],
|
| 1756 |
+
"angle": 0,
|
| 1757 |
+
"content": "[38] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European conference on computer vision. Springer, 2016, pp. 694–711. 4"
|
| 1758 |
+
},
|
| 1759 |
+
{
|
| 1760 |
+
"type": "ref_text",
|
| 1761 |
+
"bbox": [
|
| 1762 |
+
0.508,
|
| 1763 |
+
0.297,
|
| 1764 |
+
0.921,
|
| 1765 |
+
0.349
|
| 1766 |
+
],
|
| 1767 |
+
"angle": 0,
|
| 1768 |
+
"content": "[39] R. Hou, D. Zhou, R. Nie, D. Liu, L. Xiong, Y. Guo, and C. Yu, \"Vif-net: an unsupervised framework for infrared and visible image fusion,\" IEEE Transactions on Computational Imaging, vol. 6, pp. 640-651, 2020. 5"
|
| 1769 |
+
},
|
| 1770 |
+
{
|
| 1771 |
+
"type": "ref_text",
|
| 1772 |
+
"bbox": [
|
| 1773 |
+
0.508,
|
| 1774 |
+
0.349,
|
| 1775 |
+
0.921,
|
| 1776 |
+
0.402
|
| 1777 |
+
],
|
| 1778 |
+
"angle": 0,
|
| 1779 |
+
"content": "[40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, \"Image quality assessment: from error visibility to structural similarity,\" IEEE transactions on image processing, vol. 13, no. 4, pp. 600-612, 2004. 6"
|
| 1780 |
+
},
|
| 1781 |
+
{
|
| 1782 |
+
"type": "ref_text",
|
| 1783 |
+
"bbox": [
|
| 1784 |
+
0.508,
|
| 1785 |
+
0.402,
|
| 1786 |
+
0.921,
|
| 1787 |
+
0.455
|
| 1788 |
+
],
|
| 1789 |
+
"angle": 0,
|
| 1790 |
+
"content": "[41] S. Hwang, J. Park, N. Kim, Y. Choi, and I. So Kweon, \"Multispectral pedestrian detection: Benchmark dataset and baseline,\" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1037-1045. 6"
|
| 1791 |
+
},
|
| 1792 |
+
{
|
| 1793 |
+
"type": "ref_text",
|
| 1794 |
+
"bbox": [
|
| 1795 |
+
0.508,
|
| 1796 |
+
0.456,
|
| 1797 |
+
0.921,
|
| 1798 |
+
0.481
|
| 1799 |
+
],
|
| 1800 |
+
"angle": 0,
|
| 1801 |
+
"content": "[42] A. Toet, \"Image fusion by a ratio of low-pass pyramid,\" Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989. 6, 7"
|
| 1802 |
+
},
|
| 1803 |
+
{
|
| 1804 |
+
"type": "ref_text",
|
| 1805 |
+
"bbox": [
|
| 1806 |
+
0.508,
|
| 1807 |
+
0.482,
|
| 1808 |
+
0.921,
|
| 1809 |
+
0.522
|
| 1810 |
+
],
|
| 1811 |
+
"angle": 0,
|
| 1812 |
+
"content": "[43] L. J. Chipman, T. M. Orr, and L. N. Graham, \"Wavelets and image fusion,\" in Proceedings., International Conference on Image Processing, vol. 3. IEEE, 1995, pp. 248-251. 6, 7"
|
| 1813 |
+
},
|
| 1814 |
+
{
|
| 1815 |
+
"type": "ref_text",
|
| 1816 |
+
"bbox": [
|
| 1817 |
+
0.508,
|
| 1818 |
+
0.522,
|
| 1819 |
+
0.921,
|
| 1820 |
+
0.573
|
| 1821 |
+
],
|
| 1822 |
+
"angle": 0,
|
| 1823 |
+
"content": "[44] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, \"Pixel-and region-based image fusion with complex wavelets,\" Information fusion, vol. 8, no. 2, pp. 119-130, 2007. 6, 7"
|
| 1824 |
+
},
|
| 1825 |
+
{
|
| 1826 |
+
"type": "ref_text",
|
| 1827 |
+
"bbox": [
|
| 1828 |
+
0.508,
|
| 1829 |
+
0.574,
|
| 1830 |
+
0.921,
|
| 1831 |
+
0.614
|
| 1832 |
+
],
|
| 1833 |
+
"angle": 0,
|
| 1834 |
+
"content": "[45] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Information fusion, vol. 8, no. 2, pp. 143–156, 2007. 6, 7"
|
| 1835 |
+
},
|
| 1836 |
+
{
|
| 1837 |
+
"type": "ref_text",
|
| 1838 |
+
"bbox": [
|
| 1839 |
+
0.508,
|
| 1840 |
+
0.614,
|
| 1841 |
+
0.921,
|
| 1842 |
+
0.653
|
| 1843 |
+
],
|
| 1844 |
+
"angle": 0,
|
| 1845 |
+
"content": "[46] V. Naidu, \"Image fusion technique using multi-resolution singular value decomposition,\" Defence Science Journal, vol. 61, no. 5, p. 479, 2011. 6, 7"
|
| 1846 |
+
},
|
| 1847 |
+
{
|
| 1848 |
+
"type": "ref_text",
|
| 1849 |
+
"bbox": [
|
| 1850 |
+
0.508,
|
| 1851 |
+
0.654,
|
| 1852 |
+
0.921,
|
| 1853 |
+
0.693
|
| 1854 |
+
],
|
| 1855 |
+
"angle": 0,
|
| 1856 |
+
"content": "[47] J. Ma, C. Chen, C. Li, and J. Huang, \"Infrared and visible image fusion via gradient transfer and total variation minimization,\" Information Fusion, vol. 31, pp. 100-109, 2016. 6, 7"
|
| 1857 |
+
},
|
| 1858 |
+
{
|
| 1859 |
+
"type": "ref_text",
|
| 1860 |
+
"bbox": [
|
| 1861 |
+
0.508,
|
| 1862 |
+
0.693,
|
| 1863 |
+
0.921,
|
| 1864 |
+
0.733
|
| 1865 |
+
],
|
| 1866 |
+
"angle": 0,
|
| 1867 |
+
"content": "[48] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, \"Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs,\" in ICCV, vol. 1, no. 2, 2017, p. 3. 6, 7"
|
| 1868 |
+
},
|
| 1869 |
+
{
|
| 1870 |
+
"type": "ref_text",
|
| 1871 |
+
"bbox": [
|
| 1872 |
+
0.508,
|
| 1873 |
+
0.733,
|
| 1874 |
+
0.921,
|
| 1875 |
+
0.798
|
| 1876 |
+
],
|
| 1877 |
+
"angle": 0,
|
| 1878 |
+
"content": "[49] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma, “Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 12797-12804. 6, 7"
|
| 1879 |
+
},
|
| 1880 |
+
{
|
| 1881 |
+
"type": "ref_text",
|
| 1882 |
+
"bbox": [
|
| 1883 |
+
0.508,
|
| 1884 |
+
0.799,
|
| 1885 |
+
0.921,
|
| 1886 |
+
0.849
|
| 1887 |
+
],
|
| 1888 |
+
"angle": 0,
|
| 1889 |
+
"content": "[50] H. Xu, J. Ma, and X.-P. Zhang, “Mef-gan: Multi-exposure image fusion via generative adversarial networks,” IEEE Transactions on Image Processing, vol. 29, pp. 7203-7216, 2020. 6, 7"
|
| 1890 |
+
},
|
| 1891 |
+
{
|
| 1892 |
+
"type": "ref_text",
|
| 1893 |
+
"bbox": [
|
| 1894 |
+
0.508,
|
| 1895 |
+
0.851,
|
| 1896 |
+
0.921,
|
| 1897 |
+
0.89
|
| 1898 |
+
],
|
| 1899 |
+
"angle": 0,
|
| 1900 |
+
"content": "[51] A. M. Eskicioglu and P. S. Fisher, \"Image quality measures and their performance,\" IEEE Transactions on communications, vol. 43, no. 12, pp. 2959-2965, 1995. 8"
|
| 1901 |
+
},
|
| 1902 |
+
{
|
| 1903 |
+
"type": "ref_text",
|
| 1904 |
+
"bbox": [
|
| 1905 |
+
0.508,
|
| 1906 |
+
0.891,
|
| 1907 |
+
0.921,
|
| 1908 |
+
0.944
|
| 1909 |
+
],
|
| 1910 |
+
"angle": 0,
|
| 1911 |
+
"content": "[52] J. W. Roberts, J. A. Van Aardt, and F. B. Ahmed, \"Assessment of image fusion procedures using entropy, image quality, and multispectral classification,\" Journal of Applied Remote Sensing, vol. 2, no. 1, p. 023522, 2008. 8"
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "list",
|
| 1915 |
+
"bbox": [
|
| 1916 |
+
0.508,
|
| 1917 |
+
0.071,
|
| 1918 |
+
0.921,
|
| 1919 |
+
0.944
|
| 1920 |
+
],
|
| 1921 |
+
"angle": 0,
|
| 1922 |
+
"content": null
|
| 1923 |
+
}
|
| 1924 |
+
],
|
| 1925 |
+
[
|
| 1926 |
+
{
|
| 1927 |
+
"type": "page_number",
|
| 1928 |
+
"bbox": [
|
| 1929 |
+
0.905,
|
| 1930 |
+
0.031,
|
| 1931 |
+
0.92,
|
| 1932 |
+
0.041
|
| 1933 |
+
],
|
| 1934 |
+
"angle": 0,
|
| 1935 |
+
"content": "11"
|
| 1936 |
+
},
|
| 1937 |
+
{
|
| 1938 |
+
"type": "ref_text",
|
| 1939 |
+
"bbox": [
|
| 1940 |
+
0.077,
|
| 1941 |
+
0.071,
|
| 1942 |
+
0.492,
|
| 1943 |
+
0.11
|
| 1944 |
+
],
|
| 1945 |
+
"angle": 0,
|
| 1946 |
+
"content": "[53] C. Xydeas, , and V. Petrovic, “Objective image fusion performance measure,” *Electronics letters*, vol. 36, no. 4, pp. 308–309, 2000. 8"
|
| 1947 |
+
},
|
| 1948 |
+
{
|
| 1949 |
+
"type": "ref_text",
|
| 1950 |
+
"bbox": [
|
| 1951 |
+
0.077,
|
| 1952 |
+
0.111,
|
| 1953 |
+
0.492,
|
| 1954 |
+
0.164
|
| 1955 |
+
],
|
| 1956 |
+
"angle": 0,
|
| 1957 |
+
"content": "[54] M. Haghighat and M. A. Razian, \"Fast-fmi: Non-reference image fusion metric,\" in 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2014, pp. 1-3. 8"
|
| 1958 |
+
},
|
| 1959 |
+
{
|
| 1960 |
+
"type": "ref_text",
|
| 1961 |
+
"bbox": [
|
| 1962 |
+
0.078,
|
| 1963 |
+
0.164,
|
| 1964 |
+
0.492,
|
| 1965 |
+
0.204
|
| 1966 |
+
],
|
| 1967 |
+
"angle": 0,
|
| 1968 |
+
"content": "[55] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, 2015. 8"
|
| 1969 |
+
},
|
| 1970 |
+
{
|
| 1971 |
+
"type": "ref_text",
|
| 1972 |
+
"bbox": [
|
| 1973 |
+
0.078,
|
| 1974 |
+
0.204,
|
| 1975 |
+
0.492,
|
| 1976 |
+
0.231
|
| 1977 |
+
],
|
| 1978 |
+
"angle": 0,
|
| 1979 |
+
"content": "[56] Y.-J. Rao, \"In-fibre bragg grating sensors,\" Measurement science and technology, vol. 8, no. 4, p. 355, 1997. 8"
|
| 1980 |
+
},
|
| 1981 |
+
{
|
| 1982 |
+
"type": "ref_text",
|
| 1983 |
+
"bbox": [
|
| 1984 |
+
0.078,
|
| 1985 |
+
0.231,
|
| 1986 |
+
0.492,
|
| 1987 |
+
0.27
|
| 1988 |
+
],
|
| 1989 |
+
"angle": 0,
|
| 1990 |
+
"content": "[57] H. R. Sheikh and A. C. Bovik, \"Image information and visual quality,\" IEEE Transactions on image processing, vol. 15, no. 2, pp. 430-444, 2006. 8"
|
| 1991 |
+
},
|
| 1992 |
+
{
|
| 1993 |
+
"type": "ref_text",
|
| 1994 |
+
"bbox": [
|
| 1995 |
+
0.078,
|
| 1996 |
+
0.27,
|
| 1997 |
+
0.492,
|
| 1998 |
+
0.309
|
| 1999 |
+
],
|
| 2000 |
+
"angle": 0,
|
| 2001 |
+
"content": "[58] G. Qu, D. Zhang, and P. Yan, \"Information measure for performance of image fusion,\" *Electronics letters*, vol. 38, no. 7, pp. 313-315, 2002. 8"
|
| 2002 |
+
},
|
| 2003 |
+
{
|
| 2004 |
+
"type": "list",
|
| 2005 |
+
"bbox": [
|
| 2006 |
+
0.077,
|
| 2007 |
+
0.071,
|
| 2008 |
+
0.492,
|
| 2009 |
+
0.309
|
| 2010 |
+
],
|
| 2011 |
+
"angle": 0,
|
| 2012 |
+
"content": null
|
| 2013 |
+
}
|
| 2014 |
+
]
|
| 2015 |
+
]
|
2201.10xxx/2201.10147/9cb7a069-4253-49e1-8158-7dbee020a1a3_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:87584e33f1f043aef929f2c420ae97abe3e22a61fcb141637897ff48fd415ee5
|
| 3 |
+
size 2125465
|
2201.10xxx/2201.10147/full.md
ADDED
|
@@ -0,0 +1,268 @@
|
| 1 |
+
# TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network
|
| 2 |
+
|
| 3 |
+
Dongyu Rao, Xiao-Jun Wu, Tianyang Xu
|
| 4 |
+
|
| 5 |
+
Abstract—The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, impeding a balanced image-level perception in complex fusion scenarios. In this paper, therefore, we propose an infrared and visible image fusion algorithm based on a lightweight transformer module and adversarial learning. Inspired by the global interaction power of the transformer, we use it to learn effective global fusion relations. In particular, shallow features extracted by a CNN interact within the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. In addition, adversarial learning is introduced in the training process to improve the output by imposing competitive consistency from the inputs, reflecting the specific characteristics of infrared and visible images. The experimental results demonstrate the effectiveness of the proposed modules, with clear improvements over the state-of-the-art, establishing a novel paradigm of transformer and adversarial learning for the fusion task.
|
| 6 |
+
|
| 7 |
+
# I. INTRODUCTION
|
| 8 |
+
|
| 9 |
+
With the development of imaging equipment and analysis approaches, multi-modal visual data is emerging rapidly with many practical applications. In general, image fusion has played an important role in helping human vision to perceive information association between multimodal data. Among them, the fusion of infrared and visible images has important applications in military, security, detection [1] and visual tracking [2], [3], [4], [5], [6], [7] etc., becoming an important part of image fusion tasks.
|
| 10 |
+
|
| 11 |
+
D. Rao and X.-J. Wu (Corresponding author) are with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China. (e-mail: raodongyu@163.com, wu_xiaojun@jiangnan.edu.cn).
|
| 12 |
+
T. Xu is with the School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, P.R. China and the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK. (e-mail: tianyang_xu@163.com)
|
| 13 |
+
|
| 14 |
+

|
| 15 |
+
Fig. 1. Infrared image (a), visible image (b) and fused image generated by the proposed method (c).
|
| 16 |
+
|
| 17 |
+
In order to design a natural and efficient image fusion algorithm, researchers have developed many fusion algorithms on the basis of traditional image processing. Firstly, the fusion algorithms based on multiscale transformation are proposed [8], [9], [10], [11], which applied traditional image processing methods to image fusion. Subsequently, fusion algorithms based on sparse / low-rank representation were applied [12], [13], [14]. These algorithms use specific image processing methods to obtain image representations, and obtain the output images by fusing the image representations. However, the image features obtained by these methods are relatively less salient. Most of the fusion methods also require complex designs, so that the fusion results usually introduce a large amount of noise. With the development of deep learning, image fusion methods based on convolutional neural networks have become the mainstream of the topic [15], [16]. However, since most image fusion tasks are unsupervised, the supervised end-to-end training framework is not suitable for training fusion tasks. Drawing on this, some fusion algorithms [17] used large-scale pre-trained networks to extract image features. However, the pre-trained network is mostly used for classification tasks, and the extracted features cannot meet the requirements of the fusion task. Subsequently, Li et al. [18], [19] proposed a fusion algorithm based on an encoder-decoder network, using
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Fig. 2. The framework of ViT (Vision Transformer). "B C H W" respectively represent the batch size, channels, height and width. "p" means patch size. "h w" is the number of patches in height and width. "E" is the reduced dimension.
|
| 21 |
+
|
| 22 |
+
ordinary data sets for encoder-decoder training. This method frees the fusion task from the dependence on multi-modal data sets, but it also prevents the network from learning specific tasks effectively. In order to obtain better performance for specific fusion tasks, end-to-end image fusion methods [20], [21], [22] are proposed to learn more targeted network parameters through a specific network structure and loss function. Such a method is dedicated to training fusion tasks and can usually achieve better fusion results. However, this puts forward higher requirements for the representational ability of the network and the effectiveness of the fusion method. At present, end-to-end fusion algorithms mainly use a convolutional neural network for feature extraction to achieve the fusion effect. However, due to the characteristics of CNN, this process usually ignores the global dependency in fusion.
|
| 23 |
+
|
| 24 |
+
In order to solve the problem of global dependence and effective integration, we propose an infrared and visible image fusion algorithm based on the lightweight transformer and adversarial learning. Our method uses a general visual transformer for image spatial relationship learning. In particular, we propose a novel cross-channel transformer model to learn the channel relationship. The composite transformer fusion module has learned the global fusion relationship with space and channels. In addition, adversarial learning is introduced in the training process. We use two discriminators (infrared and fused image, visible and fused image) for adversarial training respectively. This allows the fused image to obtain higher-quality infrared and visible image characteristics.
|
| 25 |
+
|
| 26 |
+
The proposed method mainly has the following three innovations:
|
| 27 |
+
|
| 28 |
+
- A channel-token transformer is proposed to explore the channel relationships, which is effectively applied in the fusion method.
|
| 29 |
+
- A transformer module is designed to achieve global fusion relationship learning in complex scenarios.
|
| 30 |
+
- Adversarial learning is introduced into the training process. The discriminators of the two modalities introduce the characteristics of different modalities to the fused image to improve the fusion effect.
|
| 31 |
+
|
| 32 |
+
|
| 33 |
+
|
| 34 |
+
# II. RELATED WORK
|
| 35 |
+
|
| 36 |
+
Although traditional methods are well investigated [?], [?], deep learning based methods are mainly discussed in this paper.
|
| 37 |
+
|
| 38 |
+
# A. Image Fusion Method Based on Deep Learning
|
| 39 |
+
|
| 40 |
+
The fusion algorithm based on deep learning has shown excellent performance in infrared and visible image fusion, multi-focus image fusion and medical image fusion, etc. Li et al. [23], [17] used a pretrained neural network to extract image features and used them for image fusion weight calculation. This is a preliminary combination of neural network and image fusion tasks. In order to obtain the depth features suitable for reconstructing images, Li et al. [18] first proposed an algorithm based on an auto-encoder network. In the absence of specific data, the algorithm can also achieve a good fusion effect. With the advancement of visual data collection equipment, some large-scale multi-mode data sets have appeared, so end-to-end fusion algorithms [24], [25] have received more attention and applications. This end-to-end fusion algorithm based on convolutional neural networks achieves better performance on a single task. But it still has some limitations, such as the spatial limitation of the fusion method based on a convolutional neural network. In this paper, the proposed method is an end-to-end image fusion algorithm. But compared to the CNN-based fusion network, we expand the network structure of the end-to-end algorithm and introduce the transformer that focuses on building global relationships into the fusion module. Our algorithm opens up new ideas in the design of fusion methods.
|
| 41 |
+
|
| 42 |
+
# B. Generative Adversarial Network
|
| 43 |
+
|
| 44 |
+
A generative adversarial network (GAN) is an algorithm that obtains high-quality generated images by
|
| 45 |
+
|
| 46 |
+

|
| 47 |
+
Fig. 3. The framework of our method.
|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
Fig. 4. The framework of discriminator.
|
| 51 |
+
|
| 52 |
+
training two networks against each other. Goodfellow et al. [26] first proposed the idea of a generative adversarial network. The generator generates an image, and the discriminator determines whether the input image is a real image (True) or a generated image (False). Subsequently, many improvements based on the original GAN focused on speeding up the training of the network and improving the quality of the generated images [27], [28], [29]. These improvements also help GAN gain a wider range of applications [30], [31], [32]. Methods based on GAN are also widely used in image generation tasks [33], [34]. There are already some image fusion methods based on GAN [20], [22]. Adversarial learning is an important part of our approach. It improves the infrared and visible image characteristics in the fusion result by obtaining competitive consistency from the inputs. However, we abandon the discriminator of the classification mode and use the difference in the feature level to promote the fused image to have more infrared
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Fig. 5. The framework of transformer fusion module.
|
| 56 |
+
|
| 57 |
+
or visible image information.
|
| 58 |
+
|
| 59 |
+
# C. Visual Transformer
|
| 60 |
+
|
| 61 |
+
The transformer is a model based on a pure attention mechanism [35]. Its success in natural language processing inspires its application in computer vision. Due to the long-range dependence of the transformer in processing input, the visual transformer also has the ability to pay attention to the global relationship in image tasks. As a pioneering work of visual transformer, Dosovitskiy et al. [36] proposed ViT (Vision Transformer) for image classification tasks (Figure. 2). This is a simple and effective application of transformer in visual tasks. Subsequently, Chen et al. [37] proposed a multi-task model based on the transformer, which achieved good results on multiple low-level visual tasks. The global spatial dependence of transformers has gained many applications in the field of computer vision. Inspired by the characteristics of the transformer, we pay attention to the global correlation of image space and channels during the fusion process.
|
| 62 |
+
|
| 63 |
+
We propose a new transformer model that focuses on channel relationships and apply it in the field of image fusion. Compared with the general transformer, our transformer fusion module is a lightweight model. This is a new exploration of transformer applications.
|
| 64 |
+
|
| 65 |
+
# III. PROPOSED METHOD
|
| 66 |
+
|
| 67 |
+
# A. The Framework of Network
|
| 68 |
+
|
| 69 |
+
As shown in Figure. 3, our model is mainly composed of two parts: one transformer-based generator and two discriminators. Typically, the fused image is obtained by the generator. Then, the output is refined during the adversarial learning between the generator and the discriminator.
|
| 70 |
+
|
| 71 |
+
Generator. The generator is used for the generation of the fused image. After the source images are merged in the channel dimension, the initial feature extraction is performed through the convolutional neural network. The mixed CNN features are input to the transformer fusion module to learn global fusion relations. Taking into account the consumption of computing resources and representation of features, three downsampling operators are added before the transformer fusion module. The fusion relationship learned in this process is up-sampled to different scales and multiplied by the corresponding features to achieve the preliminary result. The fusion features of different scales are up-sampled to the original image size and then superimposed to obtain the final fusion result.
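To make the data flow above concrete, the following is a minimal, hypothetical PyTorch sketch of the generator pipeline. The layer widths, the number of downsampling blocks and the placeholder fusion module are illustrative assumptions rather than the authors' exact architecture; the real model replaces `PlaceholderFusionModule` with the transformer fusion module described below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaceholderFusionModule(nn.Module):
    """Stand-in for the transformer fusion module: maps features to a 1-channel fusion relation map."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.proj(x))

class GeneratorSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Shallow CNN applied to the channel-concatenated infrared/visible pair.
        self.stem = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Three downsampling operators before the fusion module, as described above.
        self.down = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(3)]
        )
        self.fusion = PlaceholderFusionModule(channels)
        self.head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, ir, vis):
        feats = [self.stem(torch.cat([ir, vis], dim=1))]
        for down in self.down:
            feats.append(F.relu(down(feats[-1])))
        relation = self.fusion(feats[-1])  # fusion relation learned at the coarsest scale
        out = 0
        for f in feats:
            # Up-sample the relation map to each scale, weight the features, then restore full size.
            r = F.interpolate(relation, size=f.shape[-2:], mode="bilinear", align_corners=False)
            out = out + F.interpolate(f * r, size=ir.shape[-2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(out))

ir, vis = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
print(GeneratorSketch()(ir, vis).shape)  # torch.Size([1, 1, 256, 256])
```

The point of the sketch is the order of operations: shallow CNN features, a relation map learned at the coarsest scale, per-scale reweighting, and up-sampling followed by summation.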
|
| 72 |
+
|
| 73 |
+
Discriminator. The discriminator is used to refine the perception quality of the fused image. We set up two discriminators: fused image and infrared image ("Dis-IR"), fused image and visible image ("Dis-VIS"). These two discriminators provide high-resolution details of the visible image and a significant part of the infrared image for the fused image. The pre-trained VGG-16 network is used as the discriminator, which can be further finetuned during training. The network is shown in Figure.4. Taking the visible image discriminator ("Dis-VIS") as an example, the fused image and the visible image are separately input into the VGG-16 network to extract features. We calculate the L1 loss between the two features so that the fused image approximates the visible image from the context perspective. According to the number of downsampling, VGG-16 is divided into 4 layers. Different layers have different feature depths and different feature shapes. Inspired by Johnson et al. [38], we use the features of different depths extracted by VGG-16 to distinguish between infrared and visible features. The infrared discriminator uses the features
|
| 74 |
+
|
| 75 |
+
of the fourth layer of VGG-16 to retain more saliency information, while the visible discriminator uses the features of the first layer of VGG-16 to retain more detailed information.
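As a rough illustration of this feature-level discrimination, here is a minimal sketch built on torchvision's VGG-16. The slice indices used for the four blocks, the randomly initialised weights and the channel repetition of grayscale inputs are assumptions made for the sketch; in practice the pretrained weights would be loaded (and optionally finetuned) as described above.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGFeatureCritic(nn.Module):
    """Feature-level 'discriminator': compares fused and reference images in VGG-16 feature space."""
    def __init__(self, depth):
        super().__init__()
        # Assumed split of vgg16().features into 4 blocks at the pooling boundaries.
        cut = {1: 5, 2: 10, 3: 17, 4: 24}[depth]
        self.extractor = vgg16(weights=None).features[:cut]  # load pretrained weights in practice

    def forward(self, fused, reference):
        # Grayscale images are repeated to 3 channels to match the VGG input.
        f = self.extractor(fused.repeat(1, 3, 1, 1))
        r = self.extractor(reference.repeat(1, 3, 1, 1))
        return torch.mean(torch.abs(f - r))  # L1 distance between the two feature maps

dis_vis = VGGFeatureCritic(depth=1)   # shallow features: fine visible detail
dis_ir = VGGFeatureCritic(depth=4)    # deep features: infrared saliency
fused = torch.rand(2, 1, 256, 256)
vis, ir = torch.rand_like(fused), torch.rand_like(fused)
adv_loss = dis_vis(fused, vis) + dis_ir(fused, ir)
print(adv_loss.item())
```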
|
| 76 |
+
|
| 77 |
+
In the training stage, source images are input to the generator to obtain the preliminary fused image. The preliminary fused image then passes through the two discriminators, with the quality of the fused image being fed back through the loss function. The above two steps are performed alternately to realize the adversarial training between the generator and the discriminators. Finally, we obtain a generator with the desired generation quality, achieving the purpose of image fusion.
|
| 78 |
+
|
| 79 |
+
# B. The Transformer Fusion Module
|
| 80 |
+
|
| 81 |
+
As shown in Figure. 5, the transformer fusion module consists of two parts: general transformer ("spatial transformer") and cross-channel transformer ("channel transformer"). This helps us to obtain a more comprehensive global integration relationship.
|
| 82 |
+
|
| 83 |
+
Spatial Transformer As shown in Figure. 2, the image is divided into blocks and stretched into vectors, where "p" means patch size, "w" and "h" respectively represent the number of image blocks in the width and height dimensions of the image, "E" is the reduced dimension. Then, the vector group enters the transformer model for relation learning. The number of image blocks is used to learn the global relationship of the image. Therefore, we consider that the general transformer mainly learns the global spatial relationship between image patches. Inspired by the transformer-based low-level image task, we build a spatial transformer for the fusion task. As shown in Figure. 6, the spatial transformer is basically the same as the first half of ViT (Figure. 2). The difference is that we cancelled the addition of position embedding, and subsequent experiments also proved the rationality and effectiveness of this operation. In addition, when restoring from the vector group to the image, we compress the channel dimension, so that we get a relationship map with a channel number of 1. This corresponds to the spatial relationship of the image we obtained, avoiding the interference of other dimensional relationships.
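A minimal sketch of such a spatial transformer is given below, built on PyTorch's standard `nn.TransformerEncoder`. The patch size, embedding width, depth and head count are illustrative assumptions; following the text, no position embedding is added and the output is compressed to a single-channel spatial relation map.

```python
import torch
import torch.nn as nn

class SpatialTransformerSketch(nn.Module):
    """Tokens are image patches; the output is a 1-channel spatial relation map."""
    def __init__(self, channels=64, patch=4, embed=256, layers=4, heads=8):
        super().__init__()
        self.patch = patch
        self.to_tokens = nn.Linear(channels * patch * patch, embed)   # flatten each patch into a token
        enc_layer = nn.TransformerEncoderLayer(d_model=embed, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)  # no position embedding
        self.to_map = nn.Linear(embed, patch * patch)                 # one value per pixel of a patch

    def forward(self, x):
        b, c, h, w = x.shape
        hp, wp = h // self.patch, w // self.patch
        # (B, C, H, W) -> (B, hp*wp, C*patch*patch): one token per patch.
        patches = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, hp * wp, -1)
        tokens = self.encoder(self.to_tokens(patches))
        # Back to an image-shaped map with a single channel.
        out = self.to_map(tokens).reshape(b, hp, wp, self.patch, self.patch)
        out = out.permute(0, 1, 3, 2, 4).reshape(b, 1, h, w)
        return torch.sigmoid(out)

x = torch.rand(1, 64, 32, 32)
print(SpatialTransformerSketch()(x).shape)  # torch.Size([1, 1, 32, 32])
```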
|
| 84 |
+
|
| 85 |
+
Channel Transformer For image fusion tasks, we believe that the cross-channel relationship of images also plays an important role in fusion. Therefore, we propose a new cross-channel transformer model, which learns the correlation of information across the channel dimension. In the new transformer module, the number of tokens input to the encoder has changed from the number of image blocks to the number of image channels. Since
|
| 86 |
+
|
| 87 |
+

|
| 88 |
+
Fig. 6. The framework of spatial transformer.
|
| 89 |
+
|
| 90 |
+

|
| 91 |
+
Fig. 7. The framework of channel transformer.
|
| 92 |
+
|
| 93 |
+

|
| 94 |
+
Fig. 8. Infrared and visible image fusion experiment on "human" images.
|
| 95 |
+
|
| 96 |
+
position embedding is not required to provide category information in the image generation task, we have removed position embedding, which also makes the size of the input image more flexible. The channel transformer is also a structure similar to the spatial transformer. The main difference is that we change the object modelled by the transformer from the spatial relationship of the image block to the channel relationship. In this specific implementation, we use the number of channels as the token number, which is a simple but effective operation. Through the two kinds of transformer, we can obtain the relation mapping for the image fusion task.
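Under the same assumptions as the spatial sketch above, a minimal illustration of the channel-token idea follows: feature channels act as the tokens, no position embedding is used, and the output form (one relation weight per channel) is a simplification chosen for the sketch.

```python
import torch
import torch.nn as nn

class ChannelTransformerSketch(nn.Module):
    """Tokens are feature channels, so attention models cross-channel relationships."""
    def __init__(self, spatial=32, embed=128, layers=4, heads=4):
        super().__init__()
        # Each channel (flattened H*W) is projected to a token of width `embed`.
        self.to_tokens = nn.Linear(spatial * spatial, embed)
        enc_layer = nn.TransformerEncoderLayer(d_model=embed, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)  # no position embedding
        self.to_weights = nn.Linear(embed, 1)        # one relation weight per channel

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.to_tokens(x.reshape(b, c, h * w))   # (B, C, embed): C tokens
        tokens = self.encoder(tokens)
        weights = torch.sigmoid(self.to_weights(tokens))  # (B, C, 1)
        return weights.reshape(b, c, 1, 1)                # broadcastable channel relation map

x = torch.rand(1, 64, 32, 32)
print(ChannelTransformerSketch()(x).shape)  # torch.Size([1, 64, 1, 1])
```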
|
| 97 |
+
|
| 98 |
+
Composite Transformer The transformer of the two modes is combined into a transformer fusion module, which enables our fusion model to simultaneously learn
|
| 99 |
+
|
| 100 |
+
spatial and channel relationships with global correlation. Through experiments, we find that using a channel transformer first and then using a spatial transformer can achieve better results. This shows that the combination of these two fusion modules is used to learn the coefficients that are more suitable for the fusion of infrared and visible images.
|
| 101 |
+
|
| 102 |
+
# C. Loss Function
|
| 103 |
+
|
| 104 |
+
Previous image fusion algorithms based on deep learning usually use multiple loss functions to optimize the fused image from different perspectives during training. But this causes mutual conflict among loss functions. Inspired by [39], we make improvements on the basis of the SSIM loss. A single loss function achieves a good
|
| 105 |
+
|
| 106 |
+
fusion effect and avoids the problem of entanglement of multiple loss functions.
|
| 107 |
+
|
| 108 |
+
SSIM [40] is a measure of structural similarity between images. As shown in Eq. (1), X, Y represent two images respectively. $\mu$ and $\sigma$ stand for mean and standard deviation respectively. $\sigma_{XY}$ means the covariance between X and Y. $C_1$ and $C_2$ are stability coefficients.
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
\mathrm{SSIM}(X, Y) = \frac{\left(2 \mu_X \mu_Y + C_1\right)\left(2 \sigma_{XY} + C_2\right)}{\left(\mu_X^2 + \mu_Y^2 + C_1\right)\left(\sigma_X^2 + \sigma_Y^2 + C_2\right)} \tag{1}
|
| 112 |
+
$$
|
| 113 |
+
|
| 114 |
+
Variance reflects the contrast of the image, and an image with high contrast is more helpful for the human visual system to capture information. As shown in Eq. (2), $M$ and $N$ are the image size in the horizontal and vertical directions respectively. $\mu$ represents the mean of the image. We use variance as the standard and choose one as the reference image from infrared and visible images. The structural similarity between the fused image and the reference image is calculated, so that the fused image gradually approaches the reference image during the optimization process. This operation allows the fusion result to better obtain the important information from the infrared or visible image.
|
| 115 |
+
|
| 116 |
+
$$
|
| 117 |
+
\sigma^2(X) = \frac{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ X(i, j) - \mu \right]^2}{MN} \tag{2}
|
| 118 |
+
$$
|
| 119 |
+
|
| 120 |
+
In Eq. (3), $Var\_SSIM$ calculates the structural similarity of the divided image. $\sigma^2$ is the variance of the image. $I_X$ and $I_Y$ represent two source images respectively. $I_F$ means a fused image. $W$ is the number of image blocks after division, and the size of each image block is set to $11 \times 11$ . Image segmentation is achieved through sliding windows. Through the sliding window, the fused image can well coordinate the consistency between different image blocks. The calculation of the loss function is shown in Eq. (4).
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\mathrm{Var\_SSIM}(I_X, I_Y, I_F \mid W) = \begin{cases} \mathrm{SSIM}(I_X, I_F), & \text{if } \sigma^2(X) > \sigma^2(Y) \\ \mathrm{SSIM}(I_Y, I_F), & \text{if } \sigma^2(Y) \geq \sigma^2(X) \end{cases} \tag{3}
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
L_{\text{var-SSIM}} = 1 - \frac{1}{N} \sum_{W=1}^{N} \mathrm{Var\_SSIM}(I_X, I_Y, I_F \mid W) \tag{4}
|
| 128 |
+
$$
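A minimal numerical sketch of Eqs. (2)-(4) is given below. It reads Eq. (3) as choosing the reference per $11 \times 11$ window by the larger block variance, uses a non-overlapping sliding window, and evaluates Eq. (1) with plain block statistics; these choices and the constants $C_1$, $C_2$ are assumptions for illustration, not necessarily the exact training configuration.

```python
import torch
import torch.nn.functional as F

def _block_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Eq. (1) evaluated on stacks of flattened image blocks of shape (N, window*window)."""
    mu_x, mu_y = x.mean(dim=1), y.mean(dim=1)
    var_x, var_y = x.var(dim=1, unbiased=False), y.var(dim=1, unbiased=False)
    cov = ((x - mu_x[:, None]) * (y - mu_y[:, None])).mean(dim=1)
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def var_ssim_loss(ir, vis, fused, window=11, stride=11):
    """Sliding-window Var_SSIM loss (Eqs. (2)-(4)); inputs are (B, 1, H, W) tensors in [0, 1]."""
    blocks = [
        F.unfold(img, kernel_size=window, stride=stride)
        .transpose(1, 2)
        .reshape(-1, window * window)
        for img in (ir, vis, fused)
    ]
    b_ir, b_vis, b_fused = blocks
    # Per window, the source block with the larger variance is the reference (Eq. (3)).
    use_ir = b_ir.var(dim=1, unbiased=False) > b_vis.var(dim=1, unbiased=False)
    var_ssim = torch.where(use_ir, _block_ssim(b_ir, b_fused), _block_ssim(b_vis, b_fused))
    return 1.0 - var_ssim.mean()  # Eq. (4)

ir, vis = torch.rand(2, 1, 256, 256), torch.rand(2, 1, 256, 256)
fused = (ir + vis) / 2
print(var_ssim_loss(ir, vis, fused).item())
```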
|
| 129 |
+
|
| 130 |
+
# IV. EXPERIMENTS
|
| 131 |
+
|
| 132 |
+
# A. Setup
|
| 133 |
+
|
| 134 |
+
Datasets. In the training phase, 40,000 pairs of corresponding infrared and visible images are selected as the training data from the KAIST [41] data set. KAIST data set is a pedestrian data set containing various general
|
| 135 |
+
|
| 136 |
+
scenes of campus, street and countryside. Each picture contains a visible image and a corresponding infrared image. At present, some end-to-end image fusion algorithms [16] use it as training data. The training image size is set to $256 \times 256$ pixels. In the testing phase, we use 10 pairs of images from the test image of [18] as the test set. The size of the test data is arbitrary (generally not more than $2048 \times 2048$ pixels).
|
| 137 |
+
|
| 138 |
+
Hyper-Parameters. In the training phase, we choose Adam as the optimizer and the learning rate is set to a constant of 0.0001. Training data includes 40,000 pairs of images and the batch size is set to 16. Complete training requires 20 epochs. Inspired by [36], [37], we chose fixed values for some parameters in the transformer fusion module. The patch size of the spatial transformer and channel transformer is set to 4 and 16 respectively. Taking into account the different dimensions of the data processed by the spatial transformer and channel transformer, the embedding dimensions are set to 2048 and 128 respectively. Our model is implemented on an NVIDIA TITAN Xp GPU with PyTorch.
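A hypothetical minimal training loop with the stated optimiser, learning rate and batch size is sketched below; the random tensors stand in for the KAIST pairs, the single convolution stands in for the generator, and the mean-squared placeholder loss is not the paper's loss.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical setup mirroring the listed hyper-parameters (Adam, lr 1e-4, batch size 16, 20 epochs).
generator = torch.nn.Conv2d(2, 1, 3, padding=1)           # placeholder for the real generator
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

# Toy stand-in for the 40,000 KAIST pairs: random 256x256 infrared/visible tensors.
data = TensorDataset(torch.rand(64, 1, 256, 256), torch.rand(64, 1, 256, 256))
loader = DataLoader(data, batch_size=16, shuffle=True)

for epoch in range(2):                                     # 20 epochs in the paper
    for ir, vis in loader:
        fused = torch.sigmoid(generator(torch.cat([ir, vis], dim=1)))
        loss = torch.mean((fused - (ir + vis) / 2) ** 2)   # placeholder loss, not the paper's
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```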
|
| 139 |
+
|
| 140 |
+
Compared Methods. The proposed method is compared with 15 methods in subjective and objective evaluation, including both classic and recent methods. These are: Ratio of Low-pass Pyramid (RP) [42], Wavelet [43], Dual-Tree Complex Wavelet Transform (DTCWT) [44], Curvelet Transform (CVT) [45], Multi-resolution Singular Value Decomposition (MSVD) [46], gradient transfer and total variation minimization (GTF) [47], DenseFuse [18], DeepFuse [48], a general end-to-end fusion network (IFCNN) [21], FusionGAN [20], NestFuse [19], PMGI [49], U2Fusion [24], RFN-Nest [16], and MEFGAN [50], respectively.
|
| 141 |
+
|
| 142 |
+
# B. Results Analysis
|
| 143 |
+
|
| 144 |
+
We use subjective evaluation and objective evaluation to measure the performance of the fusion algorithm. Subjective evaluation judges whether the fusion result conforms to human visual perception, such as clarity, salient information, etc. Therefore, the subjective evaluation method puts the fused images obtained by different algorithms together for intuitive visual comparison.
|
| 145 |
+
|
| 146 |
+
In Figure. 8, the fusion results of all methods are put together for subjective judgment. Although some methods can achieve a certain fusion effect, they introduce more artificial noise, which hinders the acquisition of visual information, such as (c), (d), (e), (f), (g). In contrast, the fusion results produced by the deep learning methods are more in line with human vision. Most methods based on deep learning can maintain the basic environmental information of the visible image and the salient
|
| 147 |
+
|
| 148 |
+
TABLE I QUANTITATIVE EVALUATION RESULTS OF INFRARED AND VISIBLE IMAGE FUSION TASKS. THE BEST THREE RESULTS ARE HIGHLIGHTED IN RED, BROWN AND BLUE FONTS.
|
| 149 |
+
|
| 150 |
+
<table><tr><td>Method</td><td>SF</td><td>EN</td><td>\(Q_{abf}\)</td><td>\(FMI_w\)</td><td>MS-SSIM</td><td>\(FMI_{pixel}\)</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>RP [42]</td><td>12.7249</td><td>6.5397</td><td>0.4341</td><td>0.3831</td><td>0.8404</td><td>0.8929</td><td>13.0794</td><td>63.2427</td><td>0.6420</td></tr><tr><td>Wavelet [43]</td><td>6.2567</td><td>6.2454</td><td>0.3214</td><td>0.4183</td><td>0.8598</td><td>0.9096</td><td>12.4907</td><td>52.2292</td><td>0.2921</td></tr><tr><td>DTCWT [44]</td><td>11.1296</td><td>6.4791</td><td>0.5258</td><td>0.4419</td><td>0.9053</td><td>0.9186</td><td>12.9583</td><td>60.1138</td><td>0.5986</td></tr><tr><td>CVT [45]</td><td>11.1129</td><td>6.4989</td><td>0.4936</td><td>0.4240</td><td>0.8963</td><td>0.9156</td><td>12.9979</td><td>60.4005</td><td>0.5930</td></tr><tr><td>MSVD [46]</td><td>8.5538</td><td>6.2807</td><td>0.3328</td><td>0.2828</td><td>0.8652</td><td>0.9036</td><td>12.5613</td><td>52.9853</td><td>0.3031</td></tr><tr><td>GTF [47]</td><td>9.5022</td><td>6.5781</td><td>0.4400</td><td>0.4494</td><td>0.8169</td><td>0.9056</td><td>13.1562</td><td>66.0773</td><td>0.4071</td></tr><tr><td>DenseFuse [18]</td><td>9.3238</td><td>6.8526</td><td>0.4735</td><td>0.4389</td><td>0.8692</td><td>0.9061</td><td>13.7053</td><td>81.7283</td><td>0.6875</td></tr><tr><td>DeepFuse [48]</td><td>8.3500</td><td>6.6102</td><td>0.3847</td><td>0.4214</td><td>0.9138</td><td>0.9041</td><td>13.2205</td><td>66.8872</td><td>0.5752</td></tr><tr><td>IFCNN [21]</td><td>11.8590</td><td>6.6454</td><td>0.4962</td><td>0.4052</td><td>0.9129</td><td>0.9007</td><td>13.2909</td><td>73.7053</td><td>0.6090</td></tr><tr><td>FusionGAN [20]</td><td>8.0476</td><td>6.5409</td><td>0.2682</td><td>0.4083</td><td>0.6135</td><td>0.8875</td><td>13.0817</td><td>61.6339</td><td>0.4928</td></tr><tr><td>NestFuse [19]</td><td>9.7807</td><td>6.8745</td><td>0.5011</td><td>0.4483</td><td>0.8817</td><td>0.9025</td><td>13.7491</td><td>83.0530</td><td>0.7195</td></tr><tr><td>PMGI [49]</td><td>8.7195</td><td>6.8688</td><td>0.3787</td><td>0.4018</td><td>0.8684</td><td>0.9001</td><td>13.7376</td><td>69.2364</td><td>0.6904</td></tr><tr><td>U2Fusion [24]</td><td>11.0368</td><td>6.7227</td><td>0.3934</td><td>0.3594</td><td>0.9147</td><td>0.8942</td><td>13.4453</td><td>66.5035</td><td>0.7680</td></tr><tr><td>RFN-Nest [16]</td><td>5.8457</td><td>6.7274</td><td>0.3292</td><td>0.3052</td><td>0.8959</td><td>0.9063</td><td>13.4547</td><td>67.8765</td><td>0.5404</td></tr><tr><td>MEFGAN [50]</td><td>7.8481</td><td>6.9727</td><td>0.2076</td><td>0.1826</td><td>0.6709</td><td>0.8844</td><td>13.9454</td><td>43.7332</td><td>0.7330</td></tr><tr><td>TGFuse(ours)</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>
|
| 151 |
+
|
| 152 |
+
TABLE II THE OBJECTIVE EVALUATION ON WHETHER TO USE GAN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.
|
| 153 |
+
|
| 154 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o GAN</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>GAN</td><td>11.3149</td><td>6.9838</td><td>0.5863</td><td>0.4452</td><td>0.9160</td><td>0.9219</td><td>13.9676</td><td>94.7203</td><td>0.7746</td></tr></table>
|
| 155 |
+
|
| 156 |
+
TABLE III THE OBJECTIVE EVALUATION ON DIFFERENT TRANSFORMER FUSION METHOD. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.
|
| 157 |
+
|
| 158 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>Spatial</td><td>10.8364</td><td>6.8665</td><td>0.5491</td><td>0.4281</td><td>0.9337</td><td>0.9173</td><td>13.7330</td><td>86.2626</td><td>0.7247</td></tr><tr><td>Channel</td><td>11.1283</td><td>6.9520</td><td>0.5622</td><td>0.4328</td><td>0.9107</td><td>0.9169</td><td>13.9040</td><td>91.2356</td><td>0.7417</td></tr><tr><td>Spatial+Channel</td><td>10.8808</td><td>6.9161</td><td>0.5304</td><td>0.4139</td><td>0.9172</td><td>0.9089</td><td>13.8323</td><td>94.6343</td><td>0.7565</td></tr><tr><td>Channel+Spatial</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr></table>
|
| 159 |
+
|
| 160 |
+
TABLE IV THE OBJECTIVE EVALUATION ON WHETHER TO USE POSITION EMBEDDING. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.
|
| 161 |
+
|
| 162 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>w/o PE</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>PE</td><td>10.8748</td><td>6.9332</td><td>0.5522</td><td>0.4186</td><td>0.9340</td><td>0.9174</td><td>13.8664</td><td>90.5422</td><td>0.7654</td></tr></table>
|
| 163 |
+
|
| 164 |
+
TABLE V THE OBJECTIVE EVALUATION ON DIFFERENT ENCODER LAYERS OF TRANSFORMER. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS. ("/" MEANS TRAINING FAILURE)
|
| 165 |
+
|
| 166 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>3-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td>11.1740</td><td>6.8722</td><td>0.5623</td><td>0.4209</td><td>0.9404</td><td>0.9198</td><td>13.7443</td><td>86.7715</td><td>0.7539</td></tr></table>
|
| 167 |
+
|
| 168 |
+
human of the infrared image at the same time. Compared with other methods, our method not only highlights the
|
| 169 |
+
|
| 170 |
+
infrared information of the person in the red frame but also maintains the visible details of the door. The sky as
|
| 171 |
+
|
| 172 |
+
TABLE VI THE OBJECTIVE EVALUATION ON DIFFERENT LAYERS OF CNN. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS. ("/" MEANS TRAINING FAILURE)
|
| 173 |
+
|
| 174 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>2-layers</td><td>10.3438</td><td>6.7281</td><td>0.5560</td><td>0.4314</td><td>0.9006</td><td>0.9097</td><td>13.4562</td><td>94.2280</td><td>0.6862</td></tr><tr><td>3-layers</td><td>11.0769</td><td>6.8959</td><td>0.5497</td><td>0.4272</td><td>0.9298</td><td>0.9157</td><td>13.7919</td><td>92.5518</td><td>0.7517</td></tr><tr><td>4-layers</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>5-layers</td><td></td><td></td><td></td><td></td><td>/</td><td></td><td></td><td></td><td></td></tr></table>
|
| 175 |
+
|
| 176 |
+
TABLE VII THE OBJECTIVE EVALUATION ON DIFFERENT CHANNELS. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD FONTS.
|
| 177 |
+
|
| 178 |
+
<table><tr><td></td><td>SF</td><td>EN</td><td>Qabf</td><td>FMIw</td><td>MS-SSIM</td><td>FMIpixel</td><td>MI</td><td>SD</td><td>VIF</td></tr><tr><td>32-channels</td><td>10.6360</td><td>6.9228</td><td>0.5715</td><td>0.4370</td><td>0.9276</td><td>0.9206</td><td>13.8456</td><td>90.1796</td><td>0.7061</td></tr><tr><td>64-channels</td><td>11.2253</td><td>6.9547</td><td>0.5794</td><td>0.4425</td><td>0.9240</td><td>0.9212</td><td>13.9094</td><td>92.4749</td><td>0.7870</td></tr><tr><td>128-channels</td><td>11.1181</td><td>6.9388</td><td>0.5545</td><td>0.4142</td><td>0.9368</td><td>0.9163</td><td>13.8776</td><td>88.5524</td><td>0.8069</td></tr></table>
|
| 179 |
+
|
| 180 |
+
the background also retains the high-resolution visible scene. Such a fused image is friendly and easy to accept information for human vision.
|
| 181 |
+
|
| 182 |
+
There are many different evaluation indicators for objective evaluation. We have selected nine common evaluation indicators for the quality of fused images. These are: Spatial Frequency (SF) [51], Entropy (EN) [52], quality of images $(\mathrm{Q}_{abf})$ [53], feature mutual information with wavelet transform (FMIw) [54], multiscale SSIM (MS-SSIM) [55], feature mutual information with pixel (FMIpixel) [54], Standard Deviation of Image (SD) [56], Visual Information Fidelity (VIF) [57], and mutual information (MI) [58], respectively. In Table I, we compare the performance of all methods on the 9 evaluation indicators. The best three results are highlighted in red, brown and blue fonts. Our method performs best on 7 indicators and achieves third place on the remaining two. Through subjective and objective evaluation, our method is shown to have clear advantages in performance.
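For reference, three of these indicators have simple closed forms. The NumPy sketch below gives one common implementation of EN, SD and SF; binning and normalisation conventions vary between fusion papers, so this is illustrative rather than the exact evaluation code used here.

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy of the 8-bit grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    """SD: spread of intensities around the mean."""
    return float(img.std())

def spatial_frequency(img):
    """SF: combined row-wise and column-wise gradient energy."""
    rf = np.diff(img.astype(np.float64), axis=1)
    cf = np.diff(img.astype(np.float64), axis=0)
    return float(np.sqrt(np.mean(rf ** 2) + np.mean(cf ** 2)))

fused = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in fused image
print(entropy(fused), standard_deviation(fused), spatial_frequency(fused))
```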
|
| 183 |
+
|
| 184 |
+
# C. Ablation Study
|
| 185 |
+
|
| 186 |
+
GAN. Adversarial learning during training is very effective in image generation tasks, but how to combine it with fusion tasks is a problem in its application. Our original method only has the generation part of the fused image and does not include two discriminators. In this case, our method has surpassed the previous method in most objective evaluation indicators. In order to enhance the characteristics of the fused image: the high resolution of the visible image and the highlighted part of the infrared image, we introduce adversarial learning into the training process. We use the pre-trained VGG-16 network as a discriminator to enhance the characteristics
|
| 187 |
+
|
| 188 |
+
of different modalities at the feature level. The objective evaluation results are shown in Table II. Compared with the method that does not use adversarial training, the new method with GAN improves on seven indicators. This also proves the effectiveness of introducing the generative adversarial method.
|
| 189 |
+
|
| 190 |
+
Transformer Fusion Module. We propose two transformer fusion methods: the spatial transformer and the channel transformer. They can work alone or in combination with each other. In Table III, we separately verify the results of using the two transformer fusion modules alone and in combination. Passing through the channel transformer first and then through the spatial transformer gives better results. We believe that it is more beneficial for fusion to first pay attention to the channel relationship between corresponding blocks in the process of modelling.
|
| 191 |
+
|
| 192 |
+
Position Embedding. In our transformer fusion method, position embedding is removed because the category information provided by position embedding is not needed in the fusion task. However, whether the direct removal of position embedding has an effect on the training of the transformer has not been verified. Therefore, we train the TGFuse model with and without position embedding respectively. Comparing the indicators of the fusion results in Table IV, we find that removing position embedding has a positive effect on the results.
|
| 193 |
+
|
| 194 |
+
Transformer Module Layers. The transformer model we use is a multi-layer encoder model based on ViT. The number of encoder layers also has a great impact on performance. Unlike classification tasks, fusion tasks are less complex and require fewer layers. But too few layers may also lead to failure of fusion relationship learning. Therefore, we set different values for experiments to find
|
| 195 |
+
|
| 196 |
+
the number of layers most suitable for the fusion task. The comparative results of the experiment are shown in Table V. When the number of layers is three, the test result is a meaningless black image. It may be that too few layers prevent the transformer fusion module from learning a usable fusion relationship. When the number of layers is five, the test result becomes worse. This may be because the fusion relationship learned by the deeper transformer fusion module is redundant. We select the most suitable number of layers (4 layers) based on the experimental results.
|
| 197 |
+
|
| 198 |
+
CNN Layers. Firstly, multi-layer CNN is used to extract features from the input image, which can help the transformer module to converge faster. The number of layers of CNN (that is, the number of "Res-Block") affects the granularity and depth of the extracted features. We set different values to experiment to find the most suitable number of CNN layers. The more layers, the more times the image is downsampled. When the image block is too small, the model cannot learn an effective fusion relationship. As shown in Table. VI, when the depth is 4 layers, the model learns the best fusion relationship. When the layer is deeper, the resulting image is meaningless black blocks. This means that if the feature block is too small, the fusion module cannot fuse information effectively.
|
| 199 |
+
|
| 200 |
+
CNN Channels. As an important dimension of image features, the number of feature channels is also an important factor influencing algorithm performance. In the process of feature extraction, we get four image features with the same dimensions but different scales. The difference in the number of channels means that the distribution of channel dimension information is different. In the ablation experiment, we choose a few typical values as the number of channels. After comparison in Table. VII, we select the number of channels (64 channels) with the best performance.
|
| 201 |
+
|
| 202 |
+
# V. CONCLUSION
|
| 203 |
+
|
| 204 |
+
In this paper, we proposed an infrared and visible image fusion method based on a lightweight transformer module and generative adversarial learning. The proposed transformer is deeply involved in the fusion task as a fusion relation learning module. Adversarial learning provides generators with different modal characteristics during the training process at the feature level. This is the first attempt of deep combination and application of transformer and adversarial learning in the image fusion task. Our method has also achieved outstanding performance in subjective and objective evaluation, which proves the effectiveness and advancement of our method.
|
| 205 |
+
|
| 206 |
+
# REFERENCES
|
| 207 |
+
|
| 208 |
+
[1] J. Sun, C. Li, X.-J. Wu, V. Palade, and W. Fang, "An effective method of weld defect detection and classification based on machine vision," IEEE Transactions on Industrial Informatics, vol. 15, no. 12, pp. 6322-6333, 2019. 1
|
| 209 |
+
[2] X. Luo, Z. Zhang, and X. Wu, "A novel algorithm of remote sensing image fusion based on shift-invariant shearlet transform and regional selection," AEU-International Journal of Electronics and Communications, vol. 70, no. 2, pp. 186-197, 2016. 1
|
| 210 |
+
[3] X. Luo, Z. Zhang, B. Zhang, and X.-J. Wu, "Image fusion with contextual statistical similarity and nonsubsampled shearlet transform," IEEE Sensors Journal, vol. 17, no. 6, pp. 1760-1771, 2017. 1
|
| 211 |
+
[4] H. Li, X.-J. Wu, and J. Kittler, "MDLatLRR: A novel decomposition method for infrared and visible image fusion," IEEE Transactions on Image Processing, vol. 29, pp. 4733-4746, 2020. 1
|
| 212 |
+
[5] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, "Learning low-rank and sparse discriminative correlation filters for coarse-to-fine visual object tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 10, pp. 3727-3739, 2019. 1
|
| 213 |
+
[6] T. Xu, Z. Feng, X.-J. Wu, and J. Kittler, "Adaptive channel selection for robust visual object tracking with discriminative correlation filters," International Journal of Computer Vision, vol. 129, no. 5, pp. 1359-1375, 2021. 1
|
| 214 |
+
[7] T. Xu, Z.-H. Feng, X.-J. Wu, and J. Kittler, "An accelerated correlation filter tracker," Pattern Recognition, vol. 102, p. 107172, 2020. 1
|
| 215 |
+
[8] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion,” in 15th Pacific Conference on Computer Graphics and Applications (PG'07). IEEE, 2007, pp. 382–390. 1
|
| 216 |
+
[9] Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” Proceedings of the IEEE, vol. 87, no. 8, pp. 1315–1326, 1999. 1
|
| 217 |
+
[10] S.-G. Chen and X.-J. Wu, “A new fuzzy twin support vector machine for pattern classification,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 9, pp. 1553–1564, 2018. 1
|
| 218 |
+
[11] C. Li, W. Yuan, A. Bovik, and X. Wu, "No-reference blur index using blur comparisons," *Electronics letters*, vol. 47, no. 17, pp. 962-963, 2011. 1
|
| 219 |
+
[12] C. Chen, Y. Li, W. Liu, and J. Huang, "Image fusion with local spectral consistency and dynamic gradient sparsity," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 2760-2765. 1
|
| 220 |
+
[13] M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Information Fusion, vol. 25, pp. 72–84, 2015. 1
|
| 221 |
+
[14] Y.-J. Zheng, J.-Y. Yang, J. Yang, X.-J. Wu, and Z. Jin, “Nearest neighbour line nonparametric discriminant analysis for feature extraction,” *Electronics Letters*, vol. 42, no. 12, pp. 679–680, 2006. 1
|
| 222 |
+
[15] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network,” Information Fusion, vol. 36, pp. 191–207, 2017. 1
|
| 223 |
+
[16] H. Li, X.-J. Wu, and J. Kittler, "Rfn-nest: An end-to-end residual fusion network for infrared and visible images," Information Fusion, 2021. 1, 6, 7
|
| 224 |
+
[17] H. Li, X.-j. Wu, and T. S. Durrani, "Infrared and visible image fusion with resnet and zero-phase component analysis," Infrared Physics & Technology, vol. 102, p. 103039, 2019. 1, 2
|
| 225 |
+
|
| 226 |
+
[18] H. Li and X.-J. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2614–2623, 2018. 1, 2, 6, 7
|
| 227 |
+
[19] H. Li, X.-J. Wu, and T. Durrani, “Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 12, pp. 9645–9656, 2020. 1, 6, 7
|
| 228 |
+
[20] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "Fusiongan: A generative adversarial network for infrared and visible image fusion," Information Fusion, vol. 48, pp. 11–26, 2019. 2, 3, 6, 7
[21] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, "Ifcnn: A general image fusion framework based on convolutional neural network," Information Fusion, vol. 54, pp. 99–118, 2020. 2, 6, 7
[22] Y. Fu, X.-J. Wu, and T. Durrani, "Image fusion based on generative adversarial network consistent with perception," Information Fusion, 2021. 2, 3
[23] H. Li, X.-J. Wu, and J. Kittler, "Infrared and visible image fusion using a deep learning framework," in 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018, pp. 2705–2710. 2
[24] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, "U2fusion: A unified unsupervised image fusion network," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2, 6, 7
[25] J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang, "Infrared and visible image fusion via detail preserving adversarial learning," Information Fusion, vol. 54, pp. 85–98, 2020. 2
[26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014. 3
[27] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, "Least squares generative adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2794–2802. 3
[28] J. Zhao, M. Mathieu, and Y. LeCun, "Energy-based generative adversarial networks," in 5th International Conference on Learning Representations, ICLR 2017, 2017. 3
[29] D. Berthelot, T. Schumm, and L. Metz, "Began: Boundary equilibrium generative adversarial networks," arXiv preprint arXiv:1703.10717, 2017. 3
[30] J. Liang, H. Zeng, and L. Zhang, "High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9392–9400. 3
[31] H. Liu, Z. Wan, W. Huang, Y. Song, X. Han, and J. Liao, "Pd-gan: Probabilistic diverse gan for image inpainting," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9371–9381. 3
[32] W. Xia, Y. Yang, J.-H. Xue, and B. Wu, "Tedigan: Text-guided diverse face image generation and manipulation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2256–2265. 3
[33] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410. 3
[34] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232. 3
[35] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008. 3
[36] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," in International Conference on Learning Representations, 2020. 3, 6
[37] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, "Pre-trained image processing transformer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12299–12310. 3, 6
[38] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision. Springer, 2016, pp. 694–711. 4
[39] R. Hou, D. Zhou, R. Nie, D. Liu, L. Xiong, Y. Guo, and C. Yu, "Vif-net: An unsupervised framework for infrared and visible image fusion," IEEE Transactions on Computational Imaging, vol. 6, pp. 640–651, 2020. 5
[40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004. 6
[41] S. Hwang, J. Park, N. Kim, Y. Choi, and I. So Kweon, "Multispectral pedestrian detection: Benchmark dataset and baseline," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1037–1045. 6
[42] A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989. 6, 7
[43] L. J. Chipman, T. M. Orr, and L. N. Graham, "Wavelets and image fusion," in Proceedings, International Conference on Image Processing, vol. 3. IEEE, 1995, pp. 248–251. 6, 7
[44] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets," Information Fusion, vol. 8, no. 2, pp. 119–130, 2007. 6, 7
[45] F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, "Remote sensing image fusion using the curvelet transform," Information Fusion, vol. 8, no. 2, pp. 143–156, 2007. 6, 7
[46] V. Naidu, "Image fusion technique using multi-resolution singular value decomposition," Defence Science Journal, vol. 61, no. 5, p. 479, 2011. 6, 7
[47] J. Ma, C. Chen, C. Li, and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Information Fusion, vol. 31, pp. 100–109, 2016. 6, 7
[48] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, "Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs," in ICCV, vol. 1, no. 2, 2017, p. 3. 6, 7
[49] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma, "Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 12797–12804. 6, 7
[50] H. Xu, J. Ma, and X.-P. Zhang, "Mef-gan: Multi-exposure image fusion via generative adversarial networks," IEEE Transactions on Image Processing, vol. 29, pp. 7203–7216, 2020. 6, 7
[51] A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE Transactions on Communications, vol. 43, no. 12, pp. 2959–2965, 1995. 8
[52] J. W. Roberts, J. A. Van Aardt, and F. B. Ahmed, "Assessment of image fusion procedures using entropy, image quality, and multispectral classification," Journal of Applied Remote Sensing, vol. 2, no. 1, p. 023522, 2008. 8
[53] C. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000. 8
[54] M. Haghighat and M. A. Razian, "Fast-fmi: Non-reference image fusion metric," in 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2014, pp. 1–3. 8
[55] K. Ma, K. Zeng, and Z. Wang, "Perceptual quality assessment for multi-exposure image fusion," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, 2015. 8
[56] Y.-J. Rao, "In-fibre bragg grating sensors," Measurement Science and Technology, vol. 8, no. 4, p. 355, 1997. 8
[57] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, 2006. 8
[58] G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002. 8
2201.10xxx/2201.10147/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f48be32bc04c0a14e0cae76b5e043900b1ac73bbce083aaa27530b93f7cda5ef
+size 667943
2201.10xxx/2201.10147/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_content_list.json
ADDED
@@ -0,0 +1,2224 @@
[
  {"type": "text", "text": "DocEnTr: An End-to-End Document Image Enhancement Transformer", "text_level": 1, "bbox": [137, 59, 867, 124], "page_idx": 0},
  {"type": "text", "text": "Mohamed Ali Souibgui", "bbox": [124, 141, 312, 156], "page_idx": 0},
  {"type": "text", "text": "Computer Vision Center", "bbox": [134, 158, 305, 172], "page_idx": 0},
  {"type": "text", "text": "Universitat Autonoma de Barcelona", "bbox": [95, 173, 344, 186], "page_idx": 0},
  {"type": "text", "text": "Barcelona, Spain", "bbox": [161, 187, 280, 200], "page_idx": 0},
  {"type": "text", "text": "msouibgui@cvc.uab.es", "bbox": [141, 202, 299, 216], "page_idx": 0},
  {"type": "text", "text": "Sanket Biswas $^{\\S}$", "bbox": [436, 141, 557, 156], "page_idx": 0},
  {"type": "text", "text": "Computer Vision Center", "bbox": [411, 158, 583, 171], "page_idx": 0},
  {"type": "text", "text": "Universitat Autonoma de Barcelona", "bbox": [373, 173, 621, 186], "page_idx": 0},
  {"type": "text", "text": "Barcelona, Spain", "bbox": [436, 187, 557, 200], "page_idx": 0},
  {"type": "text", "text": "sbiswas@cvc.uab.es", "bbox": [426, 202, 566, 215], "page_idx": 0},
  {"type": "text", "text": "Sana Khamekhem Jemni\\*", "bbox": [690, 141, 887, 156], "page_idx": 0},
  {"type": "text", "text": "Digital Research Center of Sfax", "bbox": [677, 158, 902, 172], "page_idx": 0},
  {"type": "text", "text": "MIRACL Laboratory, University of Sfax", "bbox": [648, 173, 931, 186], "page_idx": 0},
  {"type": "text", "text": "Sfax, Tunisia", "bbox": [742, 187, 836, 200], "page_idx": 0},
  {"type": "text", "text": "sana.khamekhem@gmail.com", "bbox": [687, 202, 892, 216], "page_idx": 0},
  {"type": "text", "text": "Yousri Kessentini", "bbox": [109, 231, 243, 243], "page_idx": 0},
  {"type": "text", "text": "Digital Research Center of Sfax", "bbox": [65, 246, 287, 260], "page_idx": 0},
  {"type": "text", "text": "SM@RTS Laboratory", "bbox": [100, 261, 253, 275], "page_idx": 0},
  {"type": "text", "text": "Sfax, Tunisia", "bbox": [129, 277, 223, 288], "page_idx": 0},
  {"type": "text", "text": "yousri.kessentini@crns.rnrt.cn", "bbox": [73, 291, 278, 304], "page_idx": 0},
  {"type": "text", "text": "Alicia Fornés, Josep Lladós", "bbox": [393, 231, 606, 244], "page_idx": 0},
  {"type": "text", "text": "Computer Vision Center, Computer Science Dept.", "bbox": [327, 246, 673, 260], "page_idx": 0},
  {"type": "text", "text": "Universitat Autonoma de Barcelona", "bbox": [376, 261, 625, 274], "page_idx": 0},
  {"type": "text", "text": "Barcelona, Spain", "bbox": [440, 277, 559, 290], "page_idx": 0},
  {"type": "text", "text": "{afornes, josep} @cvc.uab.es", "bbox": [401, 291, 600, 305], "page_idx": 0},
  {"type": "text", "text": "Umapada Pal", "bbox": [749, 231, 853, 244], "page_idx": 0},
  {"type": "text", "text": "CVPR Unit", "bbox": [759, 247, 842, 259], "page_idx": 0},
  {"type": "text", "text": "Indian Statistical Institute", "bbox": [712, 260, 890, 273], "page_idx": 0},
  {"type": "text", "text": "Kolkata, India", "bbox": [751, 275, 852, 288], "page_idx": 0},
  {"type": "text", "text": "umapada@isical.ac.in", "bbox": [724, 291, 877, 304], "page_idx": 0},
  {"type": "text", "text": "Abstract—Document images can be affected by many degradation scenarios, which cause recognition and processing difficulties. In this age of digitization, it is important to denoise them for proper usage. To address this challenge, we present a new encoder-decoder architecture based on vision transformers to enhance both machine-printed and handwritten document images, in an end-to-end fashion. The encoder operates directly on the pixel patches with their positional information without the use of any convolutional layers, while the decoder reconstructs a clean image from the encoded patches. Conducted experiments show a superiority of the proposed model compared to the state-of-the-art methods on several DIBCO benchmarks. Code and models will be publicly available at: https://github.com/dali92002/DocEnTR.", "bbox": [62, 343, 492, 508], "page_idx": 0},
  {"type": "text", "text": "I. INTRODUCTION", "text_level": 1, "bbox": [208, 521, 346, 535], "page_idx": 0},
  {"type": "text", "text": "The preservation and legibility of document images (especially the historical ones) are of utmost priority for the Document Image Analysis and Recognition (DIAR) research. Document records usually contain significant information and in the historical cases it dates back centuries and decades [1]. The conservation of document records can be hampered by several kinds of degradation such as smears, stains, artefacts, pen strokes, bleed-through effects and uneven illumination. These distortions could heavily impact the subsequent downstream tasks for information processing, such as segmentation, Optical Character Recognition (OCR), information spotting and layout analysis. This manifests the need for a robust preprocessing task that denoises and reconstructs a high-quality clean image from its already degraded counterpart. Document Image Enhancement (DIE) aims towards restoring the quality of the degraded document samples to yield a clear enhanced version that is locally uniform.", "bbox": [62, 542, 492, 784], "page_idx": 0},
  {"type": "text", "text": "In recent times, Convolutional Neural Network (CNN)-based approaches have been widely applied to DIE related subtasks, like binarization [2], [3], deblurring [4], shadow [5] and", "bbox": [62, 784, 490, 828], "page_idx": 0},
  {"type": "text", "text": "watermark removal [6], etc. Although the performance of these models has significantly improved over classical handcrafted techniques, they do have their own set of drawbacks. Firstly, CNNs operate on regular grids and using the same convolutional filter to restore different regions of a degraded document image may not be a sensible choice. Secondly, CNNs fail to capture high-level long-range dependencies as they are more suited for extracting low-level spatial information from images.", "bbox": [502, 342, 934, 469], "page_idx": 0},
  {"type": "text", "text": "With the recent success of transformers in Natural Language Processing (NLP) [7], [8], its application to computer vision problems (like image recognition [9], object detection [10], visual question answering [11], handwritten text recognition (HTR) [12], etc.) also started getting more prominence. The self-attention mechanism proposed in [7] helps to capture global interactions between contextual features. Using local information combined with the knowledge of long-range global spatial arrangement is beneficial for an efficient image restoration model. This local information is often encoded in the patch content of an image and the large scale organization is contained in the redundancy of this information across the patches of the image [13]. Contrary to CNNs, which process pixel arrays, Vision Transformers (ViTs) [9] split an image into fixed-size patches (eg. 8x8, 16x16 etc.), they correctly embed each of them as latent representation, and include positional embedding information as input to the transformer encoder. This allows to encode the relative location of the patches, along with both local (spatial) and global (semantic) long-range dependencies. The motivation of using ViTs for our overall proposed baseline model is that a missing/degraded patch in the distorted document image can be recovered from the neighbouring patches information with the power of the multi-head self-attention in ViTs, which quantifies pairwise global reasoning between them. Also, ViTs have been adapted in the overall model pipeline in an encoder-decoder based setting, inspired by the concept of denoising autoencoders", "bbox": [502, 470, 936, 854], "page_idx": 0},
  {"type": "aside_text", "text": "arXiv:2201.10252v1 [cs.CV] 25 Jan 2022", "bbox": [21, 310, 58, 724], "page_idx": 0},
  {"type": "page_footnote", "text": "\\$Equal contribution", "bbox": [87, 840, 201, 853], "page_idx": 0},
  {"type": "text", "text": "[14] used in reconstruction of corrupted input data. The encoder is mapping the degraded image patches into latent representations, whereas the decoder is recovering a clean image version from those encoded representations.", "bbox": [65, 59, 489, 114], "page_idx": 1},
  {"type": "text", "text": "The overall contributions of our work can be summarized into three folds:", "bbox": [65, 115, 489, 142], "page_idx": 1},
  {"type": "list", "sub_type": "text", "list_items": ["- We introduce a simple and flexible Document image Enhancement Transformer (DocEnTr), an end-to-end image enhancement approach, that effectively restores and enhances a degraded document image provided as input. As far as we know, DocEnTr is the first pure transformer-based baseline that leverages the effectiveness of Vision Transformers (ViTs) in an encoder-decoder based framework, without any dependency on CNNs.", "- We have addressed document binarization as the key problem study in this work to investigate the power of DocEnTr architecture. Experimental evaluation shows that DocEnTr achieves state-of-the-art results on standard document binarization benchmarks (DIBCO), for both machine-printed and handwritten degraded document images.", "- A comprehensive and intuitive case study has been dedicated in Section IV to prove the utility of ViTs with its multi-headed self-attention mechanism in the task of document enhancement."], "bbox": [82, 147, 489, 414], "page_idx": 1},
  {"type": "text", "text": "The rest of this paper is organized as follows. In Section II we review the state of the art. The Document image Enhancement Transformer (DocEnTr) is described in Section III. Section IV contains an analysis of the extensive experimentation that has been conducted, including different quantitative and qualitative studies. Finally, in Section V we draw the conclusions and propose open challenges for future research directions.", "bbox": [65, 420, 489, 532], "page_idx": 1},
  {"type": "text", "text": "II. RELATED WORK", "text_level": 1, "bbox": [205, 545, 349, 556], "page_idx": 1},
  {"type": "text", "text": "A. Document Image Enhancement", "text_level": 1, "bbox": [65, 565, 302, 577], "page_idx": 1},
  {"type": "text", "text": "This work is an application within the DIE, which has been an active filed within the DIAR community. The first classic methods were based on thresholding, which means finding a single (global) or multiple (local) threshold(s) value(s) for the document. These threshold values are used to classify the document image pixels into foreground (black) or background (white) [15], [16]. These methods are still evolving in the recent years using machine learning tools, for instance, with support vector machines (SVM) [17]. Later, energy based methods were introduced. These are based on tracking the text pixels by maximizing its energy function [18], while minimizing the one of the degraded background. However, the results using those approaches were unsatisfactory [19].", "bbox": [65, 583, 489, 766], "page_idx": 1},
  {"type": "text", "text": "Recently, deep learning based methods were used to tackle this problem by learning the enhancement directly from raw data. In [20], the problem was formulated as pixels classification. Each pixel is classified as black or white depending on a sequence of the surrounding pixels, where a 2D Long Short-Term Memory (LSTM) was trained for this task. This process", "bbox": [65, 769, 489, 852], "page_idx": 1},
  {"type": "text", "text": "is, of course, time consuming. A more practical solution is to map the images from the degraded domain to the enhanced one in an end-to-end fashion with CNN auto-encoders. These latter, hence, were leading the recent improvements in image denoising [21] and more particularly documents enhancement tasks, like binarization [22], [23], [24], deblurring problems [4] and so on. Following this strategy, a fully CNN model was proposed in [25] to binarize the degraded document images at multiple image scales. Similarly, [2] proposed an autoencoder architecture that performs a cascade of pre-trained U-Net models [26] to learn the binarization using less amount of data. Moreover, generation models (GAN) were employed for this task to generate clean images by conditioning on the degraded versions. These architectures are composed of a generative model that produces a clean version of the image and a discriminator to assess the binarization result. Both models are usually composed of fully (or partially) CNN layers. In [6], a conditional GAN approach was proposed for different enhancement tasks achieving good results in document images cleaning, binarization, deblurring and dense watermarks removal. This method was recently extended in [3] by adding a second discriminator to assess the text readability for the goal of obtaining an enhanced image that is clean and readable at the same time. A similar cGAN's based method was also proposed in [27], [28], [29], [30].", "bbox": [507, 59, 931, 412], "page_idx": 1},
  {"type": "text", "text": "B. Transformers in Vision and Image Enhancement Tasks", "text_level": 1, "bbox": [507, 423, 905, 435], "page_idx": 1},
  {"type": "text", "text": "In the very recent years, transformers are behind the advances in deep learning applications. Transformer based architectures firstly showed a great success in NLP tasks [7], [8] for text translation and embedding, surpassing the previous LSTM approaches. This motivates many works to employ them for the vision tasks, for instance, classification [9], object detection [10], document understanding [31], [32], [33], etc. More related to this paper, transformers were also used for natural image restoration [34] and document images dewarping [35]. However, the architectures that were used in these later image and document enhancement approaches are still relying on the CNN feature extractors before passing to the transformers stage. Also, the CNN are used to reconstruct the output image. Contrary, what we are proposing in this work is a fully transformer approach that attends directly to the patches on the input images and reconstruct the pixels without the using of any CNN layer.", "bbox": [507, 442, 931, 681], "page_idx": 1},
  {"type": "text", "text": "III. METHOD", "text_level": 1, "bbox": [670, 693, 766, 705], "page_idx": 1},
  {"type": "text", "text": "The proposed model is a scalable auto-encoder that uses vision transformers in its encoder and decoder parts, as illustrated in Fig 1. The degraded image is first divided into patches before entering to the encoder part. During encoding, the patches are mapped to a latent representation of tokens, where each token is associated with a degraded patch. Then, the tokens are passed to the decoder that outputs the enhanced version of patches. Unlike the CNN based auto-encoders, which were usually employed for the document image enhancement tasks, the transformer auto-encoder is", "bbox": [507, 712, 931, 852], "page_idx": 1},
  {"type": "image", "img_path": "images/ee8b4d39b390da056be026402adb5b1d6fb25c02ed8c44d3ff6fc92bf2fb76da.jpg", "image_caption": ["Fig. 1. Proposed model: The input image is split into patches, which are linearly embedded, and the position information are added to them. The resulting sequence of vectors are fed to a standard Transformer encoder to obtain the latent representations. These representations are fed to another Transformer representing the decoder to obtain the decoded vector, which is linearly projected to vectors of pixels representing the output image patches."], "image_footnote": [], "bbox": [63, 55, 934, 236], "page_idx": 2},
{
|
| 601 |
+
"type": "text",
|
| 602 |
+
"text": "profitting from the self attention mechanism which gives a global information during every patch enhancement. Both decoder and especially encoder are inspired from the vision transformer (ViT) [9] architecture. We present more details of the model's architecture in what follows.",
|
| 603 |
+
"bbox": [
|
| 604 |
+
60,
|
| 605 |
+
303,
|
| 606 |
+
492,
|
| 607 |
+
375
|
| 608 |
+
],
|
| 609 |
+
"page_idx": 2
|
| 610 |
+
},
|
| 611 |
+
{
|
| 612 |
+
"type": "text",
|
| 613 |
+
"text": "A. Encoder",
|
| 614 |
+
"text_level": 1,
|
| 615 |
+
"bbox": [
|
| 616 |
+
62,
|
| 617 |
+
384,
|
| 618 |
+
149,
|
| 619 |
+
395
|
| 620 |
+
],
|
| 621 |
+
"page_idx": 2
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "text",
|
| 625 |
+
"text": "In the encoding stage (left part of Fig.1), given an image, we divide it into a set of patches. Then, we embed these patches to obtain the tokens and add their positional information. After that, a number of transformer blocks is employed to map these tokens into the encoded latent representation. These blocks follow the same structure as [9], composed of alternating layers of multi-headed self-attention and multi-layered perceptron (MLP). Each of these blocks are preceded by a LayerNorm (LN) [36], and followed by a residual connection. The patches embedding size and the number of transformer blocks are set depending on the model size.",
|
| 626 |
+
"bbox": [
|
| 627 |
+
60,
|
| 628 |
+
401,
|
| 629 |
+
490,
|
| 630 |
+
557
|
| 631 |
+
],
|
| 632 |
+
"page_idx": 2
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"type": "text",
|
| 636 |
+
"text": "B. Decoder",
|
| 637 |
+
"text_level": 1,
|
| 638 |
+
"bbox": [
|
| 639 |
+
63,
|
| 640 |
+
565,
|
| 641 |
+
149,
|
| 642 |
+
579
|
| 643 |
+
],
|
| 644 |
+
"page_idx": 2
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"type": "text",
|
| 648 |
+
"text": "The decoder part consists of a series of transformer blocks (having the same number as the encoder blocks) that take as an input the sequence of outputted tokens from the encoder. These tokens are propagated in the transformer decoder blocks, and then projected with a linear layer to the desired pixel values. This makes each element of the output correspond to a vector representing a flattened patch in the output image. The ground truth pixel values are obtained by dividing the ground truth (GT) clean image into patches (in the same way as the input degraded image) and flattening them into vectors. A mean squared error (MSE) loss is used between the model's output and the GT pixel patches to train the model.",
|
| 649 |
+
"bbox": [
|
| 650 |
+
60,
|
| 651 |
+
583,
|
| 652 |
+
490,
|
| 653 |
+
755
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 2
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "text",
|
| 659 |
+
"text": "C. Model Variants",
|
| 660 |
+
"text_level": 1,
|
| 661 |
+
"bbox": [
|
| 662 |
+
63,
|
| 663 |
+
764,
|
| 664 |
+
196,
|
| 665 |
+
778
|
| 666 |
+
],
|
| 667 |
+
"page_idx": 2
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"text": "Following a similar convention as previous works [8], [9], the proposed model configuration can be modified to produce different variants. In our experiments we define three types of variants which are \"Small\", \"Base\" and \"Large\", as enlisted in Table I. Evidently, setting a larger model require more",
|
| 672 |
+
"bbox": [
|
| 673 |
+
60,
|
| 674 |
+
782,
|
| 675 |
+
490,
|
| 676 |
+
853
|
| 677 |
+
],
|
| 678 |
+
"page_idx": 2
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "text",
|
| 682 |
+
"text": "computational memory and training time since the number of model parameters is increasing. Thus, a trade off between the model size and its enhancement performance must be taken into consideration.",
|
| 683 |
+
"bbox": [
|
| 684 |
+
502,
|
| 685 |
+
303,
|
| 686 |
+
934,
|
| 687 |
+
360
|
| 688 |
+
],
|
| 689 |
+
"page_idx": 2
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "table",
|
| 693 |
+
"img_path": "images/b3929824d394b62cfc3d46d49999adc3481293bf18610a77acaa9f65637cfa8f.jpg",
|
| 694 |
+
"table_caption": [
|
| 695 |
+
"TABLEI DETAILS OF OUR MODEL VARIANTS"
|
| 696 |
+
],
|
| 697 |
+
"table_footnote": [],
|
| 698 |
+
"table_body": "<table><tr><td>Model</td><td>Layers</td><td>Dim</td><td>Attention Heads</td><td># Parameters</td></tr><tr><td>DocEnTr-Small</td><td>6</td><td>512</td><td>4</td><td>17M</td></tr><tr><td>DocEnTr-Base</td><td>12</td><td>768</td><td>8</td><td>68M</td></tr><tr><td>DocEnTr-Large</td><td>24</td><td>1024</td><td>16</td><td>255M</td></tr></table>",
|
| 699 |
+
"bbox": [
|
| 700 |
+
512,
|
| 701 |
+
406,
|
| 702 |
+
924,
|
| 703 |
+
467
|
| 704 |
+
],
|
| 705 |
+
"page_idx": 2
|
| 706 |
+
},
|
| 707 |
+
{
|
| 708 |
+
"type": "text",
|
| 709 |
+
"text": "IV. EXPERIMENTAL VALIDATION",
|
| 710 |
+
"text_level": 1,
|
| 711 |
+
"bbox": [
|
| 712 |
+
596,
|
| 713 |
+
492,
|
| 714 |
+
840,
|
| 715 |
+
505
|
| 716 |
+
],
|
| 717 |
+
"page_idx": 2
|
| 718 |
+
},
|
| 719 |
+
{
|
| 720 |
+
"type": "text",
|
| 721 |
+
"text": "To validate our model, we use the datasets proposed in the different DIBCO and H-DIBCO contests [37] for printed and handwritten degraded document images binarization and compare our results with the state of the art methods. Before these experiments, we have conducted different investigations for a proper selection of the hyperparameters.",
|
| 722 |
+
"bbox": [
|
| 723 |
+
502,
|
| 724 |
+
511,
|
| 725 |
+
932,
|
| 726 |
+
596
|
| 727 |
+
],
|
| 728 |
+
"page_idx": 2
|
| 729 |
+
},
|
| 730 |
+
{
|
| 731 |
+
"type": "text",
|
| 732 |
+
"text": "A. Choosing the Best Model Configuration",
|
| 733 |
+
"text_level": 1,
|
| 734 |
+
"bbox": [
|
| 735 |
+
502,
|
| 736 |
+
607,
|
| 737 |
+
806,
|
| 738 |
+
621
|
| 739 |
+
],
|
| 740 |
+
"page_idx": 2
|
| 741 |
+
},
|
| 742 |
+
{
|
| 743 |
+
"type": "text",
|
| 744 |
+
"text": "We begin our experiments by choosing the configuration that gives the best performance from our model variants (Small, Base or Large). For training, each degraded image and its GT clean one is divided into overlapped patches with sizes $256 \\times 256 \\times 3$ , the overlapping was set vertically and horizontally by a half of the patches size (means 128). These resultant images (patches) will be used by our models as an input and expected output (training data). For results evaluation, and same as the usual approaches [38], we utilize the following metrics: Peak signal-to-noise ratio (PSNR), F-Measure (FM), pseudo-F-measure $(\\mathrm{F}_{ps})$ and Distance reciprocal distortion metric (DRD). We used in this experiment the DIBCO 2017 dataset, and the obtained results are given in Table II. As it can be seen, a larger model gives a better result in all the metrics, but it requires more computation resources. Thus, we recommend using a Base model for a binarization",
|
| 745 |
+
"bbox": [
|
| 746 |
+
502,
|
| 747 |
+
626,
|
| 748 |
+
932,
|
| 749 |
+
853
|
| 750 |
+
],
|
| 751 |
+
"page_idx": 2
|
| 752 |
+
},
|
| 753 |
+
{
|
| 754 |
+
"type": "text",
|
| 755 |
+
"text": "task. Nevertheless, we will test as well the Large version in following experiments.",
|
| 756 |
+
"bbox": [
|
| 757 |
+
63,
|
| 758 |
+
58,
|
| 759 |
+
492,
|
| 760 |
+
87
|
| 761 |
+
],
|
| 762 |
+
"page_idx": 3
|
| 763 |
+
},
|
| 764 |
+
{
|
| 765 |
+
"type": "table",
|
| 766 |
+
"img_path": "images/f83130cf075a3ededf48da634d4948d99ecac889599c3f6fa7b37b5050ba6799.jpg",
|
| 767 |
+
"table_caption": [
|
| 768 |
+
"TABLE II RESULTS OF VARYING THE MODEL SIZE FOR THE DIBCO 2017 DATASET. $\\uparrow$ : THE HIGHER THE BETTER. $\\downarrow$ : THE LOWER THE BETTER."
|
| 769 |
+
],
|
| 770 |
+
"table_footnote": [],
|
| 771 |
+
"table_body": "<table><tr><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>DocEnTr-Small</td><td>18.29</td><td>91.06</td><td>93.82</td><td>2.78</td></tr><tr><td>DocEnTr-Base</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>DocEnTr-Large</td><td>18.85</td><td>92.14</td><td>94.58</td><td>2.53</td></tr></table>",
|
| 772 |
+
"bbox": [
|
| 773 |
+
107,
|
| 774 |
+
145,
|
| 775 |
+
448,
|
| 776 |
+
205
|
| 777 |
+
],
|
| 778 |
+
"page_idx": 3
|
| 779 |
+
},
|
| 780 |
+
{
|
| 781 |
+
"type": "text",
|
| 782 |
+
"text": "Next, we do another experiment related to the input image size, and the patches size that are used by our model. The reason behind is that having different image size and patch size can affect the binarization since the model is accessing to different type of information (from global to local). The obtained results using the Base model are given in Table III. As it can be seen, a slightly better performance is obtained using an input with the smaller size $(256\\times 256\\times 3$ compared to $512\\times 512\\times 3)$ . However, we can notice that the performance is highly improved when using a smaller patch size. The reason is that, by employing a smaller patch size, we make each patch of the image attending to more and much local patches during the self-attention. Thus, the model is looking to more and much fine information during the enhancement process with $8\\times 8$ patch size. But, as before, using a smaller patch size means augmenting the model parameters, requiring more computation resources.",
|
| 783 |
+
"bbox": [
|
| 784 |
+
65,
|
| 785 |
+
217,
|
| 786 |
+
492,
|
| 787 |
+
457
|
| 788 |
+
],
|
| 789 |
+
"page_idx": 3
|
| 790 |
+
},
|
| 791 |
+
{
|
| 792 |
+
"type": "table",
|
| 793 |
+
"img_path": "images/a4ee4c48e99ac63e0932f5aa49e2ac1f3324baa60ba5d69db22d615ebb28cd3a.jpg",
|
| 794 |
+
"table_caption": [
|
| 795 |
+
"TABLE III RESULTS OF VARYING THE INPUT AND PATCH SIZES FOR THE DIBCO 2017 DATASET"
|
| 796 |
+
],
|
| 797 |
+
"table_footnote": [],
|
| 798 |
+
"table_body": "<table><tr><td>Input Size</td><td>Patch Size</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>256 × 256 × 3</td><td>8 × 8</td><td>19.11</td><td>92.53</td><td>95.15</td><td>2.37</td></tr><tr><td>256 × 256 × 3</td><td>16 × 16</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>256 × 256 × 3</td><td>32 × 32</td><td>17.57</td><td>89.37</td><td>91.99</td><td>3.44</td></tr><tr><td>512 × 512 × 3</td><td>8 × 8</td><td>18.91</td><td>92.2</td><td>94.93</td><td>2.45</td></tr><tr><td>512 × 512 × 3</td><td>16 × 16</td><td>18.66</td><td>92.15</td><td>93.89</td><td>2.54</td></tr><tr><td>512 × 512 × 3</td><td>32 × 32</td><td>17.27</td><td>89.43</td><td>91.51</td><td>3.54</td></tr></table>",
|
| 799 |
+
"bbox": [
|
| 800 |
+
70,
|
| 801 |
+
512,
|
| 802 |
+
485,
|
| 803 |
+
615
|
| 804 |
+
],
|
| 805 |
+
"page_idx": 3
|
| 806 |
+
},
|
| 807 |
+
{
|
| 808 |
+
"type": "text",
|
| 809 |
+
"text": "B. Quantitative Evaluation",
|
| 810 |
+
"text_level": 1,
|
| 811 |
+
"bbox": [
|
| 812 |
+
63,
|
| 813 |
+
637,
|
| 814 |
+
253,
|
| 815 |
+
651
|
| 816 |
+
],
|
| 817 |
+
"page_idx": 3
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"type": "text",
|
| 821 |
+
"text": "After choosing the best hyper-parameters of the model, we conduct the experiments on the different datasets and compare our result with the related approaches. We begin by testing with the DIBCO 2011 dataset [39]. This dataset contains degraded document images with handwritten and printed text. For training, we use all the images from the other DIBCO and H-DIBCO datasets (except DIBCO 2019) and the Palm Leaf dataset [40]. These images are split into overlapped images with size $256 \\times 256 \\times 3$ before being fed to the model. The obtained results are given in Table IV, where we can notice a superiority of out method compared to the different variations of the related approaches. We choose to compare with different families of approaches: classic thresholding and deep learning based methods (whether basing on CNN or cGAN). Our model",
|
| 822 |
+
"bbox": [
|
| 823 |
+
62,
|
| 824 |
+
655,
|
| 825 |
+
492,
|
| 826 |
+
853
|
| 827 |
+
],
|
| 828 |
+
"page_idx": 3
|
| 829 |
+
},
|
| 830 |
+
{
|
| 831 |
+
"type": "text",
|
| 832 |
+
"text": "DocEnTr-Base\\{8\\}, which means using the Base setting with a patch size of $8 \\times 8$ , gives the best PSNR and DRD compared to all the other methods. While the model DocEnTr-Large\\{16\\}, which means using the Large setting with a patch size of $16 \\times 16$ , leads to the second best performance in the metrics PSNR, $\\mathbf{F}_{ps}$ and DRD. We note that for a computation reason, we were not able to train the Large setting with a patch size of $8 \\times 8$ .",
|
| 833 |
+
"bbox": [
|
| 834 |
+
502,
|
| 835 |
+
58,
|
| 836 |
+
934,
|
| 837 |
+
171
|
| 838 |
+
],
|
| 839 |
+
"page_idx": 3
|
| 840 |
+
},
|
| 841 |
+
{
|
| 842 |
+
"type": "table",
|
| 843 |
+
"img_path": "images/c200a5c2012f35ac6d2b9a465dbff2987fdb9558dacdf3a2615f8af24b755df1.jpg",
|
| 844 |
+
"table_caption": [
|
| 845 |
+
"TABLE IV COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2011 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS."
|
| 846 |
+
],
|
| 847 |
+
"table_footnote": [],
|
| 848 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>15.70</td><td>82.10</td><td>-</td><td>9.00</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>15.60</td><td>82.10</td><td>-</td><td>8.50</td></tr><tr><td>Vo et al. [41]</td><td>CNN</td><td>20.10</td><td>93.30</td><td>-</td><td>2.00</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>19.90</td><td>95.50</td><td>-</td><td>1.80</td></tr><tr><td>Tensmeyer et al [25]</td><td>CNN</td><td>20.11</td><td>93.60</td><td>97.70</td><td>1.85</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>20.30</td><td>93.80</td><td>-</td><td>1.80</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>20.81</td><td>94.37</td><td>96.15</td><td>1.63</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>20.11</td><td>93.48</td><td>96.12</td><td>1.93</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>20.62</td><td>94.24</td><td>96.71</td><td>1.69</td></tr></table>",
|
| 849 |
+
"bbox": [
|
| 850 |
+
505,
|
| 851 |
+
228,
|
| 852 |
+
936,
|
| 853 |
+
343
|
| 854 |
+
],
|
| 855 |
+
"page_idx": 3
|
| 856 |
+
},
|
| 857 |
+
{
|
| 858 |
+
"type": "text",
|
| 859 |
+
"text": "After that, we test our model on the H-DIBCO 2012 dataset [42], which contains degraded handwritten document images. As in the previous experiment, we use the other datasets for training with the same split size. The obtained results are shown in Table V, where we can notice that our model gives the best performance in terms of PSNR and FM with the Base{8} configuration. We notice also that the other configuration gives competitive results compared to the other approaches.",
|
| 860 |
+
"bbox": [
|
| 861 |
+
502,
|
| 862 |
+
356,
|
| 863 |
+
934,
|
| 864 |
+
483
|
| 865 |
+
],
|
| 866 |
+
"page_idx": 3
|
| 867 |
+
},
|
| 868 |
+
{
|
| 869 |
+
"type": "table",
|
| 870 |
+
"img_path": "images/97fbdef6a83bddf7df3f9e24c5cb5b3a820a9dc602fed63cb05bd0ce03858c2f.jpg",
|
| 871 |
+
"table_caption": [
|
| 872 |
+
"TABLE V COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON H-DIBCO 2012 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS."
|
| 873 |
+
],
|
| 874 |
+
"table_footnote": [],
|
| 875 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>15.03</td><td>80.18</td><td>82.65</td><td>26.46</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>16.71</td><td>82.89</td><td>87.95</td><td>6.59</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>21.37</td><td>95.16</td><td>96.44</td><td>1.13</td></tr><tr><td>Tensmeyer et al [25]</td><td>CNN</td><td>20.60</td><td>92.53</td><td>96.67</td><td>2.48</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>21.91</td><td>94.96</td><td>96.15</td><td>1.55</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>22.00</td><td>95.18</td><td>94.63</td><td>1.62</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>22.29</td><td>95.31</td><td>96.29</td><td>1.60</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>21.03</td><td>93.31</td><td>94.72</td><td>2.31</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>22.04</td><td>95.09</td><td>96.00</td><td>1.64</td></tr></table>",
|
| 876 |
+
"bbox": [
|
| 877 |
+
505,
|
| 878 |
+
542,
|
| 879 |
+
936,
|
| 880 |
+
657
|
| 881 |
+
],
|
| 882 |
+
"page_idx": 3
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "text",
|
| 886 |
+
"text": "Moreover, we tested with the more recent DIBCO 2017 dataset. In this dataset our model achieves the best performance in all the evaluation metrics, as presented in Table VI.",
|
| 887 |
+
"bbox": [
|
| 888 |
+
502,
|
| 889 |
+
668,
|
| 890 |
+
932,
|
| 891 |
+
711
|
| 892 |
+
],
|
| 893 |
+
"page_idx": 3
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "text",
|
| 897 |
+
"text": "Lastly, we test on the H-DIBCO 2018 dataset. Here, as shown in Table VII, the best performance is achieved by [3] basing on cGAN. Anyway, we can notice that our model is still very competitive since it ranks second in the PSNR, FM and $\\mathrm{F}_{ps}$ metrics.",
|
| 898 |
+
"bbox": [
|
| 899 |
+
502,
|
| 900 |
+
711,
|
| 901 |
+
932,
|
| 902 |
+
783
|
| 903 |
+
],
|
| 904 |
+
"page_idx": 3
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "text",
|
| 908 |
+
"text": "To summarize the quantitative evaluation, we demonstrate that our model gives good results compared to the state of the art approaches. This was shown by obtaining the best results in most of the evaluation metrics with the H-DIBCO 2011, DIBCO 2012 and DIBCO 2017 benchmarks.",
|
| 909 |
+
"bbox": [
|
| 910 |
+
502,
|
| 911 |
+
783,
|
| 912 |
+
932,
|
| 913 |
+
852
|
| 914 |
+
],
|
| 915 |
+
"page_idx": 3
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "text",
|
| 919 |
+
"text": "TABLE VI",
|
| 920 |
+
"text_level": 1,
|
| 921 |
+
"bbox": [
|
| 922 |
+
243,
|
| 923 |
+
60,
|
| 924 |
+
309,
|
| 925 |
+
71
|
| 926 |
+
],
|
| 927 |
+
"page_idx": 4
|
| 928 |
+
},
|
| 929 |
+
{
|
| 930 |
+
"type": "text",
|
| 931 |
+
"text": "COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2017",
|
| 932 |
+
"bbox": [
|
| 933 |
+
73,
|
| 934 |
+
71,
|
| 935 |
+
478,
|
| 936 |
+
80
|
| 937 |
+
],
|
| 938 |
+
"page_idx": 4
|
| 939 |
+
},
|
| 940 |
+
{
|
| 941 |
+
"type": "text",
|
| 942 |
+
"text": "DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.",
|
| 943 |
+
"bbox": [
|
| 944 |
+
107,
|
| 945 |
+
82,
|
| 946 |
+
448,
|
| 947 |
+
92
|
| 948 |
+
],
|
| 949 |
+
"page_idx": 4
|
| 950 |
+
},
|
| 951 |
+
{
|
| 952 |
+
"type": "table",
|
| 953 |
+
"img_path": "images/a7f1a6e9dd07a1e396a6c3ae353769031a2937ad1e2ac2ae8379cf7a1b10ac5a.jpg",
|
| 954 |
+
"table_caption": [],
|
| 955 |
+
"table_footnote": [],
|
| 956 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>13.85</td><td>77.73</td><td>77.89</td><td>15.54</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>14.25</td><td>77.11</td><td>84.1</td><td>8.85</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>15.85</td><td>91.57</td><td>93.55</td><td>2.92</td></tr><tr><td>Competition top [19]</td><td>CNN</td><td>18.28</td><td>91.04</td><td>92.86</td><td>3.40</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>17.83</td><td>90.73</td><td>92.58</td><td>3.58</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>17.45</td><td>89.8</td><td>89.95</td><td>4.03</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>19.11</td><td>92.53</td><td>95.15</td><td>2.37</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>18.85</td><td>92.14</td><td>94.58</td><td>2.53</td></tr></table>",
|
| 957 |
+
"bbox": [
|
| 958 |
+
65,
|
| 959 |
+
102,
|
| 960 |
+
494,
|
| 961 |
+
216
|
| 962 |
+
],
|
| 963 |
+
"page_idx": 4
|
| 964 |
+
},
|
| 965 |
+
{
|
| 966 |
+
"type": "text",
|
| 967 |
+
"text": "TABLE VII",
|
| 968 |
+
"text_level": 1,
|
| 969 |
+
"bbox": [
|
| 970 |
+
240,
|
| 971 |
+
231,
|
| 972 |
+
312,
|
| 973 |
+
242
|
| 974 |
+
],
|
| 975 |
+
"page_idx": 4
|
| 976 |
+
},
|
| 977 |
+
{
|
| 978 |
+
"type": "text",
|
| 979 |
+
"text": "COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2018",
|
| 980 |
+
"bbox": [
|
| 981 |
+
73,
|
| 982 |
+
243,
|
| 983 |
+
480,
|
| 984 |
+
254
|
| 985 |
+
],
|
| 986 |
+
"page_idx": 4
|
| 987 |
+
},
|
| 988 |
+
{
|
| 989 |
+
"type": "text",
|
| 990 |
+
"text": "DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.",
|
| 991 |
+
"bbox": [
|
| 992 |
+
107,
|
| 993 |
+
254,
|
| 994 |
+
448,
|
| 995 |
+
265
|
| 996 |
+
],
|
| 997 |
+
"page_idx": 4
|
| 998 |
+
},
|
| 999 |
+
{
|
| 1000 |
+
"type": "table",
|
| 1001 |
+
"img_path": "images/727479ecb52f4dc16e83b5865e5bafbf66870016f42390702e9e63bf8e7d51c0.jpg",
|
| 1002 |
+
"table_caption": [],
|
| 1003 |
+
"table_footnote": [],
|
| 1004 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>9.74</td><td>51.45</td><td>53.05</td><td>59.07</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>13.78</td><td>67.81</td><td>74.08</td><td>17.69</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>19.39</td><td>89.71</td><td>91.62</td><td>2.51</td></tr><tr><td>Competition top [19]</td><td>CNN</td><td>19.11</td><td>88.34</td><td>90.24</td><td>4.92</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>18.37</td><td>87.73</td><td>90.60</td><td>4.58</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>20.18</td><td>92.41</td><td>94.35</td><td>2.60</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>19.46</td><td>90.59</td><td>93.97</td><td>3.35</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>19.33</td><td>89.97</td><td>93.5</td><td>3.68</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>19.47</td><td>89.21</td><td>92.54</td><td>3.96</td></tr></table>",
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
65,
|
| 1007 |
+
274,
|
| 1008 |
+
494,
|
| 1009 |
+
388
|
| 1010 |
+
],
|
| 1011 |
+
"page_idx": 4
|
| 1012 |
+
},
|
| 1013 |
+
{
|
| 1014 |
+
"type": "text",
|
| 1015 |
+
"text": "C. Qualitative Evaluation",
|
| 1016 |
+
"text_level": 1,
|
| 1017 |
+
"bbox": [
|
| 1018 |
+
63,
|
| 1019 |
+
413,
|
| 1020 |
+
247,
|
| 1021 |
+
426
|
| 1022 |
+
],
|
| 1023 |
+
"page_idx": 4
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "text",
|
| 1027 |
+
"text": "After presenting the achieved quantitative results by our model, we present in this subsection some qualitative results. We begin by showing the enhancing performance of our method. This is illustrated in Fig. 2, where we compare our binarization results with the GT clean images. As it can be seen, our model produces highly clean images, which are very close to the optimal GT images, reflecting the good quantitative performance that was obtained in the previous subsection.",
|
| 1028 |
+
"bbox": [
|
| 1029 |
+
62,
|
| 1030 |
+
431,
|
| 1031 |
+
490,
|
| 1032 |
+
557
|
| 1033 |
+
],
|
| 1034 |
+
"page_idx": 4
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "text",
|
| 1038 |
+
"text": "Then, we present a quantitative comparison of our method with the related approaches. This is shown in Fig. 3, where we can notice the superiority of our model in recovering a highly degraded image over the classic thresholding [15], [16], CNN [2], and cGAN [3] methods.",
|
| 1039 |
+
"bbox": [
|
| 1040 |
+
62,
|
| 1041 |
+
558,
|
| 1042 |
+
490,
|
| 1043 |
+
630
|
| 1044 |
+
],
|
| 1045 |
+
"page_idx": 4
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "text",
|
| 1049 |
+
"text": "D. Self-attention Mechanism",
|
| 1050 |
+
"text_level": 1,
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
63,
|
| 1053 |
+
637,
|
| 1054 |
+
265,
|
| 1055 |
+
651
|
| 1056 |
+
],
|
| 1057 |
+
"page_idx": 4
|
| 1058 |
+
},
|
| 1059 |
+
{
|
| 1060 |
+
"type": "text",
|
| 1061 |
+
"text": "As we stated above, our method differs from the CNN related ones by employing the transformers to enhance the degraded document images. The self-attention mechanism used in the transformer blocks gives a global view to every token on the other tokens that represents the patches within the image for a better enhancing result. A visual illustration of the attention maps of the last layer from the encoder is given in Fig. 4. As it can be seen, a token can attend to all the patches within the image. In these test cases each token (patch representation) is focusing on the text elements, while ignoring the degraded patches. Thus, the attending patches are decoded later and projected to pixels while taking into consideration a high-level global information from the attended neighbouring patches that cover the full input image. We also notice that",
|
| 1062 |
+
"bbox": [
|
| 1063 |
+
62,
|
| 1064 |
+
655,
|
| 1065 |
+
490,
|
| 1066 |
+
853
|
| 1067 |
+
],
|
| 1068 |
+
"page_idx": 4
|
| 1069 |
+
},
|
| 1070 |
+
{
|
| 1071 |
+
"type": "image",
|
| 1072 |
+
"img_path": "images/e74385f648e90e7ad4ba3980ea1e5a7eb1aacaf926d255893f9f67dd275c341c.jpg",
|
| 1073 |
+
"image_caption": [],
|
| 1074 |
+
"image_footnote": [],
|
| 1075 |
+
"bbox": [
|
| 1076 |
+
517,
|
| 1077 |
+
58,
|
| 1078 |
+
647,
|
| 1079 |
+
127
|
| 1080 |
+
],
|
| 1081 |
+
"page_idx": 4
|
| 1082 |
+
},
|
| 1083 |
+
{
|
| 1084 |
+
"type": "image",
|
| 1085 |
+
"img_path": "images/ea87d09592d552e07efdc9700d0d77d718bb44c5dcec1c2608a70522af9c7cbb.jpg",
|
| 1086 |
+
"image_caption": [],
|
| 1087 |
+
"image_footnote": [],
|
| 1088 |
+
"bbox": [
|
| 1089 |
+
517,
|
| 1090 |
+
131,
|
| 1091 |
+
647,
|
| 1092 |
+
202
|
| 1093 |
+
],
|
| 1094 |
+
"page_idx": 4
|
| 1095 |
+
},
|
| 1096 |
+
{
|
| 1097 |
+
"type": "image",
|
| 1098 |
+
"img_path": "images/54a074cdaa9f8532b88e4488922e5624b22c87389ec9f6d1323f4144441ee06d.jpg",
|
| 1099 |
+
"image_caption": [],
|
| 1100 |
+
"image_footnote": [],
|
| 1101 |
+
"bbox": [
|
| 1102 |
+
517,
|
| 1103 |
+
208,
|
| 1104 |
+
647,
|
| 1105 |
+
277
|
| 1106 |
+
],
|
| 1107 |
+
"page_idx": 4
|
| 1108 |
+
},
|
| 1109 |
+
{
|
| 1110 |
+
"type": "image",
|
| 1111 |
+
"img_path": "images/a13d8d4a429bc6a9905ce54802aaae8685800cb0dd39f9d622c53f96a282f3f2.jpg",
|
| 1112 |
+
"image_caption": [],
|
| 1113 |
+
"image_footnote": [],
|
| 1114 |
+
"bbox": [
|
| 1115 |
+
517,
|
| 1116 |
+
282,
|
| 1117 |
+
647,
|
| 1118 |
+
351
|
| 1119 |
+
],
|
| 1120 |
+
"page_idx": 4
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "image",
|
| 1124 |
+
"img_path": "images/b4dfcbd41a11cf0901824206f61ff84d58c9bf37df48cb173c6e7d6c59bc2f40.jpg",
|
| 1125 |
+
"image_caption": [],
|
| 1126 |
+
"image_footnote": [],
|
| 1127 |
+
"bbox": [
|
| 1128 |
+
517,
|
| 1129 |
+
357,
|
| 1130 |
+
647,
|
| 1131 |
+
426
|
| 1132 |
+
],
|
| 1133 |
+
"page_idx": 4
|
| 1134 |
+
},
|
| 1135 |
+
{
|
| 1136 |
+
"type": "image",
|
| 1137 |
+
"img_path": "images/a852fd68b1c8650285281724ea0398fe260eeb8417df454aae0188bafd96a084.jpg",
|
| 1138 |
+
"image_caption": [],
|
| 1139 |
+
"image_footnote": [],
|
| 1140 |
+
"bbox": [
|
| 1141 |
+
517,
|
| 1142 |
+
432,
|
| 1143 |
+
647,
|
| 1144 |
+
504
|
| 1145 |
+
],
|
| 1146 |
+
"page_idx": 4
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "image",
|
| 1150 |
+
"img_path": "images/a882b3d06f1eeb302710f6177f62f31f8f725b4bbafe440d7dd5783209e0ad12.jpg",
|
| 1151 |
+
"image_caption": [],
|
| 1152 |
+
"image_footnote": [],
|
| 1153 |
+
"bbox": [
|
| 1154 |
+
517,
|
| 1155 |
+
508,
|
| 1156 |
+
647,
|
| 1157 |
+
577
|
| 1158 |
+
],
|
| 1159 |
+
"page_idx": 4
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "image",
|
| 1163 |
+
"img_path": "images/d480faa2e6c4617ce6db6b5e8cf9a5fb21d4b8ba4c8a1ab8f4fe0efaf0b837fb.jpg",
|
| 1164 |
+
"image_caption": [
|
| 1165 |
+
"Fig. 2. Qualitative results of our proposed method in binarization of some samples from the DIBCO and H-DIBCO datasets. Images in columns are: Left: original image, Middle: GT image, Right: Binarized image using our proposed method."
|
| 1166 |
+
],
|
| 1167 |
+
"image_footnote": [],
|
| 1168 |
+
"bbox": [
|
| 1169 |
+
517,
|
| 1170 |
+
583,
|
| 1171 |
+
647,
|
| 1172 |
+
653
|
| 1173 |
+
],
|
| 1174 |
+
"page_idx": 4
|
| 1175 |
+
},
|
| 1176 |
+
{
|
| 1177 |
+
"type": "image",
|
| 1178 |
+
"img_path": "images/c475cc619cba2fc4a1f4ac4395d90e21ddd0baafd5cc03b2921e7bcc76fbd62d.jpg",
|
| 1179 |
+
"image_caption": [],
|
| 1180 |
+
"image_footnote": [],
|
| 1181 |
+
"bbox": [
|
| 1182 |
+
663,
|
| 1183 |
+
65,
|
| 1184 |
+
794,
|
| 1185 |
+
202
|
| 1186 |
+
],
|
| 1187 |
+
"page_idx": 4
|
| 1188 |
+
},
|
| 1189 |
+
{
|
| 1190 |
+
"type": "image",
|
| 1191 |
+
"img_path": "images/56fd8ecd77fadb27a0e93f5ff9bcfed51a10130e1f551c83f80d5f3916fa2d31.jpg",
|
| 1192 |
+
"image_caption": [],
|
| 1193 |
+
"image_footnote": [],
|
| 1194 |
+
"bbox": [
|
| 1195 |
+
663,
|
| 1196 |
+
212,
|
| 1197 |
+
793,
|
| 1198 |
+
278
|
| 1199 |
+
],
|
| 1200 |
+
"page_idx": 4
|
| 1201 |
+
},
|
| 1202 |
+
{
|
| 1203 |
+
"type": "image",
|
| 1204 |
+
"img_path": "images/af2b74b65cbfc3efc7c0355849857e0766bdbdca5e5892c340e1974ec5cde938.jpg",
|
| 1205 |
+
"image_caption": [],
|
| 1206 |
+
"image_footnote": [],
|
| 1207 |
+
"bbox": [
|
| 1208 |
+
663,
|
| 1209 |
+
282,
|
| 1210 |
+
791,
|
| 1211 |
+
304
|
| 1212 |
+
],
|
| 1213 |
+
"page_idx": 4
|
| 1214 |
+
},
|
| 1215 |
+
{
|
| 1216 |
+
"type": "image",
|
| 1217 |
+
"img_path": "images/fa655df629b19b13823ebe0a64f03cbf1dce47389287f3690cc7048f8aaefd0f.jpg",
|
| 1218 |
+
"image_caption": [],
|
| 1219 |
+
"image_footnote": [],
|
| 1220 |
+
"bbox": [
|
| 1221 |
+
680,
|
| 1222 |
+
307,
|
| 1223 |
+
781,
|
| 1224 |
+
323
|
| 1225 |
+
],
|
| 1226 |
+
"page_idx": 4
|
| 1227 |
+
},
|
| 1228 |
+
{
|
| 1229 |
+
"type": "image",
|
| 1230 |
+
"img_path": "images/a9d71151d58f9076d83f140bea2998054d7f44e35e777cc11d5727dc94b4aae4.jpg",
|
| 1231 |
+
"image_caption": [],
|
| 1232 |
+
"image_footnote": [],
|
| 1233 |
+
"bbox": [
|
| 1234 |
+
702,
|
| 1235 |
+
329,
|
| 1236 |
+
779,
|
| 1237 |
+
350
|
| 1238 |
+
],
|
| 1239 |
+
"page_idx": 4
|
| 1240 |
+
},
|
| 1241 |
+
{
|
| 1242 |
+
"type": "image",
|
| 1243 |
+
"img_path": "images/ff4de8c46547c14cb0f76585b3439f0fd7f29ed042b1f5c31ef870ec09092f48.jpg",
|
| 1244 |
+
"image_caption": [
|
| 1245 |
+
"Personne n'avait aperçu le jeune homme."
|
| 1246 |
+
],
|
| 1247 |
+
"image_footnote": [],
|
| 1248 |
+
"bbox": [
|
| 1249 |
+
662,
|
| 1250 |
+
357,
|
| 1251 |
+
794,
|
| 1252 |
+
429
|
| 1253 |
+
],
|
| 1254 |
+
"page_idx": 4
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "text",
|
| 1258 |
+
"text": "- Que faîreet se dit à l'Jeux en regardant en vain de tous états; puis se rappellant tout-a-coup le plaisir qu'éprouvait toujours Augustin à feuilletter les cartons de gravules exposées prés de l'Institut, il pressi-dessent cette direction, tout en explorant des yeux les différents quantiers, qu'il trouvait sur son passage.",
|
| 1259 |
+
"bbox": [
|
| 1260 |
+
668,
|
| 1261 |
+
445,
|
| 1262 |
+
786,
|
| 1263 |
+
495
|
| 1264 |
+
],
|
| 1265 |
+
"page_idx": 4
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "text",
|
| 1269 |
+
"text": "Pythagorica. D. 14.",
|
| 1270 |
+
"bbox": [
|
| 1271 |
+
663,
|
| 1272 |
+
508,
|
| 1273 |
+
742,
|
| 1274 |
+
516
|
| 1275 |
+
],
|
| 1276 |
+
"page_idx": 4
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "text",
|
| 1280 |
+
"text": "8c8bebeure. 5.40x.33.",
|
| 1281 |
+
"bbox": [
|
| 1282 |
+
663,
|
| 1283 |
+
516,
|
| 1284 |
+
752,
|
| 1285 |
+
521
|
| 1286 |
+
],
|
| 1287 |
+
"page_idx": 4
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "text",
|
| 1291 |
+
"text": "Aegmng. 2.30. \nmregula.E66.A.S.D.",
|
| 1292 |
+
"bbox": [
|
| 1293 |
+
663,
|
| 1294 |
+
521,
|
| 1295 |
+
773,
|
| 1296 |
+
529
|
| 1297 |
+
],
|
| 1298 |
+
"page_idx": 4
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "text",
|
| 1302 |
+
"text": "oendreRegef. E.55. 242",
|
| 1303 |
+
"bbox": [
|
| 1304 |
+
663,
|
| 1305 |
+
529,
|
| 1306 |
+
751,
|
| 1307 |
+
536
|
| 1308 |
+
],
|
| 1309 |
+
"page_idx": 4
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "text",
|
| 1313 |
+
"text": "rednun. 46. a.38. eRege. 56. a.44. d.57.",
|
| 1314 |
+
"bbox": [
|
| 1315 |
+
663,
|
| 1316 |
+
536,
|
| 1317 |
+
763,
|
| 1318 |
+
545
|
| 1319 |
+
],
|
| 1320 |
+
"page_idx": 4
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "text",
|
| 1324 |
+
"text": "Igge Iefer wolle hieiriut fur gattennen: Sunfen vbn Dannebaet feir fahr / aegre mehern nadyuinnen vbn feige Zagangen.",
|
| 1325 |
+
"bbox": [
|
| 1326 |
+
663,
|
| 1327 |
+
552,
|
| 1328 |
+
794,
|
| 1329 |
+
576
|
| 1330 |
+
],
|
| 1331 |
+
"page_idx": 4
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "text",
|
| 1335 |
+
"text": "E. BERLINER'S",
|
| 1336 |
+
"bbox": [
|
| 1337 |
+
697,
|
| 1338 |
+
596,
|
| 1339 |
+
757,
|
| 1340 |
+
602
|
| 1341 |
+
],
|
| 1342 |
+
"page_idx": 4
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "text",
|
| 1346 |
+
"text": "GRAMOPHONE.",
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
673,
|
| 1349 |
+
607,
|
| 1350 |
+
779,
|
| 1351 |
+
621
|
| 1352 |
+
],
|
| 1353 |
+
"page_idx": 4
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "text",
|
| 1357 |
+
"text": "DIRECTIONS FOR USERS OF THE SEVEN-INCH",
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
675,
|
| 1360 |
+
627,
|
| 1361 |
+
779,
|
| 1362 |
+
634
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 4
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "text",
|
| 1368 |
+
"text": "AMERICAN HAND MACHINE",
|
| 1369 |
+
"bbox": [
|
| 1370 |
+
695,
|
| 1371 |
+
634,
|
| 1372 |
+
757,
|
| 1373 |
+
640
|
| 1374 |
+
],
|
| 1375 |
+
"page_idx": 4
|
| 1376 |
+
},
|
| 1377 |
+
{
|
| 1378 |
+
"type": "image",
|
| 1379 |
+
"img_path": "images/47ee0f2949f399aa6c4529643e86fed3d87a059122a4582141efd74b5be13a72.jpg",
|
| 1380 |
+
"image_caption": [],
|
| 1381 |
+
"image_footnote": [],
|
| 1382 |
+
"bbox": [
|
| 1383 |
+
810,
|
| 1384 |
+
65,
|
| 1385 |
+
939,
|
| 1386 |
+
202
|
| 1387 |
+
],
|
| 1388 |
+
"page_idx": 4
|
| 1389 |
+
},
|
| 1390 |
+
{
|
| 1391 |
+
"type": "image",
|
| 1392 |
+
"img_path": "images/cff05b025d6994552e031968065c75fb6d68b66338382a2b76f63b25ee8cbfb5.jpg",
|
| 1393 |
+
"image_caption": [],
|
| 1394 |
+
"image_footnote": [],
|
| 1395 |
+
"bbox": [
|
| 1396 |
+
810,
|
| 1397 |
+
206,
|
| 1398 |
+
939,
|
| 1399 |
+
278
|
| 1400 |
+
],
|
| 1401 |
+
"page_idx": 4
|
| 1402 |
+
},
|
| 1403 |
+
{
|
| 1404 |
+
"type": "image",
|
| 1405 |
+
"img_path": "images/2957890457d783b0a5e7759987f2b974351a94a081110a48fb20f0f4ab27e660.jpg",
|
| 1406 |
+
"image_caption": [],
|
| 1407 |
+
"image_footnote": [],
|
| 1408 |
+
"bbox": [
|
| 1409 |
+
810,
|
| 1410 |
+
282,
|
| 1411 |
+
939,
|
| 1412 |
+
304
|
| 1413 |
+
],
|
| 1414 |
+
"page_idx": 4
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "image",
|
| 1418 |
+
"img_path": "images/221c9cd5561f1a0d2581e79fb6b762e127ed4f42faecf7af36a9c5f12676ffad.jpg",
|
| 1419 |
+
"image_caption": [],
|
| 1420 |
+
"image_footnote": [],
|
| 1421 |
+
"bbox": [
|
| 1422 |
+
826,
|
| 1423 |
+
307,
|
| 1424 |
+
929,
|
| 1425 |
+
323
|
| 1426 |
+
],
|
| 1427 |
+
"page_idx": 4
|
| 1428 |
+
},
|
| 1429 |
+
{
|
| 1430 |
+
"type": "image",
|
| 1431 |
+
"img_path": "images/89d092f199d2eae22afc3d58c919259b0f875b099863380cbb445b2c31af3994.jpg",
|
| 1432 |
+
"image_caption": [],
|
| 1433 |
+
"image_footnote": [],
|
| 1434 |
+
"bbox": [
|
| 1435 |
+
847,
|
| 1436 |
+
329,
|
| 1437 |
+
927,
|
| 1438 |
+
350
|
| 1439 |
+
],
|
| 1440 |
+
"page_idx": 4
|
| 1441 |
+
},
|
| 1442 |
+
{
|
| 1443 |
+
"type": "image",
|
| 1444 |
+
"img_path": "images/0d7a116d743cb7699f772c4253360ca24b0eebc1d8d5bbfa1a85a442e7946176.jpg",
|
| 1445 |
+
"image_caption": [
|
| 1446 |
+
"Personne n'avait aperçu le jeune homme."
|
| 1447 |
+
],
|
| 1448 |
+
"image_footnote": [],
|
| 1449 |
+
"bbox": [
|
| 1450 |
+
808,
|
| 1451 |
+
357,
|
| 1452 |
+
939,
|
| 1453 |
+
429
|
| 1454 |
+
],
|
| 1455 |
+
"page_idx": 4
|
| 1456 |
+
},
|
| 1457 |
+
{
|
| 1458 |
+
"type": "text",
|
| 1459 |
+
"text": "- Que faïreet se dit Pàjou en regardant en vain de tous états; puis que rappelant tout-a-coup le plait qu'épuavait toujours, Augustin a feuilletier les cranges de granuèrées exposées prés de l'Institut, il tipt rapièment cette direction, tout en explorant, des yeux des différentes quantiers qu'il trouvait sur son passage.",
|
| 1460 |
+
"bbox": [
|
| 1461 |
+
815,
|
| 1462 |
+
445,
|
| 1463 |
+
936,
|
| 1464 |
+
495
|
| 1465 |
+
],
|
| 1466 |
+
"page_idx": 4
|
| 1467 |
+
},
|
| 1468 |
+
{
|
| 1469 |
+
"type": "text",
|
| 1470 |
+
"text": "Pythagorica. D. $i_{4}$",
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
810,
|
| 1473 |
+
508,
|
| 1474 |
+
889,
|
| 1475 |
+
514
|
| 1476 |
+
],
|
| 1477 |
+
"page_idx": 4
|
| 1478 |
+
},
|
| 1479 |
+
{
|
| 1480 |
+
"type": "text",
|
| 1481 |
+
"text": "8c8bebeure. 40.233",
|
| 1482 |
+
"bbox": [
|
| 1483 |
+
810,
|
| 1484 |
+
516,
|
| 1485 |
+
899,
|
| 1486 |
+
521
|
| 1487 |
+
],
|
| 1488 |
+
"page_idx": 4
|
| 1489 |
+
},
|
| 1490 |
+
{
|
| 1491 |
+
"type": "text",
|
| 1492 |
+
"text": "XeJeHg. 2.40. mcrgla.E66.A.c.D.29",
|
| 1493 |
+
"bbox": [
|
| 1494 |
+
810,
|
| 1495 |
+
521,
|
| 1496 |
+
919,
|
| 1497 |
+
529
|
| 1498 |
+
],
|
| 1499 |
+
"page_idx": 4
|
| 1500 |
+
},
|
| 1501 |
+
{
|
| 1502 |
+
"type": "text",
|
| 1503 |
+
"text": "oendbeXegel. E. 57. A.42",
|
| 1504 |
+
"bbox": [
|
| 1505 |
+
810,
|
| 1506 |
+
529,
|
| 1507 |
+
899,
|
| 1508 |
+
536
|
| 1509 |
+
],
|
| 1510 |
+
"page_idx": 4
|
| 1511 |
+
},
|
| 1512 |
+
{
|
| 1513 |
+
"type": "text",
|
| 1514 |
+
"text": "rednuing. C.46. A.18. \ncReigel. E.56. A.44. D. 39",
|
| 1515 |
+
"bbox": [
|
| 1516 |
+
810,
|
| 1517 |
+
536,
|
| 1518 |
+
907,
|
| 1519 |
+
545
|
| 1520 |
+
],
|
| 1521 |
+
"page_idx": 4
|
| 1522 |
+
},
|
| 1523 |
+
{
|
| 1524 |
+
"type": "text",
|
| 1525 |
+
"text": "Bige feler wolle hieiriur fur gerteminen: Sunnen van Ndaanbaeifur paben/aere mehemr nanduigenn van feige Zagangene.",
|
| 1526 |
+
"bbox": [
|
| 1527 |
+
810,
|
| 1528 |
+
552,
|
| 1529 |
+
941,
|
| 1530 |
+
576
|
| 1531 |
+
],
|
| 1532 |
+
"page_idx": 4
|
| 1533 |
+
},
|
| 1534 |
+
{
|
| 1535 |
+
"type": "text",
|
| 1536 |
+
"text": "F. BERLINER'S",
|
| 1537 |
+
"bbox": [
|
| 1538 |
+
843,
|
| 1539 |
+
596,
|
| 1540 |
+
904,
|
| 1541 |
+
602
|
| 1542 |
+
],
|
| 1543 |
+
"page_idx": 4
|
| 1544 |
+
},
|
| 1545 |
+
{
|
| 1546 |
+
"type": "text",
|
| 1547 |
+
"text": "GRAMOPHONE.",
|
| 1548 |
+
"bbox": [
|
| 1549 |
+
821,
|
| 1550 |
+
607,
|
| 1551 |
+
924,
|
| 1552 |
+
621
|
| 1553 |
+
],
|
| 1554 |
+
"page_idx": 4
|
| 1555 |
+
},
|
| 1556 |
+
{
|
| 1557 |
+
"type": "text",
|
| 1558 |
+
"text": "DIRECTIONS FOR USERS OF THE SEVEN-ING",
|
| 1559 |
+
"bbox": [
|
| 1560 |
+
821,
|
| 1561 |
+
627,
|
| 1562 |
+
924,
|
| 1563 |
+
634
|
| 1564 |
+
],
|
| 1565 |
+
"page_idx": 4
|
| 1566 |
+
},
|
| 1567 |
+
{
|
| 1568 |
+
"type": "text",
|
| 1569 |
+
"text": "AMERICAN HAND MACHINE",
|
| 1570 |
+
"bbox": [
|
| 1571 |
+
843,
|
| 1572 |
+
634,
|
| 1573 |
+
905,
|
| 1574 |
+
640
|
| 1575 |
+
],
|
| 1576 |
+
"page_idx": 4
|
| 1577 |
+
},
|
| 1578 |
+
{
|
| 1579 |
+
"type": "text",
|
| 1580 |
+
"text": "the attention maps are mostly matching the text of the GT images, which lead to a satisfactory binarization result that is closer to the GT. This supports the utility of using the transformers with its powerful self-attention mechanism in the image enhancement task. However, in other sample cases as illustrated in Fig. 5, we observe that the attention maps are considering some portions of the text as a background",
|
| 1581 |
+
"bbox": [
|
| 1582 |
+
502,
|
| 1583 |
+
753,
|
| 1584 |
+
932,
|
| 1585 |
+
853
|
| 1586 |
+
],
|
| 1587 |
+
"page_idx": 4
|
| 1588 |
+
},
|
| 1589 |
+
{
|
| 1590 |
+
"type": "image",
|
| 1591 |
+
"img_path": "images/8e05f8b0c6f3ae5dd61f557e6291af0454aa8e064d3528b8c3b74a35ca7f03f9.jpg",
|
| 1592 |
+
"image_caption": [
|
| 1593 |
+
"Invalc."
|
| 1594 |
+
],
|
| 1595 |
+
"image_footnote": [],
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
97,
|
| 1598 |
+
57,
|
| 1599 |
+
268,
|
| 1600 |
+
145
|
| 1601 |
+
],
|
| 1602 |
+
"page_idx": 5
|
| 1603 |
+
},
|
| 1604 |
+
{
|
| 1605 |
+
"type": "text",
|
| 1606 |
+
"text": "Boun Cefite. 342",
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
295,
|
| 1609 |
+
68,
|
| 1610 |
+
450,
|
| 1611 |
+
76
|
| 1612 |
+
],
|
| 1613 |
+
"page_idx": 5
|
| 1614 |
+
},
|
| 1615 |
+
{
|
| 1616 |
+
"type": "text",
|
| 1617 |
+
"text": "Bom Gerudc. 344",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
297,
|
| 1620 |
+
76,
|
| 1621 |
+
448,
|
| 1622 |
+
83
|
| 1623 |
+
],
|
| 1624 |
+
"page_idx": 5
|
| 1625 |
+
},
|
| 1626 |
+
{
|
| 1627 |
+
"type": "text",
|
| 1628 |
+
"text": "Ceflrungen feltfamer Gefufe; Anmundigen",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
297,
|
| 1631 |
+
83,
|
| 1632 |
+
433,
|
| 1633 |
+
90
|
| 1634 |
+
],
|
| 1635 |
+
"page_idx": 5
|
| 1636 |
+
},
|
| 1637 |
+
{
|
| 1638 |
+
"type": "text",
|
| 1639 |
+
"text": "munderlicher Begerden der Schwangen und Duerrichsen. 250",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
297,
|
| 1642 |
+
90,
|
| 1643 |
+
448,
|
| 1644 |
+
99
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 5
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "text",
|
| 1650 |
+
"text": "Bor den Leidenchaften, und der Nethoendiaeite 350",
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
297,
|
| 1653 |
+
99,
|
| 1654 |
+
448,
|
| 1655 |
+
107
|
| 1656 |
+
],
|
| 1657 |
+
"page_idx": 5
|
| 1658 |
+
},
|
| 1659 |
+
{
|
| 1660 |
+
"type": "text",
|
| 1661 |
+
"text": "des Studiums der Menndienfeunntn 364",
|
| 1662 |
+
"bbox": [
|
| 1663 |
+
297,
|
| 1664 |
+
107,
|
| 1665 |
+
448,
|
| 1666 |
+
115
|
| 1667 |
+
],
|
| 1668 |
+
"page_idx": 5
|
| 1669 |
+
},
|
| 1670 |
+
{
|
| 1671 |
+
"type": "text",
|
| 1672 |
+
"text": "Bon psychologischen Geheimnifen oder den Jiffen:",
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
297,
|
| 1675 |
+
115,
|
| 1676 |
+
448,
|
| 1677 |
+
123
|
| 1678 |
+
],
|
| 1679 |
+
"page_idx": 5
|
| 1680 |
+
},
|
| 1681 |
+
{
|
| 1682 |
+
"type": "text",
|
| 1683 |
+
"text": "fchaften ber Spillen. 368",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
297,
|
| 1686 |
+
123,
|
| 1687 |
+
448,
|
| 1688 |
+
130
|
| 1689 |
+
],
|
| 1690 |
+
"page_idx": 5
|
| 1691 |
+
},
|
| 1692 |
+
{
|
| 1693 |
+
"type": "text",
|
| 1694 |
+
"text": "Bon fonerberbeitlichen gefolgenden und Empfindungen. 375",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
297,
|
| 1697 |
+
130,
|
| 1698 |
+
448,
|
| 1699 |
+
137
|
| 1700 |
+
],
|
| 1701 |
+
"page_idx": 5
|
| 1702 |
+
},
|
| 1703 |
+
{
|
| 1704 |
+
"type": "text",
|
| 1705 |
+
"text": "2theie angeneomet omphnoiungen. 378",
|
| 1706 |
+
"bbox": [
|
| 1707 |
+
297,
|
| 1708 |
+
137,
|
| 1709 |
+
448,
|
| 1710 |
+
143
|
| 1711 |
+
],
|
| 1712 |
+
"page_idx": 5
|
| 1713 |
+
},
|
| 1714 |
+
{
|
| 1715 |
+
"type": "text",
|
| 1716 |
+
"text": "Original",
|
| 1717 |
+
"text_level": 1,
|
| 1718 |
+
"bbox": [
|
| 1719 |
+
154,
|
| 1720 |
+
151,
|
| 1721 |
+
211,
|
| 1722 |
+
165
|
| 1723 |
+
],
|
| 1724 |
+
"page_idx": 5
|
| 1725 |
+
},
|
| 1726 |
+
{
|
| 1727 |
+
"type": "text",
|
| 1728 |
+
"text": "yeshalr.",
|
| 1729 |
+
"text_level": 1,
|
| 1730 |
+
"bbox": [
|
| 1731 |
+
164,
|
| 1732 |
+
168,
|
| 1733 |
+
196,
|
| 1734 |
+
175
|
| 1735 |
+
],
|
| 1736 |
+
"page_idx": 5
|
| 1737 |
+
},
|
| 1738 |
+
{
|
| 1739 |
+
"type": "list",
|
| 1740 |
+
"sub_type": "text",
|
| 1741 |
+
"list_items": [
|
| 1742 |
+
"1 10omAeote. 1981 100-001e 124",
|
| 1743 |
+
"p 34",
|
| 1744 |
+
"1:erflrungen feltner Gefleke;Ranandfugene 2",
|
| 1745 |
+
"1",
|
| 1746 |
+
"350",
|
| 1747 |
+
"2. Schon den Zieferungen, und der Stufumfugräte 364",
|
| 1748 |
+
"100 200",
|
| 1749 |
+
"aaftn 8r.6bifen.",
|
| 1750 |
+
"non berfeitenden Gefühlen und Empfindungen. 375",
|
| 1751 |
+
"Theorie angeneherer Empfindungen 378"
|
| 1752 |
+
],
|
| 1753 |
+
"bbox": [
|
| 1754 |
+
95,
|
| 1755 |
+
178,
|
| 1756 |
+
260,
|
| 1757 |
+
253
|
| 1758 |
+
],
|
| 1759 |
+
"page_idx": 5
|
| 1760 |
+
},
|
| 1761 |
+
{
|
| 1762 |
+
"type": "text",
|
| 1763 |
+
"text": "Otsu [15]",
|
| 1764 |
+
"text_level": 1,
|
| 1765 |
+
"bbox": [
|
| 1766 |
+
147,
|
| 1767 |
+
260,
|
| 1768 |
+
216,
|
| 1769 |
+
273
|
| 1770 |
+
],
|
| 1771 |
+
"page_idx": 5
|
| 1772 |
+
},
|
| 1773 |
+
{
|
| 1774 |
+
"type": "text",
|
| 1775 |
+
"text": "#",
|
| 1776 |
+
"text_level": 1,
|
| 1777 |
+
"bbox": [
|
| 1778 |
+
164,
|
| 1779 |
+
279,
|
| 1780 |
+
196,
|
| 1781 |
+
285
|
| 1782 |
+
],
|
| 1783 |
+
"page_idx": 5
|
| 1784 |
+
},
|
| 1785 |
+
{
|
| 1786 |
+
"type": "list",
|
| 1787 |
+
"sub_type": "text",
|
| 1788 |
+
"list_items": [
|
| 1789 |
+
"1BomGefte. 20n- nHgite 342",
|
| 1790 |
+
"bppm Gcrude. gnuudnuruydRnnu",
|
| 1791 |
+
"Ferlungen feltamer Geltel; Anbundungen mumberlicher Reigendien der Schmernungen und.",
|
| 1792 |
+
"350",
|
| 1793 |
+
", Bon den Reifenbchaften, und der Nothwenbdiafei",
|
| 1794 |
+
"desStubiums derMienchenfeuntni3 364",
|
| 1795 |
+
"yon phtologijden Gechinnifen over den Biffien",
|
| 1796 |
+
"H 368",
|
| 1797 |
+
"2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 2013",
|
| 1798 |
+
"180eotie unngnnti eepnnnng 607978"
|
| 1799 |
+
],
|
| 1800 |
+
"bbox": [
|
| 1801 |
+
102,
|
| 1802 |
+
288,
|
| 1803 |
+
260,
|
| 1804 |
+
365
|
| 1805 |
+
],
|
| 1806 |
+
"page_idx": 5
|
| 1807 |
+
},
|
| 1808 |
+
{
|
| 1809 |
+
"type": "text",
|
| 1810 |
+
"text": "Jemni et al. [3]",
|
| 1811 |
+
"text_level": 1,
|
| 1812 |
+
"bbox": [
|
| 1813 |
+
127,
|
| 1814 |
+
370,
|
| 1815 |
+
236,
|
| 1816 |
+
384
|
| 1817 |
+
],
|
| 1818 |
+
"page_idx": 5
|
| 1819 |
+
},
|
| 1820 |
+
{
|
| 1821 |
+
"type": "text",
|
| 1822 |
+
"text": "#",
|
| 1823 |
+
"text_level": 1,
|
| 1824 |
+
"bbox": [
|
| 1825 |
+
168,
|
| 1826 |
+
388,
|
| 1827 |
+
200,
|
| 1828 |
+
395
|
| 1829 |
+
],
|
| 1830 |
+
"page_idx": 5
|
| 1831 |
+
},
|
| 1832 |
+
{
|
| 1833 |
+
"type": "text",
|
| 1834 |
+
"text": "120omGefo. 13075 13076eite 342",
|
| 1835 |
+
"bbox": [
|
| 1836 |
+
105,
|
| 1837 |
+
397,
|
| 1838 |
+
260,
|
| 1839 |
+
405
|
| 1840 |
+
],
|
| 1841 |
+
"page_idx": 5
|
| 1842 |
+
},
|
| 1843 |
+
{
|
| 1844 |
+
"type": "list",
|
| 1845 |
+
"sub_type": "text",
|
| 1846 |
+
"list_items": [
|
| 1847 |
+
"bBpM Gerude. nssnns nnn 344",
|
| 1848 |
+
"Eerfahrungen felfauer Geltze; Anmablungen zu wertberichtigte Pläne in der Stammwerte.",
|
| 1849 |
+
"1,Jeunenrder Sgeiien der Gwangernn and"
|
| 1850 |
+
],
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
105,
|
| 1853 |
+
405,
|
| 1854 |
+
260,
|
| 1855 |
+
428
|
| 1856 |
+
],
|
| 1857 |
+
"page_idx": 5
|
| 1858 |
+
},
|
| 1859 |
+
{
|
| 1860 |
+
"type": "list",
|
| 1861 |
+
"sub_type": "text",
|
| 1862 |
+
"list_items": [
|
| 1863 |
+
"Bn den feidenchaften, und der Nethwendniafeit.",
|
| 1864 |
+
"364",
|
| 1865 |
+
"Bnon pdyhologifden Geelimnifen ober den Bifen",
|
| 1866 |
+
"gfofep.ber Sobillen. 368",
|
| 1867 |
+
"Vou fonderheitlichen Gefühlen und Empfindungen. 375",
|
| 1868 |
+
"182beorie angenehmeir Gnpufubungen, 378"
|
| 1869 |
+
],
|
| 1870 |
+
"bbox": [
|
| 1871 |
+
105,
|
| 1872 |
+
429,
|
| 1873 |
+
260,
|
| 1874 |
+
473
|
| 1875 |
+
],
|
| 1876 |
+
"page_idx": 5
|
| 1877 |
+
},
|
| 1878 |
+
{
|
| 1879 |
+
"type": "text",
|
| 1880 |
+
"text": "Competition Winner",
|
| 1881 |
+
"text_level": 1,
|
| 1882 |
+
"bbox": [
|
| 1883 |
+
112,
|
| 1884 |
+
480,
|
| 1885 |
+
253,
|
| 1886 |
+
494
|
| 1887 |
+
],
|
| 1888 |
+
"page_idx": 5
|
| 1889 |
+
},
|
| 1890 |
+
{
|
| 1891 |
+
"type": "text",
|
| 1892 |
+
"text": "Ground Truth",
|
| 1893 |
+
"text_level": 1,
|
| 1894 |
+
"bbox": [
|
| 1895 |
+
324,
|
| 1896 |
+
151,
|
| 1897 |
+
418,
|
| 1898 |
+
162
|
| 1899 |
+
],
|
| 1900 |
+
"page_idx": 5
|
| 1901 |
+
},
|
| 1902 |
+
{
|
| 1903 |
+
"type": "text",
|
| 1904 |
+
"text": "y",
|
| 1905 |
+
"text_level": 1,
|
| 1906 |
+
"bbox": [
|
| 1907 |
+
354,
|
| 1908 |
+
168,
|
| 1909 |
+
386,
|
| 1910 |
+
175
|
| 1911 |
+
],
|
| 1912 |
+
"page_idx": 5
|
| 1913 |
+
},
|
| 1914 |
+
{
|
| 1915 |
+
"type": "list",
|
| 1916 |
+
"sub_type": "text",
|
| 1917 |
+
"list_items": [
|
| 1918 |
+
"1BemGgte. 2015 90",
|
| 1919 |
+
"bBm Gerude. pueushunrnrre 1n Ss",
|
| 1920 |
+
"Fertlungen feltner Gefelte; Anbunfängen 20 mertendes Reinein der Forderung am 1.",
|
| 1921 |
+
"359",
|
| 1922 |
+
"Bennen geifenchaften und der Nethwenndigfe",
|
| 1923 |
+
"34",
|
| 1924 |
+
"Bn pndoligijden Geheimnifen er den Wijifen:",
|
| 1925 |
+
"Gchaften der Schillen. 368",
|
| 1926 |
+
"von fenderbeitlichen Gefühlen und Empfindungen. 375",
|
| 1927 |
+
"Theorie angenehmener Empfindungen. 378"
|
| 1928 |
+
],
|
| 1929 |
+
"bbox": [
|
| 1930 |
+
290,
|
| 1931 |
+
179,
|
| 1932 |
+
448,
|
| 1933 |
+
253
|
| 1934 |
+
],
|
| 1935 |
+
"page_idx": 5
|
| 1936 |
+
},
|
| 1937 |
+
{
|
| 1938 |
+
"type": "text",
|
| 1939 |
+
"text": "Sauvola et al. [16]",
|
| 1940 |
+
"text_level": 1,
|
| 1941 |
+
"bbox": [
|
| 1942 |
+
305,
|
| 1943 |
+
260,
|
| 1944 |
+
436,
|
| 1945 |
+
273
|
| 1946 |
+
],
|
| 1947 |
+
"page_idx": 5
|
| 1948 |
+
},
|
| 1949 |
+
{
|
| 1950 |
+
"type": "text",
|
| 1951 |
+
"text": "#",
|
| 1952 |
+
"text_level": 1,
|
| 1953 |
+
"bbox": [
|
| 1954 |
+
357,
|
| 1955 |
+
281,
|
| 1956 |
+
386,
|
| 1957 |
+
287
|
| 1958 |
+
],
|
| 1959 |
+
"page_idx": 5
|
| 1960 |
+
},
|
| 1961 |
+
{
|
| 1962 |
+
"type": "list",
|
| 1963 |
+
"sub_type": "text",
|
| 1964 |
+
"list_items": [
|
| 1965 |
+
"1FoeomGf#t.",
|
| 1966 |
+
"b90Pm Gcrude. 134",
|
| 1967 |
+
":Gefalungs Rittauer GdR; Bausstangsmits, gemiberlicher Steiger, den Schapenbauten und/",
|
| 1968 |
+
"Sonerifden 350",
|
| 1969 |
+
"Bon den teidenbauten, und der Nothnundige.",
|
| 1970 |
+
"bet zuumbe der Reifenfennnnti, 364",
|
| 1971 |
+
"sof prnnebnien Gcnnnnienr Her 163",
|
| 1972 |
+
"Den bonberbeitlichen Gefühlen und Gruppindungen. 375",
|
| 1973 |
+
"I8 theie angenebner Enapubungen 378"
|
| 1974 |
+
],
|
| 1975 |
+
"bbox": [
|
| 1976 |
+
295,
|
| 1977 |
+
288,
|
| 1978 |
+
448,
|
| 1979 |
+
356
|
| 1980 |
+
],
|
| 1981 |
+
"page_idx": 5
|
| 1982 |
+
},
|
| 1983 |
+
{
|
| 1984 |
+
"type": "text",
|
| 1985 |
+
"text": "Kang et al. [2]",
|
| 1986 |
+
"text_level": 1,
|
| 1987 |
+
"bbox": [
|
| 1988 |
+
319,
|
| 1989 |
+
370,
|
| 1990 |
+
423,
|
| 1991 |
+
384
|
| 1992 |
+
],
|
| 1993 |
+
"page_idx": 5
|
| 1994 |
+
},
|
| 1995 |
+
{
|
| 1996 |
+
"type": "text",
|
| 1997 |
+
"text": "Julal.",
|
| 1998 |
+
"text_level": 1,
|
| 1999 |
+
"bbox": [
|
| 2000 |
+
359,
|
| 2001 |
+
388,
|
| 2002 |
+
386,
|
| 2003 |
+
395
|
| 2004 |
+
],
|
| 2005 |
+
"page_idx": 5
|
| 2006 |
+
},
|
| 2007 |
+
{
|
| 2008 |
+
"type": "list",
|
| 2009 |
+
"sub_type": "text",
|
| 2010 |
+
"list_items": [
|
| 2011 |
+
"BcmGefdte. 342",
|
| 2012 |
+
"Bem Gierudc. 344",
|
| 2013 |
+
"Griffarungen feltfamer Geltige; Annuandungen",
|
| 2014 |
+
"Buchwirrher der Begrenen der Schwerangern und Hauferfaden 359",
|
| 2015 |
+
"Von den Leidenchaften, und der Norfhwembigfeit.",
|
| 2016 |
+
"desubiums derMnndeuefuntunj: 364",
|
| 2017 |
+
"Ben phychologifden Geheimnifen erden Bifen",
|
| 2018 |
+
"gaften der Sobillen. 368",
|
| 2019 |
+
"Von fonderheitlichen Gefühlen und Eupfindungen. 375",
|
| 2020 |
+
"2. Theorie angenehmier Empfindungen. 378"
|
| 2021 |
+
],
|
| 2022 |
+
"bbox": [
|
| 2023 |
+
297,
|
| 2024 |
+
398,
|
| 2025 |
+
448,
|
| 2026 |
+
473
|
| 2027 |
+
],
|
| 2028 |
+
"page_idx": 5
|
| 2029 |
+
},
|
| 2030 |
+
{
|
| 2031 |
+
"type": "text",
|
| 2032 |
+
"text": "Ours",
|
| 2033 |
+
"text_level": 1,
|
| 2034 |
+
"bbox": [
|
| 2035 |
+
352,
|
| 2036 |
+
480,
|
| 2037 |
+
391,
|
| 2038 |
+
492
|
| 2039 |
+
],
|
| 2040 |
+
"page_idx": 5
|
| 2041 |
+
},
|
| 2042 |
+
{
|
| 2043 |
+
"type": "text",
|
| 2044 |
+
"text": "region. Hence, the resultant enhanced image is removing foreground text because it considers it as a background noise. This explains the failure of the self-attention paradigm in these scenarios.",
|
| 2045 |
+
"bbox": [
|
| 2046 |
+
60,
|
| 2047 |
+
557,
|
| 2048 |
+
490,
|
| 2049 |
+
613
|
| 2050 |
+
],
|
| 2051 |
+
"page_idx": 5
|
| 2052 |
+
},
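The attention-map discussion above can be made concrete with a small sketch. Below is an illustrative single-head scaled dot-product self-attention over patch tokens, using random toy weights and embeddings rather than the trained DocEnTr weights; each row of the resulting matrix is the attention map of one token and can be reshaped to the patch grid for visualization, as in Fig. 4 and Fig. 5.

```python
import numpy as np

def self_attention(tokens, d_k=64, seed=0):
    """Single-head scaled dot-product self-attention over patch tokens.
    Returns an (n, n) matrix whose row i is the attention map of token i."""
    rng = np.random.default_rng(seed)
    d = tokens.shape[-1]
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)   # toy query projection
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)   # toy key projection
    q, k = tokens @ w_q, tokens @ w_k
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    attn = np.exp(scores)
    return attn / attn.sum(axis=-1, keepdims=True)

# A 256x256 image split into 16x16 patches gives a 16x16 grid of 256 tokens (toy 768-dim embeddings).
tokens = np.random.default_rng(1).standard_normal((256, 768))
attn = self_attention(tokens)
token_map = attn[42].reshape(16, 16)   # attention of one chosen token over the patch grid
print(token_map.shape, float(token_map.sum()))  # (16, 16) 1.0
```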
|
| 2053 |
+
{
|
| 2054 |
+
"type": "text",
|
| 2055 |
+
"text": "V. CONCLUSION",
|
| 2056 |
+
"text_level": 1,
|
| 2057 |
+
"bbox": [
|
| 2058 |
+
213,
|
| 2059 |
+
631,
|
| 2060 |
+
339,
|
| 2061 |
+
643
|
| 2062 |
+
],
|
| 2063 |
+
"page_idx": 5
|
| 2064 |
+
},
|
| 2065 |
+
{
|
| 2066 |
+
"type": "text",
|
| 2067 |
+
"text": "This paper presents a novel transformer-based architecture called DocEnTr for document image enhancement. To the best of our knowledge, this is the first pure transformer model addressing DIE related problems. The model captures high-level global long-range dependencies using the self-attention mechanism for a better performance. Quantitative and qualitative results on the DIBCO benchmarks prove the effectiveness of DocEnTr in recovering highly degraded document images. It is a simple and flexible framework that can also be easily applied to enhance other kinds of degradation occurring in document images (like blur, shadow, warps, stains etc). These aspects will be investigated in a future work. We also wish to investigate a self-supervised learning stage that can substantially benefit from large amounts of unlabeled data.",
|
| 2068 |
+
"bbox": [
|
| 2069 |
+
60,
|
| 2070 |
+
653,
|
| 2071 |
+
492,
|
| 2072 |
+
853
|
| 2073 |
+
],
|
| 2074 |
+
"page_idx": 5
|
| 2075 |
+
},
|
| 2076 |
+
{
|
| 2077 |
+
"type": "image",
|
| 2078 |
+
"img_path": "images/1fee423ec6487f4b59d9d6e287919094894b23593832d2913442237a04d2c126.jpg",
|
| 2079 |
+
"image_caption": [
|
| 2080 |
+
"Fig. 4. Attention maps from the $2^{nd}$ head of the last layer of DocEnTr{8} encoder. We display the self-attention for different (random) tokens."
|
| 2081 |
+
],
|
| 2082 |
+
"image_footnote": [],
|
| 2083 |
+
"bbox": [
|
| 2084 |
+
507,
|
| 2085 |
+
57,
|
| 2086 |
+
929,
|
| 2087 |
+
332
|
| 2088 |
+
],
|
| 2089 |
+
"page_idx": 5
|
| 2090 |
+
},
|
| 2091 |
+
{
|
| 2092 |
+
"type": "image",
|
| 2093 |
+
"img_path": "images/9723aa7cac14422b5d96e35ce693ae535e736437a42ef0058742a1adbccc6ba7.jpg",
|
| 2094 |
+
"image_caption": [
|
| 2095 |
+
"Fig. 3. Qualitative results of the different binarization methods on the sample number 12 from DIBCO 2017 Dataset.",
|
| 2096 |
+
"Fig. 5. Attention maps from the $2^{nd}$ head of the last layer of DocEnTr{8} encoder. We display the self-attention for different (random) tokens. (A failure case)."
|
| 2097 |
+
],
|
| 2098 |
+
"image_footnote": [],
|
| 2099 |
+
"bbox": [
|
| 2100 |
+
510,
|
| 2101 |
+
375,
|
| 2102 |
+
929,
|
| 2103 |
+
521
|
| 2104 |
+
],
|
| 2105 |
+
"page_idx": 5
|
| 2106 |
+
},
|
| 2107 |
+
{
|
| 2108 |
+
"type": "text",
|
| 2109 |
+
"text": "ACKNOWLEDGMENT",
|
| 2110 |
+
"text_level": 1,
|
| 2111 |
+
"bbox": [
|
| 2112 |
+
643,
|
| 2113 |
+
585,
|
| 2114 |
+
794,
|
| 2115 |
+
596
|
| 2116 |
+
],
|
| 2117 |
+
"page_idx": 5
|
| 2118 |
+
},
|
| 2119 |
+
{
|
| 2120 |
+
"type": "text",
|
| 2121 |
+
"text": "This work has been partially supported by the Swedish Research Council (grant 2018-06074, DECRYPT), the Spanish projects RTI2018-095645-B-C21, the CERCA Program / Generalitat de Catalunya, the FCT-19-15244, the Catalan projects 2017-SGR-1783, PhD Scholarship from AGAUR (2021FIB-10010) and DocPRESERV project (Swedish STINT grant).",
|
| 2122 |
+
"bbox": [
|
| 2123 |
+
502,
|
| 2124 |
+
601,
|
| 2125 |
+
932,
|
| 2126 |
+
687
|
| 2127 |
+
],
|
| 2128 |
+
"page_idx": 5
|
| 2129 |
+
},
|
| 2130 |
+
{
|
| 2131 |
+
"type": "text",
|
| 2132 |
+
"text": "REFERENCES",
|
| 2133 |
+
"text_level": 1,
|
| 2134 |
+
"bbox": [
|
| 2135 |
+
668,
|
| 2136 |
+
695,
|
| 2137 |
+
766,
|
| 2138 |
+
706
|
| 2139 |
+
],
|
| 2140 |
+
"page_idx": 5
|
| 2141 |
+
},
|
| 2142 |
+
{
|
| 2143 |
+
"type": "list",
|
| 2144 |
+
"sub_type": "ref_text",
|
| 2145 |
+
"list_items": [
|
| 2146 |
+
"[1] B. Megyesi, N. Blomqvist, and E. Pettersson, “The decode database: Collection of historical ciphers and keys,” in The 2nd International Conference on Historical Cryptology, HistoCrypt 2019, June 23-26 2019, Mons, Belgium, 2019, pp. 69-78.",
|
| 2147 |
+
"[2] S. Kang, B. K. Iwana, and S. Uchida, \"Complex image processing with less data document image binarization by integrating multiple pretrained u-net modules,\" Pattern Recognition, vol. 109, p. 107577, 2021.",
|
| 2148 |
+
"[3] S. K. Jemni, M. A. Souibgui, Y. Kessentini, and A. Fornés, \"Enhance to read better: A multi-task adversarial network for handwritten document image enhancement,\" Pattern Recognition, vol. 123, p. 108370, 2022.",
|
| 2149 |
+
"[4] M. Hradis, J. Kotera, P. Zemcik, and F. Sroubek, “Convolutional neural networks for direct text deblurring,” in Proceedings of BMVC, vol. 10, no. 2, 2015."
|
| 2150 |
+
],
|
| 2151 |
+
"bbox": [
|
| 2152 |
+
512,
|
| 2153 |
+
713,
|
| 2154 |
+
932,
|
| 2155 |
+
852
|
| 2156 |
+
],
|
| 2157 |
+
"page_idx": 5
|
| 2158 |
+
},
|
| 2159 |
+
{
|
| 2160 |
+
"type": "list",
|
| 2161 |
+
"sub_type": "ref_text",
|
| 2162 |
+
"list_items": [
|
| 2163 |
+
"[5] B. Wang and C. L. P. Chen, \"An effective background estimation method for shadows removal of document images,\" in 2019 IEEE International Conference on Image Processing (ICIP), 2019, pp. 3611-3615.",
|
| 2164 |
+
"[6] M. A. Souibgui and Y. Kessentini, “De-gan: A conditional generative adversarial network for document enhancement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.",
|
| 2165 |
+
"[7] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998-6008.",
|
| 2166 |
+
"[8] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.",
|
| 2167 |
+
"[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in International Conference on Learning Representations, 2021.",
|
| 2168 |
+
"[10] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European Conference on Computer Vision. Springer, 2020, pp. 213–229.",
|
| 2169 |
+
"[11] A. F. Biten, R. Litman, Y. Xie, S. Appalaraju, and R. Manmatha, \"Latr: Layout-aware transformer for scene-text vqa,\" arXiv preprint arXiv:2112.12494, 2021.",
|
| 2170 |
+
"[12] A. C. Rouhou, M. Dhiaf, Y. Kessentini, and S. B. Salem, \"Transformer-based approach for joint handwriting and named entity recognition in historical document,\" Pattern Recognition Letters, 2021.",
|
| 2171 |
+
"[13] V. De Bortoli, A. Desolneux, B. Galerne, and A. Leclaire, \"Patch redundancy in images: A statistical testing framework and some applications,\" SIAM Journal on Imaging Sciences, vol. 12, no. 2, pp. 893-926, 2019.",
|
| 2172 |
+
"[14] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, \"Extracting and composing robust features with denoising autoencoders,\" in Proceedings of the 25th international conference on Machine learning, 2008, pp. 1096-1103.",
|
| 2173 |
+
"[15] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE transactions on systems, man, and cybernetics, vol. 9, no. 1, pp. 62–66, 1979.",
|
| 2174 |
+
"[16] J. Sauvola and M. Pietikainen, \"Adaptive document image binarization,\" Pattern recognition, vol. 33, no. 2, pp. 225-236, 2000.",
|
| 2175 |
+
"[17] W. Xiong, J. Xu, Z. Xiong, J. Wang, and M. Liu, \"Degraded historical document image binarization using local features and support vector machine (svm),\" Optik, vol. 164, pp. 218-223, 2018.",
|
| 2176 |
+
"[18] R. Hedjam, M. Cheriet, and M. Kalacska, “Constrained energy maximization and self-referencing method for invisible ink detection from multispectral historical document images,” in 2014 22nd International Conference on Pattern Recognition. IEEE, 2014, pp. 3026–3031.",
|
| 2177 |
+
"[19] I. Pratikakis, K. Zagoris, G. Barlas, B. Gatos, \"Icdar 2017 competition on document image binarization (dibco 2017),\" in 2017 International Conference on Document Analysis and Recognition. IEEE, 2017, pp. 1395-1403.",
|
| 2178 |
+
"[20] M. Z. Afzal, J. Pastor-Pellicer, F. Shafait, T. M. Breuel, A. Dengel, and M. Liwicki, \"Document image binarization using lstm: A sequence learning approach,\" in Proceedings of the 3rd international workshop on historical document imaging and processing, 2015, pp. 79-84.",
|
| 2179 |
+
"[21] X.-J. Mao, C. Shen, and Y.-B. Yang, \"Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,\" in Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 2810-2818.",
|
| 2180 |
+
"[22] K. G. Lore, A. Akintayo, and S. Sarkar, “LNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.",
|
| 2181 |
+
"[23] J. Calvo-Zaragoza and A.-J. Gallego, “A selectional auto-encoder approach for document image binarization,” Pattern Recognition, vol. 86, pp. 37–47, 2019.",
|
| 2182 |
+
"[24] Y. Akbari, S. Al-Maadeed, and K. Adam, \"Binarization of degraded document images using convolutional neural networks and wavelet-based multichannel images,\" IEEE Access, vol. 8, pp. 153-517-153-534, 2020.",
|
| 2183 |
+
"[25] C. Tensmeyer and T. Martinez, \"Document image binarization with fully convolutional neural networks,\" in 2017 14th IAPR international conference on document analysis and recognition (ICDAR), vol. 1. IEEE, 2017, pp. 99-104.",
|
| 2184 |
+
"[26] O. Ronneberger, P. Fischer, and T. Brox, \"U-net: Convolutional networks for biomedical image segmentation,\" in International Conference on"
|
| 2185 |
+
],
|
| 2186 |
+
"bbox": [
|
| 2187 |
+
65,
|
| 2188 |
+
60,
|
| 2189 |
+
489,
|
| 2190 |
+
853
|
| 2191 |
+
],
|
| 2192 |
+
"page_idx": 6
|
| 2193 |
+
},
|
| 2194 |
+
{
|
| 2195 |
+
"type": "list",
|
| 2196 |
+
"sub_type": "ref_text",
|
| 2197 |
+
"list_items": [
|
| 2198 |
+
"Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241.",
|
| 2199 |
+
"[27] J. Zhao, C. Shi, F. Jia, Y. Wang, and B. Xiao, \"Document image binarization with cascaded generators of conditional generative adversarial networks,\" Pattern Recognition, vol. 96, p. 106968, 2019.",
|
| 2200 |
+
"[28] A. K. Bhunia, A. K. Bhunia, A. Sain, and P. P. Roy, “Improving document binarization via adversarial noise-texture augmentation,” in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 2721–2725.",
|
| 2201 |
+
"[29] M. O. Tamrin, M. El-Amine Ech-Cherif, and M. Cheriet, \"A two-stage unsupervised deep learning framework for degradation removal in ancient documents,\" in Pattern Recognition. ICPR International Workshops and Challenges. Springer International Publishing, 2021, pp. 292-303.",
|
| 2202 |
+
"[30] M. A. Souibgui, Y. Kessentini, and A. Fornés, “A conditional gan based approach for distorted camera captured documents recovery,” in Mediterranean Conference on Pattern Recognition and Artificial Intelligence. Springer, 2020.",
|
| 2203 |
+
"[31] Y. Xu, Y. Xu, T. Lv, L. Cui, F. Wei, G. Wang, Y. Lu, D. Florencio, C. Zhang, W. Che et al., \"Layoutlmv2: Multi-modal pretraining for visually-rich document understanding,\" arXiv preprint arXiv:2012.14740, 2020.",
|
| 2204 |
+
"[32] S. Appalaraju, B. Jasani, B. U. Kota, Y. Xie, and R. Manmatha, “Docformer: End-to-end transformer for document understanding,” ICCV, 2021.",
|
| 2205 |
+
"[33] P. Li, J. Gu, J. Kuen, V. I. Morariu, H. Zhao, R. Jain, V. Manjunatha, and H. Liu, \"Selfdoc: Self-supervised document representation learning,\" in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5652-5660.",
|
| 2206 |
+
"[34] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, \"Swinir: Image restoration using swim transformer,\" in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1833-1844.",
|
| 2207 |
+
"[35] H. Feng, Y. Wang, W. Zhou, J. Deng, and H. Li, \"Doctr: Document image transformer for geometric unwarping and illumination correction,\" arXiv preprint arXiv:2110.12942, 2021.",
|
| 2208 |
+
"[36] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.",
|
| 2209 |
+
"[37] I. Pratikakis, K. Zagori, P. Kaddas, and B. Gatos, \"Icfhr 2018 competition on handwritten document image binarization (h-dibco 2018),\" in 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018, pp. 489-493.",
|
| 2210 |
+
"[38] I. Pratikakis, B. Gatos, K. Ntirogiannis, “H-dibco 2010 - handwritten docu-ment image binarization competition,” in International Conference on Frontiers in Handwriting Recognition. IEEE, 2010, pp. 727--732.",
|
| 2211 |
+
"[39] K. N. I. Pratikakis, B. Gatos, \"Icdar 2011 document image binarization contest (dibco 2011),\" in 2011 International Conference on Document Analysis and Recognition, 2011, p. 1506-1510.",
|
| 2212 |
+
"[40] J.-C. Burie, M. Coustaty, S. Hadi, M. W. A. Kesiman, J.-M. Ogier, E. Paulus, K. Sok, I. M. G. Sunarya, and D. Valy, \"Icfhr2016 competition on the analysis of handwritten text in images of balinese palm leaf manuscripts,\" in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR). IEEE, 2016, pp. 596-601.",
|
| 2213 |
+
"[41] Q.N. Vo, S.H. Kim, H.J. Yang, G. Lee, “Binarization of degraded document images based on hierarchical deep supervised network,” Pattern Recognition, vol. 74, pp. 568—586, 2018.",
|
| 2214 |
+
"[42] I. Pratikakis, B. Gatos and K. Ntirogiannis, “ICFHR 2012 competition on handwritten document image binarization (H-DIBCO 2012),” in Proceedings of the International Conference on Frontiers in Handwriting Recognition. IEEE, 2012, pp. 817—822."
|
| 2215 |
+
],
|
| 2216 |
+
"bbox": [
|
| 2217 |
+
507,
|
| 2218 |
+
60,
|
| 2219 |
+
932,
|
| 2220 |
+
700
|
| 2221 |
+
],
|
| 2222 |
+
"page_idx": 6
|
| 2223 |
+
}
|
| 2224 |
+
]
|
2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10252/ded762cf-022c-45bd-bdb1-21253f13ccd6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d148be79cc1ab12e19ed88231fa1d60299083690c164868307211389dcac537d
|
| 3 |
+
size 6491056
|
2201.10xxx/2201.10252/full.md
ADDED
|
@@ -0,0 +1,466 @@
|
| 1 |
+
# DocEnTr: An End-to-End Document Image Enhancement Transformer
|
| 2 |
+
|
| 3 |
+
Mohamed Ali Souibgui
|
| 4 |
+
|
| 5 |
+
Computer Vision Center
|
| 6 |
+
|
| 7 |
+
Universitat Autonoma de Barcelona
|
| 8 |
+
|
| 9 |
+
Barcelona, Spain
|
| 10 |
+
|
| 11 |
+
msouibgui@cvc.uab.es
|
| 12 |
+
|
| 13 |
+
Sanket Biswas $^{\S}$
|
| 14 |
+
|
| 15 |
+
Computer Vision Center
|
| 16 |
+
|
| 17 |
+
Universitat Autonoma de Barcelona
|
| 18 |
+
|
| 19 |
+
Barcelona, Spain
|
| 20 |
+
|
| 21 |
+
sbiswas@cvc.uab.es
|
| 22 |
+
|
| 23 |
+
Sana Khamekhem Jemni\*
|
| 24 |
+
|
| 25 |
+
Digital Research Center of Sfax
|
| 26 |
+
|
| 27 |
+
MIRACL Laboratory, University of Sfax
|
| 28 |
+
|
| 29 |
+
Sfax, Tunisia
|
| 30 |
+
|
| 31 |
+
sana.khamekhem@gmail.com
|
| 32 |
+
|
| 33 |
+
Yousri Kessentini
|
| 34 |
+
|
| 35 |
+
Digital Research Center of Sfax
|
| 36 |
+
|
| 37 |
+
SM@RTS Laboratory
|
| 38 |
+
|
| 39 |
+
Sfax, Tunisia
|
| 40 |
+
|
| 41 |
+
yousri.kessentini@crns.rnrt.tn
|
| 42 |
+
|
| 43 |
+
Alicia Fornés, Josep Lladós
|
| 44 |
+
|
| 45 |
+
Computer Vision Center, Computer Science Dept.
|
| 46 |
+
|
| 47 |
+
Universitat Autonoma de Barcelona
|
| 48 |
+
|
| 49 |
+
Barcelona, Spain
|
| 50 |
+
|
| 51 |
+
{afornes, josep} @cvc.uab.es
|
| 52 |
+
|
| 53 |
+
Umapada Pal
|
| 54 |
+
|
| 55 |
+
CVPR Unit
|
| 56 |
+
|
| 57 |
+
Indian Statistical Institute
|
| 58 |
+
|
| 59 |
+
Kolkata, India
|
| 60 |
+
|
| 61 |
+
umapada@isical.ac.in
|
| 62 |
+
|
| 63 |
+
Abstract—Document images can be affected by many degradation scenarios, which cause recognition and processing difficulties. In this age of digitization, it is important to denoise them for proper usage. To address this challenge, we present a new encoder-decoder architecture based on vision transformers to enhance both machine-printed and handwritten document images, in an end-to-end fashion. The encoder operates directly on the pixel patches with their positional information without the use of any convolutional layers, while the decoder reconstructs a clean image from the encoded patches. Conducted experiments show the superiority of the proposed model over state-of-the-art methods on several DIBCO benchmarks. Code and models will be publicly available at: https://github.com/dali92002/DocEnTR.
|
| 64 |
+
|
| 65 |
+
# I. INTRODUCTION
|
| 66 |
+
|
| 67 |
+
The preservation and legibility of document images (especially the historical ones) are of utmost priority for Document Image Analysis and Recognition (DIAR) research. Document records usually contain significant information, and in the case of historical documents it can date back decades or even centuries [1]. The conservation of document records can be hampered by several kinds of degradation such as smears, stains, artefacts, pen strokes, bleed-through effects and uneven illumination. These distortions could heavily impact the subsequent downstream tasks for information processing, such as segmentation, Optical Character Recognition (OCR), information spotting and layout analysis. This manifests the need for a robust preprocessing task that denoises and reconstructs a high-quality clean image from its already degraded counterpart. Document Image Enhancement (DIE) aims towards restoring the quality of the degraded document samples to yield a clear enhanced version that is locally uniform.
|
| 68 |
+
|
| 69 |
+
In recent times, Convolutional Neural Network (CNN)-based approaches have been widely applied to DIE related subtasks, like binarization [2], [3], deblurring [4], shadow [5] and
|
| 70 |
+
|
| 71 |
+
watermark removal [6], etc. Although the performance of these models has significantly improved over classical handcrafted techniques, they do have their own set of drawbacks. Firstly, CNNs operate on regular grids and using the same convolutional filter to restore different regions of a degraded document image may not be a sensible choice. Secondly, CNNs fail to capture high-level long-range dependencies as they are more suited for extracting low-level spatial information from images.
|
| 72 |
+
|
| 73 |
+
With the recent success of transformers in Natural Language Processing (NLP) [7], [8], their application to computer vision problems (such as image recognition [9], object detection [10], visual question answering [11], handwritten text recognition (HTR) [12], etc.) has also gained prominence. The self-attention mechanism proposed in [7] helps to capture global interactions between contextual features. Using local information combined with knowledge of the long-range global spatial arrangement is beneficial for an efficient image restoration model. This local information is often encoded in the patch content of an image, while the large-scale organization is contained in the redundancy of this information across the patches of the image [13]. Contrary to CNNs, which process pixel arrays, Vision Transformers (ViTs) [9] split an image into fixed-size patches (e.g., 8×8, 16×16, etc.), embed each of them into a latent representation, and add positional embedding information before feeding them to the transformer encoder. This allows encoding the relative location of the patches, along with both local (spatial) and global (semantic) long-range dependencies. The motivation for using ViTs in our proposed baseline model is that a missing/degraded patch in the distorted document image can be recovered from the information of neighbouring patches thanks to the multi-head self-attention in ViTs, which quantifies pairwise global reasoning between them. Also, ViTs are adapted in the overall model pipeline in an encoder-decoder setting, inspired by the concept of denoising autoencoders
|
| 74 |
+
|
| 75 |
+
[14] used for the reconstruction of corrupted input data. The encoder maps the degraded image patches into latent representations, whereas the decoder recovers a clean image version from those encoded representations.
|
| 76 |
+
|
| 77 |
+
The overall contributions of our work are threefold:
|
| 78 |
+
|
| 79 |
+
- We introduce a simple and flexible Document image Enhancement Transformer (DocEnTr), an end-to-end image enhancement approach, that effectively restores and enhances a degraded document image provided as input. As far as we know, DocEnTr is the first pure transformer-based baseline that leverages the effectiveness of Vision Transformers (ViTs) in an encoder-decoder based framework, without any dependency on CNNs.
|
| 80 |
+
- We address document binarization as the key problem studied in this work to investigate the capabilities of the DocEnTr architecture. Experimental evaluation shows that DocEnTr achieves state-of-the-art results on standard document binarization benchmarks (DIBCO), for both machine-printed and handwritten degraded document images.
|
| 81 |
+
- A comprehensive and intuitive case study is presented in Section IV to demonstrate the utility of ViTs and their multi-headed self-attention mechanism in the document enhancement task.
|
| 82 |
+
|
| 83 |
+
The rest of this paper is organized as follows. In Section II we review the state of the art. The Document image Enhancement Transformer (DocEnTr) is described in Section III. Section IV contains an analysis of the extensive experimentation that has been conducted, including different quantitative and qualitative studies. Finally, in Section V we draw the conclusions and propose open challenges for future research directions.
|
| 84 |
+
|
| 85 |
+
# II. RELATED WORK
|
| 86 |
+
|
| 87 |
+
# A. Document Image Enhancement
|
| 88 |
+
|
| 89 |
+
This work is an application within DIE, which has been an active field within the DIAR community. The first classic methods were based on thresholding, which means finding a single (global) or multiple (local) threshold value(s) for the document. These threshold values are used to classify the document image pixels into foreground (black) or background (white) [15], [16]. These methods have kept evolving in recent years using machine learning tools, for instance, support vector machines (SVM) [17]. Later, energy-based methods were introduced. These track the text pixels by maximizing their energy function [18], while minimizing that of the degraded background. However, the results of those approaches were unsatisfactory [19].
|
| 90 |
+
|
| 91 |
+
Recently, deep learning based methods have been used to tackle this problem by learning the enhancement directly from raw data. In [20], the problem was formulated as pixel classification: each pixel is classified as black or white depending on a sequence of the surrounding pixels, for which a 2D Long Short-Term Memory (LSTM) was trained. This process
|
| 92 |
+
|
| 93 |
+
is, of course, time-consuming. A more practical solution is to map the images from the degraded domain to the enhanced one in an end-to-end fashion with CNN auto-encoders. The latter have hence driven the recent improvements in image denoising [21] and, more particularly, in document enhancement tasks such as binarization [22], [23], [24], deblurring [4], and so on. Following this strategy, a fully convolutional model was proposed in [25] to binarize degraded document images at multiple image scales. Similarly, [2] proposed an autoencoder architecture that applies a cascade of pre-trained U-Net models [26] to learn the binarization using a smaller amount of data. Moreover, generative adversarial networks (GANs) were employed for this task to generate clean images by conditioning on the degraded versions. These architectures are composed of a generative model that produces a clean version of the image and a discriminator that assesses the binarization result. Both models are usually fully (or partially) composed of CNN layers. In [6], a conditional GAN approach was proposed for different enhancement tasks, achieving good results in document image cleaning, binarization, deblurring and dense watermark removal. This method was recently extended in [3] by adding a second discriminator to assess text readability, with the goal of obtaining an enhanced image that is clean and readable at the same time. Similar cGAN-based methods were also proposed in [27], [28], [29], [30].
|
| 94 |
+
|
| 95 |
+
# B. Transformers in Vision and Image Enhancement Tasks
|
| 96 |
+
|
| 97 |
+
In recent years, transformers have been behind many advances in deep learning applications. Transformer-based architectures first showed great success in NLP tasks [7], [8] such as text translation and embedding, surpassing the previous LSTM approaches. This motivated many works to employ them for vision tasks, for instance, classification [9], object detection [10], document understanding [31], [32], [33], etc. More related to this paper, transformers were also used for natural image restoration [34] and document image dewarping [35]. However, the architectures used in these latter image and document enhancement approaches still rely on CNN feature extractors before the transformer stage, and CNNs are also used to reconstruct the output image. In contrast, what we propose in this work is a fully transformer-based approach that attends directly to the patches of the input image and reconstructs the pixels without using any CNN layer.
|
| 98 |
+
|
| 99 |
+
# III. METHOD
|
| 100 |
+
|
| 101 |
+
The proposed model is a scalable auto-encoder that uses vision transformers in its encoder and decoder parts, as illustrated in Fig. 1. The degraded image is first divided into patches before entering the encoder. During encoding, the patches are mapped to a latent representation of tokens, where each token is associated with a degraded patch. Then, the tokens are passed to the decoder, which outputs the enhanced version of the patches. Unlike the CNN-based auto-encoders usually employed for document image enhancement tasks, the transformer auto-encoder is
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
Fig. 1. Proposed model: The input image is split into patches, which are linearly embedded, and position information is added to them. The resulting sequence of vectors is fed to a standard Transformer encoder to obtain the latent representations. These representations are fed to another Transformer, representing the decoder, to obtain the decoded vectors, which are linearly projected to vectors of pixels representing the output image patches.
|
| 105 |
+
|
| 106 |
+
profiting from the self-attention mechanism, which provides global information during the enhancement of every patch. Both the decoder and, especially, the encoder are inspired by the Vision Transformer (ViT) [9] architecture. We present more details of the model's architecture in what follows.
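To make this data flow concrete, the following sketch (our own illustrative PyTorch code, not the released implementation; the $256 \times 256$ input size and $16 \times 16$ patch size are values used later in the paper) shows how an image can be cut into flattened patches and how predicted patch vectors can be folded back into an image.

```python
import torch

def patchify(imgs, p=16):
    # imgs: (B, C, H, W) with H and W divisible by the patch size p
    B, C, H, W = imgs.shape
    x = imgs.reshape(B, C, H // p, p, W // p, p)
    x = x.permute(0, 2, 4, 3, 5, 1)                      # (B, H/p, W/p, p, p, C)
    return x.reshape(B, (H // p) * (W // p), p * p * C)  # one flattened vector per patch

def unpatchify(tokens, p=16, H=256, W=256, C=3):
    # tokens: (B, N, p*p*C) predicted pixel vectors, one per patch
    B, N, _ = tokens.shape
    x = tokens.reshape(B, H // p, W // p, p, p, C)
    x = x.permute(0, 5, 1, 3, 2, 4)                      # (B, C, H/p, p, W/p, p)
    return x.reshape(B, C, H, W)

# Round-trip sanity check: folding the patches back recovers the original image.
img = torch.rand(2, 3, 256, 256)
assert torch.allclose(unpatchify(patchify(img)), img)
```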
|
| 107 |
+
|
| 108 |
+
# A. Encoder
|
| 109 |
+
|
| 110 |
+
In the encoding stage (left part of Fig. 1), given an image, we divide it into a set of patches. Then, we embed these patches to obtain the tokens and add their positional information. After that, a number of transformer blocks is employed to map these tokens into the encoded latent representation. These blocks follow the same structure as [9], composed of alternating layers of multi-headed self-attention and a multi-layer perceptron (MLP). Each of these blocks is preceded by a LayerNorm (LN) [36] and followed by a residual connection. The patch embedding size and the number of transformer blocks are set depending on the model size.
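A minimal version of such an encoder could be written as follows (a hypothetical PyTorch sketch with our own class and argument names; the dimensions correspond to the Base variant of Table I, and the MLP expansion ratio of 4 is an assumption).

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Pre-LN block: LayerNorm -> multi-head self-attention -> residual,
    then LayerNorm -> MLP -> residual."""
    def __init__(self, dim=768, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):                      # x: (B, N, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]          # global self-attention over all patch tokens
        return x + self.mlp(self.norm2(x))

class Encoder(nn.Module):
    """Patch embedding + learned positional embedding + a stack of transformer blocks."""
    def __init__(self, num_patches=256, patch_dim=16 * 16 * 3, dim=768, depth=12, heads=8):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)                     # flattened patch -> token
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))  # positional information
        self.blocks = nn.ModuleList(
            [TransformerBlock(dim, heads) for _ in range(depth)])

    def forward(self, patches):                # patches: (B, N, patch_dim)
        x = self.embed(patches) + self.pos
        for blk in self.blocks:
            x = blk(x)
        return x                               # latent representation, one token per patch
```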
|
| 111 |
+
|
| 112 |
+
# B. Decoder
|
| 113 |
+
|
| 114 |
+
The decoder part consists of a series of transformer blocks (the same number as the encoder blocks) that take as input the sequence of tokens output by the encoder. These tokens are propagated through the transformer decoder blocks and then projected with a linear layer to the desired pixel values. This makes each element of the output correspond to a vector representing a flattened patch in the output image. The ground truth pixel values are obtained by dividing the ground truth (GT) clean image into patches (in the same way as the input degraded image) and flattening them into vectors. A mean squared error (MSE) loss between the model's output and the GT pixel patches is used to train the model.
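Assuming the `patchify`, `TransformerBlock` and `Encoder` helpers sketched above, the decoder and the training objective could then look roughly like this (again an illustrative fragment rather than the released code):

```python
class Decoder(nn.Module):
    """Transformer blocks followed by a linear projection from tokens to pixel vectors."""
    def __init__(self, patch_dim=16 * 16 * 3, dim=768, depth=12, heads=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [TransformerBlock(dim, heads) for _ in range(depth)])
        self.to_pixels = nn.Linear(dim, patch_dim)   # one flattened output patch per token

    def forward(self, tokens):                       # tokens from the encoder: (B, N, dim)
        for blk in self.blocks:
            tokens = blk(tokens)
        return self.to_pixels(tokens)                # (B, N, patch_dim)

# One training step: MSE between the predicted and the ground-truth patch pixels.
encoder, decoder = Encoder(), Decoder()
degraded = torch.rand(2, 3, 256, 256)                # degraded input crops
clean = torch.rand(2, 3, 256, 256)                   # corresponding clean GT crops
pred = decoder(encoder(patchify(degraded)))
loss = nn.functional.mse_loss(pred, patchify(clean))
loss.backward()
```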
|
| 115 |
+
|
| 116 |
+
# C. Model Variants
|
| 117 |
+
|
| 118 |
+
Following a similar convention as previous works [8], [9], the proposed model configuration can be modified to produce different variants. In our experiments we define three variants, namely "Small", "Base" and "Large", as listed in Table I. Evidently, a larger model requires more
|
| 119 |
+
|
| 120 |
+
computational memory and training time, since the number of model parameters increases. Thus, a trade-off between the model size and its enhancement performance must be taken into consideration.
|
| 121 |
+
|
| 122 |
+
TABLE I DETAILS OF OUR MODEL VARIANTS
|
| 123 |
+
|
| 124 |
+
<table><tr><td>Model</td><td>Layers</td><td>Dim</td><td>Attention Heads</td><td># Parameters</td></tr><tr><td>DocEnTr-Small</td><td>6</td><td>512</td><td>4</td><td>17M</td></tr><tr><td>DocEnTr-Base</td><td>12</td><td>768</td><td>8</td><td>68M</td></tr><tr><td>DocEnTr-Large</td><td>24</td><td>1024</td><td>16</td><td>255M</td></tr></table>
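In terms of the sketch above, the three variants correspond roughly to the following configurations (depth, dimension and head counts are taken from Table I; the dictionary itself is only illustrative).

```python
DOCENTR_VARIANTS = {
    # variant: transformer layers, embedding dimension, attention heads (Table I)
    "small": dict(depth=6,  dim=512,  heads=4),    # ~17M parameters
    "base":  dict(depth=12, dim=768,  heads=8),    # ~68M parameters
    "large": dict(depth=24, dim=1024, heads=16),   # ~255M parameters
}
```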
|
| 125 |
+
|
| 126 |
+
# IV. EXPERIMENTAL VALIDATION
|
| 127 |
+
|
| 128 |
+
To validate our model, we use the datasets proposed in the different DIBCO and H-DIBCO contests [37] for the binarization of printed and handwritten degraded document images, and we compare our results with the state-of-the-art methods. Before these experiments, we conducted different investigations for a proper selection of the hyperparameters.
|
| 129 |
+
|
| 130 |
+
# A. Choosing the Best Model Configuration
|
| 131 |
+
|
| 132 |
+
We begin our experiments by choosing the configuration that gives the best performance among our model variants (Small, Base or Large). For training, each degraded image and its clean GT counterpart are divided into overlapping crops of size $256 \times 256 \times 3$; the overlap is set vertically and horizontally to half the crop size (i.e., 128 pixels). These resulting crops are used by our models as input and expected output (training data). For evaluation, and following the usual approaches [38], we utilize the following metrics: peak signal-to-noise ratio (PSNR), F-Measure (FM), pseudo-F-measure $(\mathrm{F}_{ps})$ and distance reciprocal distortion metric (DRD). We used the DIBCO 2017 dataset in this experiment, and the obtained results are given in Table II. As can be seen, a larger model gives better results in all the metrics, but it requires more computational resources. Thus, we recommend using the Base model for a binarization
|
| 133 |
+
|
| 134 |
+
task. Nevertheless, we will also test the Large version in the following experiments.
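The overlapped splitting described above can be reproduced with a simple sliding window, for example as follows (a NumPy sketch under the assumption that the image has already been padded so that its height and width are multiples of the stride; the released code may handle borders differently).

```python
import numpy as np

def split_overlapping(img, size=256, stride=128):
    """Cut an H x W x 3 image into size x size crops, sliding by `stride`
    (half the crop size, i.e. 50% vertical and horizontal overlap)."""
    H, W = img.shape[:2]
    crops = []
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            crops.append(img[y:y + size, x:x + size])
    return np.stack(crops)   # (num_crops, size, size, 3)

# Example: a padded 512 x 768 image yields 3 x 5 = 15 overlapping crops.
crops = split_overlapping(np.zeros((512, 768, 3), dtype=np.uint8))
assert crops.shape == (15, 256, 256, 3)
```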
|
| 135 |
+
|
| 136 |
+
TABLE II RESULTS OF VARYING THE MODEL SIZE FOR THE DIBCO 2017 DATASET. $\uparrow$ : THE HIGHER THE BETTER. $\downarrow$ : THE LOWER THE BETTER.
|
| 137 |
+
|
| 138 |
+
<table><tr><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>DocEnTr-Small</td><td>18.29</td><td>91.06</td><td>93.82</td><td>2.78</td></tr><tr><td>DocEnTr-Base</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>DocEnTr-Large</td><td>18.85</td><td>92.14</td><td>94.58</td><td>2.53</td></tr></table>
|
| 139 |
+
|
| 140 |
+
Next, we carry out another experiment on the input image size and the patch size used by our model. The reason is that different image and patch sizes can affect the binarization, since the model accesses different types of information (from global to local). The results obtained with the Base model are given in Table III. As can be seen, slightly better performance is obtained with the smaller input size ($256\times 256\times 3$ compared to $512\times 512\times 3$). However, we notice that the performance improves considerably when using a smaller patch size. The reason is that, by employing a smaller patch size, each patch of the image attends to more, and more local, patches during the self-attention. Thus, with an $8\times 8$ patch size the model sees finer information during the enhancement process. But, as before, using a smaller patch size increases the number of model parameters, requiring more computational resources.
|
| 141 |
+
|
| 142 |
+
TABLE III RESULTS OF VARYING THE INPUT AND PATCH SIZES FOR THE DIBCO 2017 DATASET
|
| 143 |
+
|
| 144 |
+
<table><tr><td>Input Size</td><td>Patch Size</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>256 × 256 × 3</td><td>8 × 8</td><td>19.11</td><td>92.53</td><td>95.15</td><td>2.37</td></tr><tr><td>256 × 256 × 3</td><td>16 × 16</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>256 × 256 × 3</td><td>32 × 32</td><td>17.57</td><td>89.37</td><td>91.99</td><td>3.44</td></tr><tr><td>512 × 512 × 3</td><td>8 × 8</td><td>18.91</td><td>92.2</td><td>94.93</td><td>2.45</td></tr><tr><td>512 × 512 × 3</td><td>16 × 16</td><td>18.66</td><td>92.15</td><td>93.89</td><td>2.54</td></tr><tr><td>512 × 512 × 3</td><td>32 × 32</td><td>17.27</td><td>89.43</td><td>91.51</td><td>3.54</td></tr></table>
|
| 145 |
+
|
| 146 |
+
# B. Quantitative Evaluation
|
| 147 |
+
|
| 148 |
+
After choosing the best hyper-parameters of the model, we conduct experiments on the different datasets and compare our results with the related approaches. We begin by testing on the DIBCO 2011 dataset [39]. This dataset contains degraded document images with handwritten and printed text. For training, we use all the images from the other DIBCO and H-DIBCO datasets (except DIBCO 2019) and the Palm Leaf dataset [40]. These images are split into overlapping crops of size $256 \times 256 \times 3$ before being fed to the model. The obtained results are given in Table IV, where we can notice the superiority of our method compared to the different variants of the related approaches. We choose to compare with different families of approaches: classic thresholding and deep learning based methods (whether based on CNNs or cGANs). Our model
|
| 149 |
+
|
| 150 |
+
DocEnTr-Base\{8\}, which denotes the Base setting with a patch size of $8 \times 8$, gives the best PSNR and DRD compared to all the other methods, while DocEnTr-Large\{16\}, which denotes the Large setting with a patch size of $16 \times 16$, achieves the second best performance in the PSNR, $\mathrm{F}_{ps}$ and DRD metrics. We note that, for computational reasons, we were not able to train the Large setting with a patch size of $8 \times 8$.
|
| 151 |
+
|
| 152 |
+
TABLE IV COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2011 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.
|
| 153 |
+
|
| 154 |
+
<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>15.70</td><td>82.10</td><td>-</td><td>9.00</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>15.60</td><td>82.10</td><td>-</td><td>8.50</td></tr><tr><td>Vo et al. [41]</td><td>CNN</td><td>20.10</td><td>93.30</td><td>-</td><td>2.00</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>19.90</td><td>95.50</td><td>-</td><td>1.80</td></tr><tr><td>Tensmeyer et al [25]</td><td>CNN</td><td>20.11</td><td>93.60</td><td>97.70</td><td>1.85</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>20.30</td><td>93.80</td><td>-</td><td>1.80</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>20.81</td><td>94.37</td><td>96.15</td><td>1.63</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>20.11</td><td>93.48</td><td>96.12</td><td>1.93</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>20.62</td><td>94.24</td><td>96.71</td><td>1.69</td></tr></table>
|
| 155 |
+
|
| 156 |
+
After that, we test our model on the H-DIBCO 2012 dataset [42], which contains degraded handwritten document images. As in the previous experiment, we use the other datasets for training with the same split size. The obtained results are shown in Table V, where we can notice that our model gives the best performance in terms of PSNR and FM with the Base{8} configuration. We also notice that the other configurations give competitive results compared to the other approaches.
|
| 157 |
+
|
| 158 |
+
TABLE V COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON H-DIBCO 2012 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.
|
| 159 |
+
|
| 160 |
+
<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>15.03</td><td>80.18</td><td>82.65</td><td>26.46</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>16.71</td><td>82.89</td><td>87.95</td><td>6.59</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>21.37</td><td>95.16</td><td>96.44</td><td>1.13</td></tr><tr><td>Tensmeyer et al [25]</td><td>CNN</td><td>20.60</td><td>92.53</td><td>96.67</td><td>2.48</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>21.91</td><td>94.96</td><td>96.15</td><td>1.55</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>22.00</td><td>95.18</td><td>94.63</td><td>1.62</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>22.29</td><td>95.31</td><td>96.29</td><td>1.60</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>21.03</td><td>93.31</td><td>94.72</td><td>2.31</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>22.04</td><td>95.09</td><td>96.00</td><td>1.64</td></tr></table>
|
| 161 |
+
|
| 162 |
+
Moreover, we tested on the more recent DIBCO 2017 dataset. On this dataset, our model achieves the best performance in all the evaluation metrics, as presented in Table VI.
|
| 163 |
+
|
| 164 |
+
Lastly, we test on the H-DIBCO 2018 dataset. Here, as shown in Table VII, the best performance is achieved by the cGAN-based method of [3]. Nevertheless, our model is still very competitive, since it ranks second in the PSNR, FM and $\mathrm{F}_{ps}$ metrics.
|
| 165 |
+
|
| 166 |
+
To summarize the quantitative evaluation, we demonstrate that our model gives good results compared to the state-of-the-art approaches. This was shown by obtaining the best results in most of the evaluation metrics on the DIBCO 2011, H-DIBCO 2012 and DIBCO 2017 benchmarks.
|
| 167 |
+
|
| 168 |
+
TABLE VI COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2017 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.
|
| 173 |
+
|
| 174 |
+
<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>13.85</td><td>77.73</td><td>77.89</td><td>15.54</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>14.25</td><td>77.11</td><td>84.1</td><td>8.85</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>15.85</td><td>91.57</td><td>93.55</td><td>2.92</td></tr><tr><td>Competition top [19]</td><td>CNN</td><td>18.28</td><td>91.04</td><td>92.86</td><td>3.40</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>17.83</td><td>90.73</td><td>92.58</td><td>3.58</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>17.45</td><td>89.8</td><td>89.95</td><td>4.03</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>19.11</td><td>92.53</td><td>95.15</td><td>2.37</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>18.69</td><td>91.66</td><td>94.11</td><td>2.63</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>18.85</td><td>92.14</td><td>94.58</td><td>2.53</td></tr></table>
|
| 175 |
+
|
| 176 |
+
TABLE VII COMPARATIVE RESULTS OF OUR PROPOSED METHOD ON DIBCO 2018 DATASET. THRESH: THRESHOLDING, TR: TRANSFORMERS.
|
| 181 |
+
|
| 182 |
+
<table><tr><td>Method</td><td>Model</td><td>PSNR ↑</td><td>FM ↑</td><td>Fps ↑</td><td>DRD ↓</td></tr><tr><td>Otsu [15]</td><td>Thres.</td><td>9.74</td><td>51.45</td><td>53.05</td><td>59.07</td></tr><tr><td>Savoula et al. [16]</td><td>Thres.</td><td>13.78</td><td>67.81</td><td>74.08</td><td>17.69</td></tr><tr><td>Kang et al [2]</td><td>CNN</td><td>19.39</td><td>89.71</td><td>91.62</td><td>2.51</td></tr><tr><td>Competition top [19]</td><td>CNN</td><td>19.11</td><td>88.34</td><td>90.24</td><td>4.92</td></tr><tr><td>Zhao et al. [41]</td><td>cGAN</td><td>18.37</td><td>87.73</td><td>90.60</td><td>4.58</td></tr><tr><td>Jemni et al. [3]</td><td>cGAN</td><td>20.18</td><td>92.41</td><td>94.35</td><td>2.60</td></tr><tr><td>DocEnTr-Base{8}</td><td>Tr</td><td>19.46</td><td>90.59</td><td>93.97</td><td>3.35</td></tr><tr><td>DocEnTr-Base{16}</td><td>Tr</td><td>19.33</td><td>89.97</td><td>93.5</td><td>3.68</td></tr><tr><td>DocEnTr-Large{16}</td><td>Tr</td><td>19.47</td><td>89.21</td><td>92.54</td><td>3.96</td></tr></table>
|
| 183 |
+
|
| 184 |
+
# C. Qualitative Evaluation
|
| 185 |
+
|
| 186 |
+
After presenting the quantitative results achieved by our model, we present in this subsection some qualitative results. We begin by showing the enhancement performance of our method. This is illustrated in Fig. 2, where we compare our binarization results with the clean GT images. As can be seen, our model produces highly clean images that are very close to the GT images, reflecting the good quantitative performance obtained in the previous subsection.
|
| 187 |
+
|
| 188 |
+
Then, we present a qualitative comparison of our method with the related approaches. This is shown in Fig. 3, where we can notice the superiority of our model in recovering a highly degraded image over the classic thresholding [15], [16], CNN [2], and cGAN [3] methods.
|
| 189 |
+
|
| 190 |
+
# D. Self-attention Mechanism
|
| 191 |
+
|
| 192 |
+
As stated above, our method differs from the CNN-based ones by employing transformers to enhance the degraded document images. The self-attention mechanism used in the transformer blocks gives every token a global view of the other tokens, which represent the patches within the image, for a better enhancement result. A visual illustration of the attention maps of the last layer of the encoder is given in Fig. 4. As can be seen, a token can attend to all the patches within the image. In these test cases, each token (patch representation) focuses on the text elements while ignoring the degraded patches. Thus, the attended patches are later decoded and projected to pixels while taking into consideration high-level global information from the attended neighbouring patches that cover the full input image. We also notice that
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
|
| 196 |
+

|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
|
| 202 |
+

|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
Fig. 2. Qualitative results of our proposed method in binarization of some samples from the DIBCO and H-DIBCO datasets. Images in columns are: Left: original image, Middle: GT image, Right: Binarized image using our proposed method.
|
| 210 |
+
|
| 211 |
+

|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
|
| 221 |
+

|
| 222 |
+
|
| 246 |
+
|
| 247 |
+

|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
|
| 253 |
+

|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
|
| 257 |
+

|
| 258 |
+
|
| 282 |
+
|
| 283 |
+
the attention maps mostly match the text of the GT images, which leads to a satisfactory binarization result that is close to the GT. This supports the utility of using transformers, with their powerful self-attention mechanism, in the image enhancement task. However, in other sample cases, as illustrated in Fig. 5, we observe that the attention maps consider some portions of the text as a background
|
| 284 |
+
|
| 285 |
+

|
| 286 |
+
Fig. 3 panels: Original, Otsu [15], Jemni et al. [3], Competition Winner, Ground Truth, Sauvola et al. [16], Kang et al. [2], Ours.
|
| 402 |
+
|
| 403 |
+
region. Hence, the resulting enhanced image removes foreground text because it considers it as background noise. This explains the failure of the self-attention paradigm in these scenarios.
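With the encoder sketched in Section III, attention maps like those of Fig. 4 and Fig. 5 can be read out from the last encoder block roughly as follows (illustrative code reusing the tensors and modules from the earlier sketches; note that `nn.MultiheadAttention` returns per-head weights only when `average_attn_weights=False` is passed, which requires a recent PyTorch version).

```python
import matplotlib.pyplot as plt
import torch

@torch.no_grad()
def last_block_attention(encoder, patches, head=1):
    """Return the (N, N) attention matrix of one head in the last encoder block."""
    x = encoder.embed(patches) + encoder.pos
    for blk in encoder.blocks[:-1]:
        x = blk(x)
    last = encoder.blocks[-1]
    h = last.norm1(x)
    _, weights = last.attn(h, h, h, need_weights=True, average_attn_weights=False)
    return weights[0, head]                    # row i = how token i attends to every patch

attn = last_block_attention(encoder, patchify(degraded))
token = 42                                     # an arbitrary query token (patch) to visualize
plt.imshow(attn[token].reshape(16, 16).numpy(), cmap="viridis")  # 256 patches -> 16 x 16 grid
plt.title("Self-attention of one token over all patches")
plt.show()
```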
|
| 404 |
+
|
| 405 |
+
# V. CONCLUSION
|
| 406 |
+
|
| 407 |
+
This paper presents a novel transformer-based architecture called DocEnTr for document image enhancement. To the best of our knowledge, this is the first pure transformer model addressing DIE-related problems. The model captures high-level global long-range dependencies using the self-attention mechanism for better performance. Quantitative and qualitative results on the DIBCO benchmarks prove the effectiveness of DocEnTr in recovering highly degraded document images. It is a simple and flexible framework that can also be easily applied to enhance other kinds of degradation occurring in document images (such as blur, shadows, warping, stains, etc.). These aspects will be investigated in future work. We also wish to investigate a self-supervised learning stage that could substantially benefit from large amounts of unlabeled data.
|
| 408 |
+
|
| 409 |
+

|
| 410 |
+
Fig. 4. Attention maps from the $2^{nd}$ head of the last layer of DocEnTr{8} encoder. We display the self-attention for different (random) tokens.
|
| 411 |
+
|
| 412 |
+

|
| 413 |
+
Fig. 3. Qualitative results of the different binarization methods on the sample number 12 from DIBCO 2017 Dataset.
|
| 414 |
+
Fig. 5. Attention maps from the $2^{nd}$ head of the last layer of DocEnTr{8} encoder. We display the self-attention for different (random) tokens. (A failure case).
|
| 415 |
+
|
| 416 |
+
# ACKNOWLEDGMENT
|
| 417 |
+
|
| 418 |
+
This work has been partially supported by the Swedish Research Council (grant 2018-06074, DECRYPT), the Spanish projects RTI2018-095645-B-C21, the CERCA Program / Generalitat de Catalunya, the FCT-19-15244, the Catalan projects 2017-SGR-1783, PhD Scholarship from AGAUR (2021FIB-10010) and DocPRESERV project (Swedish STINT grant).
|
| 419 |
+
|
| 420 |
+
# REFERENCES
|
| 421 |
+
|
| 422 |
+
[1] B. Megyesi, N. Blomqvist, and E. Pettersson, “The decode database: Collection of historical ciphers and keys,” in The 2nd International Conference on Historical Cryptology, HistoCrypt 2019, June 23-26 2019, Mons, Belgium, 2019, pp. 69-78.
|
| 423 |
+
[2] S. Kang, B. K. Iwana, and S. Uchida, "Complex image processing with less data document image binarization by integrating multiple pretrained u-net modules," Pattern Recognition, vol. 109, p. 107577, 2021.
|
| 424 |
+
[3] S. K. Jemni, M. A. Souibgui, Y. Kessentini, and A. Fornés, "Enhance to read better: A multi-task adversarial network for handwritten document image enhancement," Pattern Recognition, vol. 123, p. 108370, 2022.
|
| 425 |
+
[4] M. Hradis, J. Kotera, P. Zemcik, and F. Sroubek, “Convolutional neural networks for direct text deblurring,” in Proceedings of BMVC, vol. 10, no. 2, 2015.
|
| 426 |
+
|
| 427 |
+
[5] B. Wang and C. L. P. Chen, "An effective background estimation method for shadows removal of document images," in 2019 IEEE International Conference on Image Processing (ICIP), 2019, pp. 3611-3615.
|
| 428 |
+
[6] M. A. Souibgui and Y. Kessentini, “De-gan: A conditional generative adversarial network for document enhancement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
|
| 429 |
+
[7] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998-6008.
|
| 430 |
+
[8] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
|
| 431 |
+
[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An image is worth 16x16 words: Transformers for image recognition at scale," in International Conference on Learning Representations, 2021.
|
| 432 |
+
[10] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European Conference on Computer Vision. Springer, 2020, pp. 213–229.
|
| 433 |
+
[11] A. F. Biten, R. Litman, Y. Xie, S. Appalaraju, and R. Manmatha, "Latr: Layout-aware transformer for scene-text vqa," arXiv preprint arXiv:2112.12494, 2021.
|
| 434 |
+
[12] A. C. Rouhou, M. Dhiaf, Y. Kessentini, and S. B. Salem, "Transformer-based approach for joint handwriting and named entity recognition in historical document," Pattern Recognition Letters, 2021.
|
| 435 |
+
[13] V. De Bortoli, A. Desolneux, B. Galerne, and A. Leclaire, "Patch redundancy in images: A statistical testing framework and some applications," SIAM Journal on Imaging Sciences, vol. 12, no. 2, pp. 893-926, 2019.
|
| 436 |
+
[14] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th international conference on Machine learning, 2008, pp. 1096-1103.
|
| 437 |
+
[15] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE transactions on systems, man, and cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
|
| 438 |
+
[16] J. Sauvola and M. Pietikainen, "Adaptive document image binarization," Pattern recognition, vol. 33, no. 2, pp. 225-236, 2000.
|
| 439 |
+
[17] W. Xiong, J. Xu, Z. Xiong, J. Wang, and M. Liu, "Degraded historical document image binarization using local features and support vector machine (svm)," Optik, vol. 164, pp. 218-223, 2018.
|
| 440 |
+
[18] R. Hedjam, M. Cheriet, and M. Kalacska, “Constrained energy maximization and self-referencing method for invisible ink detection from multispectral historical document images,” in 2014 22nd International Conference on Pattern Recognition. IEEE, 2014, pp. 3026–3031.
|
| 441 |
+
[19] I. Pratikakis, K. Zagoris, G. Barlas, B. Gatos, "Icdar 2017 competition on document image binarization (dibco 2017)," in 2017 International Conference on Document Analysis and Recognition. IEEE, 2017, pp. 1395-1403.
|
| 442 |
+
[20] M. Z. Afzal, J. Pastor-Pellicer, F. Shafait, T. M. Breuel, A. Dengel, and M. Liwicki, "Document image binarization using lstm: A sequence learning approach," in Proceedings of the 3rd international workshop on historical document imaging and processing, 2015, pp. 79-84.
|
| 443 |
+
[21] X.-J. Mao, C. Shen, and Y.-B. Yang, "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections," in Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 2810-2818.
|
| 444 |
+
[22] K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.
|
| 445 |
+
[23] J. Calvo-Zaragoza and A.-J. Gallego, “A selectional auto-encoder approach for document image binarization,” Pattern Recognition, vol. 86, pp. 37–47, 2019.
|
| 446 |
+
[24] Y. Akbari, S. Al-Maadeed, and K. Adam, "Binarization of degraded document images using convolutional neural networks and wavelet-based multichannel images," IEEE Access, vol. 8, pp. 153517–153534, 2020.
|
| 447 |
+
[25] C. Tensmeyer and T. Martinez, "Document image binarization with fully convolutional neural networks," in 2017 14th IAPR international conference on document analysis and recognition (ICDAR), vol. 1. IEEE, 2017, pp. 99-104.
|
| 448 |
+
[26] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on
|
| 449 |
+
|
| 450 |
+
Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241.
|
| 451 |
+
[27] J. Zhao, C. Shi, F. Jia, Y. Wang, and B. Xiao, "Document image binarization with cascaded generators of conditional generative adversarial networks," Pattern Recognition, vol. 96, p. 106968, 2019.
|
| 452 |
+
[28] A. K. Bhunia, A. K. Bhunia, A. Sain, and P. P. Roy, “Improving document binarization via adversarial noise-texture augmentation,” in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 2721–2725.
|
| 453 |
+
[29] M. O. Tamrin, M. El-Amine Ech-Cherif, and M. Cheriet, "A two-stage unsupervised deep learning framework for degradation removal in ancient documents," in Pattern Recognition. ICPR International Workshops and Challenges. Springer International Publishing, 2021, pp. 292-303.
|
| 454 |
+
[30] M. A. Souibgui, Y. Kessentini, and A. Fornés, “A conditional gan based approach for distorted camera captured documents recovery,” in Mediterranean Conference on Pattern Recognition and Artificial Intelligence. Springer, 2020.
|
| 455 |
+
[31] Y. Xu, Y. Xu, T. Lv, L. Cui, F. Wei, G. Wang, Y. Lu, D. Florencio, C. Zhang, W. Che et al., "Layoutlmv2: Multi-modal pretraining for visually-rich document understanding," arXiv preprint arXiv:2012.14740, 2020.
|
| 456 |
+
[32] S. Appalaraju, B. Jasani, B. U. Kota, Y. Xie, and R. Manmatha, “Docformer: End-to-end transformer for document understanding,” ICCV, 2021.
|
| 457 |
+
[33] P. Li, J. Gu, J. Kuen, V. I. Morariu, H. Zhao, R. Jain, V. Manjunatha, and H. Liu, "Selfdoc: Self-supervised document representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5652-5660.
|
| 458 |
+
[34] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, "Swinir: Image restoration using swin transformer," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1833-1844.
|
| 459 |
+
[35] H. Feng, Y. Wang, W. Zhou, J. Deng, and H. Li, "Doctr: Document image transformer for geometric unwarping and illumination correction," arXiv preprint arXiv:2110.12942, 2021.
|
| 460 |
+
[36] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.
|
| 461 |
+
[37] I. Pratikakis, K. Zagoris, P. Kaddas, and B. Gatos, "Icfhr 2018 competition on handwritten document image binarization (h-dibco 2018)," in 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2018, pp. 489-493.
|
| 462 |
+
[38] I. Pratikakis, B. Gatos, and K. Ntirogiannis, “H-dibco 2010 - handwritten document image binarization competition,” in International Conference on Frontiers in Handwriting Recognition. IEEE, 2010, pp. 727–732.
|
| 463 |
+
[39] I. Pratikakis, B. Gatos, and K. Ntirogiannis, "Icdar 2011 document image binarization contest (dibco 2011)," in 2011 International Conference on Document Analysis and Recognition, 2011, pp. 1506–1510.
|
| 464 |
+
[40] J.-C. Burie, M. Coustaty, S. Hadi, M. W. A. Kesiman, J.-M. Ogier, E. Paulus, K. Sok, I. M. G. Sunarya, and D. Valy, "Icfhr2016 competition on the analysis of handwritten text in images of balinese palm leaf manuscripts," in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR). IEEE, 2016, pp. 596-601.
|
| 465 |
+
[41] Q. N. Vo, S. H. Kim, H. J. Yang, and G. Lee, “Binarization of degraded document images based on hierarchical deep supervised network,” Pattern Recognition, vol. 74, pp. 568–586, 2018.
|
| 466 |
+
[42] I. Pratikakis, B. Gatos, and K. Ntirogiannis, “ICFHR 2012 competition on handwritten document image binarization (H-DIBCO 2012),” in Proceedings of the International Conference on Frontiers in Handwriting Recognition. IEEE, 2012, pp. 817–822.
|
2201.10xxx/2201.10252/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:24da90e3b71f62d847a124c7d404e04602ed00ebae000c98ff5c3e24b356367a
|
| 3 |
+
size 613961
|
2201.10xxx/2201.10252/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10276/dc3ea2dd-0d88-4010-908a-f57a83c691a6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b74a031b8a5b9bfe00f1975d76a67b201778f062a247b8ad0806d744c0bbde5c
|
| 3 |
+
size 28750789
|
2201.10xxx/2201.10276/full.md
ADDED
|
@@ -0,0 +1,421 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
Article
|
| 2 |
+
|
| 3 |
+
# City3D: Large-Scale Building Reconstruction from Airborne LiDAR Point Clouds
|
| 4 |
+
|
| 5 |
+
Jin Huang, Jantien Stoter, Ravi Peters and Liangliang Nan *
|
| 6 |
+
|
| 7 |
+

|
| 8 |
+
|
| 9 |
+
|
| 10 |
+
|
| 11 |
+
Citation: Huang, J.; Stoter, J.; Peters, R.; Nan, L. City3D: Large-Scale Building Reconstruction from Airborne LiDAR Point Clouds. Remote Sens. 2022, 14, 2254. https://doi.org/10.3390/rs14092254
|
| 12 |
+
|
| 13 |
+
Academic Editors: Mohammad Awrangjeb, Jiaojiao Tian, Qin Yan, Beril Sirmacek and Nusret Demir
|
| 14 |
+
|
| 15 |
+
Received: 24 March 2022
|
| 16 |
+
|
| 17 |
+
Accepted: 2 May 2022
|
| 18 |
+
|
| 19 |
+
Published: 7 May 2022
|
| 20 |
+
|
| 21 |
+
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
|
| 26 |
+
|
| 27 |
+
3D Geoinformation Research Group, Faculty of Architecture and the Built Environment, Delft University of Technology, 2628 BL Delft, The Netherlands; j.huang-1@tudelft.nl (J.H.); j.e.stoter@tudelft.nl (J.S.); r.y.peters@tudelft.nl (R.P.)
|
| 28 |
+
|
| 29 |
+
* Correspondence: liangliang.nan@tudelft.nl
|
| 30 |
+
|
| 31 |
+
Abstract: We present a fully automatic approach for reconstructing compact 3D building models from large-scale airborne point clouds. A major challenge of urban reconstruction from airborne LiDAR point clouds lies in that the vertical walls are typically missing. Based on the observation that urban buildings typically consist of planar roofs connected with vertical walls to the ground, we propose an approach to infer the vertical walls directly from the data. With the planar segments of both roofs and walls, we hypothesize the faces of the building surface, and the final model is obtained by using an extended hypothesis-and-selection-based polygonal surface reconstruction framework. Specifically, we introduce a new energy term to encourage roof preferences and two additional hard constraints into the optimization step to ensure correct topology and enhance detail recovery. Experiments on various large-scale airborne LiDAR point clouds have demonstrated that the method is superior to the state-of-the-art methods in terms of reconstruction accuracy and robustness. In addition, we have generated a new dataset with our method consisting of the point clouds and 3D models of 20k real-world buildings. We believe this dataset can stimulate research in urban reconstruction from airborne LiDAR point clouds and the use of 3D city models in urban applications.
|
| 32 |
+
|
| 33 |
+
Keywords: building reconstruction; LiDAR; point clouds; integer programming
|
| 34 |
+
|
| 35 |
+
# 1. Introduction
|
| 36 |
+
|
| 37 |
+
Digitizing urban scenes is an important research problem in the computer vision, computer graphics, and photogrammetry communities. Three-dimensional models of urban buildings have become the infrastructure for a variety of real-world applications such as visualization [1], simulation [2-4], navigation [5], and entertainment [6]. These applications typically require high-accuracy and compact 3D building models of large-scale urban environments.
|
| 38 |
+
|
| 39 |
+
Existing urban building reconstruction methods strive to bring in a great level of detail and automate the process for large-scale urban environments. Interactive reconstruction techniques are successful in reconstructing accurate 3D building models with great detail [7,8], but they require either high-quality laser scans as input or considerable amounts of user interaction. These methods can thus hardly be applied to large-scale urban scenes. To facilitate practical applications that require large-scale 3D building models, researchers have attempted to address the reconstruction challenge using various data sources [9-16]. Existing methods based on aerial images [10,12,13] and dense triangle meshes [11] typically require good coverage of the buildings, which imposes challenges in data acquisition [17]. Approaches based on airborne LiDAR point clouds alleviate data acquisition issues. However, the accuracy and geometric details are usually compromised [9,14-16]. Following previous works using widely available
|
| 40 |
+
|
| 41 |
+
airborne LiDAR point clouds, we strive to recover desired geometric details of real-world buildings while ensuring topological correctness, reconstruction accuracy, and good efficiency.
|
| 42 |
+
|
| 43 |
+
The challenges for large-scale urban reconstruction from airborne LiDAR point clouds include:
|
| 44 |
+
|
| 45 |
+
- Building instance segmentation. Urban scenes are populated with diverse objects, such as buildings, trees, city furniture, and dynamic objects (e.g., vehicles and pedestrians). The cluttered nature of urban scenes poses a severe challenge to the identification and separation of individual buildings from the massive point clouds. This has drawn considerable attention in recent years [18,19].
|
| 46 |
+
- Incomplete data. Some important structures (e.g., vertical walls) of buildings are typically not captured in airborne LiDAR point clouds due to the restricted positioning and moving trajectories of airborne scanners.
|
| 47 |
+
- Complex structures. Real-world buildings demonstrate complex structures with varying styles. However, limited cues about structure can be extracted from the sparse and noisy point clouds, which further introduces ambiguities in obtaining topologically correct surface models.
|
| 48 |
+
|
| 49 |
+
In this work, we address the above challenges with the following strategies. Firstly, we address the building instance segmentation challenge by separating individual buildings using increasingly-available vectorized building footprint data. Secondly, we exploit prior knowledge about the structures of buildings to infer their vertical planes. Based on the fact that vertical planes in airborne LiDAR point clouds are typically walls connecting the piecewise planar roofs to the ground, we propose an algorithm to infer the vertical planes from incomplete point clouds. Our method has the option to extrude outer walls directly from the given building footprint. Finally, we approach surface reconstruction by introducing the inferred vertical planes as constraints into an existing hypothesis-and-selection-based polygonal surface reconstruction framework [20], which favors good fitting to the input point cloud, encourages compactness, and enforces manifoldness of the final model (see Figure 1 for an example of the reconstruction results). The main contributions of this work include:
|
| 50 |
+
|
| 51 |
+
- A robust framework for fully automatic reconstruction of large-scale urban buildings from airborne LiDAR point clouds.
|
| 52 |
+
- An extension of an existing hypothesis-and-selection-based surface reconstruction method for buildings, which is achieved by introducing a new energy term to encourage roof preferences and two additional hard constraints to ensure correct topology and enhance detail recovery.
|
| 53 |
+
- A novel approach for inferring vertical planes of buildings from airborne LiDAR point clouds, for which we introduce an optimal-transport method to extract polylines from 2D bounding contours.
|
| 54 |
+
- A new dataset consisting of the point clouds and reconstructed surface models of 20k real-world buildings.
|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
(a) Input airborne LiDAR point cloud.
|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+
(b) Our reconstruction result
|
| 62 |
+
Figure 1. The automatic reconstruction result of all the buildings in a large scene from the AHN3 dataset [21].
|
| 63 |
+
|
| 64 |
+
# 2. Related Work
|
| 65 |
+
|
| 66 |
+
A large volume of methods for urban building reconstruction has been proposed. In this section, we mainly review the techniques relevant to the key components of our method. Since our method relies on footprint data for extracting building instances from the massive point clouds of large scenes, and it can also be used for footprint extraction, we also discuss related techniques in footprint extraction.
|
| 67 |
+
|
| 68 |
+
Roof primitive extraction. The commonly used method for extracting basic primitives (e.g., planes and cylinders) from point clouds is random sample consensus (RANSAC) [22] and its variants [23,24], which are robust against noise and outliers. Another group of widely used methods is based on region growing [25-27], which assumes roofs are piece-wise planar and iteratively propagates planar regions by advancing the boundaries. The main difference between existing region growing methods lies in the generation of seed points and the criteria for region expansion. In this paper, we utilize an existing region growing method to extract roof primitives given its simplicity and robustness, which is detailed in Rabbani et al. [25].
|
| 69 |
+
|
| 70 |
+
Footprint extraction. Footprints are 2D outlines of buildings, capturing the geometry of outer walls projected onto the ground plane. Methods for footprint extraction commonly project the points to a 2D grid and analyze their distributions [28]. Chen et al. [27] detect rooftop boundaries and cluster them by taking into account topological consistency between the contours. To obtain simplified footprints, polyline simplification methods such as the Douglas-Peucker algorithm [29] are commonly used to reduce the complexity of the extracted
|
| 71 |
+
|
| 72 |
+
contours [12,30,31]. To favor structural regularities, Zhou and Neumann [32] compute the principal directions of a building and regularize the roof boundary polylines along with these directions. Following these works, we infer the vertical planes of a building by detecting its contours from a heightmap generated from a 2D projection of the input points. The contour polylines are then regularized by orientation-based clustering followed by an adjustment step.
|
| 73 |
+
|
| 74 |
+
Building surface reconstruction. These methods aim to obtain a simplified surface representation of buildings by exploiting geometric cues, e.g., planar primitives and their boundaries [15,32-36]. Zhou and Neumann [37] approached this by simplifying the 2.5D TIN (triangulated irregular network) of buildings, which may result in artifacts along building contours due to its limited capability in capturing complex topology. To address this issue, the authors proposed an extended 2.5D contouring method with improved topology control [38]. To cope with missing walls, Chauve et al. [39] also incorporated additional primitives inferred from the point clouds. Another group of building surface reconstruction methods involves predefined building parts, commonly known as model-driven approaches [40,41]. These methods rely on templates of known roof structures and deform the templates to fit the input points. Therefore, the results are limited to the predefined shape templates and cannot capture the diverse and complex nature of roof structures or high intra-class variations. Given that buildings consist mainly of piecewise planar regions, methods have also been proposed to obtain an arrangement of extracted planar primitives to represent the building geometry [20,42-44]. These methods first detect a set of planar primitives from the input point clouds and then hypothesize a set of polyhedral cells or polygonal faces using the supporting planes of the extracted planar primitives. Finally, a compact polygonal mesh is extracted from the hypothesized cells or faces. These methods focus on the assembly of planar primitives; however, obtaining a complete set of planar primitives from airborne LiDAR point clouds remains a challenge.
|
| 75 |
+
|
| 76 |
+
In this work, we extend an existing hypothesis-and-selection-based general polygonal surface reconstruction method [20] to reconstruct buildings that consist of piecewise planar roofs connected to the ground by vertical walls. We approach this by introducing a novel energy term and a few hard constraints specially designed for buildings to ensure correct topology and to recover finer details.
|
| 77 |
+
|
| 78 |
+
# 3. Methodology
|
| 79 |
+
|
| 80 |
+
# 3.1. Overview
|
| 81 |
+
|
| 82 |
+
The proposed approach takes as input a raw airborne LiDAR point cloud of a large urban scene and the corresponding building footprints, and it outputs 2-manifold and watertight 3D polygonal models of the buildings in the scene. Figure 2 shows the pipeline of the proposed method. It first extracts the point clouds of individual buildings by projecting all points onto the ground plane and collecting the points lying inside the footprint polygon of each building. Then, we reconstruct a compact polygonal model from the point cloud of each building.
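The building instance extraction described above boils down to a 2D point-in-polygon test on the projected points. The following is a minimal illustrative sketch in Python (the paper's implementation is in C++ with CGAL); the function names and the even-odd ray-casting test are ours and not part of the paper.

```python
import numpy as np

def points_in_polygon(xy, polygon):
    """Even-odd (ray casting) point-in-polygon test.
    xy: (N, 2) projected point coordinates; polygon: (M, 2) footprint vertices."""
    x, y = xy[:, 0], xy[:, 1]
    px, py = polygon[:, 0], polygon[:, 1]
    inside = np.zeros(len(xy), dtype=bool)
    for i in range(len(polygon)):
        x1, y1, x2, y2 = px[i - 1], py[i - 1], px[i], py[i]
        crosses = (y1 > y) != (y2 > y)                        # edge spans the query's y
        x_at_y = x1 + (y - y1) * (x2 - x1) / (y2 - y1 + 1e-12)
        inside ^= crosses & (x < x_at_y)                      # toggle on each crossing
    return inside

def extract_building(points, footprint):
    """points: (N, 3) LiDAR tile; footprint: (M, 2) polygon of one building."""
    mask = points_in_polygon(points[:, :2], footprint)
    return points[mask]
```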
|
| 83 |
+
|
| 84 |
+

|
| 85 |
+
Figure 2. The pipeline of the proposed method (only one building is selected to illustrate the workflow). (a) Input point cloud and corresponding footprint data. (b) A building extracted from the input point cloud using its footprint polygon. (c) Planar segments extracted from the point cloud. (d) The heightmap (right) generated from the TIN (left, colored as a height field). (e) The polylines extracted from the heightmap. (f) The vertical planes obtained by extruding the inferred polylines. (g) The hypothesized building faces generated using both the extracted planes and inferred vertical planes. (h) The final model obtained through optimization.
|
| 86 |
+
|
| 87 |
+
Our reconstruction of a single building is based on the hypothesis-and-selection-based framework of PolyFit [20], which is for reconstructing general piecewise-planar objects from a set of planar segments extracted from the point cloud. Our method exploits not only the planar segments directly extracted from the point cloud but also the vertical planes inferred from the point cloud. From these two types of planar primitives, we hypothesize the faces of the building. The final model is then obtained by choosing the optimal subset of the faces through optimization.
|
| 88 |
+
|
| 89 |
+
The differences between our method and PolyFit are as follows: (1) Our method is dedicated to reconstructing urban buildings and makes use of vertical planes as hard constraints, for which we propose a novel algorithm to infer the vertical planes that are commonly missing from airborne LiDAR point clouds. (2) We introduce a new roof preference energy term and two additional hard constraints into the optimization to ensure correct topology and enhance detail recovery. In the following sections, we detail the key steps of our method with an emphasis on the processes that differ from PolyFit [20].
|
| 90 |
+
|
| 91 |
+
# 3.2. Inferring Vertical Planes
|
| 92 |
+
|
| 93 |
+
With airborne LiDAR point clouds, important structures like vertical walls of a building are commonly missed due to the restricted positioning and moving trajectories of the scanner. In contrast, the roof surfaces are usually well captured. This inspired us to infer the missing walls from the available points containing the roof surfaces. We infer the vertical planes representing not only the outer walls but also the vertical walls within the footprint of a building. We achieve this by generating a 2D rasterized height map from its 3D points and looking for the contours that demonstrate considerable variations in the height values. To this end, an optimal-transport method is proposed to extract closed polylines from the contours. The polylines are then extruded to obtain the vertical walls. The process for inferring the vertical planes is outlined in Figure 2d-f.
|
| 94 |
+
|
| 95 |
+
Specifically, after obtaining the point cloud of a building, we project the points onto the ground plane, from which we create a height map. To cope with the non-uniform distribution of the points (e.g., some regions have holes while others may have repeating points), we construct a Triangulated Irregular Network (TIN) model using 2D Delaunay triangulation. The TIN model is a continuous surface and naturally completes the missing regions. Then, a height map is generated by rasterizing the TIN model with a specified resolution $r$ . The issue of small
|
| 96 |
+
|
| 97 |
+
holes in the height maps (due to uneven distribution of roof points) is further alleviated by image morphological operators while preserving the shape and size of the building [45]. After that, a set of contours are extracted from the height map using the Canny detector [46], which serves as the initial estimation of the vertical planes. We propose an optimal-transport method to extract polylines from the initial set of contours.
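For illustration, the height-map and contour steps can be prototyped with off-the-shelf tools, e.g., SciPy's Delaunay-based linear interpolation as the rasterized TIN, OpenCV morphology to close small holes, and the Canny detector [46]. The sketch below is only an approximation of the described pipeline; the Canny thresholds and the kernel size are placeholder values, not taken from the paper.

```python
import numpy as np
import cv2
from scipy.interpolate import LinearNDInterpolator

def height_map_contours(points, r=0.20, canny_low=50, canny_high=150):
    """Rasterize roof points into a height map with resolution r (meters/pixel)
    and return a binary image marking strong height discontinuities."""
    xy, z = points[:, :2], points[:, 2]
    # Linear interpolation over the 2D Delaunay triangulation acts as the TIN
    # and completes small holes caused by the uneven point distribution.
    interp = LinearNDInterpolator(xy, z, fill_value=z.min())
    (xmin, ymin), (xmax, ymax) = xy.min(axis=0), xy.max(axis=0)
    gx, gy = np.meshgrid(np.arange(xmin, xmax, r), np.arange(ymin, ymax, r))
    hmap = interp(gx, gy)
    # 8-bit image for the morphological and edge operators.
    img = cv2.normalize(hmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    return cv2.Canny(img, canny_low, canny_high)
```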
|
| 98 |
+
|
| 99 |
+
Optimal-transport method for polyline extraction. The initial set of contours consists of discrete pixels, denoted as $S$, from which we would like to extract simplified polylines that best describe the 2D geometry of $S$. Our optimal-transport method for extracting polylines from $S$ works as follows. First, a 2D Delaunay triangulation $T_0$ is constructed from the discrete points in $S$. Then, the initial triangulation $T_0$ is simplified through iterative edge collapse and vertex removal operations. In each iteration, the most suitable vertex to be removed is determined such that the following conditions are met:
|
| 100 |
+
|
| 101 |
+
- The maximum Hausdorff distance from the simplified mesh $T_{0}$ to $S$ is less than a distance threshold $\epsilon_{d}$ .
|
| 102 |
+
- The increase of the total transport cost [47] between $S$ and $T_{0}$ is kept at a minimum.
|
| 103 |
+
|
| 104 |
+
In each iteration, a vertex satisfying the above conditions is removed from $T_{0}$ by edge collapse, and the overall transportation cost is updated.
|
| 105 |
+
|
| 106 |
+
As the iterative simplification process continues, the overall transportation cost will increase. The simplification stops when no vertex can be further removed, or when the overall transportation cost has increased beyond a user-specified tolerance $\epsilon_{c}$ . After that, we apply an edge filtering step [47] to eliminate small groups of undesirable edges caused by noise and outliers. Finally, the polylines are derived from the remaining vertices and edges of the simplified triangulation using the procedure described in [47]. Compared to [47], our method not only minimizes the total transport cost but also provides control over the local geometry, ensuring that the distance between every vertex in the final polylines and the initial contours is smaller than the specified distance threshold $\epsilon_{d}$ .
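To give a flavor of this simplification loop, the sketch below mimics its two stopping criteria (the per-point distance bound $\epsilon_d$ and the accumulated cost tolerance $\epsilon_c$) with a greedy vertex-removal procedure on a closed polyline. It is a deliberate simplification: the actual method operates on a Delaunay triangulation and uses the optimal-transport cost of [47], while here a plain point-to-segment distance serves as a stand-in for both criteria, and all names are ours.

```python
import numpy as np

def _point_segment_dist(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def _max_dist(contour_pts, poly):
    """Largest distance from any contour pixel to the closed polyline `poly`."""
    return max(
        min(_point_segment_dist(p, poly[i], poly[(i + 1) % len(poly)])
            for i in range(len(poly)))
        for p in contour_pts
    )

def simplify_polyline(contour_pts, poly, eps_d=0.25, eps_c=2.0):
    """Greedily remove polyline vertices while every contour point stays within
    eps_d of the polyline and the accumulated cost stays below eps_c."""
    poly, total_cost = list(poly), 0.0
    while len(poly) > 3:
        best = None
        for i in range(len(poly)):
            candidate = np.asarray(poly[:i] + poly[i + 1:])
            d = _max_dist(contour_pts, candidate)
            if d <= eps_d and (best is None or d < best[1]):
                best = (i, d)  # removal with the smallest resulting deviation
        if best is None or total_cost + best[1] > eps_c:
            break
        total_cost += best[1]
        poly.pop(best[0])
    return np.asarray(poly)
```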
|
| 107 |
+
|
| 108 |
+
Regularity enhancement. Due to noise and uneven point density in the point cloud, the polylines generated by the optimal-transport algorithm are unavoidably inaccurate and irregular (see Figure 3a), which often leads to artifacts in the final reconstruction. We alleviate these artifacts by enforcing the structural regularities that commonly dominate urban buildings, namely parallelism, collinearity, and orthogonality, as defined in [48]. Please note that since all the lines will be extruded vertically to obtain the vertical planes, the verticality regularity is inherently satisfied. We propose a clustering-based method to identify the groups of line segments that potentially satisfy these regularities. Our method achieves structure regularization in two steps: clustering and adjustment.
|
| 109 |
+
|
| 110 |
+

|
| 111 |
+
(a) Before (28 segments)
|
| 112 |
+
|
| 113 |
+

|
| 114 |
+
(b) After (22 segments)
|
| 115 |
+
Figure 3. The effect of the clustering-based regularity enhancement on the polylines used to infer the vertical walls. (a) Before regularity enhancement. (b) After regularity enhancement.
|
| 116 |
+
|
| 117 |
+
Clustering. In this work, we cluster the line segments of the polylines generated by the optimal-transport algorithm based on their orientation and pairwise Euclidean distance [49]. The pairwise Euclidean distance is measured by the minimum distance between a line segment and the supporting line of the other line segment.
|
| 118 |
+
|
| 119 |
+
Adjustment. For each cluster that contains multiple line segments, we compute its average direction. Then each line segment in the cluster is adjusted to align with the average direction. In case the building footprint is provided, the structure regularity can be further improved by aligning the segments with the edges in the footprint. After average adjustment, the near-collinear and near-orthogonal line segments are adjusted to be perfectly collinear and orthogonal, respectively (we use an angle threshold of $20^{\circ}$ ).
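A minimal sketch of these two steps on 2D line segments is given below (each segment is a pair of endpoints). For brevity it clusters by orientation only, whereas the paper also uses the pairwise Euclidean distance via DBSCAN [49], and it omits the optional alignment with footprint edges and the final collinearity/orthogonality snapping; all names are ours.

```python
import numpy as np

def _orientation(seg):
    d = seg[1] - seg[0]
    return np.arctan2(d[1], d[0]) % np.pi         # undirected orientation in [0, pi)

def regularize_segments(segments, angle_tol_deg=20.0):
    """segments: list of (2, 2) arrays [[x1, y1], [x2, y2]]; returns adjusted copies."""
    segments = [np.asarray(s, dtype=float) for s in segments]
    tol = np.deg2rad(angle_tol_deg)
    clusters = []                                  # [reference angle, member indices]
    for i, seg in enumerate(segments):
        a = _orientation(seg)
        for c in clusters:
            diff = abs(a - c[0])
            if min(diff, np.pi - diff) < tol:      # orientations are pi-periodic
                c[1].append(i)
                break
        else:
            clusters.append([a, [i]])
    adjusted = list(segments)
    for _, idx in clusters:
        # Circular mean of the member orientations (angles doubled for pi-periodicity).
        angles = np.array([_orientation(segments[i]) for i in idx])
        mean = (np.arctan2(np.sin(2 * angles).mean(), np.cos(2 * angles).mean()) / 2) % np.pi
        d = np.array([np.cos(mean), np.sin(mean)])
        for i in idx:
            mid = segments[i].mean(axis=0)
            half = 0.5 * np.linalg.norm(segments[i][1] - segments[i][0])
            adjusted[i] = np.vstack([mid - half * d, mid + half * d])  # align with mean direction
    return adjusted
```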
|
| 120 |
+
|
| 121 |
+
After regularity enhancement, the vertical planes of the building can be obtained by vertical extrusion of the regularized polylines. The effect of the regularity enhancement is demonstrated in Figure 3, from which we can see that it significantly improves structure regularity and reduces the complexity of the building outlines.
|
| 122 |
+
|
| 123 |
+
# 3.3. Reconstruction
|
| 124 |
+
|
| 125 |
+
Our surface reconstruction involves two types of planar primitives, i.e., the vertical planes inferred in the previous step (see Section 3.2) and the roof planes directly extracted from the point cloud. Unlike PolyFit [20], which hypothesizes faces by computing pairwise intersections using all planar primitives, we compute pairwise intersections using only the roof planes, and the resulting faces are then cropped with the outer vertical planes (see Figure 2g). This process ensures that the roof boundaries of the reconstructed building connect precisely to the inferred vertical walls. Additionally, since the object to be reconstructed is a real-world building, we introduce a roof preference energy term and a set of new hard constraints specially designed for buildings into the original formulation. Specifically, our objective for obtaining the model faces $F^{*}$ can be written as
|
| 126 |
+
|
| 127 |
+
$$
|
| 128 |
+
F^{*} = \arg\min_{X} \lambda_{d} E_{d} + \lambda_{c} E_{c} + \lambda_{r} E_{r}, \tag{1}
|
| 129 |
+
$$
|
| 130 |
+
|
| 131 |
+
where $X = \{x_{i} | x_{i} \in \{0,1\}\}$ denotes the binary variables for the faces (1 for selected and 0 otherwise). $E_{d}$ is the data fitting term that encourages selecting faces supported by more points, and $E_{c}$ is the model complexity term that favors simple planar structures. For more details about the data fitting term and the model complexity term, please refer to the original paper of PolyFit [20]. In the following part, we elaborate on the new energy term and hard constraints.
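Before introducing the new term and constraints, the fragment below illustrates the geometric core of the face hypothesis step described above: the intersection line of two non-parallel supporting planes given in Hessian form $n \cdot p + d = 0$. The candidate faces are obtained by splitting the roof planes along such lines and cropping them with the outer vertical planes; the polygon bookkeeping is omitted here, and the function is an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def plane_plane_intersection(n1, d1, n2, d2, eps=1e-9):
    """Intersection line of two planes n.p + d = 0; returns (point, unit direction)
    or None if the planes are (nearly) parallel."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < eps:
        return None
    # One point on the line: solve n1.p = -d1, n2.p = -d2, direction.p = 0.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```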
|
| 132 |
+
|
| 133 |
+
New energy term: roof preference. We have observed in rare cases that a building in aerial point clouds may demonstrate more than one layer of roofs, e.g., semi-transparent or overhung roofs. In such a case, we assume a higher roof face is always preferable to the ones underneath. We formulate this preference as an additional energy term called roof preference, which is defined as
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
E_{r} = \frac{1}{|F|} \sum_{i=1}^{|F|} x_{i} \cdot \frac{z_{\max} - z_{i}}{z_{\max} - z_{\min}} \tag{2}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
where $z_{i}$ denotes the $Z$ coordinate of the centroid of a hypothesized face $f_{i}$ . $z_{\text{max}}$ and $z_{\text{min}}$ are, respectively, the highest and lowest $Z$ coordinates of the building points. $|F|$ denotes the total number of hypothesized faces.
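As a concrete illustration (not part of the paper), Equation (2) translates directly into a few lines of Python:

```python
import numpy as np

def roof_preference(x, z_centroid, z_min, z_max):
    """E_r of Equation (2): x is the binary selection vector of the hypothesized
    faces and z_centroid the Z coordinate of each face centroid. Higher faces
    incur a smaller penalty, so the upper roof layer is preferred."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z_centroid, dtype=float)
    return float(np.mean(x * (z_max - z) / (z_max - z_min)))
```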
|
| 140 |
+
|
| 141 |
+
New hard constraints. We impose two hard constraints to enhance the topological correctness of the final reconstruction.
|
| 142 |
+
|
| 143 |
+
- Single-layer roof. This constraint ensures that the reconstructed 3D model of a real-world building has a single layer of roofs, which can be written as,
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
\sum_{k \in V(f_{i})} x_{k} = 1, \quad (1 \leq i \leq |F|)
|
| 147 |
+
$$
|
| 148 |
+
|
| 149 |
+
where $V(f_{i})$ denotes the set of hypothesized faces that have overlap with face $f_{i} \in F$ in the vertical direction.
|
| 150 |
+
|
| 151 |
+
- Face prior. This constraint enforces that for all the derived faces from the same planar segment, the one with the highest confidence value is always selected as a prior. Here, the confidence of a face is measured by the number of its supporting points. This constraint can be simply written as
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
x_{l} = 1,
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
where $x_{l}$ is the variable whose value denotes the status of the most confident face $f_{l}$ of a planar segment. This constraint resolves ambiguities if two hypothesized faces are near coplanar and close to each other, which preserves finer geometric details. The effect of this constraint is demonstrated in Figure 4.
|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
(a)
|
| 161 |
+
Figure 4. The effect of the face prior constraint. The insets illustrate the assembly of the hypothesized faces in the corresponding marked regions (each line segment denotes a hypothesized face, and line segments of the same color represent faces derived from the same planar primitive). (a) Reconstruction without the face prior constraint. (b) Reconstruction with the face prior constraint, for which faces 1 and 4 both satisfy the face prior constraint. The numbers 1-7 denote the 7 candidate faces.
|
| 162 |
+
|
| 163 |
+

|
| 164 |
+
(b)
|
| 165 |
+
|
| 166 |
+
The final surface model of the building can be obtained by solving the optimization problem given in Equation (A4), subject to the single-layer roof and face prior hard constraints.
|
| 167 |
+
|
| 168 |
+
# 4. Results and Evaluation
|
| 169 |
+
|
| 170 |
+
Our method is implemented in C++ using CGAL [50]. All experiments were conducted on a desktop PC with a 3.5 GHz AMD Ryzen Threadripper 1920X and 64 GB RAM.
|
| 171 |
+
|
| 172 |
+
# 4.1. Test Datasets
|
| 173 |
+
|
| 174 |
+
We have tested our method on three datasets of large-scale urban point clouds including more than $20\mathrm{k}$ buildings.
|
| 175 |
+
|
| 176 |
+
- AHN3 [21]. An openly available country-wide airborne LiDAR point cloud dataset covering the entire Netherlands, with an average point density of 8 points/ $\mathrm{m}^2$ . The corresponding footprints of the buildings are obtained from the Register of Buildings and Addresses (BAG) [51]. The footprint geometry is acquired from aerial photos and terrestrial measurements with an accuracy of $0.3\mathrm{m}$ . The polygons in the BAG represent the outlines of buildings as their outer walls seen from above, which differ slightly from true footprints; for simplicity, we still use the term 'footprint' in this paper.
|
| 177 |
+
- DALES [52]. A large-scale aerial point cloud dataset consisting of forty scenes spanning an area of $10\mathrm{km}^2$ , with instance labels of $6\mathrm{k}$ buildings. The data was collected using a Riegl Q1560 dual-channel system with a flight altitude of $1300\mathrm{m}$ above ground and a speed of $72\mathrm{m/s}$ . Each area was collected by a minimum of 5 laser pulses per meter in four directions. The LiDAR swaths were calibrated using the BayesStripAlign 2.0 software and registered, taking both relative and absolute errors into account and correcting for altitude and positional errors. The average point density is 50 points/ $\mathrm{m}^2$ . No footprint data is available in this dataset.
|
| 178 |
+
- Vaihingen [53]. An airborne LiDAR point cloud dataset published by ISPRS, which has been widely used in semantic segmentation and reconstruction of urban scenes. The data were obtained using a Leica ALS50 system with $45^{\circ}$ field of view and a mean flying height above ground of $500\mathrm{m}$ . The average strip overlap is $30\%$ and multiple pulses were recorded. The point cloud was pre-processed to compensate for systematic offsets between the strips. We use in our experiments a training set that contains footprint information and covers an area of $399\mathrm{m} \times 421\mathrm{m}$ with $753\mathrm{k}$ points. The average point density is 4 points/ $\mathrm{m}^2$ .
|
| 179 |
+
|
| 180 |
+
# 4.2. Reconstruction Results
|
| 181 |
+
|
| 182 |
+
Visual results. We have used our method to reconstruct more than $20\mathrm{k}$ buildings from the aforementioned three datasets. For the AHN3 [21] and Vaihingen [53] datasets, the provided footprints were used for both building instance segmentation and extrusion of the outer walls. Our inferred vertical planes were used to complete the missing inner walls. For the DALES [52] dataset, we used the provided instance labels to extract building instances, and we used our inferred vertical walls for the reconstruction.
|
| 183 |
+
|
| 184 |
+
Figures 1 and 5 show the 3D reconstruction of all buildings in two large scenes from the AHN3 dataset [21]. For the buildings reconstructed in Figure 1, their models are simplified polygonal meshes with an average face count of 34. To better reveal the quality of our reconstructed building models, we demonstrate in Figure 6 a set of individual buildings reconstructed from the three test datasets. From these visual results, we can see that although the buildings have diverse structures of different styles, and the input point clouds have varying densities and different levels of noise, outliers, and missing data, our method succeeded in obtaining visually plausible reconstruction results. These experiments also indicate that our approach successfully infers the vertical planes of buildings from airborne LiDAR point clouds and that including these planes in the 3D reconstruction of urban buildings is effective.
|
| 185 |
+
|
| 186 |
+

|
| 187 |
+
Figure 5. Reconstruction of a large scene from the AHN3 dataset [21].
|
| 188 |
+
|
| 189 |
+

|
| 190 |
+
Figure 6. The reconstruction results of a set of buildings from various datasets. (1-14) are from the AHN3 dataset [21], (15-22) are from the DALES dataset [52], and (23-28) are from the Vaihingen dataset [53].
|
| 191 |
+
|
| 192 |
+
Quantitative results. We have also evaluated the reconstruction results quantitatively. Since ground-truth reconstructions are not available for all buildings in the three datasets, we chose the commonly used accuracy measure, Root Mean Square Error (RMSE), to quantify the quality of each reconstructed model. In the context of surface reconstruction, RMSE is defined as the square root of the average of squared Euclidean distances from the points to the reconstructed model. In Table 1, we report the statistics of our quantitative results on the buildings shown in Figure 6. We can see that our method has obtained good reconstruction accuracy, i.e., the RMSE for all buildings is between $0.04\mathrm{m}$ and $0.26\mathrm{m}$ , which is quite promising for 3D reconstruction of real-world buildings from noisy and sparse airborne LiDAR point clouds. As observed from the #Faces column of Table 1, our results are simplified polygonal models and are more compact than those obtained from commonly used approaches such as the Poisson surface reconstruction method [54] (which produces dense triangles). Table 1 also shows that the running times for most buildings are less than $30~s$ . The reconstruction of the large complex building shown in Figure 6 (12) took $42\mathrm{min}$ . This long reconstruction time is because our method computes the pairwise intersections of the detected planar primitives and inferred vertical planes, which generates a large number of candidate faces and results in a large optimization problem [20] (see also Section 4.7). The running time with respect to the number of detected planar segments for the reconstruction of more buildings is reported in Figure 7.
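As a side note, the RMSE defined above can be approximated as in the short sketch below, where the reconstructed model is represented by a dense point sampling of its faces and a KD-tree nearest-neighbour query stands in for the exact point-to-surface distance (the face sampling itself is omitted). This is an illustration, not the evaluation code used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse(points, surface_samples):
    """points: (N, 3) input cloud; surface_samples: (M, 3) points sampled densely
    on the reconstructed model. Returns the root mean square point-to-model distance."""
    d, _ = cKDTree(surface_samples).query(points)
    return float(np.sqrt(np.mean(d ** 2)))
```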
|
| 193 |
+
|
| 194 |
+
Table 1. Statistics on the reconstructed buildings shown in Figure 6. For each building, the number of points in the input, number of faces in the reconstructed model, fitting error (i.e., RMSE in meters), and running time (in seconds) are reported.
|
| 195 |
+
|
| 196 |
+
<table><tr><td>Dataset</td><td>Model</td><td>#Points</td><td>#Faces</td><td>RMSE (m)</td><td>Time (s)</td></tr><tr><td rowspan="14">AHN3</td><td>(1)</td><td>732</td><td>23</td><td>0.07</td><td>3</td></tr><tr><td>(2)</td><td>532</td><td>42</td><td>0.12</td><td>4</td></tr><tr><td>(3)</td><td>1165</td><td>31</td><td>0.04</td><td>3</td></tr><tr><td>(4)</td><td>20,365</td><td>127</td><td>0.15</td><td>62</td></tr><tr><td>(5)</td><td>1371</td><td>48</td><td>0.04</td><td>5</td></tr><tr><td>(6)</td><td>1611</td><td>45</td><td>0.06</td><td>4</td></tr><tr><td>(7)</td><td>3636</td><td>68</td><td>0.21</td><td>18</td></tr><tr><td>(8)</td><td>2545</td><td>52</td><td>0.04</td><td>8</td></tr><tr><td>(9)</td><td>15,022</td><td>63</td><td>0.11</td><td>28</td></tr><tr><td>(10)</td><td>23,654</td><td>262</td><td>0.26</td><td>115</td></tr><tr><td>(11)</td><td>13,269</td><td>102</td><td>0.11</td><td>34</td></tr><tr><td>(12)</td><td>155,360</td><td>1520</td><td>0.09</td><td>2520</td></tr><tr><td>(13)</td><td>24,027</td><td>176</td><td>0.24</td><td>141</td></tr><tr><td>(14)</td><td>28,522</td><td>227</td><td>0.15</td><td>78</td></tr><tr><td rowspan="8">DALES</td><td>(15)</td><td>8662</td><td>39</td><td>0.04</td><td>11</td></tr><tr><td>(16)</td><td>11,830</td><td>73</td><td>0.1</td><td>8</td></tr><tr><td>(17)</td><td>10,673</td><td>47</td><td>0.07</td><td>7</td></tr><tr><td>(18)</td><td>7594</td><td>33</td><td>0.07</td><td>14</td></tr><tr><td>(19)</td><td>13,060</td><td>278</td><td>0.05</td><td>145</td></tr><tr><td>(20)</td><td>11,114</td><td>55</td><td>0.06</td><td>24</td></tr><tr><td>(21)</td><td>8589</td><td>51</td><td>0.06</td><td>15</td></tr><tr><td>(22)</td><td>18,909</td><td>282</td><td>0.08</td><td>86</td></tr></table>
|
| 197 |
+
|
| 198 |
+
Table 1. Cont.
|
| 199 |
+
|
| 200 |
+
<table><tr><td>Dataset</td><td>Model</td><td>#Points</td><td>#Faces</td><td>RMSE (m)</td><td>Time (s)</td></tr><tr><td rowspan="6">Vaihingen</td><td>(23)</td><td>7701</td><td>51</td><td>0.24</td><td>25</td></tr><tr><td>(24)</td><td>6845</td><td>99</td><td>0.12</td><td>8</td></tr><tr><td>(25)</td><td>1007</td><td>24</td><td>0.11</td><td>2</td></tr><tr><td>(26)</td><td>11,591</td><td>206</td><td>0.17</td><td>10</td></tr><tr><td>(27)</td><td>4026</td><td>42</td><td>0.26</td><td>6</td></tr><tr><td>(28)</td><td>5059</td><td>61</td><td>0.22</td><td>9</td></tr></table>
|
| 201 |
+
|
| 202 |
+

|
| 203 |
+
Figure 7. The running time of our method with respect to the number of the detected planar segments. These statistics are obtained by testing on the AHN3 dataset.
|
| 204 |
+
|
| 205 |
+
New dataset. Our method has been applied to city-scale building reconstruction. The results are released as a new dataset consisting of $20\mathrm{k}$ buildings (including the reconstructed 3D models and the corresponding airborne LiDAR point clouds). We believe this dataset can stimulate research in urban reconstruction from airborne LiDAR point clouds and the use of 3D city models in urban applications.
|
| 206 |
+
|
| 207 |
+
# 4.3. Parameters
|
| 208 |
+
|
| 209 |
+
Our method involves a few parameters that are empirically set to fixed values for all experiments, i.e., the distance threshold $\epsilon_{d} = 0.25$ and the tolerance for the overall transportation cost $\epsilon_{c} = 2.0$ . The resolution $r$ for the rasterization of the TIN model to generate heightmaps is dataset dependent due to the difference in point density. It is set to $0.20\mathrm{m}$ for AHN3, $0.15\mathrm{m}$ for DALES, and $0.25\mathrm{m}$ for Vaihingen. The weight of the roof preference energy term is $\lambda_{r} = 0.04$ (while the weights for the data fitting and model complexity terms are set to $\lambda_{d} = 0.34$ and $\lambda_{c} = 0.62$ , respectively).
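For reference, the fixed parameter values listed above can be gathered in one place (an illustrative snippet; the dictionary layout is ours):

```python
PARAMS = {
    "eps_d": 0.25,                                           # distance threshold (m)
    "eps_c": 2.0,                                            # transport-cost tolerance
    "r": {"AHN3": 0.20, "DALES": 0.15, "Vaihingen": 0.25},   # heightmap resolution (m)
    "lambda_d": 0.34,                                        # data fitting weight
    "lambda_c": 0.62,                                        # model complexity weight
    "lambda_r": 0.04,                                        # roof preference weight
}
```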
|
| 210 |
+
|
| 211 |
+
# 4.4. Comparisons
|
| 212 |
+
|
| 213 |
+
We have compared our method with two successful open-source methods, i.e., 2.5D Dual Contouring (dedicated for urban buildings) [37] and PolyFit (for general piecewise-planar objects) [20], on the AHN3 [21], DALES [52], and Vaihingen [53] datasets. The city block from the AHN3 dataset [21] is sparse and contains only 80,447 points for 160 buildings (i.e., on average 503 points per building). The city region from DALES is denser and contains 214,601 points for 41 buildings (i.e., on average 5234 points per building). The city area from the Vaihingen dataset contains 69,254 points for 57 buildings (i.e., on average 1215 points per building). The walls of all the point clouds are severely occluded. Figure 8 shows the visual comparison of one of the buildings. PolyFit assumes a complete set of input planar primitives, which is not the case for airborne LiDAR point clouds because the vertical walls are often missing. For PolyFit to be effective, we added our inferred vertical planes to its initial set of planar primitives. From the result, we can observe that both PolyFit and our
|
| 214 |
+
|
| 215 |
+
method can generate compact building models, and the number of faces in the result is an order of magnitude less than that of the 2.5D Dual Contouring method. It is worth noting that even with the additional planes, PolyFit still failed to reconstruct some walls and performed poorly in recovering geometric details. In contrast, our method produces the most plausible 3D models. By inferring missing vertical planes, our method can recover inner walls, which further split the roof planes and bring in more geometric details into the final reconstruction. Table 2 reports the statistics of the comparison, from which we can see that the reconstructed building models from our method have the highest accuracy. In terms of running time, our method is slower than the other two, but it is still acceptable in practical applications (on average 4.9 s per building).
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
(a) Input point cloud
|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
Figure 8. Comparison with 2.5D Dual Contouring (2.5DC) [37] and PolyFit [20] on a single building from the AHN3 dataset [21].
|
| 222 |
+
|
| 223 |
+

|
| 224 |
+
(b) 2.5DC (296 faces)
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
(c) PolyFit (58 faces)
|
| 228 |
+
(d) Ours (86 faces)
|
| 229 |
+
|
| 230 |
+
Table 2. Statistics on the comparison of 2.5D Dual Contouring [37], PolyFit [20], and our method on the reconstruction from the AHN3 [21], DALES [52], and Vaihingen [53] datasets. Total face numbers, running times, and average errors are reported.
|
| 231 |
+
|
| 232 |
+
<table><tr><td>Dataset</td><td>Method</td><td>#Faces</td><td>RMSE (m)</td><td>Time (s)</td></tr><tr><td rowspan="3">AHN3</td><td>2.5D DC [37]</td><td>12,781</td><td>0.213</td><td>13</td></tr><tr><td>PolyFit [20]</td><td>1848</td><td>0.242</td><td>160</td></tr><tr><td>Ours</td><td>2453</td><td>0.128</td><td>380</td></tr><tr><td rowspan="3">DALES</td><td>2.5D DC [37]</td><td>2297</td><td>0.204</td><td>10</td></tr><tr><td>PolyFit [20]</td><td>444</td><td>0.287</td><td>230</td></tr><tr><td>Ours</td><td>583</td><td>0.184</td><td>670</td></tr><tr><td rowspan="3">Vaihingen</td><td>2.5D DC [37]</td><td>2695</td><td>0.168</td><td>6</td></tr><tr><td>PolyFit [20]</td><td>647</td><td>0.275</td><td>102</td></tr><tr><td>Ours</td><td>798</td><td>0.157</td><td>212</td></tr></table>
|
| 233 |
+
|
| 234 |
+
We also performed an extensive quantitative comparison with the 3D building models from the BAG3D [55], a public 3D city platform that provides 3D models of urban buildings at the LoD2 level. For this comparison, we picked four different regions consisting of 1113 buildings in total from the BAG3D. In Figure 9, we show a visual comparison, from which we can see that our models exhibit more regularity. The quantitative result is reported in Table 3, from which we can see that our results have higher accuracy.
|
| 235 |
+
|
| 236 |
+

|
| 237 |
+
(a) Result from BAG3D
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
Figure 9. A visual comparison with BAG3D [55]. A building from Table 3 (b) is shown.
|
| 243 |
+
|
| 244 |
+

|
| 245 |
+
(b) Our result
|
| 246 |
+
|
| 247 |
+
Table 3. Quantitative comparison with the BAG3D [55] on four urban scenes (a)-(d). Both BAG3D and our method used the point clouds from the AHN3 dataset [21] as input. The bold font indicates smaller RMSE values.
|
| 248 |
+
|
| 249 |
+
<table><tr><td>Region</td><td>#Points</td><td>#Building</td><td>RMSE (m) BAG3D</td><td>RMSE (m) Ours</td></tr><tr><td>(a)</td><td>1,694,247</td><td>198</td><td>0.088</td><td>0.079</td></tr><tr><td>(b)</td><td>329,593</td><td>387</td><td>0.139</td><td>0.138</td></tr><tr><td>(c)</td><td>224,970</td><td>368</td><td>0.140</td><td>0.132</td></tr><tr><td>(d)</td><td>80,447</td><td>160</td><td>0.146</td><td>0.128</td></tr></table>
|
| 250 |
+
|
| 251 |
+
# 4.5. With vs. Without Footprint
|
| 252 |
+
|
| 253 |
+
Our method can infer the vertical planes of a building from its roof points, from which the outer walls are then completed. It also has the option to directly use given footprint data for reconstruction. With a given footprint, vertical planes are first obtained by extruding the footprint polygons. Then these planes and those extracted from the point clouds are intersected to hypothesize the model faces, followed by the optimization step to obtain the final reconstruction. Figure 10 shows such a comparison on two buildings.
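The footprint option amounts to extruding each footprint edge into a vertical wall; a minimal sketch (assuming a closed 2D polygon and known ground and top elevations; names are ours) is:

```python
import numpy as np

def extrude_footprint(footprint, z_ground, z_top):
    """footprint: (M, 2) closed polygon. Returns one (4, 3) vertical quad per edge,
    spanning from the ground elevation up to above the highest roof point."""
    walls = []
    for i in range(len(footprint)):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % len(footprint)]
        walls.append(np.array([[x1, y1, z_ground],
                               [x2, y2, z_ground],
                               [x2, y2, z_top],
                               [x1, y1, z_top]]))
    return walls
```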
|
| 254 |
+
|
| 255 |
+
# 4.6. Reconstruction Using Point Clouds with Vertical Planes
|
| 256 |
+
|
| 257 |
+
The methodology presented in our paper focuses only on airborne LiDAR point clouds, in which the vertical walls of buildings are typically missing. In practice, our method can be easily adapted to work with other types of point clouds that contain points of vertical walls, e.g., point clouds reconstructed from drone images. For such point clouds, our method can still be effective by replacing the inferred vertical planes with those directly detected from the point clouds. Figure 11 shows two such examples.
|
| 258 |
+
|
| 259 |
+

|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
|
| 263 |
+

|
| 264 |
+
|
| 265 |
+

|
| 266 |
+
(a) Input
|
| 267 |
+
|
| 268 |
+

|
| 269 |
+
(b) With footprint
|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
(c) Without footprint
|
| 273 |
+
|
| 274 |
+

|
| 275 |
+
Figure 10. Comparison between the reconstruction with (b) and without (c) footprint data on two buildings (a) from the AHN3 dataset [21]. The number below each model denotes the root mean square error (RMSE). Using the inferred vertical planes slightly increases reconstruction errors.
|
| 276 |
+
|
| 277 |
+

|
| 278 |
+
|
| 279 |
+

|
| 280 |
+
Figure 11. Reconstruction from aerial point clouds. In these point clouds, the vertical walls could be extracted directly and used in the reconstruction; thus the vertical plane inference step was skipped. The dataset is obtained from Can et al. [56].
|
| 281 |
+
|
| 282 |
+

|
| 283 |
+
|
| 284 |
+

|
| 285 |
+
|
| 286 |
+
# 4.7. Limitations
|
| 287 |
+
|
| 288 |
+
Our method can infer the missing vertical planes of buildings, from which the outer vertical planes serve as outer walls in the reconstruction. Since the vertical planes are inferred from the 3D points of rooftops, the walls in the final models may not perfectly align with the ground-truth footprints (see the figure below). Thus, we recommend the use of high-quality footprint data whenever it is available. Besides, our method extends the hypothesis-and-selection-based surface reconstruction framework of PolyFit [20] by introducing new energy terms and hard constraints. It naturally inherits the limitation of PolyFit, i.e., it may encounter computation bottlenecks for buildings with complex structures (e.g., buildings with more than 100 planar regions). An example has already been shown in Figure 6 (12).
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
|
| 292 |
+
# 5. Conclusions and Future Work
|
| 293 |
+
|
| 294 |
+
We have presented a fully automatic approach for large-scale 3D reconstruction of urban buildings from airborne LiDAR point clouds. We propose to infer the vertical planes of buildings, which are commonly missing from airborne LiDAR point clouds. The inferred vertical planes play two different roles during the reconstruction. The outer vertical planes directly become part of the exterior walls of the building, and the inner vertical planes enrich building details by splitting the roof planes at proper locations and forming the necessary inner walls in the final models. Our method can also incorporate given building footprints for reconstruction. In case footprints are used, they are extruded to serve as the exterior walls of the models, and the inferred inner planes enrich building details. Extensive experiments on different datasets have demonstrated that inferring vertical planes is an effective strategy for building reconstruction from airborne LiDAR point clouds, and that the proposed roof preference energy term and the novel hard constraints ensure topologically correct and accurate reconstruction.
|
| 295 |
+
|
| 296 |
+
Our current framework uses only planar primitives, which is sufficient for reconstructing most urban buildings. In the real world, there still exist buildings with curved surfaces, which our current implementation cannot handle. However, our hypothesis-and-selection strategy is general and can be extended to process different types of primitives. As future work, our method can be extended to incorporate other geometric primitives, such as spheres, cylinders, or even parametric surfaces. With such an extension, buildings with curved surfaces can also be reconstructed.
|
| 297 |
+
|
| 298 |
+
Author Contributions: J.H. performed the study and implemented the algorithms. R.P. and J.S. provided constructive comments and suggestions. L.N. proposed this topic, provided daily supervision, and wrote the paper together with J.H. All authors have read and agreed to the published version of the manuscript.
|
| 299 |
+
|
| 300 |
+
Funding: Jin Huang is financially supported by the China Scholarship Council.
|
| 301 |
+
|
| 302 |
+
Data Availability Statement: Our code and data are available at https://github.com/yidahuang/City3D, accessed on 23 March 2022.
|
| 303 |
+
|
| 304 |
+
Acknowledgments: We thank Zexin Yang, Zhaiyu Chen, and Noortje van der Horst for proofreading the paper.
|
| 305 |
+
|
| 306 |
+
Conflicts of Interest: The authors declare no conflict of interest.
|
| 307 |
+
|
| 308 |
+
# Abbreviations
|
| 309 |
+
|
| 310 |
+
The following abbreviations are used in this manuscript:
|
| 311 |
+
|
| 312 |
+
LiDAR Light Detection and Ranging
|
| 313 |
+
|
| 314 |
+
TIN Triangulated Irregular Network
|
| 315 |
+
|
| 316 |
+
RMSE Root Mean Square Error
|
| 317 |
+
|
| 318 |
+
# Appendix A. The Complete Formulation
|
| 319 |
+
|
| 320 |
+
Our reconstruction is obtained by finding the optimal subset of the hypothesized faces. We formulate this as an optimization problem, with an objective function consisting of three energy terms: data fitting, model complexity, and roof preference. The first two terms are the same
|
| 321 |
+
|
| 322 |
+
as in [20]. In the following, we briefly introduce all these terms and provide the final complete formulation.
|
| 323 |
+
|
| 324 |
+
- Data fitting. This term measures how well the final model (i.e., the assembly of the chosen faces) fits the input point cloud,
|
| 325 |
+
|
| 326 |
+
$$
|
| 327 |
+
E_{d} = 1 - \frac{1}{|P|} \sum_{i=1}^{|F|} x_{i} \cdot \operatorname{support}(f_{i}), \tag{A1}
|
| 328 |
+
$$
|
| 329 |
+
|
| 330 |
+
where $|P|$ is the number of points in the point cloud. $\text{support}(f_i)$ measures the number of points that are $\epsilon$ -close to a face $f_i \in F$ , and $x_i \in \{0,1\}$ denotes the binary status of the face $f_i$ (1 for selected and 0 otherwise). $|F|$ denotes the total number of hypothesized faces.
|
| 331 |
+
|
| 332 |
+
- Model complexity. To avoid defects introduced by noise and outliers, this term is introduced to encourage large planar structures,
|
| 333 |
+
|
| 334 |
+
$$
|
| 335 |
+
E_{c} = \frac{1}{|E|} \sum_{i=1}^{|E|} \operatorname{corner}(e_{i}), \tag{A2}
|
| 336 |
+
$$
|
| 337 |
+
|
| 338 |
+
where $|E|$ denotes the total number of pairwise intersections in the hypothesized face set. $\text{corner}(e_i)$ is an indicator function denoting if choosing two faces connected by an edge $e_i$ results in a sharp edge in the final model (1 for sharp and 0 otherwise).
|
| 339 |
+
|
| 340 |
+
- Roof preference. We have observed in rare cases that a building in aerial point clouds may demonstrate more than one layer of roofs, e.g., semi-transparent or overhung roofs. In such a case, we assume a higher roof face is preferable to the ones underneath. We formulate this preference as an additional roof preference energy term,
|
| 341 |
+
|
| 342 |
+
$$
|
| 343 |
+
E_{r} = \frac{1}{|F|} \sum_{i=1}^{|F|} x_{i} \cdot \frac{z_{\max} - z_{i}}{z_{\max} - z_{\min}} \tag{A3}
|
| 344 |
+
$$
|
| 345 |
+
|
| 346 |
+
where $z_{i}$ denotes the $Z$ coordinate of the centroid of a face $f_{i}$ . $z_{max}$ and $z_{min}$ are, respectively, the highest and lowest $Z$ coordinates of the building points.
|
| 347 |
+
|
| 348 |
+
With all the constraints, the complete optimization problem is written as
|
| 349 |
+
|
| 350 |
+
$$
|
| 351 |
+
\min_{X} \lambda_{d} E_{d} + \lambda_{c} E_{c} + \lambda_{r} E_{r}
|
| 352 |
+
$$
|
| 353 |
+
|
| 354 |
+
$$
|
| 355 |
+
\text{s.t.} \left\{ \begin{array}{ll} \sum_{k \in V(f_{i})} x_{k} = 1, & (1 \leq i \leq |F|) \\ \sum_{j \in N(e_{i})} x_{j} = 0 \;\text{or}\; 2, & (1 \leq i \leq |E|) \\ x_{l} = 1, & \text{for the most confident face } f_{l} \text{ of each planar segment} \end{array} \right. \tag{A4}
|
| 356 |
+
$$
|
| 357 |
+
|
| 358 |
+
where the first constraint is called single-layer roof, which ensures that the reconstructed building model has a single layer of roofs. The second constraint enforces that, in the final model, every edge is associated with either zero or two adjacent faces, ensuring that the final model is watertight and manifold. The third constraint is called face prior, which ensures that, for the faces derived from the same planar segment, the one with the highest confidence value is selected as a prior.
|
| 359 |
+
|
| 360 |
+
By solving the above optimization problem, the set of selected faces $\{f_i | x_i = 1\}$ forms the final surface model of a building.
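For readers who want to experiment with this formulation, the selection can be posed as a binary linear program and handed to an off-the-shelf solver. The sketch below uses SciPy's `milp` (SciPy ≥ 1.9) and is purely illustrative: the cost vector and constraint matrix are placeholders to be assembled from the hypothesized faces, the "0 or 2" edge constraint must first be linearized with one auxiliary binary variable $y_e$ per edge ($\sum_{j \in N(e)} x_j - 2 y_e = 0$), and the corner indicators in $E_c$ likewise require auxiliary variables; the authors' implementation is in C++.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def select_faces(c, A_eq, b_eq):
    """Minimize c @ x over binary x subject to A_eq @ x == b_eq.
    c holds the weighted per-variable contributions of the energy terms; each row
    of A_eq encodes one linearized hard constraint (single-layer roof, edge
    manifoldness with its auxiliary variable, or face prior)."""
    n = len(c)
    result = milp(
        c=np.asarray(c, dtype=float),
        constraints=LinearConstraint(A_eq, b_eq, b_eq),   # equalities: lb == ub
        integrality=np.ones(n),                           # all variables integral
        bounds=Bounds(0, 1),                              # ... and restricted to {0, 1}
    )
    if result.x is None:
        raise ValueError("infeasible face selection problem")
    return np.round(result.x).astype(int)
```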
|
| 361 |
+
|
| 362 |
+
# References
|
| 363 |
+
|
| 364 |
+
1. Yao, Z.; Nagel, C.; Kunde, F.; Hudra, G.; Willkommen, P.; Donaubauer, A.; Adolphi, T.; Kolbe, T.H. 3DCityDB—A 3D geodatabase solution for the management, analysis, and visualization of semantic 3D city models based on CityGML. Open Geospat. Data Softw. Stand. 2018, 3, 1-26. [CrossRef]
|
| 365 |
+
2. Zhivov, A.M.; Case, M.P.; Jank, R.; Eicker, U.; Booth, S. Planning tools to simulate and optimize neighborhood energy systems. In Green Defense Technology; Springer: Dordrecht, The Netherlands, 2017; pp. 137-163.
|
| 366 |
+
3. Stoter, J.; Peters, R.; Commandeur, T.; Dukai, B.; Kumar, K.; Ledoux, H. Automated reconstruction of 3D input data for noise simulation. Comput. Environ. Urban Syst. 2020, 80, 101424. [CrossRef]
|
| 367 |
+
4. Widl, E.; Agugiaro, G.; Peters-Anders, J. Linking Semantic 3D City Models with Domain-Specific Simulation Tools for the Planning and Validation of Energy Applications at District Level. Sustainability 2021, 13, 8782. [CrossRef]
|
| 368 |
+
5. Cappelle, C.; El Najjar, M.E.; Charpillet, F.; Pomorski, D. Virtual 3D city model for navigation in urban areas. J. Intell. Robot. Syst. 2012, 66, 377-399. [CrossRef]
|
| 369 |
+
6. Kargas, A.; Loumos, G.; Varoutas, D. Using different ways of 3D reconstruction of historical cities for gaming purposes: The case study of Nafplio. Heritage 2019, 2, 1799-1811. [CrossRef]
|
| 370 |
+
7. Nan, L.; Sharf, A.; Zhang, H.; Cohen-Or, D.; Chen, B. Smartboxes for interactive urban reconstruction. In ACM Siggraph 2010 Papers; ACM: New York, NY, USA, 2010; pp. 1-10.
|
| 371 |
+
8. Nan, L.; Jiang, C.; Ghanem, B.; Wonka, P. Template assembly for detailed urban reconstruction. In Computer Graphics Forum; Wiley Online Library: Zurich, Switzerland, 2015; Volume 34, pp. 217-228.
|
| 372 |
+
9. Zhou, Q.Y. 3D Urban Modeling from City-Scale Aerial LiDAR Data; University of Southern California: Los Angeles, CA, USA, 2012.
|
| 373 |
+
10. Haala, N.; Rothermel, M.; Cavegn, S. Extracting 3D urban models from oblique aerial images. In Proceedings of the 2015 Joint Urban Remote Sensing Event (JURSE), Lausanne, Switzerland, 30 March-1 April 2015; pp. 1-4.
|
| 374 |
+
11. Verdie, Y.; Lafarge, F.; Alliez, P. LOD generation for urban scenes. ACM Trans. Graph. 2015, 34, 30. [CrossRef]
|
| 375 |
+
12. Li, M.; Nan, L.; Smith, N.; Wonka, P. Reconstructing building mass models from UAV images. Comput. Graph. 2016, 54, 84-93. [CrossRef]
|
| 376 |
+
13. Buyukdemircioglu, M.; Kocaman, S.; Isikdag, U. Semi-automatic 3D city model generation from large-format aerial images. ISPRS Int. J.-Geo-Inf. 2018, 7, 339. [CrossRef]
|
| 377 |
+
14. Bauchet, J.P.; Lafarge, F. City Reconstruction from Airborne Lidar: A Computational Geometry Approach. In Proceedings of the 3D GeoInfo 2019—14th Conference 3D GeoInfo, Singapore, 26–27 September 2019.
|
| 378 |
+
15. Li, M.; Rottensteiner, F.; Heipke, C. Modelling of buildings from aerial LiDAR point clouds using TINs and label maps. ISPRS J. Photogramm. Remote Sens. 2019, 154, 127-138. [CrossRef]
|
| 379 |
+
16. Ledoux, H.; Biljecki, F.; Dukai, B.; Kumar, K.; Peters, R.; Stoter, J.; Commandeur, T. 3dfier: Automatic reconstruction of 3D city models. J. Open Source Softw. 2021, 6, 2866. [CrossRef]
|
| 380 |
+
17. Zhou, X.; Yi, Z.; Liu, Y.; Huang, K.; Huang, H. Survey on path and view planning for UAVs. Virtual Real. Intell. Hardw. 2020, 2, 56-69. [CrossRef]
|
| 381 |
+
18. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21-26 July 2017; pp. 652-660.
|
| 382 |
+
19. Thomas, H.; Qi, C.R.; Deschaud, J.E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October-2 November 2019; pp. 6411-6420.
|
| 383 |
+
20. Nan, L.; Wonka, P. PolyFit: Polygonal Surface Reconstruction from Point Clouds. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October, 2017.
|
| 384 |
+
21. AHN3. Actueel Hoogtebestand Nederland (AHN). 2018. Available online: https://www.pdok.nl/nl/ahn3-download (accessed on 13 November 2021).
|
| 385 |
+
22. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381-395. [CrossRef]
|
| 386 |
+
23. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. In Computer Graphics Forum; Wiley Online Library: Oxford, UK, 2007; Volume 26, pp. 214-226.
|
| 387 |
+
24. Zuliani, M.; Kenney, C.S.; Manjunath, B. The multiransac algorithm and its application to detect planar homographies. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14 September 2005; Volume 3, p. III-153.
|
| 388 |
+
25. Rabbani, T.; Van Den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248-253.
|
| 389 |
+
26. Sun, S.; Salvaggio, C. Aerial 3D building detection and modeling from airborne LiDAR point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1440-1449. [CrossRef]
|
| 390 |
+
27. Chen, D.; Wang, R.; Peethambaran, J. Topologically aware building rooftop reconstruction from airborne laser scanning point clouds. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7032-7052. [CrossRef]
|
| 391 |
+
|
| 392 |
+
28. Meng, X.; Wang, L.; Currit, N. Morphology-based building detection from airborne LIDAR data. Photogramm. Eng. Remote Sens. 2009, 75, 437-442. [CrossRef]
|
| 393 |
+
29. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartogr. Int. J. Geogr. Inf. Geovis. 1973, 10, 112-122. [CrossRef]
|
| 394 |
+
30. Zhang, K.; Yan, J.; Chen, S.C. Automatic construction of building footprints from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2523-2533. [CrossRef]
|
| 395 |
+
31. Xiong, B.; Elberink, S.O.; Vosselman, G. Footprint map partitioning using airborne laser scanning data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 241-247. [CrossRef]
|
| 396 |
+
32. Zhou, Q.Y.; Neumann, U. Fast and extensible building modeling from airborne LiDAR data. In Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Irvine, CA, USA, 5-7 November 2008; pp. 1-8.
|
| 397 |
+
33. Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323-7343. [CrossRef]
|
| 398 |
+
34. Lafarge, F.; Mallet, C. Creating large-scale city models from 3D-point clouds: A robust approach with hybrid representation. Int. J. Comput. Vis. 2012, 99, 69-85. [CrossRef]
|
| 399 |
+
35. Xiao, Y.; Wang, C.; Li, J.; Zhang, W.; Xi, X.; Wang, C.; Dong, P. Building segmentation and modeling from airborne LiDAR data. Int. J. Digit. Earth 2015, 8, 694-709. [CrossRef]
|
| 400 |
+
36. Yi, C.; Zhang, Y.; Wu, Q.; Xu, Y.; Remil, O.; Wei, M.; Wang, J. Urban building reconstruction from raw LiDAR point data. Comput.-Aided Des. 2017, 93, 1-14. [CrossRef]
|
| 401 |
+
37. Zhou, Q.Y.; Neumann, U. 2.5D dual contouring: A robust approach to creating building models from aerial LiDAR point clouds. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2010; pp. 115-128.
|
| 402 |
+
38. Zhou, Q.Y.; Neumann, U. 2.5D building modeling with topology control. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011; pp. 2489-2496.
|
| 403 |
+
39. Chauve, A.L.; Labatut, P.; Pons, J.P. Robust piecewise-planar 3D reconstruction and completion from large-scale unstructured point data. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13-18 June 2010; pp. 1261-1268.
|
| 404 |
+
40. Lafarge, F.; Descombes, X.; Zerubia, J.; Pierrot-Deseilligny, M. Structural approach for building reconstruction from a single DSM. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 135-147. [CrossRef]
|
| 405 |
+
41. Xiong, B.; Elberink, S.O.; Vosselman, G. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 93, 227-242. [CrossRef]
|
| 406 |
+
42. Li, M.; Wonka, P.; Nan, L. Manhattan-world Urban Reconstruction from Point Clouds. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016.
|
| 407 |
+
43. Bauchet, J.P.; Lafarge, F. Kinetic shape reconstruction. ACM Trans. Graph. (TOG) 2020, 39, 1-14. [CrossRef]
|
| 408 |
+
44. Fang, H.; Lafarge, F. Connect-and-Slice: An hybrid approach for reconstructing 3D objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14-19 June 2020; pp. 13490-13498.
|
| 409 |
+
45. Huang, H.; Brenner, C.; Sester, M. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 29-43. [CrossRef]
|
| 410 |
+
46. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679-698. [CrossRef]
|
| 411 |
+
47. De Goes, F.; Cohen-Steiner, D.; Alliez, P.; Desbrun, M. An optimal transport approach to robust reconstruction and simplification of 2D shapes. In Computer Graphics Forum; Wiley Online Library: Oxford, UK, 2011; Volume 30, pp. 1593-1602.
|
| 412 |
+
48. Li, Y.; Wu, B. Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds. Remote Sens. 2021, 13, 129. [CrossRef]
|
| 413 |
+
49. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. (TODS) 2017, 42, 1-21. [CrossRef]
|
| 414 |
+
50. CGAL Library. CGAL User and Reference Manual, 5.0.2 ed.; CGAL Editorial Board: Valbonne, France, 2020.
|
| 415 |
+
51. BAG. Basisregistratie Adressen en Gebouwen (BAG). 2019. Available online: https://bag.basisregistraties.overheid.nl/datamodel (accessed on 13 November 2021).
|
| 416 |
+
52. Varney, N.; Asari, V.K.; Graehling, Q. DALES: A large-scale aerial LiDAR data set for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14-19 June 2020; pp. 186-187.
|
| 417 |
+
53. Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. I-3 2012, 1, 293–298. [CrossRef]
|
| 418 |
+
54. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26-28 June 2006; Volume 7.
|
| 419 |
+
55. 3D BAG (v21.09.8). 2021. Available online: https://3dbag.nl/en/viewer (accessed on 13 November 2021).
|
| 420 |
+
|
| 421 |
+
56. Can, G.; Mantegazza, D.; Abbate, G.; Chappuis, S.; Giusti, A. Semantic segmentation on Swiss3DCities: A benchmark study on aerial photogrammetric 3D pointcloud dataset. Pattern Recognit. Lett. 2021, 150, 108-114. [CrossRef]
|
2201.10xxx/2201.10276/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2ccfbab0ea0b3cba92d840364b18bfa3ff67052ed8aece0e155f3e5867be8395
|
| 3 |
+
size 926215
|
2201.10xxx/2201.10276/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10295/b1faa0c2-a656-4495-9e8d-6b56b0af39ad_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c88ce1adfdf48a52d613718f2bc191e648849eb4de312376bbe2b88a3d9a7d3a
|
| 3 |
+
size 1737219
|
2201.10xxx/2201.10295/full.md
ADDED
|
@@ -0,0 +1,509 @@
|
| 1 |
+
# Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
|
| 2 |
+
|
| 3 |
+
Sebastian Bordt
|
| 4 |
+
|
| 5 |
+
sebastian.bordt@uni-tuebingen.de
|
| 6 |
+
|
| 7 |
+
University of Tübingen, Germany
|
| 8 |
+
|
| 9 |
+
Eric Raidl
|
| 10 |
+
|
| 11 |
+
eric.raidl@uni-tuebingen.de
|
| 12 |
+
|
| 13 |
+
University of Tübingen, Germany
|
| 14 |
+
|
| 15 |
+
# ABSTRACT
|
| 16 |
+
|
| 17 |
+
Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms and their functioning, often interpreted as obligations to "explain". Many researchers suggest using post-hoc explanation algorithms for this purpose. In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives. Indeed, most situations where explanations are requested are adversarial, meaning that the explanation provider and receiver have opposing interests and incentives, so that the provider might manipulate the explanation for her own ends. We show that this fundamental conflict cannot be resolved because of the high degree of ambiguity of post-hoc explanations in realistic application scenarios. As a consequence, post-hoc explanation algorithms are unsuitable to achieve the transparency objectives inherent to the legal norms. Instead, there is a need to more explicitly discuss the objectives underlying "explainability" obligations as these can often be better achieved through other mechanisms. There is an urgent need for a more open and honest discussion regarding the potential and limitations of post-hoc explanations in adversarial contexts, in particular in light of the current negotiations of the European Union's draft Artificial Intelligence Act.
|
| 18 |
+
|
| 19 |
+
# KEYWORDS
|
| 20 |
+
|
| 21 |
+
Explainability, Transparency, Regulation, Artificial Intelligence Act, GDPR, Counterfactual Explanations, SHAP, LIME
|
| 22 |
+
|
| 23 |
+
# ACM Reference Format:
|
| 24 |
+
|
| 25 |
+
Sebastian Bordt, Michele Finck, Eric Raidl, and Ulrike von Luxburg. 2022. Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), June 21-24, 2022, Seoul, Republic of Korea. ACM, New York, NY, USA, 21 pages. https://doi.org/10.1145/3531146.3533153
|
| 26 |
+
|
| 27 |
+
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
|
| 28 |
+
|
| 29 |
+
FAccT '22, June 21-24, 2022, Seoul, Republic of Korea
|
| 30 |
+
|
| 31 |
+
© 2022 Copyright held by the owner/author(s).
|
| 32 |
+
|
| 33 |
+
ACM ISBN 978-1-4503-9352-2/22/06.
|
| 34 |
+
|
| 35 |
+
https://doi.org/10.1145/3531146.3533153
|
| 36 |
+
|
| 37 |
+
Michèle Finck
|
| 38 |
+
|
| 39 |
+
michele.finck@uni-tuebingen.de
|
| 40 |
+
|
| 41 |
+
University of Tübingen, Germany
|
| 42 |
+
|
| 43 |
+
Ulrike von Luxburg
|
| 44 |
+
|
| 45 |
+
ulrike.luxburg@uni-tuebingen.de
|
| 46 |
+
|
| 47 |
+
University of Tübingen, Germany
|
| 48 |
+
|
| 49 |
+
# 1 INTRODUCTION
|
| 50 |
+
|
| 51 |
+
Explainability is one of the concepts dominating debates about the ethics and regulation of machine learning algorithms. Intuitively, requests for explainability are reactions to the prevalent unease about machine learning algorithms, including concerns regarding discrimination, biases, manipulation, and data protection. The fact that machine learning systems are often "black boxes" is considered a major hurdle towards their implementation, supervision and control, and explainability is often praised as a remedy against such risks. Existing legislation such as the EU General Data Protection Regulation ('GDPR') has sometimes been interpreted as containing a "right to explanation". The draft Artificial Intelligence Act, a piece of proposed EU legislation, alludes to explainability but, in its current form, does not make clear whether and when exactly explainability is legally required. On the technical side, explainability has evolved into its own field of research [33]. The current machine learning literature distinguishes two different approaches towards explainability. One approach is to build machine learning models that are constrained to be "inherently interpretable" [42]. The other approach is to use any machine learning model, even a "black-box", and then employ any of an increasing number of approaches in order to "explain" the behavior of the black-box after the decision has been made ("post-hoc"). Because there exists no general way to summarize the entire behavior of a black-box model, these explanations are usually local, meaning that they only describe the behavior of the function for a single prediction or decision. The natural advantage of local post-hoc explanation methods, such as feature highlighting methods [30, 41] and counterfactual explanations [60], is that they place no constraints on model complexity and do not require model disclosure [7]. This has led a number of researchers to suggest that these methods might be able to comply with existing legal requirements [7, 60].
|
| 52 |
+
|
| 53 |
+
In this paper, we put forward an important distinction that has not yet been extensively discussed in the literature on explainable AI: whether the explanation's context is adversarial or cooperative. By "cooperative contexts" we broadly summarize situations where all involved parties have aligned interests. This includes model development and debugging, scientific discovery, and, to a degree, areas such as medical diagnosis.
|
| 54 |
+
|
| 55 |
+
In a cooperative context, the explanation provider and the explanation receiver share the same interests: to identify the most suitable and insightful explanation algorithm for the given problem. In "adversarial contexts", in contrast, parties have opposing interests. This is the case, for example, when a bank denies a customer a loan and the customer wants to contest the decision because it was discriminatory. Since the explanation provider anticipates that one might use the provided explanations to challenge the functioning of the system, the explanation provider does not have any incentive to provide "true" insights into the functioning of the system, but rather an incentive to render the internal functioning of the machine learning system incontestable. Indeed, it has been pointed out repeatedly that post-hoc explanation algorithms can be manipulated or cheated upon [5, 47, 48]. Many machine learning papers on explanation algorithms implicitly consider cooperative contexts where explanations are used to improve machine learning algorithms and can help developers to understand the biases of complex systems, or where they are used in an explorative spirit towards new scientific discoveries [63]. In contrast, the legal discussion focuses predominantly on adversarial scenarios. Here explainability is portrayed as a mechanism to add more transparency, fairness and accountability to AI, and post-hoc explanations are often seen as a technical tool to achieve these goals.
|
| 56 |
+
|
| 57 |
+
Combining insights from computer science, philosophy and law, we offer a critical multidisciplinary perspective on the usage of post-hoc explanations to achieve transparency and accountability obligations in adversarial contexts. We highlight the blurry legal landscape around explainability as well as the philosophical and technical limitations of post-hoc explanations. In Section 2 we introduce different scenarios – cooperative and adversarial – under which an external examiner might audit a black-box and its generated explanations. We focus on adversarial scenarios – where the explanation provider has opposing interests to the explanation receiver – and local post-hoc explanations – where the explanation explains a single decision for one particular person. In Section 3 we argue that existing and planned legislation, specifically the GDPR and the EU Artificial Intelligence Act, can either be read as portraying explainability as one possible mechanism to achieve more transparency or as presenting it as a free-standing objective. We also highlight the current lack of legal certainty as to how existing legal norms around explainability ought to be interpreted and implemented. These issues have been the source of confusion and uncertainty. This is why we propose to capture the role of explainability by a discussion of its motivations: Explanations are thought to build trust, and also enable actions, such as debugging, contesting, or recourse. In Section 4 we show from a philosophical and technical perspective that the goals associated with explainability are unlikely to be achieved by post-hoc explanations.
|
| 58 |
+
|
| 59 |
+
The reason is that the truth assumptions under which explanations are expected to fulfill their legal goal are lacking in the adversarial context. To the contrary, due to the inherent geometric ambiguity of local post-hoc explanations, the explanation provider has a multitude of options to influence explanations in a subtle, undetectable way and to pick those that suit her goals. In Section 5 we show that testing explanations is also problematic. While at best we can test for internal consistency of the explanation with the decision, in more typical cases the explanations become redundant and we would do better to rely on testing decisions and predictions directly. In Section 6 we conclude and argue that there needs to be a deeper and more honest debate about what the underlying objectives of explainability obligations are. We also argue that one needs to be honest about the fact that using a black-box entails considerable discretion: Neither post-hoc explanation methods nor regulation can completely compel the deployer of a black-box to align his interests with the public good. As such, if one is absolutely unwilling to award any discretion to the deployer of the black-box, the only solution is to forbid its deployment and favor inherently interpretable or otherwise constrained machine learning methods. The question under which circumstances the deployment of a black-box might still be admissible depends on our ability to examine and audit the black-box. How exactly this might be done is still an area for future research. We hope that our paper contributes to an open discussion regarding the (lack of) potential of post-hoc explanations in the context of the ongoing negotiation of the Artificial Intelligence Act.
|
| 60 |
+
|
| 61 |
+
# 2 EXPLANATIONS IN COOPERATIVE AND ADVERSARIAL CONTEXTS
|
| 62 |
+
|
| 63 |
+
In this work we broadly distinguish between "cooperative" and "adversarial" explanation contexts. In a cooperative context, all parties involved in the process of building the system, providing explanations and using the system share the same goal: to create a system as good and supportive as possible. Prototypical examples are model debugging and scientific research. But also a company building a medical decision support system, say for skin cancer detection, will closely collaborate with the doctors who use it [53]. The company's goal would be to provide explanations that are as helpful as possible. The situation is very different in adversarial contexts, where parties do not share the same goal, such as in the oft-repeated example of a denial of a loan application. Here, the applicant and bank have opposing interests and incentives. Accordingly, should the bank be mandated to provide the applicant with an explanation, this explanation will be shaped by the bank's incentives and existing power asymmetries. For reasons that we outline below, the distinction between cooperative and adversarial contexts is crucial. In particular, we argue that local post-hoc explanations, which have a variety of use-cases in the cooperative scenario, are pointless or even harmful in adversarial contexts.
|
| 64 |
+
|
| 65 |
+
# 2.1 Parties involved in the adversarial explanation process
|
| 66 |
+
|
| 67 |
+
We consider adversarial explanation contexts where an AI decision system is used to make decisions about individuals. Prominent examples are university admissions, job and loan applications, or bail and sentencing decisions. Under existing and planned legislation, such as the EU Artificial Intelligence Act, the creator of the system ought to provide information about how the system comes to its decisions (see Section 3 below for a detailed discussion of the legal background). The creator of the system is the entity that has built the machine learning system and uses it to support decision making. $^{1}$ The creator could be a private company (such as a bank) or a public entity (such as a university). The decision subject is the person about whom the automated system makes a decision: the person who applies for a loan, or the person who applies for university admission. After the decision has been communicated, the explanation recipient asks for an explanation, which is communicated by the explanation provider. The explanation recipient could be the decision subject herself, or an external examiner who is supposed to investigate the decisions or explanations on behalf of the decision subject or to defend her interests. The explanation provider is typically the creator of the system. $^{2}$
|
| 68 |
+
|
| 69 |
+
# 2.2 Machine learning problem: Supervised learning, tabular data, point-wise post-hoc explanations
|
| 70 |
+
|
| 71 |
+
In our technical discussion, we assume that the inputs $x \in \mathbb{R}^d$ of a decision algorithm are given in tabular form. Each dimension of the input encodes a different property of a person, for example age, income, etc. Typically, the number of dimensions $d$ is large: persons are described by dozens or hundreds of features. A machine learning algorithm is used to learn a decision function $f: \mathbb{R}^d \to \mathbb{R}$. The resulting decision $y = f(x)$ for input $x$ could be a binary decision ("receives the loan" or "does not receive the loan") or a numeric risk score on which such a decision is based, as in the often discussed COMPAS algorithm to predict recidivism risk. We focus on supervised machine learning, where $f$ is learned based on training data consisting of pairs $(x_1, y_1), \dots, (x_n, y_n)$ with $x_i$ the training points and $y_i$ the training labels. An explanation algorithm $E$ is an algorithm operating on a decision function with the purpose of explaining it. We focus on local post-hoc explanation algorithms: The explanation algorithm $E$ gets queried with a data point $x$ and the corresponding decision $y$, and produces an explanation $E(x, y)$. Internally, the algorithm has access to the decision function $f$, and in some cases also to the training data. The explanation $E(x, y)$ is supposed to explain why the decision function $f$ came to decide $y$ for $x$.
|
| 72 |
+
|
| 73 |
+
The explanation can be in linguistic form. For example, "The low income of Mr. Smith was relevant for the refusal of the loan" or "Mr. Smith would have received the loan had his income been 10.000 Euros higher".
|
| 74 |
+
|
| 75 |
+
# 2.3 Explanation algorithms that fall into this framework
|
| 76 |
+
|
| 77 |
+
In this paper we consider local post-hoc explanation algorithms such as LIME, SHAP, and DiCE [30, 34, 41]. The explanations generated by these algorithms do not provide a global or holistic view of the decision function $f$ but merely try to explain individual decisions $y = f(x)$ . The often-cited advantage of these algorithms is that they work, at least in principle, for any decision function [7, 41]. Different algorithms take different approaches as to what constitutes an explanation: LIME and SHAP provide feature attributions that aim to quantify the influence of the different input-features for the particular decision. Feature attributions correspond to the linguistic form "The low income of Mr. Smith was relevant for the refusal of the loan". Another approach is to provide counterfactual explanations [60]. These explanations are based on searching for a sufficiently close or the closest alternative point $x'$ to the actual input point $x$ that yields a decision $y' = f(x')$ that differs from the original decision $y = f(x)$ . Comparing the two we can arrive at factors that are relevant to the decision [24]. The resulting counterfactual explanations have the linguistic form "Mr. Smith would have received the loan had his income been 10.000 Euros higher".
|
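For illustration, the following minimal sketch shows how both kinds of explanation can be queried for a single decision. It is not code from the paper: it assumes scikit-learn and the `shap` package, uses a synthetic tabular dataset, and searches for a counterfactual by brute force; names such as `x_query` are ours.

```python
# Minimal sketch (illustration only): querying local post-hoc explanations
# for a single decision of a black-box model.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Tabular inputs x in R^d with binary decisions y = f(x).
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
f = GradientBoostingClassifier(random_state=0).fit(X, y)

x_query = X[0]                       # the decision subject's feature vector
decision = f.predict([x_query])[0]   # the decision to be explained

# Feature attribution ("the low income was relevant for the refusal"):
# SHAP assigns each of the 12 input features a contribution to this prediction.
attributions = shap.TreeExplainer(f).shap_values(np.array([x_query]))
print("decision:", decision)
print("feature attributions:", np.round(attributions, 3))

# Counterfactual ("... would have received the loan had the income been higher"):
# brute-force search for a single-feature change that flips the decision.
found = False
for j in range(X.shape[1]):
    for delta in np.linspace(-3, 3, 61):
        x_cf = x_query.copy()
        x_cf[j] += delta
        if f.predict([x_cf])[0] != decision:
            print(f"counterfactual: change feature {j} by {delta:.2f}")
            found = True
            break
    if found:
        break
```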
| 78 |
+
|
| 79 |
+
# 3 LEGAL FRAMEWORK: EXPLAINABILITY IN EU LAW
|
| 80 |
+
|
| 81 |
+
This paper argues that post-hoc explanation algorithms are unsuitable in adversarial contexts. Before we elaborate this from a philosophical and technical perspective (Section 4), it is important to understand the related legal framework. We focus on European Union law as the EU has often been a first-mover regarding the regulation of data and its analysis, and over time its legislation will likely inspire other jurisdictions (for a broader view, see [21]). Our analysis focuses on the draft Artificial Intelligence Act (AIA), a piece of proposed legislation that would be the first to specifically target AI. This pioneering approach would be a global blueprint for the regulation of AI. In its current form it creates different legal obligations for different AI applications on the basis of the perceived risks. The AIA would apply to general AI systems (Section 3.1). We also consider the General Data Protection Regulation (GDPR), which applies to the processing of personal data (Section 3.2). It will be seen that whereas EU law contains various obligations to provide information about a machine learning algorithm and its functioning, it remains unclear how these legal norms should be implemented from a technical perspective and whether explainability should be understood as a free-standing legal obligation or whether it should rather be seen as one of various mechanisms to achieve algorithmic transparency (Section 3.3).
|
| 82 |
+
|
| 83 |
+
To better understand the latter, we also review their underlying rationales and objectives from a philosophical and legal perspective (Section 3.4).
|
| 84 |
+
|
| 85 |
+
# 3.1 The draft Artificial Intelligence Act (AIA)
|
| 86 |
+
|
| 87 |
+
The current draft of the AIA defines AI systems as "software (...) that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". Generally, the AIA regulates AI on the basis of its perceived risk by introducing four different categories of AI. Most relevant to our discussion are the two categories of systems that are high-risk, as opposed to systems that are not high-risk (the remaining two categories are practices that are subject to qualified prohibitions, and a residual category of AI systems that includes law enforcement software, emotion recognition systems, biometric categorisation systems and deep fakes) [54]. The higher the risk, the heavier the regulatory obligations that apply, also regarding transparency and interpretability.
|
| 88 |
+
|
| 89 |
+
There are two categories of high-risk AI systems. First, AI systems that relate to products that are already subject to supranational harmonisation, namely AI systems intended to be used as a safety component of a product, which are themselves products covered by Union harmonising legislation or which are required to undergo third-party conformity assessments. Second, a list of systems that are currently considered to carry a high-risk such as, for instance, biometric identification systems, systems for the management and operation of critical infrastructure, those used in education and employment, some law enforcement systems as well as others (see further Art 3(1) of the draft AIA). Article 13 governs explainability for high-risk AI systems, which have to be "designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately". Furthermore, users (the entity deploying the AI) need to have access to instructions for use in an appropriate digital format that contains information about the characteristics, capabilities and limitations of performance, including information about the level of accuracy, robustness and cybersecurity, risks to health, safety or fundamental rights, specifications for the input data, expected lifetime of the AI system and necessary maintenance measures. Finally, human oversight must be ensured. These measures are designed to minimize risks to health, safety or fundamental rights. Human oversight shall either be (i) identified and built into the system by the provider before it is placed on the market or put into service, or (ii) identified by the provider before the system is placed on the market or put into service but only implemented by the user.
|
| 90 |
+
|
| 91 |
+
In its current version, the AIA would thus require that high-risk AI systems are sufficiently transparent to enable the interpretation of the system's output. Is this an explainability obligation? Recital 47 sheds some light on how to interpret these notions. It specifies that high-risk AI systems should be transparent to a "certain degree" to "address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons". To this end, users "should be able to interpret the system output and use it appropriately" through the provision of "relevant documentation and instructions of use". This does not read like an obligation to make systems explainable in the sense that the way in which data has been processed must be entirely traceable. Rather, the AIA would require that an "interpretation" of the output must be facilitated through sufficient transparency. Importantly, this does not necessarily seem to imply that an absolute truth must be identified post-hoc (see Sections 4.1 and 4.2 below) but rather the overall functioning of the system and how it comes to an output. The draft AIA leaves open the question of what transparency and interpretability imply from a technical perspective. This certainly includes the elements listed in its Article 16 such as technical documentation, keeping logs or quality management systems. Article 13 leaves open whether there are additional requirements and what, exactly, interpretability requires from a technical perspective. If input data ought to be entirely traceable, "black-box" systems cannot be used in high-risk applications. This highlights that it is important to think about the objectives of transparency and explainability. If these can be achieved through alternative means, excluding black-box systems such as deep neural networks from high-risk scenarios (such as healthcare as devices falling under the Medical Devices Regulation qualify as high-risk) might unduly hinder innovation in important domains.
|
| 92 |
+
|
| 93 |
+
Article 52 AIA creates some general transparency obligations for AI systems that are not high-risk. These are general disclosure obligations such as to (i) inform users that they are interacting with an AI system unless this is obvious from context, (ii) users of an emotion recognition system or biometric categorization system shall inform natural persons exposed thereto, (iii) deep fakes must be disclosed as such. Some exceptions apply where the AI is used in the context of law enforcement. These are thus obligations of transparency that require disclosure that AI is used, as opposed to how it is used.
|
| 94 |
+
|
| 95 |
+
To summarize, the draft AIA would thus not, in its current form, create a general explainability obligation for machine learning systems. Such an obligation clearly is not foreseen in relation to AI systems that are not qualified as high-risk. Arguably, there is also no explainability obligation in relation to high-risk AI systems. Rather, what is required is transparency of the system's functioning and output generation. This transparency must make these elements interpretable but not necessarily amount to the provision of an explanation as it is commonly understood in computer science.
|
| 96 |
+
|
| 97 |
+
# 3.2 The General Data Protection Regulation (GDPR)
|
| 98 |
+
|
| 99 |
+
The GDPR creates some general transparency requirements that form part of the data controller's (the entity that determines the purposes and means of processing) general informational obligations vis-à-vis the data subject (the natural person that personal data relates to). In addition, it also contains a specific regime for "solely automated data processing". In contrast to the draft AIA, which creates vague obligations resting on the user, the GDPR creates specific rights for the individual subjected to such decisions.
|
| 100 |
+
|
| 101 |
+
Article 13 requires that data controllers provide specific information to data subjects where personal data is collected from them at the time of collection such as whether "automated decision-making" is used, and, if so, provide "meaningful information about the logic involved<sup>3</sup>, as well as the significance and the envisaged consequences of such processing". Article 14(1)(h) creates the same obligation in cases where data is not directly collected from the data subject. Pursuant to Recital 62 this information does not have to be provided where it is redundant, or where compliance proves impossible or involves a disproportionate effort. The same wording can also be found in Article 15, which deals with the data subject's right to access data. Whereas Articles 13 and 14 relate to the pre-processing stage, data subjects can exercise their rights under Article 15 at any time, including after processing has taken place. This raises the question of whether - despite the identical wording of these provisions - Article 15 may substantively require something different when referring to the "logic" of the automated decision-making process.
|
| 102 |
+
|
| 103 |
+
There is no general right to an explanation under the GDPR. Some explainability requirements may, however, arise in respect of machine learning algorithms that produce legal effect or similarly significantly affect a data subject. Article 22 creates a qualified prohibition of "solely automated data processing", including profiling. This implies that such techniques can only be used in some circumstances, namely (i) where necessary to enter into or perform a contract between the data subject and controller, (ii) where it is authorized by law or where the data subject has provided explicit consent. In these circumstances automated processing can take place, but the data subject has the right to human intervention and to express her point of view and to contest the decision. Recital 71 mentions an additional element, namely that the data subject has the right "to obtain an explanation" after human review of the decision "and to challenge this decision".<sup>4</sup> Recitals, however, do not have the same legally binding force as the text of the GDPR itself.
|
| 104 |
+
|
| 105 |
+
Over the past years there has been a vivid academic debate around whether the reference to "an explanation" in Recital 71 amounts to a "right to an explanation" that data subjects can exercise vis-à-vis controllers [59, 32, 45, 14]. The Article 29 Working Party's guidance suggests that Article 22, read in conjunction with Recital 71, should be understood to require that controllers (i) tell data subjects that they are engaging in automated decision making, (ii) deliver meaningful information about the logic, and (iii) explain the processing's significance and envisaged consequences. The information provided should include details about the categories of data; why data is seen as pertinent; how profiles are built; why the profile is relevant for the decision-making process and how it is used to reach a decision about the data subject. The last three criteria appear to apply to profiling only [36]. Information with respect to the "logic" means "simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision". What is required is "not necessarily a complex explanation of the algorithms used or disclosure of the full algorithm". Nonetheless, the information transmitted to the data subject should be sufficiently comprehensive to "understand the reasons for the decision". Thus, an explanation of algorithms or disclosure of the full algorithm is not "necessarily" required; rather, the controller ought to find "simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision". Unfortunately, this guidance leaves a lot of room for doubt regarding what exactly is required of controllers. In any event, the GDPR does not create a general right to an explanation but applies only to automated decision-making that legally affects the data subject or has similarly significant effects on them.
|
| 106 |
+
|
| 107 |
+
# 3.3 Explainability as a sub-component of transparency
|
| 108 |
+
|
| 109 |
+
While there is a persistent myth that EU law requires that all decisions based on AI are "explainable", our analysis has painted a more nuanced picture. First, there is no overarching explainability norm that would apply to any usage of AI. To what degree secondary law requires explanations has not been authoritatively settled. Ultimately, the Court of Justice of the European Union will need to settle this question in respect of the GDPR. Concerning the draft AIA, however, legislators should clarify in the final text whether explainability is a free-standing legal obligation in respect of high-risk AI systems or whether it should rather be understood as a sub-component of transparency. As shown above, it is indeed possible to read references to explainability as elements of the broader transparency obligation. Article 13 AIA is explicitly about transparency, but the requirement that this transparency must allow users to "interpret the system's output" has been understood by some as an explainability obligation. Further iterations should clarify the link between transparency and explainability to enhance legal certainty.
|
| 110 |
+
|
| 111 |
+
An analysis of the history behind the AIA confirms the lack of precision of the AIA itself. The EU High Level Expert Group on AI's report on the one hand portrayed explainability as a component of transparency. On the other hand, it repeatedly referred to another concept, "explicability", which was introduced as an ethical principle and as the "procedural dimension" of fairness. In contrast, the AIA White Paper made no reference to explainability other than to mention that symbolic reasoning could help make deep neural networks more explainable. This part of the AIA legislative history underlines the lack of consensus about what exactly explainability is. Similarly, the GDPR could also be read as referring to explainability as a sub-component of transparency. Articles 12-15 derive from the core data protection principle of transparency in Article 5(1)(a), and likewise, one reading of Article 22 in conjunction with relevant recitals could also be understood as a more general transparency rather than explainability obligation.
|
| 112 |
+
|
| 113 |
+
This, of course, raises the question of what transparency means and what it should enable. There is broad consensus that the GDPR requires that decisions reached through automated decision making be justifiable. Indeed, Hildebrandt has highlighted that data protection requires "the justification of such decision-making rather than an explanation in the sense of its heuristics" (p. 113 in [18]). Kaminski and Urban deem that justification should enable "understanding, revealing and making challengeable the normative grounds of a decision" (p. 1980 in [21]). Wachter, Mittelstadt and Russell have argued that explainability is ultimately designed to help the data subject understand, contest and alter decisions and that this could also be achieved by counterfactual explanations [60]. If explainability is merely one means of achieving transparency, there needs to be a more thorough discussion as to what other, alternative, means of achieving transparency there are, particularly in situations where explainability stricto sensu proves impossible. Considering the lack of consensus as to how the legislative texts of the AIA and the GDPR ought to be interpreted and applied in practice, it is helpful to consider their underlying objectives.
|
| 114 |
+
|
| 115 |
+
# 3.4 Rationale and objectives of explainability norms in an adversarial setting
|
| 116 |
+
|
| 117 |
+
The vague formulation of explainability rights, coupled with uncertainty regarding their function, makes it legitimate to ask whether explanations serve any meaningful purpose. Indeed, as Edwards and Veale [14] have argued, "the search for a legally enforceable right to an explanation may be at best distracting and at worst nurture a new kind of transparency fallacy". This is essentially a warning that if explainability obligations just become a box-ticking exercise, they might give a misleading appearance of compliance rather than be of any real value to the decision subject.
|
| 118 |
+
|
| 119 |
+
In addition, explainability rights in the GDPR inevitably also suffer from the generally low enforcement of the GDPR.
|
| 120 |
+
|
| 121 |
+
In order to better understand the above-examined norms, we propose to consider their underlying objectives. Before discussing the legislative history, let us recapitulate what philosophers have identified as main objectives for algorithmic explanations. One major motivation for explainability of AI systems is the hope that this may foster trust in these systems [10, 26, 35, 57]. This has been called the "Explainability-Trust" hypothesis [22]. The hypothesis is controversial, and it is not exactly clear how explanations would induce trust. The underlying rationale seems to rest on an analogy with human interactions. Consider decisions made by human experts. When the decision doesn't satisfy us, we are drawn to ask for an additional explanation. Given such an explanation, we may check whether it conforms to our expectations about good decision making. If so, this may be a ground for further trusting the decision maker. This is not a one-shot process, but an ever-evolving interaction on a long-term time-scale. We tend to trust a person who has repeatedly proved to predict correctly, make good decisions, or provide well-informed explanations. The trust-raising potential of an explanation, however, requires that we can submit explanations and decisions to tests, possibly by delegating this to other experts. The trust-raising potential of a single explanation thus presupposes that the explanation provider stays in the information exchange in the long run: only then does she have an incentive to provide a correct explanation, since an incorrect one would lead to a loss of trust in the long run but not in a one-shot exchange. If an algorithm rather than a human expert makes a decision, we might have similar expectations. We would like to engage in a similar information exchange with an algorithm as we engage with humans. The demand for an explanation is then a demand for a piece of communicative interaction. The hope that this builds trust stems from the intuition that the interaction with the algorithm is similar to the interaction among humans, as depicted above. This assumption may however fail, either because the algorithmic explanations cannot be submitted to sensible tests or because the exchange is one-shot and not long-run. In the first case, explanations lose their trust-raising potential. In the second, the explanation provider may not have the incentive to tell the truth. A second implicit motivation for explainability stems from the idea that information provided by explanations can be used to perform actions, and may in fact be needed for such actions. In the adversarial setting, a data subject might want to use an explanation to contest a decision [7, 60], by claiming or arguing that the decision is not right, not good, or not fair.
|
| 122 |
+
|
| 123 |
+
The data subject might also want to use the explanation for recourse, in order to do better next time [4, 55, 60] (see also [2, 29]).<sup>7</sup> But such explanations are only of value when true or correct. A false explanation will not help in doing better next time, and may even be devised so as to render a decision incontestable.
|
| 124 |
+
|
| 125 |
+
The two motivations from philosophy - building trust and enabling recipients to act - can also be found as objectives in the legal texts. The EU High Level Expert Group on AI described explainability as one tool to achieve trust in AI systems [35].<sup>8</sup> The AIA provides that explainability norms are designed to allow users to fully understand the capacities and limitations of high-risk systems, leading again to trust. Partly related to trust, one can understand explainability as a tool for risk management, in line with the AIA's overall risk-based approach. Indeed, for high-risk AI systems, transparency must be ensured by monitoring the system's operation, detecting signs of anomalies, dysfunctions and unexpected performance in order to counteract automation bias or to potentially intervene in the system (the idea of a "stop button"). The European Commission White Paper also emphasized the risk-based approach and stressed the potential scale of AI systems [11]: a hidden bias or an incorrect assumption of an AI system, say one deciding on tens of thousands of university admission decisions, will have a large systemic effect. This differentiates large-scale AI systems from human decision-making systems. In philosophy, explanations are considered a tool towards future actions. Similarly, the legal discussion also portrays explainability as an enabling right. The High-Level Expert Group on AI has drawn attention to the fact that to be able to contest decisions, they must be traceable. Also outside the AIA and the GDPR, explainability serves a related purpose. In consumer protection law, explainability is linked to the unequal power dynamics between the business and the consumer. In the public administration, it has been argued that being subjected to an intransparent black-box decision would undermine human dignity and is also to be avoided because, unlike in the private sector, individuals cannot vote with their feet and go elsewhere.
|
| 126 |
+
|
| 127 |
+
Overall, the motivations for explanations seem to presuppose that such explanations are true or correct. Only then does a single explanation raise trust, and only then can an explanation be used to perform the intended actions, such as contesting or recourse. We will, however, see in the next section that this truth-presupposition for explanations fails in adversarial scenarios of algorithmic post-hoc explanations.
|
| 128 |
+
|
| 129 |
+
# 4 THE PROBLEMS WITH POST-HOC EXPLANATIONS IN ADVERSARIAL CONTEXTS
|
| 130 |
+
|
| 131 |
+
We now discuss the problems with post-hoc explanations in adversarial scenarios. What can we expect from an algorithmic explanation in these contexts? We roughly know what to expect from human explanations. For example, witnesses giving evidence in court are expected to tell the truth. Can we expect something similar of an algorithmic explanation? If the algorithm decided, for example, to reject a loan application, can we expect to discover the true reason why it decided to do so? The answer is that we cannot, for two reasons. First, the algorithm's view of the world is coarse-grained and incomplete, and this significantly restricts the vocabulary available for potential explanations (Section 4.1). Second, even within the limited picture of the world that the algorithm has access to (the "algorithm's own world") uniquely preferred or "ground truth" explanations do not exist (Section 4.2). This directly ties with the computer science perspective of why post-hoc explanations should not be used in adversarial contexts: the task of providing post-hoc explanations is underdetermined. The objective of the adversary explanation provider is to deploy a classifier that has high accuracy and generate post-hoc explanations that cannot be contested by the data subject or an examiner. We argue that due to the high degree of ambiguity inherent to algorithmic explanations, the adversary has sufficient degrees of freedom to devise incontestable explanations - even without explicitly optimizing against a particular explanation method [46, 47]. We identify four key quantities that allow the adversary to influence the resulting explanations: the choice of an explanation algorithm and its particular parameters (Sections 4.3 and 4.4); the exact shape of the high-dimensional decision boundary (Section 4.5); and, when applicable, the choice of the reference dataset (Section 4.6). This section contains a number of figures and simulation results. Additional figures can be found in the supplement. The code to replicate the results in this paper is available at https://github.com/tml-tuebingen/facct-post-hoc.
|
| 132 |
+
|
| 133 |
+
# 4.1 The algorithm's view of the world is coarse-grained and incomplete - this limits potential explanations
|
| 134 |
+
|
| 135 |
+
Learning and explanation algorithms only have access to a coarse-grained description of the real world. Their vocabulary is restricted to certain features, and possible relations between them. The "experience" of such algorithms given by the finite training data is formulated in the restricted vocabulary and provides only a small window to the world. Overall, the algorithm's representation of the real world is coarse-grained and
|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
(a) SHAP
|
| 139 |
+
|
| 140 |
+

|
| 141 |
+
(b) LIME
|
| 142 |
+
|
| 143 |
+

|
| 144 |
+
(c) DiCE
|
| 145 |
+
Figure 1: Different explanation algorithms lead to different explanations. Depicted are the feature attribution explanations of four different explanation algorithms: Exact SHAP for trees [31], LIME [41], DiCE [24], and Interventional SHAP [20]. All four explanation algorithms attempt to explain the prediction for the same individual with the same decision function (a gradient boosted tree) on the same dataset (AdultIncome). The idea of feature attribution explanations is to determine how much each dimension of the input contributed towards the decision. The figures depict these attributions by drawing a bar for each of the 12 input dimensions. The larger the bar, the higher is the influence of the corresponding feature. Some methods distinguish between positive and negative attributions. In the depicted example, the first bar in Panel (a) is relatively large, which indicates that the SHAP algorithm determined that the value of the first feature contributed strongly to the prediction. The DiCE algorithm in Panel (c), in contrast, determined that the value of feature 9 contributed most strongly to the prediction. More figures showing results for other data points can be found in the supplement.
|
| 146 |
+
|
| 147 |
+

|
| 148 |
+
(d) Interventional SHAP
|
| 149 |
+
|
| 150 |
+
incomplete. $^{9}$ The learning algorithm just sees features and training labels. The explanation algorithm, additionally, sees the learning algorithm's association between input and output. This is what we call "the world of the explanation algorithm", and this is all that it can exploit. As a consequence, all the explanation algorithm could talk about are geometric properties in the world of the algorithm: distances of points to the decision surface, proximity between points, their true or predicted labels, the gradient of the decision function at a point, the necessary change of a feature to change the decision, etc. Although a true explanation for a decision might exist in the real world, it might not be represented in the data or other aspects of the algorithm's world, which could thus not provide any such explanation. This is even the case in a cooperative setting. Consider the example of a medical diagnosis of a disease for which a true (say, causal) explanation exists in the real world. If the learning algorithm was trained on feature-based data such as age, blood pressure, etc., the explanation algorithm could suggest that age was the cause. However, in reality the cause for the disease may not be age, but rather a smoking habit that was not represented in the data. So even if a true explanation exists (say, a cause), this may neither be identifiable nor expressible by the explanation algorithm.
|
| 151 |
+
|
| 152 |
+
# 4.2 Even within the algorithm's own world, a unique preferred reason does not exist
|
| 153 |
+
|
| 154 |
+
Even within the limited world that the explanation algorithm has access to, a "true internal reason" why the learned decision function comes to a certain decision generally does not exist.
|
| 155 |
+
|
| 156 |
+
This is particularly the case for complicated black-box functions. Even machine learning experts digging into the learning algorithm or properties of the function could not reveal a unique true reason. All we can do is to provide vague approximations of how the algorithm arrives at its decision, by summarizing which features contributed how much to the decision (the approach of LIME and SHAP), or whether a change in some features would alter the decision (the approach of counterfactual explanations). For example, in the case of a loan rejection, we might want to know whether it was rather our low income or our postal code which determined the decision, and whether we could change something about the decision, if in the future we had a higher income or moved to another area. However, these explanation attempts are all subject to choices. A mathematically unique way to determine how much each feature of a complicated black-box function contributed to the decision does not exist. Consequently, all feature attribution methods rely on particular assumptions and mechanisms in order to construct explanations: LIME, for example, looks at the gradient of the decision function at the point to be explained [15, 41]. SHAP compares the point with other datapoints from a reference population [16, 30]. Yet another approach would be to re-train the classifier on subsets of features or to use counterfactual feature importance, where one looks at the distance to the decision surface in various directions. All these mechanisms and choices seem plausible but, as we will see in the next section, they all deliver different explanations.
|
| 157 |
+
|
| 158 |
+
# 4.3 Different explanation algorithms lead to different explanations
|
| 159 |
+
|
| 160 |
+
Different explanation algorithms lead to different explanations [25]. This is true even if the algorithms have access to exactly the same information (the geometry of the data, the learned decision function, etc). In an adversarial context, this is problematic because it means that the creator of the system can modify the explanations by choosing a particular explanation algorithm. In practice, different explanation algorithms lead to different explanations even on the most simple machine learning problems. In high dimensions, that is in real-world problems, the difference between the explanations obtained from two different explanation algorithms can be so significant that the explanations are entirely different. This is illustrated in Figure 1. The figure depicts the feature attribution explanations that four different explanation algorithms determined for the same individual. From the difference between the four panels in Figure 1 it is quite clear that different explanation algorithms can lead to markedly different explanations, even if they all attempt to explain the same decision for the same individual.[10] Details on the machine learning problem, dataset and explanation algorithms can be found in the supplement.
|
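The disagreement illustrated in Figure 1 is easy to reproduce in a few lines. The sketch below is our own illustration, not the paper's AdultIncome experiment: it uses the scikit-learn breast cancer data, assumes the `shap` and `lime` packages, and explains the same prediction of the same model with two explanation algorithms before comparing the resulting feature rankings.

```python
# Sketch (illustration only): the same decision, explained by two different
# explanation algorithms.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target
f = GradientBoostingClassifier(random_state=0).fit(X, y)
x_query = X[0]

# Explanation algorithm 1: SHAP feature attributions.
shap_attr = shap.TreeExplainer(f).shap_values(np.array([x_query]))[0]

# Explanation algorithm 2: LIME feature attributions.
lime_explainer = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                                      discretize_continuous=False,
                                      mode="classification", random_state=0)
lime_exp = lime_explainer.explain_instance(x_query, f.predict_proba,
                                           num_features=X.shape[1])
lime_attr = np.zeros(X.shape[1])
for feature_idx, weight in lime_exp.as_map()[1]:
    lime_attr[feature_idx] = weight

# Both algorithms explain the same decision of the same function, but they
# typically attribute it to different features.
print("top-3 features (SHAP):", np.argsort(-np.abs(shap_attr))[:3])
print("top-3 features (LIME):", np.argsort(-np.abs(lime_attr))[:3])
```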
| 161 |
+
|
| 162 |
+
That different explanation algorithms lead to different explanations is also true for counterfactual explanation methods [34, 60]. Indeed, there is a variety of ways in which the optimization problem can be set up, which in turn leads to different explanations. However, even a single counterfactual explanation method can lead to a large number of counterfactual explanations. In a cooperative context, being able to generate many different counterfactual explanations for the same individual can be beneficial [34]. In an adversarial context, this is problematic because there is no principled way to choose among different counterfactual explanations, and the adversary is again awarded considerable discretion to determine explanations. In realistic, high-dimensional applications, the number of potential counterfactual explanations can quickly become very large. Let us illustrate this point on the German Credit Dataset. The German Credit Dataset is a 20-dimensional dataset with features on credit history and personal characteristics. The task is to predict credit risk in binary form. How many different counterfactual explanations exist for a single individual? With a common black-box decision function, more than 100 different counterfactual explanations exist for each individual.
|
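The multiplicity of counterfactual explanations can be illustrated even without a dedicated library such as DiCE. The following sketch is ours and not the paper's German Credit experiment; it simply enumerates single-feature counterfactuals for one individual on a different dataset, using an arbitrary grid of candidate values.

```python
# Sketch (illustration only): counting single-feature counterfactuals
# for one individual and one black-box decision function.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
f = GradientBoostingClassifier(random_state=0).fit(X, y)

x_query = X[0]
decision = f.predict([x_query])[0]

counterfactuals = []
for j in range(X.shape[1]):
    # candidate values for feature j, taken from its observed range
    for value in np.linspace(X[:, j].min(), X[:, j].max(), 25):
        x_cf = x_query.copy()
        x_cf[j] = value
        if f.predict([x_cf])[0] != decision:
            counterfactuals.append((j, value))

print(f"{len(counterfactuals)} single-feature counterfactuals flip the decision")
# Already in this restricted search space there are typically many valid
# counterfactuals, with no principled way to single out "the" explanation.
```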
| 163 |
+
|
| 164 |
+
At its core, the fundamental difficulty of explainable machine learning is then the same as in other fields of unsupervised learning: the lack of a ground truth explanation impedes the development of an algorithmic
|
| 165 |
+
|
| 166 |
+

|
| 167 |
+
(a) SHAP
|
| 168 |
+
Figure 2: For any given datapoint, different explanation algorithms might lead to very similar or completely different explanations. In many cases, however, there are both similarities and dissimilarities. The Figure depicts the SHAP and LIME feature attributions for a datapoint in the folktables ACSIncome prediction task [13]: Are these attributions similar or different? More figures showing results for other data points can be found in the supplement.
|
| 169 |
+
|
| 170 |
+

|
| 171 |
+
(b) LIME
|
| 172 |
+
|
| 173 |
+
framework to automatically evaluate explanations. Every explanation algorithm needs to make assumptions about which properties of the decision function it seeks to highlight. As a result, it is possible to develop sanity checks for explanation algorithms and exclude unreasonable approaches [3, 9], but not to discern which of two post-hoc explanations is "more correct", which would be equivalent to discussing which of two different clusterings is "more correct" [58].
|
| 174 |
+
|
| 175 |
+
# 4.4 The explanation provider can choose between a large number of possible explanation algorithms and parametrizations
|
| 176 |
+
|
| 177 |
+
Even for a single explanation algorithm, there can be many different parameter choices that all lead to different explanations. LIME explanations, for example, depend on the bandwidth and the number of perturbations [15, 27, 46]. The uniqueness properties of Shapley values notwithstanding, there is a multiplicity of ways in which Shapley values can be operationalized to generate explanations [51]. Counterfactual explanation algorithms depend on the underlying metric chosen to represent closeness (e.g. Euclidean distance vs. $L_1$-norm)$^{11}$ as well as additional hyperparameters to trade off between closeness and prediction, and, at least in principle, any number of additional penalty terms [34]. In certain cases, it might be possible to come up with good default parameter choices. For example, recent work has demonstrated how to choose the bandwidth parameter of LIME in a principled way or quantify uncertainty in the resulting explanations [27, 46, 64]. It is also possible to exclude explanation algorithms and parametrizations that are
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
(a) Diabetes, Lin. Regr.
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
(b) Diabetes, Random Forest
|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
Figure 3: Explanations depend on the exact shape of the classifier's high-dimensional decision boundary. Panels (a) and (b): On the diabetes dataset, linear regression and a random forest agree for $94\%$ of their predictions. Shown are the SHAP explanations on a data point where the predictions of both methods agree. As we can see, the explanations differ. Panels (c) and (d): the dependence on the decision boundary is subtle. It can even be hard to tell from the explanations whether the classifier had been trained at all. On the Wisconsin Breast Cancer dataset, the SHAP explanations of a classifier trained to achieve an accuracy of $96\%$ are hard to distinguish from those of the same classifier trained on random labels. More figures showing results for other data points can be found in the supplement.
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
(c) Cancer, $36\%$ Accuracy
|
| 190 |
+
(d) Cancer, $96\%$ Accuracy
|
| 191 |
+
|
| 192 |
+
completely unreasonable, for example because they are not sensitive to the decision function [3, 9]. This nevertheless leaves an ever-increasing number of plausible explanation algorithms and corresponding parametrizations. Quite generally, different explanation algorithms vary among many different dimensions, and there is an ever-increasing number of suggestions as to how black-box functions might be explained. This can be seen, for example, in the recent work of Covert et al. [12], who summarize 25 existing methods in a unified framework. As already discussed above, there are no fundamental reasons that impede us from using any particular method.[12]
|
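As a concrete illustration of the parameter dependence (again our own sketch, not the paper's experiment), running LIME twice on the same prediction with two different kernel widths typically already yields different feature rankings.

```python
# Sketch (illustration only): the same LIME explanation algorithm with two
# different bandwidth (kernel width) parameters.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
f = RandomForestClassifier(random_state=0).fit(X, y)
x_query = X[0]

def lime_ranking(kernel_width):
    """Feature ranking produced by LIME for x_query at a given kernel width."""
    explainer = LimeTabularExplainer(X, kernel_width=kernel_width,
                                     discretize_continuous=False,
                                     mode="classification", random_state=0)
    exp = explainer.explain_instance(x_query, f.predict_proba,
                                     num_features=X.shape[1])
    weights = dict(exp.as_map()[1])
    return sorted(weights, key=lambda j: -abs(weights[j]))

# Two plausible parametrizations of the same explanation algorithm ...
print("top features, small bandwidth:", lime_ranking(kernel_width=0.5)[:5])
print("top features, large bandwidth:", lime_ranking(kernel_width=5.0)[:5])
# ... typically single out different features for the same decision.
```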
| 193 |
+
|
| 194 |
+
# 4.5 Explanations depend on the exact shape of the high-dimensional decision boundary
|
| 195 |
+
|
| 196 |
+
Even if we fix a particular explanation method and its parameters, the generated explanations still depend on the exact shape of the learned decision boundary. In high dimensions, there are often many different black-box functions that solve a particular classification problem to a desired accuracy, that is, they represent the data sufficiently well. However, these functions often lead to different explanations. To a certain extent, we may say that the exact shape of the learned decision boundary is arbitrary, but since the explanations depend on it, these turn out to be arbitrary as well. One of the reasons for the sensitivity of the explanation to the function's shape is that many explanation methods evaluate the function $f$ at datapoints that are outside the data distribution or at points that are unlike most points from the data distribution. In the adversarial scenario, this is problematic because the adversary can freely modify the values of the function $f$ outside the data distribution without changing the classification behavior.
|
| 197 |
+
|
| 198 |
+
Recent work has demonstrated that this property can be used to explicitly manipulate and attack explanation methods [47, 48]. But even without explicit attacks, there are many different choices, in particular hyperparameter and architecture choices, that influence the shape of the decision boundary, and thus the resulting explanations. For an external examiner, this presents a challenging problem: while certain explicit attacks on explanation methods could in principle be detected through code review (see also Section 5.2), it is far less clear how one would argue about choosing one classifier over another, or any particular choice of hyperparameters. This problem is illustrated in Figure 3. Here, we solved the same machine learning problem both with linear regression and a random forest. The two methods have comparable performance on the test set, where $94\%$ of their predictions agree. Nevertheless, the explanations obtained for the two different decision functions can be quite different - even for points that receive the same prediction.
|
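The experiment behind Panels (a) and (b) of Figure 3 can be sketched as follows. This is our approximate reconstruction under assumptions (the model-agnostic `shap.KernelExplainer`, the scikit-learn diabetes data, a fixed background sample); the exact numbers will differ from the figure.

```python
# Sketch (illustration only): two decision functions that agree on a
# prediction can still receive very different explanations.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
lin = LinearRegression().fit(X, y)
forest = RandomForestRegressor(random_state=0).fit(X, y)

x_query = X[:1]
print("predictions:", lin.predict(x_query)[0], forest.predict(x_query)[0])

# The model-agnostic KernelExplainer works for both models;
# a fixed background sample serves as the reference population.
background = X[:100]
attr_lin = shap.KernelExplainer(lin.predict, background).shap_values(x_query)
attr_forest = shap.KernelExplainer(forest.predict, background).shap_values(x_query)

print("SHAP attributions, linear model: ", np.round(attr_lin, 1))
print("SHAP attributions, random forest:", np.round(attr_forest, 1))
# Even where the two functions make (almost) the same prediction, the
# attributions reflect the different shapes of the two learned functions.
```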
| 199 |
+
|
| 200 |
+
Turning to counterfactual explanations, it is well-known that these depend on the exact shape of the decision boundary. Let us give an example, again using the German Credit Dataset. Consider two different decision functions, a gradient boosted tree and logistic regression. If we generate a number of diverse counterfactual explanations [34] for a typical individual with respect to one decision function, are these also counterfactual explanations with respect to the other decision function (at least as long as both functions arrive at the same decision)? In this simple experiment less than $50\%$ of counterfactual explanations that work for the gradient boosted tree also work for logistic regression. As discussed above, the fact that the explanations depend on the exact shape of the decision boundary is problematic because it allows the creator of the system to influence the resulting explanations. The particular choice of the decision function can
|
| 201 |
+
|
| 202 |
+

|
| 203 |
+
(a) Dataset
|
| 204 |
+
|
| 205 |
+

|
| 206 |
+
(b) Reference data set: Group 1 only
|
| 207 |
+
Figure 4: A simple toy example of how the choice of the explanation's reference dataset can influence the resulting explanations. The dataset in Panel (a) consists of two different population groups. The blue and orange color depicts the binary label that the classifier is supposed to predict at each data point (to get an intuition, you might think of the groups as "male" and "female", and the label as "is awarded the credit" or "is not awarded the credit"). Panels (b) and (c) depict the interventional SHAP feature attributions [20] for the same data point in Group 1. In Panel (b), the explanation's reference dataset consists of the observations of Group 1 only. In Panel (c), the reference dataset is the entire dataset. The example shows that changing the reference dataset can almost completely change the feature attribution from one feature to another.
|
| 208 |
+
|
| 209 |
+

|
| 210 |
+
(c) Reference data set: entire dataset
|
| 211 |
+
|
| 212 |
+
even determine whether certain types of counterfactual explanations exist at all. Let us give an example on the Wisconsin Breast Cancer Dataset. To demonstrate the dependence on the decision boundary, we consider again two different decision functions, linear regression and a random forest. For linear regression, there exist a large number of counterfactual explanations that modify only a single variable. For the random forest, it is impossible to find any such counterfactual explanations. This is despite the fact that both classifiers exhibit similarly low test error.
|
| 213 |
+
|
| 214 |
+
# 4.6 It is unclear how to choose the reference dataset that many explanations depend on
|
| 215 |
+
|
| 216 |
+
In recent years, there has been an increased focus on the composition of datasets, for example on the representation of different sociodemographic groups in machine learning datasets [6, 37]. In many real-world problems such as credit lending, the criteria for choosing an appropriate dataset are not clear. In both cooperative and adversarial contexts, the creator of the system has to make numerous choices, many of which can have significant effects on both the shape of the learned decision boundary and the generated explanations. For example, Anders et al. [5] have shown that gradient-based explanations can be manipulated by adding additional variables to the dataset. In this section, we highlight the additional role that the dataset can have on algorithmic explanations, even when keeping the learned decision boundary constant. Indeed, while some explanation algorithms such as LIME only rely on the learned decision boundary, other methods such as SHAP and some counterfactual explanation methods make additional use of the data in order to generate explanations. The relevant dataset could be the training data, but it could also be a different dataset. We refer to it as the reference dataset.
|
| 217 |
+
|
| 218 |
+
While the usage of such a dataset to generate explanations can be seen as a remedy to the vagaries of high dimensions, or as a way to generate counterfactual explanations that look like they come from the data, this approach is problematic as long as the adversary determines the composition of the dataset. The reason is that whether certain datapoints are included in the dataset or not can determine whether an explanation algorithm provides one or another explanation. Figure 4 illustrates this with a simple example: By deciding between two different reference datasets, one can effectively decide whether one or another feature was relevant to the decision.
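The following toy sketch mimics the effect in Figure 4: the same classifier and the same individual are explained with interventional (Kernel) SHAP under two different reference datasets. The synthetic two-group data and the classifier are illustrative assumptions, not the construction used for the figure.

```python
# Toy sketch: interventional SHAP attributions for one individual from Group 1,
# computed once with a Group-1-only reference dataset and once with all data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                          # two population groups
x1 = rng.normal(size=n) + 3 * group                    # feature 1 separates the groups
x2 = rng.normal(size=n)
X = np.column_stack([x1, x2])
y = np.where(group == 1, x2 > 0, x1 > 0).astype(int)   # label rule differs per group

clf = GradientBoostingClassifier().fit(X, y)
f = lambda z: clf.predict_proba(z)[:, 1]
x_explain = X[group == 1][:1]                          # one individual from Group 1

ref_group1 = shap.sample(X[group == 1], 100)           # reference: Group 1 only
ref_all = shap.sample(X, 100)                          # reference: entire dataset
phi_group1 = shap.KernelExplainer(f, ref_group1).shap_values(x_explain)
phi_all = shap.KernelExplainer(f, ref_all).shap_values(x_explain)
print("reference = Group 1 only:", np.round(phi_group1, 3))
print("reference = entire data :", np.round(phi_all, 3))
```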
|
| 219 |
+
|
| 220 |
+
# 4.7 Bottom line: Post-hoc explanations are highly problematic in an adversarial context
|
| 221 |
+
|
| 222 |
+
It is extremely important to understand that an explanation algorithm is based on many human choices that are shaped by human objectives and preferences. While many choices are plausible, there is no objective reason to prefer one algorithm over another, or one explanation over another. Apart from the explanation algorithm and its particular parameters, explanations are influenced by human choices such as the selection of the classifier and the composition of the dataset. In adversarial contexts, this implies that the adversary can choose, among many different plausible explanations, one that suits their incentives. This complicated situation makes it particularly difficult for external observers, including judges and regulatory bodies, to determine whether an explanation is acceptable. Explanation algorithms appear to provide objective explanations, yet as explained above this is not the case (compare Section 4.2).
|
| 223 |
+
|
| 224 |
+
# 5 ONCE AN EXAMINER IS ALLOWED TO ASSESS THE PROVIDED POST-HOC EXPLANATIONS, SHE'D BETTER INVESTIGATE THE DECISION FUNCTION DIRECTLY
|
| 225 |
+
|
| 226 |
+
So far we have discussed explainability obligations in European Union law and their motivation (Section 3), and pointed out theoretical (Sections 4.1-4.2) and practical (Sections 4.3-4.6) shortcomings of post-hoc explanations. In this section, we add yet another component to our argument. In an adversarial setting, it is not only the AI decision system itself but also the corresponding explanation algorithm which might need to be examined by a third party. Even if the examiner only attempts to assess the most basic consistency properties of the provided explanations, that is to check whether the explanations relate to the AI decision system at all, this necessarily requires that the examiner is able to query the AI system. But then, the explanations become entirely redundant: Rather than relying on explanations to enable risk management, provide trust or bias and discrimination detection (compare Section 3.4), the examiner could directly query the AI system for problematic decision behavior. Because the creator of the system and the examiner have competing interests, it is important to distinguish degrees of transparent interaction between the two. Naturally, the examiner would like to have access to as much information as possible, whereas the adversary creator wants to disclose as little information as possible. We distinguish between a minimal and a fully transparent scenario of information disclosure (Sections 5.1-5.2).
|
| 227 |
+
|
| 228 |
+
# 5.1 Minimalist scenario where decision function and explanation algorithm can be queried
|
| 229 |
+
|
| 230 |
+
To determine whether the adversary's explanations actually correspond to the used decision function $f$ instead of being arbitrary justifications not related to the decision process, the examiner needs to be able to query the decision function and the generated explanations.[13]
|
| 231 |
+
|
| 232 |
+
This includes a fair amount of related knowledge, such as which variables are input to the algorithm, but excludes explicit access to the decision function, explanation algorithm, source code and training dataset. A related but slightly more limited version of this scenario arises when individuals jointly collect the decisions and explanations from the creator of the system. In this minimalist scenario, the examiner can validate the internal consistency of the provided explanations. Researchers have proposed a number of criteria that the examiner can test for such as faithfulness to the model, robustness to local perturbations, as well as necessity and sufficiency notions for
|
| 233 |
+
|
| 234 |
+
individual feature attributions [3, 24, 56]. The examiner might also want to perform tests to determine whether the provided explanations have been manipulated [48]. More importantly, however, even just with the ability to query the decision function, the examiner can ignore the explanations and directly investigate the decision function for problematic properties. For example, the examiner could conduct a systematic evaluation of, say, fairness metrics such as equal opportunity and demographic parity, based on an independent reference dataset of her choice (see [6] for these and other notions of fairness and discrimination). Indeed, because the adversary designing the explanation algorithm has no interest in choosing explanations that highlight any discriminatory behavior of the decision algorithm, the examiner is well-advised to simply ignore the explanations and test the decision algorithm directly. Although such tests might be similar to certain explanation algorithms, what is important is that the examiner (as opposed to the creator) designs and implements them. Note that we are not saying that the minimalist scenario actually allows the examiner to assess all legally relevant properties of the decision function. What exactly can be assessed with query access is a question that still requires more research. Our point is that once we have query access, the explanations are useless.
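To illustrate the kind of direct test available to an examiner in this minimalist scenario, the sketch below computes demographic parity and equal opportunity gaps purely from black-box queries; the query function, the audit data, and the protected attribute are placeholders, not a prescribed audit procedure.

```python
# Minimal sketch of a query-only fairness audit: the examiner supplies her own
# reference dataset and only needs to be able to query the decision function.
import numpy as np

def audit_fairness(query_decision, X_audit, y_audit, protected):
    """Return (demographic parity gap, equal opportunity gap) across groups."""
    d = np.asarray(query_decision(X_audit))          # queried binary decisions
    rates, tprs = {}, {}
    for g in np.unique(protected):
        in_group = protected == g
        rates[g] = d[in_group].mean()                # P(decision = 1 | group)
        positives = in_group & (y_audit == 1)
        tprs[g] = d[positives].mean()                # P(decision = 1 | group, y = 1)
    dp_gap = max(rates.values()) - min(rates.values())
    eo_gap = max(tprs.values()) - min(tprs.values())
    return dp_gap, eo_gap

# usage (hypothetical): audit_fairness(model.predict, X_ref, y_ref, sex_ref)
```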
|
| 235 |
+
|
| 236 |
+
# 5.2 Fully transparent scenario where algorithms' source code and training data are disclosed
|
| 237 |
+
|
| 238 |
+
At the opposite end of the minimalist scenario is the fully transparent scenario where the examiner is allowed to investigate the decision function, source code and training data. An examiner could then scrutinize whether the explanation algorithms have been implemented according to the state of the art with sensible parameter choices. This directly rules out the possibility for the creator of the system to manipulate explanations. Are post-hoc explanations useful in the transparent scenario, perhaps because the examiner now has the tools to verify whether the adversary has chosen the "correct" explanations? As we have already discussed above, the problem is that there is no notion of "correct" explanation (Sections 4.2 and 4.3). Thus, except for notions of internal consistency [3, 24], there is, in general, nothing the examiner can say about the explanations. Another issue, already observed in Sections 4.5 and 4.6, concerns hyperparameter choices and decisions regarding the composition of the dataset. For these decisions, it is highly non-trivial to come up with uniquely reasonable defaults: If the adversary has found a particular neural network architecture with hyperparameters that generalize well on the adversary's own dataset, how exactly could the examiner argue that this is inappropriate? Nevertheless, all of these choices can influence the resulting explanations, even if we fix a particular explanation algorithm. Of course, the examiner could scrutinize the source code, re-train the system with different parameters, perform tests on the data, and generate alternative explanations. Some have argued
|
| 239 |
+
|
| 240 |
+
that this might be sufficient in order to assess a variety of legal requirements [23]. While we think that more research is needed on what can be realistically achieved in the fully transparent scenario, it is quite clear that the examiner can, at least in principle, perform a variety of powerful tests (whether this is achievable in practice, based on the limited resources of an examiner, is yet a different story). At any rate, just as in the minimalist scenario, the examiner is well-advised to examine and test the system on her own, and to ignore the explanations provided by the adversary creator.
|
| 241 |
+
|
| 242 |
+
# 6 DISCUSSION
|
| 243 |
+
|
| 244 |
+
Explainability is often praised as a tool to mitigate some of the risks of black-box AI systems. Our paper demonstrates that in adversarial contexts, post-hoc explanations are of very limited use. From a technical and philosophical point of view these explanations can never reveal the "unique, true reason" why an algorithm came to a certain decision. In complicated black-box models, such a true reason simply does not exist. We moreover demonstrated that post-hoc explanations of standard decision algorithms on simple datasets possess a high degree of ambiguity that cannot be resolved in principle. For these reasons, post-hoc explanations of black-box systems are, to a certain degree, incontestable. In the best case, post-hoc explanation algorithms can point out some of the factors that contributed to a decision - these algorithms are therefore useful for model debugging, scientific discovery and practical applications where all parties share a common goal. In adversarial contexts, in contrast, we demonstrated that local post-hoc explanations are either trivial or harmful. In the worst case, the explanations may induce us into falsely believing that a "justified", or "objective" decision has been made even when this is not the case.
|
| 245 |
+
|
| 246 |
+
We have also seen that it remains unclear how expectations of explainability in the GDPR or the AIA ought to be interpreted. The GDPR does not give rise to a general explainability obligation, and the draft AI Act currently would only require some degree of explainability in relation to high-risk applications of AI. We call on legislators to formulate related provisions with more specificity in order to create legal certainty in this respect. If the final version of the AIA requires a strong version of explainability for high-risk AI systems, black-boxes simply cannot be used: they cannot be explained directly, and the only indirect means of explaining them – local post-hoc explanations – are unsuitable. In this case, one would have to resort to the use of simple, inherently interpretable machine learning models rather than black-box models (compare [42]), although this may impede innovation. We would expect that these algorithms and their explanations are more robust and less susceptible to manipulation, such that large parts of our criticism would not apply to inherently interpretable models. However, future research needs to clarify whether this is the case,
|
| 247 |
+
|
| 248 |
+
because we are not aware of any research that investigates inherently interpretable machine learning in an adversarial setting. If, on the other hand, explainability in the final version of the AIA is to be understood as one of several means to achieve more transparency in machine learning, methods other than post-hoc explanations might be more suitable to achieve the desired goals of transparency. For example, as far as testing for biases and discrimination is concerned, it is unlikely that the creator of the system will choose to generate explanations that can be used to uncover hidden biases. But there is a much more direct route to assessing discrimination than going implicitly through explanations. Indeed, external examiners could directly test the system for discriminatory properties [23]. As such, the external examination of black-boxes may be a more suitable means of enabling more accountable AI systems.
|
| 249 |
+
|
| 250 |
+
The current draft of the AIA already requires documentation regarding the functioning of AI systems. However, one has to be aware of the many possibilities for manipulation that lie in the development process of AI systems itself, through the choice of training data, features, algorithms, parameters, and so on. Even in the fully transparent scenario where the entire development pipeline including the source code is open [23], considerable leeway for manipulation remains. In order to address these possibilities, an external examiner would need access to considerable manpower and resources. Even when training data and source code can in principle be examined, algorithms re-applied or even retrained, actually doing so for a system that has been developed by a large team might be very difficult, if not impossible. More research is needed to understand exactly which legal objectives can be satisfied by such extended documentation of AI systems, or whether the documentation would again just serve as a means to provide an appearance of objectivity without any real value.
|
| 251 |
+
|
| 252 |
+
Overall, we believe that the question of testing and certifying machine learning systems in an adversarial scenario is a research direction that is still heavily underexplored. There is no single way to achieve all the desired transparency and control goals for such AI systems. Even complete transparency, open code, open data might not lead to all the desired goals. For this reason, it is important to investigate in more detail what objective can be achieved by which means, and which goals might not be possible to achieve at all. Only then can we engage in a meaningful debate about responsible use of AI systems in social contexts.
|
| 253 |
+
|
| 254 |
+
Finally, we recall that our criticism of explainability, in particular local post-hoc explanations, concerns adversarial scenarios. In cooperative scenarios, many interesting discoveries might be made with the help of explainable machine learning.
|
| 255 |
+
|
| 256 |
+
# 7 FUNDING DISCLOSURE
|
| 257 |
+
|
| 258 |
+
This work has been partially supported by the German Research Foundation through the Cluster of Excellence "Machine Learning - New Perspectives for Science" (EXC 2064/1 number 390727645), the Baden-Württemberg Foundation (program "Verantwortliche Künstliche Intelligenz"), the BMBF Tübingen AI Center (FKZ: 01IS18039A), the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the Carl Zeiss Foundation. The authors declare no additional sources of funding and no financial interests.
|
| 259 |
+
|
| 260 |
+
# REFERENCES
|
| 261 |
+
|
| 262 |
+
[1] P. Achinstein. 1983. The Nature of Explanation. Oxford University Press, New York.
|
| 263 |
+
[2] A. Adadi and M. Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 6 (2018), 52138-52160.
|
| 264 |
+
[3] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim. 2018. Sanity checks for saliency maps. In Neural Information Processing Systems (NeurIPS).
|
| 265 |
+
[4] A. Karimi, G. Barthe, B. Schölkopf, and I. Valera. 2021. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. arXiv:2010.04050
|
| 266 |
+
[5] C. Anders, P. Pasliev, A. K. Dombrowski, K. R. Müller, and P. Kessel. 2020. Fairwashing explanations with off-manifold detergent. In International Conference on Machine Learning (ICML).
|
| 267 |
+
[6] S. Barocas, M. Hardt, and A. Narayanan. 2019. *Fairness and Machine Learning*. fairmlbook.org. http://www.fairmlbook.org.
|
| 268 |
+
[7] S. Barocas, A. Selbst, and M. Raghavan. 2020. The hidden assumptions behind counterfactual explanations and principal reasons. In ACM Conference on Fairness, Accountability, and Transparency.
|
| 269 |
+
[8] R. B. Braithwaite. 1953. Scientific Explanation: A Study of the Function of Theory, Probability and Law in Science. Cambridge University Press, Cambridge.
|
| 270 |
+
[9] O. Camburu, E. Giunchiglia, J. Foerster, T. Lukasiewicz, and P. Blunsom. 2019. Can I trust the explainer? Verifying post-hoc explanatory methods. arXiv:1910.02065 (2019).
|
| 271 |
+
[10] L. Chazette, W. Brunotte, and T. Speith. 2021. Exploring explainability: A definition, a model, and a knowledge catalogue. In IEEE 29th International Requirements Engineering Conference (RE).
|
| 272 |
+
[11] European Commission. 2020. White Paper on Artificial Intelligence: A European approach to excellence and trust. Com (2020) 65 Final (2020).
|
| 273 |
+
[12] I. Covert, S. Lundberg, and S.I. Lee. 2021. Explaining by removing: A unified framework for model explanation. Journal of Machine Learning Research (JMLR) 22, 209 (2021), 1-90.
|
| 274 |
+
[13] F. Ding, M. Hardt, J. Miller, and L. Schmidt. 2021. Retiring Adult: New Datasets for Fair Machine Learning. In Neural Information Processing Systems (NeurIPS).
|
| 275 |
+
[14] L. Edwards and M. Veale. 2017. Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law and Technology Review 16 (2017).
|
| 276 |
+
[15] D. Garreau and U. von Luxburg. 2020. Explaining the Explainer: A First Theoretical Analysis of LIME. In Conference on Artificial Intelligence and Statistics (AISTATS).
|
| 277 |
+
[16] S. Ghalebikesabi, L. Ter-Minassian, K. Diaz-Ordaz, and C. C. Holmes. 2021. On locality of local explanation models. In Advances in Neural Information Processing Systems (NeurIPS).
|
| 278 |
+
[17] C. Hempel. 1965. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. Free Press, New York.
|
| 279 |
+
[18] M. Hildebrandt. 2019. Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law 20, 1 (2019), 83-121.
|
| 280 |
+
[19] A. Z. Jacobs and H. Wallach. 2021. Measurement and fairness. In ACM conference on Fairness, Accountability, and Transparency.
|
| 281 |
+
[20] D. Janzing, L. Minorics, and P. Blöbaum. 2020. Feature relevance quantification in explainable AI: A causal problem. In International Conference on Artificial Intelligence and Statistics (AISTATS).
|
| 282 |
+
[21] M. Kaminski and J. Urban. 2021. The Right to Contest AI. Columbia Law Review (2021).
|
| 283 |
+
[22] L. Kästner, M. Langer, V. Lazar, A. Schomäcker, T. Speith, and S. Sterz. 2021. On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness. In IEEE 29th International Requirements Engineering Conference Workshops (REW).
|
| 284 |
+
[23] J. Kleinberg, J. Ludwig, S. Mullainathan, and C. Sunstein. 2018. Discrimination in the Age of Algorithms. Journal of Legal Analysis 10 (2018), 113-174.
|
| 285 |
+
|
| 286 |
+
|
| 287 |
+
[24] R. Kommiya Mothilal, D. Mahajan, C. Tan, and A. Sharma. 2021. Towards unifying feature attribution and counterfactual explanations: Different means to the same end. In AAAI/ACM Conference on AI, Ethics, and Society.
|
| 288 |
+
[25] S. Krishna, T. Han, A. Gu, J. Pombra, S. Jabbari, S. Wu, and H. Lakkaraju. 2022. The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective. arXiv preprint arXiv:2202.01602 (2022).
|
| 289 |
+
[26] M. Langer, D. Oster, T. Speith, H. Hermanns, L. Kästner, E. Schmidt, A. Sesing, and K. Baum. 2021. What do we want from Explainable Artificial Intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296 (2021).
|
| 290 |
+
[27] E. Lee, D. Braines, M. Stiffler, A. Hudler, and D. Harborne. 2019. Developing the sensitivity of LIME for better machine learning explanation. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications.
|
| 291 |
+
[28] D. Lewis. 1973. Counterfactuals. Blackwell.
|
| 292 |
+
[29] Q. V. Liao and K. R. Varshney. 2021. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. arXiv preprint arXiv:2110.10790 (2021).
|
| 293 |
+
[30] S. Lundberg and S. Lee. 2017. A unified approach to interpreting model predictions. In Neural Information Processing Systems (NeurIPS).
|
| 294 |
+
[31] S. M. Lundberg, G. Erion, H. Chen, A. DeGrave, J. M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, and S. I. Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nature machine intelligence 2, 1 (2020), 56-67.
|
| 295 |
+
[32] G. Malgieri and G. Comandé. 2017. Why a Right to Legibility of Automated Decision-Making exists in the General Data Protection Regulation. International Data Privacy Law 7, 4 (11 2017), 243-265.
|
| 296 |
+
[33] C. Molnar. 2020. Interpretable machine learning. Lulu.com.
|
| 297 |
+
[34] R. Mothilal, A. Sharma, and C. Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In ACM Conference on Fairness, Accountability, and Transparency.
|
| 298 |
+
[35] High-Level Expert Group on AI. 2019. Ethics Guidelines for Trustworthy AI.
|
| 299 |
+
[36] Working Party. 2016. Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679.
|
| 300 |
+
[37] A. Paullada, I. Raji, E. Bender, E. Denton, and A. Hanna. 2021. Data and its (dis)contents: A survey of dataset development and use in machine learning research. *Patterns* 2, 11 (2021).
|
| 301 |
+
[38] J. Pearl. 2000. Causality: Models, Reasoning and Inference. Cambridge University Press, Cambridge.
|
| 302 |
+
[39] K. Popper. 1959. The Logic of Scientific Discovery. Hutchinson, London.
|
| 303 |
+
[40] A. Reutlinger and J. Saatsi. 2018. Explanation Beyond Causation: Philosophical Perspectives on Non-Causal Explanations. Oxford University Press, Oxford.
|
| 304 |
+
[41] M. T. Ribeiro, S. Singh, and C. Guestrin. 2016. Why should I trust you? Explaining the predictions of any classifier. In 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
|
| 305 |
+
[42] C. Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 5 (2019), 206-215.
|
| 306 |
+
[43] W. Salmon. 1971. Statistical Explanation and Statistical Relevance. University of Pittsburgh Press, Pittsburgh, PA.
|
| 307 |
+
[44] W. Salmon. 1989. Four Decades of Scientific Explanation. In Scientific Explanation, Kitcher and Salmon (Eds.). Minnesota Studies in the Philosophy of Science, Vol. 13. University of Minnesota Press, 3-219.
|
| 308 |
+
[45] A. Selbst and J. Powles. 2018. Meaningful Information and the Right to Explanation. In ACM Conference on Fairness, Accountability, and Transparency.
|
| 309 |
+
[46] D. Slack, A. Hilgard, S. Singh, and H. Lakkaraju. 2021. Reliable post hoc explanations: Modeling uncertainty in explainability. In Neural Information Processing Systems (NeurIPS).
|
| 310 |
+
[47] D. Slack, S. Hilgard, E. Jia, S. Singh, and H. Lakkaraju. 2020. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In AAAI/ACM Conference on AI, Ethics, and Society.
|
| 311 |
+
[48] D. Slack, S. Hilgard, H. Lakkaraju, and S. Singh. 2021. Counterfactual Explanations Can Be Manipulated. arXiv:2106.02666 (2021).
|
| 312 |
+
[49] P. Spirtes, C. Glymour, and R. Scheines. 1993. Causation, Prediction, and Search. Springer, Berlin.
|
| 313 |
+
[50] W. Spohn. 1980. Stochastic independence, causal independence, and shieldability. Journal of Philosophical Logic 9 (1980), 73-99.
|
| 314 |
+
[51] M. Sundararajan and A. Najmi. 2020. The many Shapley values for model explanation. In International Conference on Machine Learning (ICML).
|
| 315 |
+
|
| 316 |
+
|
| 317 |
+
[52] R. Tomsett, D. Braines, D. Harborne, A. Preece, and S. Chakraborty. 2018. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems. In ICML Workshop on Human Interpretability in Machine Learning.
|
| 318 |
+
[53] P. Tschandl, C. Rinner, Z. Apalla, G. Argenziano, N. Codella, A. Halpern, M. Janda, A. Lallas, C. Longo, J. Malvehy, J. Paoli, S. Puig, C. Rosendahl, H. Soyer, I. Zalaudek, and H. Kittler. 2020. Human-computer collaboration for skin cancer recognition. Nature Medicine 26, 8 (2020), 1229-1234.
|
| 319 |
+
[54] M. Veale and F. Zuiderveen Borgesius. 2021. Demystifying the Draft EU Artificial Intelligence Act—Analyzing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International 22, 4 (2021), 97–112.
|
| 320 |
+
[55] S. Venkatasubramanian and M. Alfano. 2020. The Philosophical Basis of Algorithmic Recourse. In ACM Conference on Fairness, Accountability, and Transparency.
|
| 321 |
+
[56] G. Vilone and L. Longo. 2021. Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion 76 (2021), 89-106.
|
| 322 |
+
[57] W. J. von Eschenbach. 2021. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philos. Technol. 34 (2021), 1607-1622.
|
| 323 |
+
[58] U. von Luxburg, R. Williamson, and I. Guyon. 2012. Clustering: Science or Art? JMLR Workshop and Conference Proceedings (Workshop on Unsupervised Learning and Transfer Learning) (2012), 65 - 79.
|
| 324 |
+
[59] S. Wachter, B. Mittelstadt, and L. Floridi. 2017. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law 7, 2 (06 2017), 76-99.
|
| 325 |
+
[60] S. Wachter, B. Mittelstadt, and C. Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.
|
| 326 |
+
[61] J. Woodward. 2003. *Making Things Happen: A Theory of Causal Explanation*. Oxford University Press.
|
| 327 |
+
[62] J. Woodward and L. Ross. 2003. Scientific Explanation. The Stanford Encyclopedia of Philosophy (Summer Edition 2021) (2003). https://plato.stanford.edu/archives/sum2021/entries/scientific-explanation/
|
| 328 |
+
[63] C. Zednik and H. Boelsen. forthcoming. Scientific Exploration and Explainable Artificial Intelligence. *Minds and Machines* (forthcoming).
|
| 329 |
+
[64] Y. Zhang, K. Song, Y. Sun, S. Tan, and M. Udell. 2019. Why Should You Trust My Explanation? Understanding Uncertainty in LIME Explanations. arXiv preprint arXiv:1904.12991 (2019).
|
| 330 |
+
|
| 331 |
+
# A POST-HOC EXPLANATIONS FAIL TO ACHIEVE THEIR PURPOSE IN ADVERSARIAL CONTEXTS: SUPPLEMENTARY MATERIALS
|
| 332 |
+
|
| 333 |
+
# A.1 Code
|
| 334 |
+
|
| 335 |
+
The python code to replicate all results in this paper is available at https://github.com/tml-tuebingen/facct-post-hoc.
|
| 336 |
+
|
| 337 |
+
# A.2 Datasets
|
| 338 |
+
|
| 339 |
+
In our experiments, we used the following datasets.
|
| 340 |
+
|
| 341 |
+
Adult-Income. This dataset contains information about individuals based on the 1994 US Census. It is available from the UCI machine learning repository. We obtained it from the SHAP package https://github.com/slundberg/shap. The dataset contains the 12 features age, workclass, education-num, marital status, occupation, relationship, race, sex, capital gain, capital loss, hours per week, country. In the figures, the features are numbered F1-F12 in this order. The machine learning problem is to predict whether an individual's income is over \$50,000. We trained a gradient boosted tree which achieved a test accuracy of $87\%$.
|
| 342 |
+
|
| 343 |
+
German Credit. The German Credit Dataset is a dataset with 20 different features on individuals' credit history and personal characteristics. The machine learning problem is to predict credit risk in binary form. We obtained the dataset from the UCI machine learning repository. We trained a gradient boosted tree which achieved a test accuracy of $76\%$. We also trained logistic regression which achieved a test accuracy of $74\%$.
|
| 344 |
+
|
| 345 |
+
Folktables. Folktables is a Python package that provides access to datasets derived from recent US Censuses https://github.com/zykls/folktables. We used this package to obtain the data from the 2016 Census in California. The machine learning problem is the ACSIncome prediction task, that is, to predict whether an individual's income is above \$50,000, based on 8 personal characteristics. We trained a gradient boosted tree which achieved a test accuracy of $83\%$.
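For reference, a minimal sketch of obtaining the ACSIncome task with the folktables package and training a gradient boosted tree; the survey year string and the exact training setup are assumptions and may differ from the released configuration.

```python
# Hedged sketch: loading ACSIncome for California with folktables and training
# a gradient boosted tree, roughly mirroring the setup described above.
from folktables import ACSDataSource, ACSIncome
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

source = ACSDataSource(survey_year="2016", horizon="1-Year", survey="person")
acs_data = source.get_data(states=["CA"], download=True)
features, label, _ = ACSIncome.df_to_numpy(acs_data)   # label: income > $50,000

X_tr, X_te, y_tr, y_te = train_test_split(features, label, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```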
|
| 346 |
+
|
| 347 |
+
Diabetes. The Diabetes dataset is a dataset of diabetes patient records. It is available from the UCI machine learning repository. We obtained it from the scikit-learn machine learning library https://scikit-learn.org. The dataset contains 10 features about each individual at baseline: age, sex, body mass index, average blood pressure, and six blood serum measurements. The machine learning problem is to predict disease progression one year after baseline. We converted the scalar outcome into a binary outcome by thresholding at the median. We trained linear regression which achieved a test accuracy of $71\%$. We also trained a random forest which achieved a test accuracy of $74\%$.
|
| 348 |
+
|
| 349 |
+
Wisconsin Breast Cancer. The Wisconsin Breast Cancer dataset is a tabular dataset with features of breast mass images. The dataset contains 30 features that describe the characteristics of the cell nuclei present in the image. The dataset is available from the UCI machine learning repository. We obtained it from the scikit-learn machine learning library https://scikit-learn.org. The machine learning problem is to predict the binary diagnosis (malignant/benign). We trained linear regression which achieved a test accuracy of $96\%$ . We also trained linear regression on random labels which achieved a test accuracy of $36\%$ .
|
| 350 |
+
|
| 351 |
+
# A.3 Explanation Algorithms
|
| 352 |
+
|
| 353 |
+
In our experiments, we used the following explanation algorithms.
|
| 354 |
+
|
| 355 |
+
SHAP The SHAP algorithm was proposed by [30]. We use it via the accompanying python package https://github.com/slundberg/shap. With (gradient boosted) trees, we use the exact computation method proposed in [31]. With all other classifiers, we use the Kernel SHAP method. The approach by Janzing et al. [20] is also implemented in this package. Whenever available, we use parametrizations proposed in the documentation of the package.
|
| 356 |
+
|
| 357 |
+
LIME The LIME algorithm was proposed by [41]. We use it via the accompanying python package https://github.com/marcotcr/lime. Whenever available, we use parametrizations proposed in the documentation of the package.
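For reference, a minimal self-contained sketch of calling LIME on a tabular classifier via this package; the dataset, the classifier, and the parameters here are illustrative and not the parametrization used in our experiments.

```python
# Hedged sketch: one LIME attribution for the first instance of a tabular
# classifier, using the lime package's tabular explainer.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
clf = GradientBoostingClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data, mode="classification",
                                 feature_names=list(data.feature_names))
explanation = explainer.explain_instance(data.data[0], clf.predict_proba,
                                         num_features=5)
print(explanation.as_list())        # top features with signed local weights
```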
|
| 358 |
+
|
| 359 |
+
DiCE The DiCE algorithm was proposed by [34]. We use it via the accompanying python package https://github.com/interpretml/DiCE. To generate counterfactual explanations, we used the model-agnostic randomized sampling method.
|
| 360 |
+
|
| 361 |
+
# A.4 Figures
|
| 362 |
+
|
| 363 |
+
To create the figures, we normalized the feature attributions to have $\ell_1$-norm 1.
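In code, this normalization is a one-liner (a sketch; `phi` is a placeholder for any attribution vector):

```python
import numpy as np

def normalize_l1(phi):
    """Rescale an attribution vector so that its l1-norm equals 1."""
    phi = np.asarray(phi, dtype=float)
    return phi / np.abs(phi).sum()
```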
|
| 364 |
+
|
| 365 |
+
# A.5 Additional Figures
|
| 366 |
+
|
| 367 |
+
The following pages contain additional figures. These follow the figures in the main paper and depict the first observations from the test set, so they are not hand-selected in any way. The reader might notice that we selected the figures in the main paper from these. Figures for all observations from the test set are available with the code (see Section A.1).
|
| 368 |
+
|
| 369 |
+
# Additional Figures Related to Figure 1 in the Main Paper
|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+

|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
|
| 381 |
+

|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
|
| 385 |
+

|
| 386 |
+
|
| 387 |
+

|
| 388 |
+
|
| 389 |
+

|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
|
| 401 |
+

|
| 402 |
+
|
| 403 |
+

|
| 404 |
+
|
| 405 |
+

|
| 406 |
+
|
| 407 |
+

|
| 408 |
+
|
| 409 |
+

|
| 410 |
+
|
| 411 |
+

|
| 412 |
+
(a) SHAP
|
| 413 |
+
|
| 414 |
+

|
| 415 |
+
(b) LIME
|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
(c) DiCE
|
| 419 |
+
|
| 420 |
+

|
| 421 |
+
(d) Interventional SHAP
|
| 422 |
+
Figure A.1: Different explanation algorithms lead to different explanations (compare Figure 1 in the main paper). Every row depicts the explanations of the four different explanation algorithms for another individual. The Figure depicts the first 6 observations from the test set.
|
| 423 |
+
|
| 424 |
+
# Additional Figures Related to Figure 2 in the Main Paper
|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
|
| 430 |
+

|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
|
| 434 |
+

|
| 435 |
+
|
| 436 |
+

|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
|
| 440 |
+

|
| 441 |
+
|
| 442 |
+

|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
|
| 446 |
+

|
| 447 |
+
(a) SHAP
|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
(b) LIME
|
| 451 |
+
Figure A.2: For any given datapoint, different explanation algorithms might lead to very similar or completely different explanations. In many cases, however, there are both similarities and dissimilarities (compare Figure 2 in the main paper). Every row depicts the explanations of the two different explanation algorithms for another individual. The Figure depicts the first 6 observations from the test set.
|
| 452 |
+
|
| 453 |
+
# Additional Figures Related to Figure 3 (a), (b) in the Main Paper
|
| 454 |
+
|
| 455 |
+

|
| 456 |
+
|
| 457 |
+

|
| 458 |
+
|
| 459 |
+

|
| 460 |
+
|
| 461 |
+

|
| 462 |
+
|
| 463 |
+

|
| 464 |
+
|
| 465 |
+

|
| 466 |
+
|
| 467 |
+

|
| 468 |
+
|
| 469 |
+

|
| 470 |
+
|
| 471 |
+

|
| 472 |
+
|
| 473 |
+

|
| 474 |
+
|
| 475 |
+

|
| 476 |
+
(a) Diabetes, Linear Regression
|
| 477 |
+
|
| 478 |
+

|
| 479 |
+
(b) Diabetes, Random Forest
|
| 480 |
+
Figure A.3: Explanations depend on the exact shape of the decision boundary (compare Figure 3 in the main paper). Every row depicts the explanations of the two different explanation algorithms for another individual. The Figure depicts the first 6 observations from the test set.
|
| 481 |
+
|
| 482 |
+
# Additional Figures Related to Figure 3 (c), (d) in the Main Paper
|
| 483 |
+
|
| 484 |
+

|
| 485 |
+
|
| 486 |
+

|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
|
| 490 |
+

|
| 491 |
+
|
| 492 |
+

|
| 493 |
+
|
| 494 |
+

|
| 495 |
+
(a) Breast Cancer, $36\%$ Accuracy
|
| 496 |
+
|
| 497 |
+

|
| 498 |
+
|
| 499 |
+

|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
|
| 503 |
+

|
| 504 |
+
|
| 505 |
+

|
| 506 |
+
|
| 507 |
+

|
| 508 |
+
(b) Breast Cancer, $96\%$ Accuracy
|
| 509 |
+
Figure A.4: Explanations depend on the exact shape of the decision boundary (compare Figure 3 in the main paper). Every row depicts the explanations of the two different explanation algorithms for another individual. The Figure depicts the first 6 observations from the test set.
|
2201.10xxx/2201.10295/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e5b10241f734eabdd656a6b40a4df7c0762dbf35fa6aaeb34ba7456f17a11883
|
| 3 |
+
size 580636
|
2201.10xxx/2201.10295/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10326/0a377601-1e77-4eb9-8e3b-b1344e36800e_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1b7410dcda1383dd712f0af2049bc85fcaedd005089072db8bba2d1eff362a03
|
| 3 |
+
size 28382874
|
2201.10xxx/2201.10326/full.md
ADDED
|
@@ -0,0 +1,680 @@
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# ShapeFormer: Transformer-based Shape Completion via Sparse Representation
|
| 2 |
+
|
| 3 |
+
Xingguang Yan $^{1}$ Liqiang Lin $^{1}$ Niloy J. Mitra $^{2,3}$ Dani Lischinski $^{4}$ Daniel Cohen-Or $^{5}$ Hui Huang $^{1*}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Shenzhen University <sup>2</sup>University College London <sup>3</sup>Adobe Research <sup>4</sup>Hebrew University of Jerusalem <sup>5</sup>Tel Aviv University
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
We present ShapeFormer, a transformer-based network that produces a distribution of object completions, conditioned on incomplete, and possibly noisy, point clouds. The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
|
| 10 |
+
|
| 11 |
+
To facilitate the use of transformers for 3D, we introduce a compact 3D representation, vector quantized deep implicit function (VQDIF), that utilizes spatial sparsity to represent a close approximation of a 3D shape by a short sequence of discrete variables. Experiments demonstrate that ShapeFormer outperforms prior art for shape completion from ambiguous partial inputs in terms of both completion quality and diversity. We also show that our approach effectively handles a variety of shape types, incomplete patterns, and real-world scans.
|
| 12 |
+
|
| 13 |
+
# 1. Introduction
|
| 14 |
+
|
| 15 |
+
Shapes are typically acquired with cameras that probe and sample surfaces. The process relies on line of sight and, at best, can obtain partial information from the visible parts of objects. Hence, sampling complex real-world geometry is inevitably imperfect, resulting in varying sampling densities and missing parts. This problem of surface completion has been extensively investigated over multiple decades [5]. The central challenge is to compensate for incomplete data by inspecting non-local hints in the observed data to infer missing parts using various forms of priors.
|
| 16 |
+
|
| 17 |
+
Recently, deep implicit function (DIF) has emerged as an effective representation for learning high-quality surface completion. To learn shape priors, earlier DIFs [13, 44, 51] encode each shape using a single global latent vector. Combining a global code with region-specific local latent codes [14, 15, 23, 28, 38, 53] can faithfully preserve geometric details of the input in the completion. However, when presented with ambiguous partial input, for which multiple
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1. ShapeFormer predicts multiple completions for a real-world scan of a sports car (left column), a chair with missing parts (middle column), and a partial point cloud of human lower legs (right column). The input point clouds are superimposed with the generated shapes to emphasize the faithfulness of the completion to the input point cloud.
|
| 21 |
+
|
| 22 |
+
plausible completions are possible (see Fig. 1), the deterministic nature of local DIF usually fails to produce meaningful completions for unseen regions. A viable alternative is to combine generative models to handle the input uncertainty.
|
| 23 |
+
|
| 24 |
+
However, for representations that contain enormous statistical redundancy, as in the case of current local methods, such a combination [60] excessively allocates model capacity towards perceptually irrelevant details [22, 26].
|
| 25 |
+
|
| 26 |
+
We present ShapeFormer, a transformer-based autoregressive model that learns a distribution over possible shape completions. We use local codes to form a sequence of discrete, vector quantized features, greatly reducing the representation size while keeping the underlying structure. Applying transformer-based generative models to such sequences of discrete variables has been shown to be effective for generative pretraining [3, 11], generation [24, 56], and completion [67] in the image domain.
|
| 27 |
+
|
| 28 |
+
However, directly deploying transformers to 3D feature grids leads to a sequence length cubic in the feature resolution. Since transformers have an innate quadratic complexity in sequence length, using only an overly coarse feature resolution, while feasible, can barely represent meaningful shapes. To mitigate the complexity, we first introduce Vector Quantized Deep Implicit Functions (VQDIF), a novel 3D representation that is both compact and structured and can represent complex 3D shapes with acceptable accuracy at a small size. The core idea is to sparsely encode shapes as sequences of discrete 2-tuples, each representing both the position and content of a non-empty local feature. These sequences can be decoded to deep implicit functions from which high-quality surfaces can subsequently be extracted. Due to the sparse nature of 3D shapes, such an encoding reduces the sequence length from cubic to quadratic in the feature resolution, thus enabling effective combination with generative models.
|
| 29 |
+
|
| 30 |
+
ShapeFormer completes shapes by generating complete sequences, conditioned on the sequence for partial observation. It is trained by sequentially predicting the conditional distribution of both location and content over the next element. Unlike image completion [67], where the model is trained with the BERT [3,21] objective to only predict for unseen regions, in the 3D shape completion setting, the input features may also come from both noisy and incomplete observations, and keeping them intact necessarily yields noisy results. Hence, in order to generate whole complete sequences from scratch while being faithful to the partial observations, we adapt the auto-regressive objective and prepend the partial sequence to the complete one to achieve conditioning. This strategy has been proved effective for conditional synthesis for both text [42] and images [24].
|
| 31 |
+
|
| 32 |
+
We demonstrate the ability of ShapeFormer to produce diverse high-quality completions for ambiguous partial observations of various shape types, including CAD models and human bodies, and of various incomplete sources such as real-world scans with missing parts. In summary, our contributions include: (i) a novel DIF representation based on sequences of discrete variables that compactly
|
| 33 |
+
|
| 34 |
+
represents satisfactory approximations of 3D shapes; (ii) a transformer-based autoregressive model that uses our new representation to predict multiple high-quality completed shapes conditioned on the partial input; and (iii) state-of-the-art results for multi-modal shape completion in terms of completion quality and diversity. The FPD score on PartNet is improved by at most 1.7 compared with prior multi-modal method cGAN [70].
|
| 35 |
+
|
| 36 |
+
# 2. Related Work
|
| 37 |
+
|
| 38 |
+
Shape reconstruction and completion. 3D reconstruction is a longstanding ill-posed problem in computer vision and graphics. Traditional methods can produce faithful reconstructions from complete input such as point clouds [5] or images [27]. Recently, neural network-based methods have demonstrated impressive performance for reconstruction from partial input [31], where the unseen regions are completed with the help of data priors. They can be classified according to their output representation, such as voxels, meshes, point clouds, and deep implicit functions. Since voxels can be processed or generated easily through 3D convolutions thanks to their regularity, they are commonly used in earlier works [18, 20, 32, 59]. However, owing to their cubic complexity in resolution, the predicted shapes are either too coarse or too heavy in size for later applications. While meshes are more data-efficient, due to the difficulty of handling mesh topology, mesh-based methods have to either use a shape template [41, 57, 68], limiting them to a single topology, or produce self-intersecting meshes [30]. Point clouds, in contrast, do not have such a problem and have lately been popular for generation [1, 25] and completion [62, 71, 72, 74]. However, due to their sparse nature, point clouds need to be non-trivially post-processed using classical methods [6, 36, 39, 40] to recover surfaces. Recent works that represent shapes as deep implicit functions have been shown to be effective for high-quality 3D reconstruction [13, 44, 51]. By leveraging local priors, follow-up works [15, 23, 28, 43, 53] further improve the fidelity of geometric details. However, most current methods are not effective for ambiguous input due to their deterministic nature. Other methods handle such input by leveraging generative models. They learn the conditional distribution of complete shapes represented either as a single global code [2, 70], which, due to its lack of spatial structure, leads to completions misaligned with the input, or as a raw point cloud [76], which, due to its statistical redundancy, is only effective for completing simple shapes with a limited number of points. In this paper, we show how building generative models upon our new compact, structured representation enables multi-modal, high-quality reconstruction of complex shapes.
|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
Figure 2. Overview of our shape completion approach. Given a partial point cloud $\mathcal{P}$ , possibly from a depth image, as input, our VQDIF encoder first converts it to a sparse feature sequence $\mathbf{z}_{0\dots K - 1}$ , replacing them with the indices of their nearest neighbor $\mathbf{e}_j$ in a learned dictionary $\mathcal{D}$ , forming a sequence of discrete 2-tuples consisting of the coordinate (pink) and the quantized feature index (blue). We refer to this partial sequence as $\mathcal{S}_{\mathcal{P}}$ (drawn with dashed lines). The ShapeFormer then takes $\mathcal{S}_{\mathcal{P}}$ as input and models the conditional distribution $p(S_{\mathcal{C}}|S_{\mathcal{P}})$ . Autoregressive sampling yields a probable complete sequence $\mathcal{S}_{\mathcal{C}}$ . Finally, the VQDIF decoder converts the sequence $\mathcal{S}_{\mathcal{C}}$ to a deep implicit function, from which the surface reconstruction $\mathcal{M}$ can be extracted. To show the faithfulness of our reconstructions, we super-impose the input point cloud on them. Please see the supplementary material for more architectural details.
|
| 42 |
+
|
| 43 |
+
Autoregressive models and Transformers. Autoregressive models are generative models that aim to model distributions of high-dimensional data by factoring the joint probability distribution into a series of conditional distributions via the chain rule [4]. Using neural networks to parameterize the conditional distributions has been proved effective in general [29, 63], and more specifically for image generation [12, 50, 65]. Transformers [66], known for their ability to model long-range dependencies through self-attention, have shown the power of autoregressive models for natural language [8, 54] and image generation [11, 52]. Contrary to deterministic masked auto-encoders [33], Transformers can produce diverse image completions [67] that are sharp in masked regions by adopting the BERT [21] training objective. In the 3D domain, autoregressive models have been used to learn the distribution of point clouds [60, 69] and meshes [47]. However, these models can only generate small point clouds or meshes restricted to 1024 vertices due to the lack of an efficient representation. In contrast, by eliminating statistical redundancy, a compressed discrete representation enables generative models to focus on data dependencies at a more salient level [56, 64] and has recently enabled high-resolution image synthesis [24, 55]. Follow-up works utilize data sparsity to obtain even more compact representations [22, 48]. We explore this direction in the context of surface completion. Concurrently with our work, AutoSDF [45] trains Transformers to complete and generate shapes on a dense grid, and Point-BERT [73] adopts generative pre-training for several downstream tasks.
|
| 44 |
+
|
| 45 |
+
# 3. Method
|
| 46 |
+
|
| 47 |
+
We model the shape completion problem as mapping a partial point cloud $\mathcal{P} \in \mathbb{R}^{N \times 3}$ to a complete, watertight mesh $\mathcal{M}$ which matches the cloud. Since this is an ill-posed problem, we seek to estimate the probabilistic distribution of such mesh $p(\mathcal{M}|\mathcal{P})$ utilizing the power of Transformers. Instead of working directly on point clouds, meshes, or feature grids, we approximate shapes as short discrete sequences (see Sec. 3.1) to greatly reduce both the number of variables and the variable bit size, which enables Transformers to complete complex 3D shapes (see Sec. 3.2).
|
| 48 |
+
|
| 49 |
+
With such compact representation, the conditional distribution becomes $p(S_{\mathcal{C}}|S_{\mathcal{P}})$ , where $S_{\mathcal{P}}$ and $S_{\mathcal{C}}$ are the sequence encoding of the partial point cloud and the complete shape, respectively. Once such distribution is modeled, we can sample multiple complete sequences $S_{\mathcal{C}}$ , from which different surface reconstructions $\mathcal{M}$ can be obtained through decoding. This process is illustrated in Fig. 2.
|
| 50 |
+
|
| 51 |
+
# 3.1. Compact sequence encoding for 3D shapes
|
| 52 |
+
|
| 53 |
+
We propose VQDIF, whose goal is to approximate 3D shapes with a shape dictionary, with each entry describing a particular type of local shape part inside a cell of volumetric grid $G$ with resolution $R$ . With such a dictionary, shapes can be encoded as short sequences of entry indices, describing the local shapes inside all non-empty grid cells, enabling transformers to efficiently model the global dependencies.
|
| 54 |
+
|
| 55 |
+
We design an auto-encoder architecture to achieve this. The encoder $E$ first maps the input point cloud to a feature grid of resolution 64 with a local-pooled PointNet and then downsamples it to resolution $R$. Unlike the previous strategy for image synthesis [24], the encoder parameters are
|
| 56 |
+
|
| 57 |
+
carefully set to have the smallest receptive field, reducing the number of non-empty features to the number of sparse voxels of the voxelized input point cloud $\mathcal{P}$ at resolution $R$. These non-empty features are then flattened to a sequence of length $K$ in row-major order. Since the features are sparse, we record their locations via their flattened indices $\{c_i\}_{i=0}^{K-1}$. Other orderings are also possible, but for generation they are not as effective as row-major order [24].
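The following is a small illustrative sketch of this flattening step (PyTorch); the dense grid layout and the emptiness test are assumptions, not the paper's actual implementation.

```python
# Sketch: keep only non-empty cells of an R x R x R x C feature grid and record
# their row-major flattened indices c_i together with the features themselves.
import torch

def flatten_sparse(grid: torch.Tensor):
    """grid: (R, R, R, C) feature grid -> (coords, feats) in row-major order."""
    R = grid.shape[0]
    flat = grid.reshape(R * R * R, -1)            # row-major flattening
    nonempty = flat.abs().sum(dim=1) > 0          # cells touched by input points
    coords = nonempty.nonzero(as_tuple=True)[0]   # flattened indices c_i, ascending
    return coords, flat[coords]

# toy usage: a mostly empty 16^3 grid with 64 feature channels
grid = torch.zeros(16, 16, 16, 64)
grid[3, 5, 7] = torch.randn(64)
coords, feats = flatten_sparse(grid)              # coords.shape == (1,)
```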
|
| 58 |
+
|
| 59 |
+
Following the idea of neural discrete representation learning [64], we compress the bit size of the feature sequence $\{\mathbf{z}_i\}_{i = 0}^{K - 1}$ through vector quantization, that is, by clamping each feature to its nearest entry in a dictionary $\mathcal{D}$ of $V$ embeddings $\{\mathbf{e}_j\}_{j = 0}^{V-1}$, and we save the indices of these entries:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
v_{i} = \operatorname{argmin}_{j \in [0, V)} \| \mathbf{z}_{i} - \mathbf{e}_{j} \|. \tag{1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
Thus, we get a compact sequence of discrete 2-tuples representing the 3D shape $S = \{(c_i, v_i)\}_{i=0}^{K-1}$ . Finally, the decoder projects this sequence back to a feature grid and, through a 3D-Unet [19], decodes it to a local deep implicit function $f$ [53], whose iso-surface is the reconstruction $\mathcal{M}$ .
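As a concrete illustration, here is a minimal PyTorch sketch of the quantization in Eq. (1) and of assembling the 2-tuple sequence $S$; tensor shapes, variable names, and the toy sizes are assumptions rather than the paper's implementation.

```python
# Sketch of Eq. (1): map each non-empty feature z_i to the index v_i of its
# nearest dictionary entry, yielding the discrete sequence S = {(c_i, v_i)}.
import torch

def quantize_sparse_features(z, coords, dictionary):
    """
    z:          (K, C) non-empty local features in row-major order
    coords:     (K,)   flattened grid locations c_i
    dictionary: (V, C) learned codebook embeddings e_j
    """
    dists = torch.cdist(z, dictionary)   # (K, V) pairwise Euclidean distances
    v = dists.argmin(dim=1)              # Eq. (1): v_i = argmin_j ||z_i - e_j||
    return coords, v

# toy usage: K = 16 non-empty cells, C = 64 channels, V = 512 dictionary entries
coords, v = quantize_sparse_features(torch.randn(16, 64),
                                     torch.arange(16),
                                     torch.randn(512, 64))
```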
|
| 66 |
+
|
| 67 |
+
Training. We train the VQDIF by simultaneously minimizing the reconstruction loss and updating the dictionary using exponential moving averages [64], where dictionary embeddings are gradually pulled toward the encoded features. We also adopt commitment loss $\mathcal{L}_{\mathrm{commit}}$ [64] to encourage encoded features $\mathbf{z}_i$ to stay close to their nearest entry $\mathbf{e}_{v_i}$ in the dictionary, with index $v_{i}$ , thus keeping the range of the embeddings bounded. We define the loss as,
|
| 68 |
+
|
| 69 |
+
$$
\mathcal{L}_{\text{commit}} = \frac{1}{K} \sum_{i=0}^{K-1} \left( \mathbf{z}_i - \operatorname{sg}\left[ \mathbf{e}_{v_i} \right] \right)^2, \tag{2}
$$
|
| 72 |
+
|
| 73 |
+
where $\operatorname{sg}$ denotes the stop-gradient operator, which prevents the dictionary embeddings from being affected by this loss.
|
| 74 |
+
|
| 75 |
+
The full training objective for VQDIF is the combination of the reconstruction loss and $\mathcal{L}_{\mathrm{commit}}$ with weighting factor $\beta$:
|
| 76 |
+
|
| 77 |
+
$$
\mathcal{L}_{\mathrm{VQDIF}} = \frac{1}{T} \sum_{i=0}^{T-1} \operatorname{BCE}\left( f(\mathbf{x}_i), o_i \right) + \beta \mathcal{L}_{\text{commit}}. \tag{3}
$$
|
| 80 |
+
|
| 81 |
+
Here, $T$ is the size of the target set and BCE is the binary cross-entropy loss, which measures the discrepancy between the predicted occupancy $f(\mathbf{x}_i)$ and the ground truth occupancy $o_i$ at target point $\mathbf{x}_i$. During training, we select the target set $\mathcal{T}_x = \{\mathbf{x}_i\}_{i=0}^{T-1}$ and its occupancy values $\mathcal{T}_o = \{o_i\}_{i=0}^{T-1}$ in a similar fashion to prior work [44].
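
For concreteness, a hedged sketch of the objective in Eqs. (2)-(3): binary cross-entropy on predicted occupancy logits plus the weighted commitment term with a stop-gradient on the dictionary entries. The function and argument names are assumptions, and the EMA dictionary update is omitted.

```python
import torch
import torch.nn.functional as F

def vqdif_loss(pred_occ, gt_occ, z, e_v, beta=0.01):
    """VQDIF training objective (Eqs. 2-3).

    pred_occ: (T,) predicted occupancy logits f(x_i) at the target points.
    gt_occ:   (T,) ground-truth occupancies o_i as floats in {0., 1.}.
    z:        (K, C) encoded features z_i.
    e_v:      (K, C) nearest dictionary entries e_{v_i} for each z_i.
    """
    recon = F.binary_cross_entropy_with_logits(pred_occ, gt_occ)
    commit = (z - e_v.detach()).pow(2).sum(dim=-1).mean()  # detach = sg[.]
    return recon + beta * commit
```
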
|
| 82 |
+
|
| 83 |
+
# 3.2. Sequence generation for shape completion
|
| 84 |
+
|
| 85 |
+
We autoregressively model the distribution $p(S_{\mathcal{C}}|S_{\mathcal{P}})$ , by predicting the distribution of the next element conditioned
|
| 86 |
+
|
| 87 |
+

|
| 88 |
+
Figure 3. The architecture of the ShapeFormer. The partial sequence $\mathcal{S}_{\mathcal{P}}$ (dashed boxes) and the complete one $S_{\mathcal{C}}$ (solid boxes) both appended with an end token are concatenated before sending their locations, $c_{i}$ (pink) and values $v_{i}$ (blue), to a Coordinate Transformer to predict the next location $c_{i + 1}$ . The Value Transformer takes both $c_{i + 1}$ and the former Transformer's output embedding to predict the next value $v_{i + 1}$ .
|
| 89 |
+
|
| 90 |
+
on the previous elements. We also factorize the distribution of each tuple as $p(c_{i}, v_{i}) = p(c_{i})\,p(v_{i}|c_{i})$. The final factored sequence distribution is as follows:
|
| 91 |
+
|
| 92 |
+
$$
\begin{array}{l} p\left(S_{\mathcal{C}} \mid S_{\mathcal{P}}; \theta\right) = \prod_{i=0}^{K-1} p_{c_i} \cdot p_{v_i} \\ p_{c_i} = p\left(c_i \mid \mathbf{c}_{<i}, \mathbf{v}_{<i}, S_{\mathcal{P}}; \theta\right) \\ p_{v_i} = p\left(v_i \mid \mathbf{c}_{\leq i}, \mathbf{v}_{<i}, S_{\mathcal{P}}; \theta\right). \end{array}
$$
|
| 95 |
+
|
| 96 |
+
Here, $\theta$ indicates model parameters and $p_{c_i}$ and $p_{v_i}$ are the distributions of the coordinate and the index value of the $i$ -th element of $\mathcal{S}_{\mathcal{C}}$ , conditioned on previously generated elements and the partial sequence $\mathcal{S}_{\mathcal{P}}$ . Note that $p_{v_i}$ is also conditioned on the current coordinate $c_i$ .
|
| 97 |
+
|
| 98 |
+
Different approaches have been applied to build a transformer model that can predict tuple sequences. Instead of flattening the tuples [60], which in our case would double the sequence length, we stack two decoder-only transformers to predict $p_{c_i}$ and $p_{v_i}$ respectively, in a similar way to prior works [22, 48, 69], as illustrated in Fig. 3. Unlike the image completion case [67], where the partial sequence is strictly a part of the complete sequence so that only the missing regions need to be completed, in our case the noise and incompleteness of local observations lead us to predict complete sequences from scratch so that such data deficiencies can also be fixed. Thanks to the autoregressive structure of the decoder-only transformer, we can achieve conditioning by simply prepending $S_{\mathcal{P}}$ before $S_{\mathcal{C}}$, generating complete sequences that are consistent with the partial one. We also append an additional end token to both sequences to help learning.

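
The conditioning scheme described above can be sketched as follows: the partial sequence is prepended to the complete one, each terminated by an [END] token, and the model is trained to maximize the factored log-likelihood of the complete part. The numeric id chosen for the end token and the helper name are illustrative assumptions.

```python
import torch

END = (4096, 4096)   # assumed id of the [END] token in both vocabularies

def build_input(S_P, S_C):
    """Concatenate partial and complete sequences, each with an end token.

    S_P, S_C: lists of (c_i, v_i) integer tuples.
    Returns coordinate and value index tensors of length len(S_P)+len(S_C)+2.
    """
    seq = S_P + [END] + S_C + [END]
    coords = torch.tensor([c for c, _ in seq])
    values = torch.tensor([v for _, v in seq])
    # During training, the negative log-likelihood is accumulated only over
    # the S_C ++ [END] portion of this concatenated sequence (teacher forcing).
    return coords, values
```
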


Figure 4. Visual comparison with prior shape completion methods on the ShapeNet dataset. Our method can better handle ambiguous scans and produce completions that are more faithful in both observed and unseen regions. More examples are in the supplementary material.

Training and inference. The training objective of ShapeFormer is to maximize the log-likelihood given both $S_{\mathcal{C}}$ and $S_{\mathcal{P}}$: $\mathcal{L}_{\mathrm{ShapeFormer}} = -\log p(S_{\mathcal{C}}|S_{\mathcal{P}};\theta)$. After the model is trained, ShapeFormer performs shape completion by sequentially sampling the next element of the complete sequence until an end token ([END]) is encountered. Given the partial sequence, we alternately sample the next coordinate and value index using top-p sampling [35], where only the few top choices for which the sum of probabilities exceeds a threshold $p_n$ are kept. We also mask out invalid coordinate choices to guarantee monotonicity.
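
A hedged sketch of this sampling step: nucleus (top-p) sampling over the categorical output, with coordinate indices smaller than or equal to the previously generated one masked out so that the coordinate sequence stays strictly increasing. This illustrates the procedure described above and is not the released implementation.

```python
import torch

def top_p_sample(logits: torch.Tensor, p: float = 0.4, min_coord: int = -1) -> int:
    """Sample one index with nucleus (top-p) sampling [35].

    logits:    (V,) unnormalized scores for the next coordinate or value index.
    p:         keep the smallest set of top choices whose probability mass >= p.
    min_coord: when sampling a coordinate, indices <= min_coord are invalid
               (row-major coordinates must be strictly increasing).
    """
    logits = logits.clone()
    if min_coord >= 0:
        logits[: min_coord + 1] = float("-inf")          # enforce monotonicity
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, order = probs.sort(descending=True)
    keep = sorted_probs.cumsum(-1) - sorted_probs < p    # smallest prefix reaching p
    keep[0] = True                                       # always keep the best choice
    sorted_probs = sorted_probs * keep.float()
    sorted_probs = sorted_probs / sorted_probs.sum()
    return order[torch.multinomial(sorted_probs, 1)].item()
```

With p = 0, only the top choice survives, which corresponds to the "best sampling" variant (Ours*) used in the evaluation.
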
|
| 188 |
+
|
| 189 |
+
# 4. Results and Evaluation
|
| 190 |
+
|
| 191 |
+
In this section, we demonstrate that our method outperforms prior art for shape completion from ambiguous scans and part-level incompleteness (Sec. 4.1). We then show that our approach can effectively handle a variety of shape types, out-of-distribution shapes, and real-world scans from the Redwood dataset [16] (Sec. 4.2). Lastly, we show that our VQDIF representation has a significantly smaller size than prior DIFs while achieving similar accuracy (Sec. 4.3).
|
| 192 |
+
|
| 193 |
+
<table><tr><td>SCAN AMBIGUITY</td><td colspan="3">LOW</td><td colspan="3">HIGH</td></tr><tr><td>Method</td><td>CD↓</td><td>F1↑</td><td>FPD↓</td><td>CD↓</td><td>F1↑</td><td>FPD↓</td></tr><tr><td>OccNet [44]</td><td>1.48</td><td>63.2</td><td>0.34</td><td>2.79</td><td>50.4</td><td>3.12</td></tr><tr><td>ConvONet [53]</td><td>0.81</td><td>72.9</td><td>0.23</td><td>3.14</td><td>60.4</td><td>2.85</td></tr><tr><td>IF-Net [15]</td><td>0.79</td><td>73.8</td><td>0.25</td><td>18.4</td><td>51.5</td><td>3.66</td></tr><tr><td>PoinTr [72]</td><td>0.80</td><td>70.1</td><td>0.23</td><td>3.11</td><td>59.3</td><td>3.29</td></tr><tr><td>cGAN [70]</td><td>1.33</td><td>62.1</td><td>1.36</td><td>3.49</td><td>59.3</td><td>2.55</td></tr><tr><td>Ours</td><td>0.74</td><td>70.3</td><td>0.24</td><td>4.72</td><td>60.5</td><td>1.45</td></tr><tr><td>Ours*</td><td>0.73</td><td>71.4</td><td>0.22</td><td>4.69</td><td>60.7</td><td>1.83</td></tr><tr><td>VQDIF-only</td><td>0.79</td><td>73.8</td><td>0.25</td><td>3.07</td><td>60.3</td><td>3.14</td></tr></table>
|
| 194 |
+
|
| 195 |
+
Table 1. Quantitative results on ShapeNet with different scan ambiguity. Ours: top-p=0.4 sampling, Ours*: top-p=0 sampling.
|
| 196 |
+
|
| 197 |
+


Figure 5. Visual comparison for multi-modal shape completion on the Table, Chair, and Lamp categories of PartNet. We can produce diverse completions that better align with the input.

Throughout all these experiments, we use feature resolution $R = 16$ for VQDIF and set its loss balancing factor $\beta = 0.01$. We also set the vocabulary size of the dictionary $\mathcal{D}$ to $V = 4096$. We use 20 and 4 blocks for the Coordinate and Value Transformers, respectively. All of these blocks use 16-head self-attention, and the embedding dimension is 1024. We find that a maximum sequence length of 812 is sufficient for all of our experiments. We set the default probability factor for sampling to $p = 0.4$. Further implementation details, such as the architecture and training statistics, are provided in the supplementary.
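
For reference, these hyperparameters can be collected in one place; the following dataclass is only a summary of the values stated above (the field names are our own), not a configuration object from the released code.

```python
from dataclasses import dataclass

@dataclass
class ShapeFormerConfig:
    # VQDIF
    feature_resolution: int = 16   # R
    beta: float = 0.01             # loss balancing factor
    vocab_size: int = 4096         # dictionary size V
    # Transformers
    coord_blocks: int = 20
    value_blocks: int = 4
    num_heads: int = 16
    embed_dim: int = 1024
    max_seq_len: int = 812
    # Sampling
    top_p: float = 0.4
```
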
|
| 203 |
+
|
| 204 |
+
# 4.1. Shape completion results
|
| 205 |
+
|
| 206 |
+
Data. We consider two datasets: 1) ShapeNet [9] for testing on partial scans, and 2) PartNet [46] for testing on part-level incompleteness; we follow the same setting as in cGAN [70]. For ShapeNet, following prior works [13, 15, 44, 53], we use 13 classes of ShapeNet with the train/val/test split from 3D-R2N2 [18]. The data are processed and sampled similarly to IMNet [13], and we create partial inputs for training via random virtual scanning. For evaluation, we first measure the ambiguity score of a partial point cloud $\mathcal{P}$ with respect to its complete counterpart $\mathcal{C}$ as the mean, over all points $x\in \mathcal{C}$, of the ratio between the distance from $x$ to its nearest neighbor in $\mathcal{P}$ and the distance from $x$ to its farthest neighbor in $\mathcal{C}$. We uniformly sample 70 viewpoints on a sphere for each shape. We then create two setups for the dataset according to ambiguity: the high scan ambiguity setup selects the scans whose ambiguity scores lie in the top half, and the low ambiguity setup takes the remainder. More details about this score are provided in the supplementary material.
|
| 207 |
+
|
| 208 |
+
Metric. For the low ambiguity setting, we use the Chamfer $L_{2}$ Distance (CD) and the F-Score@1% (F1) [61] to measure how accurate the completion is, similar to the previous setup [53]. To evaluate completion quality in the high ambiguity setting, we follow prior work [58] and use a pre-trained PointNet [10] classifier as a feature extractor to compute the Fréchet Point Cloud Distance (FPD) between the set of completion results and the ground truth shapes. Additionally, for the PartNet dataset, we follow cGAN [70] and use the Unidirectional Hausdorff Distance (UHD) to measure faithfulness toward the input, the Total Mutual Difference (TMD) to measure diversity, and the Minimal Matching Distance (MMD) [1].

<table><tr><td>Method</td><td>MMD ↓</td><td>TMD ↑</td><td>UHD ↓</td><td>FPD ↓</td></tr><tr><td>cGAN [70]</td><td>1.98</td><td>3.05</td><td>3.39</td><td>2.95</td></tr><tr><td>SInv. [75]</td><td>2.14</td><td>0.62</td><td>2.32</td><td>3.45</td></tr><tr><td>Ours</td><td>1.32</td><td>3.96</td><td>0.98</td><td>1.22</td></tr></table>

Table 2. Quantitative comparison for multi-modal completion on PartNet between our method and prior works. The metrics are averaged across all three categories (Table, Chair, Lamp) and are scaled by $10^{3}$, $10^{2}$, $10^{2}$, and $10^{1}$, respectively.
|
| 215 |
+
|
| 216 |
+
Baselines. We compare our model with the global DIF method OccNet [44]; two local DIF methods, ConvONet [53] and IF-Net [15]; PoinTr [72], which adopts Transformers without autoregressive learning; and the multimodal completion method cGAN [70]. We also compare against our VQDIF-only model to illustrate the necessity of ShapeFormer. We train these methods for shape completion in our dataset setting with their official implementations.
|
| 217 |
+
|
| 218 |
+
Results on ShapeNet. As shown in Fig. 4, methods incorporating structured local features better preserve the input details than those that operate only on global features (OccNet [44], cGAN [70]). Deterministic methods tend to produce averaged shapes, since they are unable to handle multi-modality. Notice that PoinTr [72] also utilizes Transformers, but it cannot alleviate this problem because it adopts Transformers without generative modeling. This phenomenon is more apparent for the chair example, which has higher ambiguity. Our VQDIF-only model also fails to produce a good completion in this case. Based on VQDIF, our ShapeFormer resolves ambiguity by factoring the estimation into a distribution, with each sampled shape sharp and plausible. In contrast, the multi-modal method cGAN [70] is unable to produce high-quality shapes due to its unstructured representation. Further, we generate one completion per input with top-p sampling for quantitative evaluation. As shown in Tab. 1, our method has a much better FPD for high ambiguity scans. Notice that CD is not reliable when ambiguity is high, since it often treats plausible completions as significant errors. For low ambiguity scans, our method is also competitive with previous state-of-the-art completion methods in terms of accuracy.



Figure 6. Shape completion results on real-world depth scans from the Redwood dataset. ShapeFormer takes partial point clouds converted from depth images and produces multiple possible completions whose variation depends on the uncertainty of the viewpoint.



Figure 7. Shape completion results on out-of-distribution shapes. Given a scan of an unseen type of shape, ShapeFormer can produce multiple reasonable completions by generalizing the knowledge learned from the training set.



Figure 8. Given partial human body parts (left column), our method generates complete human bodies with different poses (along the rows); the variety depends on the ambiguity.
|
| 289 |
+
|
| 290 |
+
Results on PartNet. We compare our model with cGAN and ShapeInversion [75] on PartNet; the latter method achieves multiple completions through GAN inversion. The quantitative and qualitative comparisons are shown in Tab. 2 and Fig. 5, respectively. Thanks to our structured representation, we achieve much better faithfulness (UHD) and can generate more varied (TMD) high-quality shapes (MMD and FPD) than these GAN-based methods.
|
| 295 |
+
|
| 296 |
+
# 4.2. More results
|
| 297 |
+
|
| 298 |
+
Results on real scans. We further investigate how our model, pre-trained on ShapeNet, can be applied to scans of real objects. We test it on partial point clouds converted from RGBD scans of the Redwood 3D Scans dataset [16]. Figure 6 shows the results for a sofa and a table, each of which has two scans from different views. Notice that our model captures the uncertainty of a scan, producing a distribution of completions that are faithful to the scan and plausible in unobserved regions. We also show results for a sports car in Fig. 2.
|
| 299 |
+
|
| 300 |
+
Results on out-of-distribution objects. We further evaluate ShapeFormer's generalization by testing the model trained in Sec. 4.1 on scans of unseen types of shapes. We pick the novel shapes from the "Famous" dataset collected by Erler et al. [23], which includes many famous geometries for testing, such as the "Utah teapot," and apply virtual scanning to obtain the partial point clouds. Fig. 7 demonstrates that our ShapeFormer can grasp general concepts such as symmetry or hollow versus filled. Even though the model is trained only on the 13 ShapeNet categories, without ever seeing any cups or teapots, it can still successfully produce multiple reasonable completions from the partial scan. Moreover, in the second row, we see that the completions of a one-sided scan of a cup contain two distinct variants: the cup might be solid or hollow. These examples show ShapeFormer's potential for general-purpose shape completion: once trained, it can be applied to all types of shapes.
|
| 301 |
+
|
| 302 |
+
Results on human shapes. In addition to CAD models, we qualitatively evaluate our completion results on scans of human shapes (D-FAUST dataset [7]), using the same setting as Niemeyer et al. [49]. Human shapes are very challenging due to their thin structures and the wide variety of poses. To simulate part-level incompleteness, we randomly select a point from the complete cloud and only keep the neighboring points within a ball of fixed radius as the partial input. Fig. 8 shows examples of our results. We can see that our completions keep the pose of the observed body parts and generate various possible poses for the unobserved body parts.

<table><tr><td>Metric</td><td>Occ.</td><td>CONet.</td><td>IF.</td><td>Ours8</td><td>Ours16</td><td>Ours32</td></tr><tr><td>CD</td><td>3.56</td><td>0.98</td><td>0.43</td><td>1.90</td><td>0.98</td><td>0.55</td></tr><tr><td>F1</td><td>68.2</td><td>89.0</td><td>97.8</td><td>77.5</td><td>88.1</td><td>96.4</td></tr><tr><td>len.</td><td>1</td><td>323</td><td>1283</td><td>57</td><td>217</td><td>889</td></tr></table>

Table 3. Auto-encoding results for objects in ShapeNet. "len." stands for the sequence length of the flattened representation.



Figure 9. The relation between representation size and reconstruction accuracy. With higher feature resolution, our VQDIF achieves satisfactory accuracy while keeping a rather small byte size.
|
| 312 |
+
|
| 313 |
+
# 4.3. Surface reconstruction with VQDIF
|
| 314 |
+
|
| 315 |
+


Figure 10. Results for auto-encoding complete shapes. Our VQDIF at different feature resolutions achieves better or similar results compared to the prior DIF methods.

Our final experiment evaluates the representation size and reconstruction accuracy of VQDIF. We compare VQDIF at different feature resolutions (Ours$_8$, Ours$_{16}$, Ours$_{32}$) with OccNet, ConvONet, and IF-Net, which are retrained to auto-encode the complete shapes with their released implementations. As shown in Fig. 9, Ours$_{32}$ achieves accuracy similar to the local implicit approach IF-Net while being significantly smaller in size, thanks to the sparse and discrete VQDIF features. The minimal receptive field of our encoder keeps the features as local as possible, which greatly reduces the number of features. The multi-dimensional feature vectors are then quantized and can be referred to with a single integer index, which further reduces the size. The accuracy loss is only noticeable at lower feature resolutions, as seen in the w/o quant. comparison, where we train VQDIF without vector quantization. Together, these properties allow Transformers to effectively model the distribution of shapes. We adopt $\mathrm{Ours}_{16}$ for ShapeFormer since it has an average length of only 217 (see Tab. 3) and its accuracy is already comparable to ConvONet (see Fig. 10).
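
To make the size argument concrete: with feature resolution $R = 16$, each coordinate index lies in $[0, R^3) = [0, 4096)$ and each value index in $[0, V) = [0, 4096)$, so one sequence element nominally needs $2 \times 12$ bits. The back-of-the-envelope sketch below computes this nominal, uncompressed size; the exact byte sizes reported in Fig. 9 may be measured differently.

```python
import math

def nominal_size_bytes(seq_len: int, R: int = 16, V: int = 4096) -> float:
    """Nominal storage for a VQDIF sequence of (coordinate, value) index pairs."""
    bits_per_coord = math.log2(R ** 3)   # 12 bits for R = 16
    bits_per_value = math.log2(V)        # 12 bits for V = 4096
    return seq_len * (bits_per_coord + bits_per_value) / 8

print(nominal_size_bytes(217))           # ~651 bytes for the average length in Tab. 3
```
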
|
| 321 |
+
|
| 322 |
+
# 5. Conclusions
|
| 323 |
+
|
| 324 |
+
We have presented ShapeFormer, a transformer-based architecture that learns a conditional distribution of completions, from which multiple plausible completed shapes can be sampled. By explicitly modeling the underlying distribution, our method produces sharp outputs instead of regressing to the mean and producing a blurry result. To facilitate generative learning for 3D shapes, we propose a new 3D representation, VQDIF, that significantly compresses shapes into short sequences of sparse, discrete local features, which in turn enables producing better results, in terms of both quality and diversity, than previous methods.
|
| 325 |
+
|
| 326 |
+
The major factor limiting the application of our method in fields like robotics is the sampling speed, which is currently about 20 seconds per generated complete shape. In the future, we would like to explore more efficient attention mechanisms that would allow Transformers to learn VQDIF with smaller cell sizes, producing even higher quality completions. Moreover, the current method is generic, leveraging advances in language models; more research is required to include geometric or physical reasoning in the process to better deal with ambiguities.
|
| 327 |
+
|
| 328 |
+
Acknowledgements. We thank the reviewers for their comments. We thank Ziyu Wan, Xuelin Chen and Jiahui Lyu for discussions. This work was supported in part by NSFC (62161146005, U21B2023, U2001206), GD Talent Program (2019JC05X328), GD Science and Technology Program (2020A0505100064), DEGP Key Project (2018KZDXM058, 2020SFKC059), Shenzhen Science and Technology Program (RCJC20200714114435012, JCYJ20210324120213036), Royal Society (NAF-R1-180099), ISF (3441/21, 2492/20) and Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ).
|
| 329 |
+
|
| 330 |
+
# References
|
| 331 |
+
|
| 332 |
+
[1] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas J. Guibas. Learning representations and generative models for 3D point clouds. In International conference on machine learning, 2017. 2, 6
|
| 333 |
+
[2] Himanshu Arora, Saurabh Mishra, Shichong Peng, Ke Li, and Ali Mahdavi-Amiri. Multimodal shape completion via imle. arXiv preprint arXiv:2106.16237, 2021. 2
|
| 334 |
+
[3] Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers, 2021. 2
|
| 335 |
+
[4] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. Advances in Neural Information Processing Systems, 12:400-406, 2000. 3
|
| 336 |
+
[5] Matthew Berger, Andrea Tagliasacchi, Lee M Seversky, Pierre Alliez, Gael Guennebaud, Joshua A Levine, Andrei Sharf, and Claudio T Silva. A survey of surface reconstruction from point clouds. In Computer Graphics Forum, volume 36, pages 301-329, 2017. 1, 2
|
| 337 |
+
[6] Fausto Bernardini, Joshua Mittleman, Holly Rushmeier, Claudio Silva, and Gabriel Taubin. The ball-pivoting algorithm for surface reconstruction. IEEE transactions on visualization and computer graphics, 5(4):349-359, 1999. 2
|
| 338 |
+
[7] Federica Bogo, Javier Romero, Gerard Pons-Moll, and Michael J. Black. Dynamic FAUST: Registering human bodies in motion. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), July 2017. 8
|
| 339 |
+
[8] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 3
|
| 340 |
+
[9] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository, 2015. 6
|
| 341 |
+
[10] R. Qi Charles, Hao Su, Mo Kaichun, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. 6, 4
|
| 342 |
+
[11] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In International Conference on Machine Learning, pages 1691-1703. PMLR, 2020. 2, 3
|
| 343 |
+
[12] Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. Pixelsnail: An improved autoregressive generative model. In International Conference on Machine Learning, pages 864-872. PMLR, 2018. 3
|
| 344 |
+
[13] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 5939-5948, 2019, 1, 2, 6
|
| 345 |
+
[14] Zhang Chen, Yinda Zhang, Kyle Genova, Sean Fanello, Sofien Bouaziz, Christian Hane, Ruofei Du, Cem Keskin, Thomas Funkhouser, and Danhang Tang. Multiresolution deep implicit functions for 3d shape representation. In Proc. Int. Conf. on Computer Vision, pages 13087-13096, 2021. 1
|
| 348 |
+
[15] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition. IEEE, jun 2020. 1, 2, 5, 6, 8
|
| 349 |
+
[16] Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, and Vladlen Koltun. A large dataset of object scans, 2016. 5, 7
|
| 350 |
+
[17] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. Rethinking attention with performers, 2021. 8
|
| 351 |
+
[18] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 2, 6
|
| 352 |
+
[19] Özgün Çiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pages 424-432. Springer, 2016. 4
|
| 353 |
+
[20] Angela Dai, Charles Ruizhongtai Qi, and Matthias Nießner. Shape completion using 3D-Encoder-Predictor CNNs and shape synthesis. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 2
|
| 354 |
+
[21] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 2, 3
|
| 355 |
+
[22] Sander Dieleman, Charlie Nash, Jesse Engel, and Karen Simonyan. Variable-rate discrete representation learning. arXiv preprint arXiv:2103.06089, 2021. 2, 3, 4
|
| 356 |
+
[23] Philipp Erler, Paul Guerrero, Stefan Ohrhallinger, N. Mitra, and M. Wimmer. Points2surf learning implicit surfaces from point clouds. In Proc. Euro. Conf. on Computer Vision, 2020. 1, 2, 7, 4
|
| 357 |
+
[24] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis, 2020. 2, 3, 4
|
| 358 |
+
[25] Haoqiang Fan, Hao Su, and Leonidas Guibas. A point set generation network for 3D object reconstruction from a single image. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 2
|
| 359 |
+
[26] J. Fauw, S. Dieleman, and K. Simonyan. Hierarchical autoregressive image models with auxiliary decoders. ArXiv, abs/1903.04933, 2019. 2
|
| 360 |
+
[27] Yasutaka Furukawa and Carlos Hernández. Multi-view stereo: A tutorial, volume 9. 2013. 2
|
| 361 |
+
[28] Kyle Genova, F. Cole, Avneesh Sud, Aaron Sarna, and T. Funkhouser. Local deep implicit functions for 3d shape. Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 4856-4865, 2020. 1, 2
|
| 362 |
+
[29] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In International conference on machine learning, pages 881-889. PMLR, 2015. 3
|
| 365 |
+
[30] Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan Russell, and Mathieu Aubry. AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2018. 2
|
| 366 |
+
[31] Xian-Feng Han, Hamid Laga, and Mohammed Bennamoun. Image-based 3d object reconstruction: State-of-the-art and trends in the deep learning era. IEEE transactions on pattern analysis and machine intelligence, 43(5):1578-1604, 2019. 2
|
| 367 |
+
[32] Christian Hane, Shubham Tulsiani, and Jitendra Malik. Hierarchical surface prediction for 3d object reconstruction. 2017 International Conference on 3D Vision (3DV), pages 412-420, 2017. 2
|
| 368 |
+
[33] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners, 2021. 3
|
| 369 |
+
[34] Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers, 2019. 8
|
| 370 |
+
[35] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In Proc. Int. Conf. on Learning Representations, 2019. 5
|
| 371 |
+
[36] Zhiyang Huang, Nathan Carr, and Tao Ju. Variational implicit point set surfaces. ACM Transactions on Graphics (TOG), 38(4):1-13, 2019. 2
|
| 372 |
+
[37] Vivek Jayaram and John Thickstun. Parallel and flexible sampling from autoregressive models via Langevin dynamics, 2021. 8
|
| 373 |
+
[38] Chiyu Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, and Thomas Funkhouser. Local implicit grid representations for 3d scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1
|
| 374 |
+
[39] Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proc. Eurographics Symp. on Geometry Processing, volume 7, 2006. 2
|
| 375 |
+
[40] Michael Kazhdan and Hugues Hoppe. Screened Poisson surface reconstruction. ACM Trans. on Graphics, 32:29:1-29:13, 2013. 2
|
| 376 |
+
[41] Or Litany, Alex Bronstein, Michael Bronstein, and Ameesh Makadia. Deformable shape completion with graph convolutional autoencoders, 2018. 2
|
| 377 |
+
[42] Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences, 2018. 2
|
| 378 |
+
[43] Shi-Lin Liu, Hao-Xiang Guo, Hao Pan, Peng-Shuai Wang, Xin Tong, and Yang Liu. Deep implicit moving least-squares functions for 3d reconstruction. arXiv preprint arXiv:2103.12266, 2021. 2
|
| 379 |
+
[44] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 4460-4470, 2019. 1, 2, 4, 5, 6
|
| 380 |
+
|
| 381 |
+
[45] Paritosh Mittal, Y. Cheng, Maneesh Singh, and Shubham Tulsiani. Autosdf: Shape priors for 3d completion, reconstruction and generation. 2022. 3
|
| 382 |
+
[46] Kaichun Mo, Shilin Zhu, Angel X. Chang, L. Yi, Subarna Tripathi, L. Guibas, and H. Su. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 909–918, 2019. 6
|
| 383 |
+
[47] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. Polygen: An autoregressive generative model of 3d meshes. In International conference on machine learning, pages 7220-7229. PMLR, 2020. 3
|
| 384 |
+
[48] Charlie Nash, Jacob Menick, Sander Dieleman, and Peter W Battaglia. Generating images with sparse representations. arXiv preprint arXiv:2103.03841, 2021. 3, 4
|
| 385 |
+
[49] M. Niemeyer, Lars M. Mescheder, Michael Oechsle, and Andreas Geiger. Occupancy flow: 4d reconstruction by learning particle dynamics. ICCV, pages 5378-5388, 2019. 8
|
| 386 |
+
[50] Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelCNN decoders. In Proc. Conf. on Neural Information Processing Systems, pages 4797-4805, 2016. 3
|
| 387 |
+
[51] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 165-174, 2019. 1, 2
|
| 388 |
+
[52] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International Conference on Machine Learning, pages 4055-4064. PMLR, 2018. 3
|
| 389 |
+
[53] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Proc. Euro. Conf. on Computer Vision, 2020. 1, 2, 4, 5, 6, 8
|
| 390 |
+
[54] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. 3, 4
|
| 391 |
+
[55] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. arXiv preprint arXiv:2102.12092, 2021. 3
|
| 392 |
+
[56] Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. arXiv preprint arXiv:1906.00446, 2019. 2, 3
|
| 393 |
+
[57] Jason Rock, Tanmay Gupta, Justin Thorsen, JunYoung Gwak, Daeyun Shin, and Derek Hoiem. Completing 3d object shape from one depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2484-2493, 2015. 2
|
| 394 |
+
[58] Dong Wook Shu, Sung Woo Park, and Junseok Kwon. 3d point cloud generative adversarial network based on tree structured graph convolutions, 2019. 6
|
| 395 |
+
[59] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2
|
| 398 |
+
[60] Yongbin Sun, Yue Wang, Ziwei Liu, Joshua Siegel, and Sanjay Sarma. Pointgrow: Autoregressively learned point cloud generation with self-attention. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 61–70, 2020. 2, 3, 4
|
| 399 |
+
[61] Maxim Tatarchenko, Stephan R Richter, René Ranftl, Zhuwen Li, Vladlen Koltun, and Thomas Brox. What do single-view 3D reconstruction networks learn? In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 3405-3414, 2019. 6, 2
|
| 400 |
+
[62] Lyne P. Tchapmi, Vineet Kosaraju, Hamid Rezatofighi, Ian Reid, and Silvio Savarese. Topnet: Structural point cloud decoder. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2019. 2
|
| 401 |
+
[63] Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184-7220, 2016. 3
|
| 402 |
+
[64] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proc. Conf. on Neural Information Processing Systems, 2017. 3, 4
|
| 403 |
+
[65] Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning, pages 1747-1756. PMLR, 2016. 3
|
| 404 |
+
[66] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 3
|
| 405 |
+
[67] Ziyu Wan, Jingbo Zhang, Dongdong Chen, and Jing Liao. High-fidelity pluralistic image completion with transformers. arXiv preprint arXiv:2103.14031, 2021. 2, 3, 4, 8
|
| 406 |
+
[68] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images, 2018. 2
|
| 407 |
+
[69] Xinpeng Wang, Chandan Yeshwanth, and Matthias Nießner. Sceneformer: Indoor scene generation with transformers. arXiv preprint arXiv:2012.09793, 2020. 3, 4
|
| 408 |
+
[70] Rundi Wu, Xuelin Chen, Yixin Zhuang, and Baoquan Chen. Multimodal shape completion via conditional generative adversarial networks. In Proc. Euro. Conf. on Computer Vision, August 2020. 2, 5, 6, 7
|
| 409 |
+
[71] Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, and Zhizhong Han. Snowflakenet: Point cloud completion by snowflake point deconvolution with skip-transformer. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5479-5489, 2021. 2
|
| 410 |
+
[72] Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, and Jie Zhou. Pointr: Diverse point cloud completion with geometry-aware transformers. In ICCV, 2021. 2, 5, 6, 4, 8
|
| 411 |
+
[73] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. ArXiv, abs/2111.14819, 2021. 3
|
| 412 |
+
|
| 413 |
+
[74] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 International Conference on 3D Vision (3DV), pages 728-737, 2018. 2
|
| 414 |
+
[75] Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Bo Dai, and Chen Change Loy. Unsupervised 3d shape completion through gan inversion. In CVPR, 2021. 6, 7
|
| 415 |
+
[76] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proc. Int. Conf. on Computer Vision, pages 5826–5835, October 2021. 2
|
| 416 |
+
|
| 417 |
+
# ShapeFormer: Transformer-based Shape Completion via Sparse Representation Supplementary Material
|
| 418 |
+
|
| 419 |
+
Xingguang Yan $^{1}$ Liqiang Lin $^{1}$ Niloy J. Mitra $^{2,3}$ Dani Lischinski $^{4}$ Daniel Cohen-Or $^{1,5}$ Hui Huang $^{1*}$
|
| 420 |
+
|
| 421 |
+
<sup>1</sup> Shenzhen University <sup>2</sup> University College London <sup>3</sup> Adobe Research <sup>4</sup> Hebrew University of Jerusalem <sup>5</sup> Tel Aviv University
|
| 422 |
+
|
| 423 |
+
# Abstract
|
| 424 |
+
|
| 425 |
+
In this supplementary document, we first give a detailed description of the ambiguity measure, the model architectures, and the training/testing statistics in Appendix A. We then show more visual comparisons between our method and previous methods for scans of both high and low ambiguity in Appendix B. Lastly, we give more analysis of our method in Appendix C, including a discussion of limitations. The code of our model is also included in the supplementary material.
|
| 426 |
+
|
| 427 |
+
# A. Implementation Details
|
| 428 |
+
|
| 429 |
+
# A.1. Ambiguity measure for partial point cloud
|
| 430 |
+
|
| 431 |
+
The ambiguity of a partial point cloud measures the variety of its potential complete shapes. However, directly measuring ambiguity is difficult, if not impossible. In contrast, the incompleteness of a partial cloud with respect to its complete shape is relatively easy to compute. Although incompleteness cannot fully reflect ambiguity (e.g., a top scan of a table that is as incomplete as a bottom scan could have a much greater ambiguity), the two are still strongly correlated.



Figure 11. Viewing direction greatly influences scan ambiguity. Our proposed scores for 70 scans of a teapot are shown in sorted order, with examples marked by their position on the curve. Each example shows the scan (in a gold inset) and the complete shape, color-coded by the score of each of its points.
|
| 437 |
+
|
| 438 |
+
Hence, we seek a metric of the incompleteness of such a point cloud to indicate its ambiguity.

 (inset: coverage of the partial cloud over the complete shape)

Intuitively, we could use metrics like the F-score [61] to measure the ratio of the approximate partial surface area to the complete area. But as indicated in the inset figure, such measures fail to differentiate the coverage of the partial cloud (red dots) over the complete one (in blue). Instead, we propose a metric based on the Chamfer-$L_{2}$ distance, which grows larger as the partial point cloud misses more of the global structure. Since the partial-to-complete distance is always negligible, we only calculate the complete-to-partial distance. To compare the ambiguity of scans of different shapes, we normalize the distance of each point by its farthest distance within the complete shape. More specifically, we define the metric $\mathrm{Amb}$, evaluating the ambiguity of a scan $\mathcal{C}$ given the complete point cloud $\mathcal{B}$, as:
|
| 443 |
+
|
| 444 |
+
$$
\mathrm{Amb}(\mathcal{B}, \mathcal{C}) = \frac{1}{|\mathcal{B}|} \sum_{x \in \mathcal{B}} \frac{\min_{y \in \mathcal{C}} \| x - y \|}{\max_{x' \in \mathcal{B}} \| x - x' \|}, \tag{4}
$$
|
| 447 |
+
|
| 448 |
+
where $|\mathcal{B}|$ denotes the number of points in the complete cloud $\mathcal{B}$.
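
A sketch of Eq. (4) on point arrays: for each point of the complete cloud, the distance to its nearest neighbor in the partial scan is normalized by its farthest distance within the complete cloud. The brute-force pairwise distances and the function name are illustrative assumptions.

```python
import torch

def ambiguity(complete: torch.Tensor, partial: torch.Tensor) -> float:
    """Ambiguity score Amb(B, C) of a partial scan C w.r.t. the complete cloud B (Eq. 4).

    complete: (|B|, 3) complete point cloud.  partial: (|C|, 3) partial scan.
    """
    d_to_partial = torch.cdist(complete, partial).min(dim=1).values   # min_y ||x - y||
    d_farthest = torch.cdist(complete, complete).max(dim=1).values    # max_x' ||x - x'||
    return (d_to_partial / d_farthest).mean().item()
```
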
|
| 449 |
+
|
| 450 |
+
We sample 70 views for each shape: 64 are evenly sampled from the view sphere (via Fibonacci sampling), and the remaining six are the orthogonal views. We then sort these views according to the score. In Fig. 11, we use a teapot as an example to show the score distribution of these 70 scans. For scans with low ambiguity scores, the underlying shape's global structure is either captured or clearly indicated by the captured salient features of the shape. For example, the scan covering the teapot's mouth, handle, and body can be completed easily. However, it is more difficult to infer the complete shape when the score is high, since the scan may correspond to different global structures and a single explanation is not satisfactory. As shown in the main paper, our method can handle such scans better than existing shape completion methods.
|
| 451 |
+
|
| 452 |
+
# A.2. Architectures
|
| 453 |
+
|
| 454 |
+
We show the detailed architectures of VQDIF and ShapeFormer in Figs. 12 and 13, respectively, and the parameters of their sub-modules are listed in Tab. 4.
|
| 455 |
+
|
| 456 |
+


Figure 12. The architecture of VQDIF. The complete point cloud $\mathcal{P}$ is encoded into a feature grid and down-sampled into a lower-resolution one. Its non-empty features are then flattened and quantized to form the VQDIF sequence, which is projected back to a feature grid, up-sampled, and sent to an implicit decoder, from which the occupancy values $\mathcal{T}_o$ of the probes $\mathcal{T}_{\mathbf{x}}$ and the reconstruction $\mathcal{M}$ can be obtained.



Figure 13. An extended view of ShapeFormer. Different from the figure in the main paper, we show the inside of each Transformer module. The input embeddings are obtained by additively mixing the location and value embeddings, and the output head converts the output embedding into categorical distributions.
|
| 464 |
+
|
| 465 |
+
<table><tr><td>Layer Name</td><td>Notes</td><td>Input Size</td></tr><tr><td colspan="3">VQDIF</td></tr><tr><td>Local Pooled Pointnet</td><td></td><td>N × 3</td></tr><tr><td colspan="3">Downsampler</td></tr><tr><td>ConvLayer</td><td>k2s2p0</td><td>64 × 64 × 64 × 32</td></tr><tr><td>ConvLayer</td><td>k1s1p0</td><td>32 × 32 × 32 × 64</td></tr><tr><td>ConvLayer</td><td>k2s2p0</td><td>64 × 64 × 64 × 32</td></tr><tr><td>ConvLayer</td><td>k1s1p0</td><td>32 × 32 × 32 × 64</td></tr><tr><td>Quantizer</td><td></td><td>16 × 16 × 16 × 128</td></tr><tr><td>UNet3D</td><td></td><td>16 × 16 × 16 × 128</td></tr><tr><td>Upsampler</td><td></td><td>16 × 16 × 16 × 128</td></tr><tr><td>Scaling</td><td>nearest mode</td><td>16 × 16 × 16 × 128</td></tr><tr><td>ConvLayer</td><td>k3s1p1</td><td>32 × 32 × 32 × 128</td></tr><tr><td>ConvLayer</td><td>k3s1p1</td><td>32 × 32 × 32 × 64</td></tr><tr><td>Scaling</td><td>nearest mode</td><td>32 × 32 × 32 × 64</td></tr><tr><td>ConvLayer</td><td>k3s1p1</td><td>64 × 64 × 64 × 64</td></tr><tr><td>ConvLayer</td><td>k3s1p1</td><td>64 × 64 × 64 × 32</td></tr><tr><td>Upsampler Output</td><td></td><td>64 × 64 × 64 × 32</td></tr><tr><td>Implicit Decoder</td><td></td><td>1283 × 3</td></tr><tr><td>Implicit Decoder Output</td><td></td><td>1283 × 1</td></tr><tr><td colspan="3">ShapeFormer</td></tr><tr><td>Embedding Blocks</td><td>#4M</td><td>K × 2</td></tr><tr><td>Coordinate Transformer Blocks ×20</td><td>#251M</td><td>K × 1024</td></tr><tr><td>Coordinate Output Heads</td><td>#4M</td><td>K × 4097</td></tr><tr><td>Embedding Blocks</td><td>#4M</td><td>K × 2</td></tr><tr><td>Value Transformer Blocks ×4</td><td>#50M</td><td>K × 1024</td></tr><tr><td>Value Output Heads</td><td>#4M</td><td>K × 4097</td></tr><tr><td>Total params</td><td>#340M</td><td></td></tr><tr><td>Trainable params</td><td>#323M</td><td></td></tr></table>
|
| 466 |
+
|
| 467 |
+
Table 4. The detailed architecture information of our method. $N$ is the point size. For both VQDIF and ShapeFormer, we list the input size of their components. For convolutional neural networks, the "k", "s", "p" stands for kernel size, stride, and padding, respectively. Also "ConvLayer" denotes the composition of CNN + ReLU + GroupNorm. We also list the number of parameters for each component and indicate them with #. The sequence length is denoted by K, with a maximum of 812.
|
| 468 |
+
|
| 469 |
+
VQDIF. As shown in Figure 12, VQDIF is an encoder-decoder architecture, where the encoder maps an input point cloud to a discrete sequence representation $S$, while the decoder maps such a sequence to a deep implicit function $f(\mathbf{x})$. Unlike the completion pipeline in the main paper, both the encoder and the decoder only take complete input during training. The input to the encoder is a point cloud $\mathcal{P} \in \mathbb{R}^{N \times 3}$ representing a dense sampling of a shape or its partial observation. During the training phase, we use complete dense clouds with $N = 32768$ points to train VQDIF to capture local geometric details in the input. At test time, we use the trained encoder to directly encode partial point clouds, which may be sparse or dense.
|
| 470 |
+
|
| 471 |
+
The encoder first processes the input cloud with a local pooled PointNet [10] to obtain a feature grid. Similar to prior work [53], the local pooled PointNet aggregates features within a grid cell in contrast to the original PointNet, where all point features are pooled together to obtain a global feature. Specifically, we use a grid of resolution 64 with a feature size of 32.
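
The local pooling step can be sketched as a scatter of per-point features into their voxel cells. The sketch below uses mean pooling for simplicity; the real model interleaves this with point-wise MLPs, which are omitted here, and the function name is our own.

```python
import torch

def local_pool(points: torch.Tensor, point_feats: torch.Tensor, R: int = 64):
    """Pool per-point features into an R^3 grid (points assumed normalized to [0, 1))."""
    C = point_feats.shape[1]
    idx = (points.clamp(0, 1 - 1e-6) * R).long()              # (N, 3) cell coordinates
    flat = idx[:, 0] * R * R + idx[:, 1] * R + idx[:, 2]      # row-major cell index
    grid = torch.zeros(R * R * R, C).index_add_(0, flat, point_feats)
    count = torch.zeros(R * R * R).index_add_(0, flat, torch.ones(len(points)))
    grid = grid / count.clamp(min=1).unsqueeze(1)              # mean pooling per cell
    return grid.view(R, R, R, C)
```
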
|
| 472 |
+
|
| 473 |
+
Next, to reduce the number of local features, the high-resolution feature grid is down-sampled to the lower resolution $R$ using several consecutive strided convolution blocks. As shown in Tab. 4, the parameters of these blocks are carefully set to have the smallest possible receptive field, since a large receptive field lets each grid feature cover a larger region, reducing the sparsity of the representation. Thanks to this minimal receptive field, we can then extract the non-empty features by directly masking the encoded feature grid with the voxelized input point cloud (at resolution $R$). After flattening and quantizing the features (see the main paper), we obtain the 2-tuple sequence representation, which is sent directly to the decoder. Note that we also save the "empty" feature in order to project the sequence back to the feature grid in the decoder.
|
| 474 |
+
|
| 475 |
+
The decoder consists of a 3D U-Net [19], an up-sampler, and an implicit decoder. It first projects the quantized sparse sequence back to a 3D feature grid, which serves as the input for the 3D U-Net. In contrast to the encoder, the decoder is designed to have a large receptive field. This is because, in order for the implicit decoder to infer whether a probe lies inside or outside of the shape, we need global knowledge. This is in alignment with prior works [23,53]. More specifically, we use a 3-step U-Net to increase the receptive field, which integrates both local and global information. The upsampler has the same number of scaling stages as the down-sampler, but it has a larger receptive field by design. Lastly, similarly to prior work [53], the implicit decoder consists of multiple ResNet blocks. It takes querying probe points $\mathcal{T}_{\mathbf{x}}$ and predicts their occupancy probability $\mathcal{T}_o$ .
|
| 476 |
+
|
| 477 |
+
ShapeFormer. In Fig. 13, we show the detailed architecture of ShapeFormer. The input to ShapeFormer consists of the concatenated sequence of $S_{\mathcal{P}}$ and $S_{\mathcal{C}}$. Since these sequences both have variable lengths, we append an end token ([END]) to each sequence to indicate where it terminates. Next, as in prior works [22, 48], all indices are turned into learnable embeddings that are additively combined into the input embedding for ShapeFormer.
|
| 480 |
+
|
| 481 |
+
The main components of ShapeFormer are two causally-masked transformers, which consist of multiple decoder-only transformer blocks [54]. The first transformer learns to predict the coordinate of the next tuple, conditioned on previous tuples, while the second one learns to predict the value of the next element conditioned on previous tuples and the (predicted) coordinate index of the next element. Thus, the output feature of the first transformer is additively mixed with the input embedding of the second transformer delivering the encoded sequence information.
|
| 482 |
+
|
| 483 |
+
Each transformer is followed by an output head, which converts the feature produced by the transformer into a categorical distribution over the next sequence element. Both output heads consist of two fully connected layers followed by a softmax layer, producing the categorical conditional distributions $\{(p_{c_i}, p_{v_i})\}_{i=1}^K$ for the sequence elements. Note that this essentially shifts the complete sequence to the right by one element. For training, we also empirically find that randomly masking out the partial sequence improves generalization.
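
A hedged sketch of how this shift is used during teacher-forced training: the per-position distributions at index $i$ are scored against the ground-truth element at index $i+1$, and only positions whose next element belongs to the complete part contribute to the loss. The function and argument names are our own; the causal masking inside the transformers is assumed to happen upstream.

```python
import torch
import torch.nn.functional as F

def sequence_nll(coord_logits, value_logits, coords, values, is_complete):
    """Teacher-forced NLL over the complete part of the concatenated sequence.

    coord_logits, value_logits: (L, V+1) per-position outputs of the Coordinate
        and Value Transformers (position i predicts element i+1).
    coords, values: (L,) ground-truth indices of S_P ++ [END] ++ S_C ++ [END].
    is_complete:    (L,) boolean mask marking the S_C ++ [END] portion.
    """
    nll_c = F.cross_entropy(coord_logits[:-1], coords[1:], reduction="none")
    nll_v = F.cross_entropy(value_logits[:-1], values[1:], reduction="none")
    mask = is_complete[1:].float()            # only elements of S_C are scored
    return ((nll_c + nll_v) * mask).sum() / mask.sum()
```
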
|
| 484 |
+
|
| 485 |
+
# A.3. Details on training and sampling
|
| 486 |
+
|
| 487 |
+
We use the Adam optimizer for training both VQDIF and ShapeFormer, with a learning rate of $1e-4$ for VQDIF and $1e-5$ for ShapeFormer. We use step decay for VQDIF with a step size of 10 and a decay factor of 0.9, and do not apply learning rate scheduling for ShapeFormer. We train our networks on a deep learning server with Intel Xeon E5-2680 v4 CPUs (56 cores), 256 GB of memory, and 10 Nvidia Quadro P6000 graphics cards with 24 GB of GPU memory each. It takes 30 hours for our model to converge on our virtual scan dataset and 8 hours on the PartNet dataset. For D-FAUST, the convergence time is 16 hours. For sampling, we can obtain a single sample sequence in roughly 20 seconds, and we can also sample 24 sequences in parallel in 5 minutes.
|
| 488 |
+
|
| 489 |
+
# B. More comparisons
|
| 490 |
+
|
| 491 |
+
We show more visual comparisons between our method and prior state-of-the-art methods in Figs. 14 to 16. Figs. 14 and 15 illustrate results on high-ambiguity scans. In these examples, we can see the averaging effect of the deterministic methods (see the scattering effect in ambiguous regions of the completions of PoinTr [72]). Our method produces significantly better results in terms of quality and diversity.
|
| 492 |
+
|
| 493 |
+
Also, we demonstrate that our method can achieve competitive accuracy for low-ambiguity scans in Fig. 16. Since there is limited ambiguity for such scans and the goal is accuracy toward the ground truth, we put the ground truth in the first row and only sample one completion for each of our sampling strategies (Ours: top-p=0.4 sampling, Ours*: top-p=0, i.e., best sampling). Also, we only compare against the state-of-the-art deterministic methods ConvONet [53], IF-Net [15], and PoinTr [72] in these examples. As we can see, even when the scans cover most areas of the ground truth shape, prior works can still produce unsatisfactory results in unseen regions. In contrast, our method always produces more accurate, high-quality completions. Moreover, since Ours* always picks the coordinate and value indices with the highest probability, it often produces slightly more accurate shapes.



Figure 14. More comparisons on high ambiguity scans of ShapeNet objects.



Figure 15. More comparisons on high ambiguity scans of ShapeNet objects.



Figure 16. More comparisons on low ambiguity scans of ShapeNet objects. Ours = top-p=0.4 sampling, Ours* = top-p=0 sampling (best sampling).
|
| 670 |
+
|
| 671 |
+
# C. More analysis
|
| 672 |
+
|
| 673 |
+
Discussion of limitations. ShapeFormer inherits the typical limitations of transformer-based autoregressive models. Mainly, the representation length cannot be too long, so the method currently can only use VQDIF with $R = 16$, which may fail to complete and reconstruct shapes with intricate structures; an example is shown in Figure 17. Another related limitation is the sampling speed, which prevents interactive applications.
|
| 674 |
+
|
| 675 |
+

|
| 676 |
+
Figure 17. An example of a shape completion failure case of ShapeFormer. The intricate details present in the input (second from left) are not preserved in the completions (gray shapes). The leftmost image shows the ground truth shape.
|
| 677 |
+
|
| 678 |
+
There are several research avenues to alleviate these problems: (i) investigating more efficient attention mechanisms to reduce the transformer's quadratic complexity in the sequence length $K$ to $O(K\sqrt{K})$ [34] or even $O(K)$ [17]; (ii) designing an adaptive quantization scheme for point clouds, which would let Transformers focus on lower-level local dependencies while using higher-level features for faraway regions; and (iii) adopting advanced sampling techniques for autoregressive models, such as parallel sampling [37].
|
| 679 |
+
|
| 680 |
+
Moreover, since we generate the sequences of complete shapes from scratch, our results may slightly alter the input geometry to overcome potential sparsity and noise. Besides using higher-resolution quantized features to obtain more accurate generation, another possible improvement is to include high-resolution features of the input in the decoding procedure, as in a recent image inpainting technique [67].
|
2201.10xxx/2201.10326/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5808f0be2b3b78df09ed8887a9f1bea909ee6aab7ce49ab19bc7e9080a910fb5
|
| 3 |
+
size 1419413
|
2201.10xxx/2201.10326/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10469/9c6fbd2b-e953-40af-ac0a-3f92d5a6246d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c598d4732bab3f5a15db88d8873164e276cd2cdd56e14dc51ce31ba8122a5c07
|
| 3 |
+
size 665343
|
2201.10xxx/2201.10469/full.md
ADDED
|
@@ -0,0 +1,779 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Convex Analysis of the Mean Field Langevin Dynamics
|
| 2 |
+
|
| 3 |
+
Atsushi Nitanda $^{1\dagger}$ , Denny Wu $^{2\ddagger}$ , Taiji Suzuki $^{3\star}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Kyushu Institute of Technology and RIKEN Center for Advanced Intelligence Project
|
| 6 |
+
|
| 7 |
+
<sup>2</sup>The University of Toronto and Vector Institute for Artificial Intelligence
|
| 8 |
+
|
| 9 |
+
<sup>3</sup>The University of Tokyo and RIKEN Center for Advanced Intelligence Project
|
| 10 |
+
|
| 11 |
+
Email: †nitanda@ai.kyutech.ac.jp, ‡dennywu@cs.toronto.edu, *taiji@mist.i.u-tokyo.ac.jp
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
As an example of the nonlinear Fokker-Planck equation, the mean field Langevin dynamics has recently attracted attention due to its connection to (noisy) gradient descent on infinitely wide neural networks in the mean field regime, and hence the convergence property of the dynamics is of great theoretical interest. In this work, we give a concise and self-contained convergence rate analysis of the mean field Langevin dynamics with respect to the (regularized) objective function in both continuous- and discrete-time settings. The key ingredient of our proof is a proximal Gibbs distribution $p_q$ associated with the dynamics, which, in combination with techniques in Vempala and Wibisono (2019), allows us to develop a simple convergence theory parallel to classical results in convex optimization. Furthermore, we reveal that $p_q$ connects to the duality gap in the empirical risk minimization setting, which enables efficient empirical evaluation of the algorithm's convergence.
|
| 16 |
+
|
| 17 |
+
# 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Consider a neural network with $M$ trainable neurons parameterized as
|
| 20 |
+
|
| 21 |
+
$$
|
| 22 |
+
h_{\Theta}(x) \stackrel{\text{def}}{=} \frac{1}{M} \sum_{r=1}^{M} h_{\theta_r}(x), \tag{1}
|
| 23 |
+
$$
|
| 24 |
+
|
| 25 |
+
where each neuron $h_{\theta_r}$ contains trainable parameters (weights) $\theta_r$ and some nonlinear transformation (e.g., $h_{\theta_r}(x) = \sigma(\langle \theta_r, x \rangle)$ where $\sigma$ is the nonlinear activation function), $x$ is a data example, and $\Theta = \{\theta_r\}_{r=1}^M$ . Under suitable conditions, as $M \to \infty$ we obtain the mean field limit: $h_q(x) := \mathbb{E}_q[h_\theta(x)]$ , where $q(\theta)\mathrm{d}\theta$ represents the probability distribution of the weights; we refer to $h_q$ simply as a mean field model (neural network). In this limit, training can be formulated as an optimization problem over the space of probability measures. An advantage of the mean field regime, in contrast to
|
| 26 |
+
|
| 27 |
+
alternative settings such as the neural tangent kernel regime (Jacot et al., 2018), is the presence of (nonlinear) feature learning (Suzuki, 2019; Ghorbani et al., 2019). However, developing an optimization theory for mean field neural networks is also more challenging.
|
| 28 |
+
|
| 29 |
+
Optimization analysis of mean field neural networks usually utilizes the convexity of the objective function in the space of probability measures. Nitanda and Suzuki (2017); Chizat and Bach (2018); Mei et al. (2018) established global convergence of gradient descent (flow) on two-layer neural networks in the mean field regime under appropriate conditions. Subsequent works proved convergence rates under additional structural assumptions (Javanmard et al., 2019; Chizat, 2021b). One noticeable algorithmic modification is the addition of Gaussian noise to the gradient, which leads to the noisy gradient descent algorithm; this modification gives rise to an entropy regularization term in the objective, and allows for global convergence analysis under less restrictive settings (Rotskoff and Vanden-Eijnden, 2018; Mei et al., 2019). The corresponding stochastic dynamics is often referred to as the mean field Langevin dynamics.
|
| 30 |
+
|
| 31 |
+
Recent works have studied the convergence rate of the mean field Langevin dynamics, including its underdamped (kinetic) version. However, most existing analyses either require sufficiently strong regularization (Hu et al., 2019; Jabir et al., 2019), or build upon involved mathematical tools (Kazeykina et al., 2020; Guillin et al., 2021). Our goal is to provide a simpler convergence proof that covers general and more practical machine learning settings, with a focus on neural network optimization in the mean field regime. Motivated by an observation in Nitanda et al. (2021) that the log-Sobolev inequality can simplify the global convergence analysis of two-layer mean field neural networks, we study the optimization efficiency of the mean field Langevin dynamics in the context of KL-regularized empirical/expected risk minimization, and present a new convergence rate analysis by translating finite-dimensional convex optimization theory to optimization in the space of measures.
|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
Figure 1: 1D visualization of mean field two-layer neural network (tanh) optimized by noisy gradient descent ( $\lambda = \lambda' = 10^{-2}$ ). Observe that both the parameter distribution $q$ and the corresponding proximal Gibbs distribution $p_q$ converge to the optimal $q_*$ . Moreover, $q_t$ and $p_{q_t}$ approach $q_*$ from opposite directions, as predicted by Proposition 1.
|
| 35 |
+
|
| 36 |
+
# 1.1 Contributions
|
| 37 |
+
|
| 38 |
+
In this work, we give a simple and self-contained convergence rate analysis of the mean field Langevin dynamics in both continuous and discrete time settings. The key ingredient of our proof is the introduction of a proximal Gibbs distribution $p_{q}$ (see Figure 1), which relates to the optimization gap and allows us to directly apply standard convex optimization techniques. In particular,
|
| 39 |
+
|
| 40 |
+
- By analyzing the proximal Gibbs distribution $p_{q}$ , we establish linear convergence in continuous time with respect to the KL-regularized objective. This convergence result holds for any regularization parameters, in contrast to existing analyses (e.g., Hu et al. (2019)) that require strong regularization.
|
| 41 |
+
- We also provide global convergence rate analysis for the discrete-time update. This is achieved by extending the classical "one-step interpolation" argument in the analysis of Langevin dynamics (e.g., see Vempala and Wibisono (2019) for the KL case) to our nonlinear Fokker-Planck setting.
|
| 42 |
+
- Finally, we present an interpretation of the proximal Gibbs distribution via the primal-dual formulation of empirical risk minimization problems, and reveal that $p_q$ exactly fills the duality gap. This interpretation leads to alternative ways to evaluate the convergence of the algorithm.
|
| 43 |
+
|
| 44 |
+
# 1.2 Related Literature
|
| 45 |
+
|
| 46 |
+
Convergence of the Langevin algorithm. The Langevin dynamics can be interpreted as the (Wasserstein) gradient flow of KL divergence with respect to the target distribution $p \propto \exp(-f)$ (Jordan et al., 1998); in other words, the Langevin dynamics solves
|
| 47 |
+
|
| 48 |
+
the following optimization problem,
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
\min_{q:\,\text{density}} \left\{ \mathbb{E}_{q}[f] + \mathbb{E}_{q}[\log(q)] \right\}. \tag{2}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+
We refer readers to Dalalyan (2017); Wibisono (2018) for additional discussions on the connection between sampling and optimization.
|
| 55 |
+
|
| 56 |
+
It is well-known that when the target distribution satisfies certain isoperimetry conditions such as the log-Sobolev inequality, then the (continuous-time) Langevin dynamics converges exponentially (Bakry et al., 2013). On the other hand, the time-discretized dynamics, or the Langevin algorithm, admits a biased invariant distribution depending on the step size and the numerical scheme (Milstein and Tretyakov, 2013; Li et al., 2019), and sublinear convergence rate has been established under different target assumptions (Vempala and Wibisono, 2019; Erdogdu and Hosseinzadeh, 2020) and distance metrics (Dalalyan, 2014; Durmus and Moulines, 2017; Erdogdu et al., 2021).
|
| 57 |
+
|
| 58 |
+
Mean field regime and nonlinear Fokker-Planck. Analysis of neural networks in the mean field regime typically describes the optimization dynamics as a partial differential equation (PDE) of the parameter distribution, from which convergence to the global optimal solution may be shown (Chizat and Bach, 2018; Mei et al., 2018; Rotskoff and Vanden-Eijnden, 2018; Sirignano and Spiliopoulos, 2020). Quantitative convergence rate usually requires additional conditions, such as structural assumptions on the learning problem (Javanmard et al., 2019; Chizat, 2021b; Akiyama and Suzuki, 2021), or modification of the dynamics (Rotskoff et al., 2019; Wei et al., 2019). Noticeably, Mei et al. (2018); Hu et al. (2019); Chen et al. (2020a) considered the optimization of the KL-regularized objective, which leads to the mean field Langevin dynamics.
|
| 59 |
+
|
| 60 |
+
Note that the optimization of mean field neural networks (e.g., Eq. (13)) falls beyond the scope of the Langevin dynamics, whose density function follows a linear Fokker-Planck equation and solves (2), because the loss function is nonlinear. Instead, the density function of the parameters follows a nonlinear Fokker-Planck equation (Eq. (8)), the convergence rate of which is more difficult to establish. Hu et al. (2019); Jabir et al. (2019) obtained convergence rates of the mean field Langevin dynamics under sufficiently strong regularization. Exponential convergence of related dynamics (e.g., the underdamped variant and other McKean-Vlasov equations) in continuous time has been shown under various settings (Monmarché, 2017; Guillin et al., 2019; Kazeykina et al., 2020; Guillin et al., 2021), based on hypocoercivity (Villani, 2009) or coupling techniques (Eberle et al., 2019). Bou-Rabee and Schuh (2020); Bou-Rabee and Eberle
|
| 61 |
+
|
| 62 |
+
(2021) studied the discrete time convergence of Hamiltonian Monte Carlo with interaction potential.
|
| 63 |
+
|
| 64 |
+
Our work builds upon Nitanda et al. (2021), which employs the Langevin algorithm to solve the inner loop of a dual averaging method in the space of measures. Importantly, under a uniform log-Sobolev inequality on the "linearized" objective, (sublinear) global convergence rate in minimizing an entropy-regularized nonlinear functional can be proved by adapting finite-dimensional convex optimization theory. Based on similar ideas combined with the primal-dual formulation of empirical risk minimization, a stochastic dual coordinate ascent alternative has been developed in Oko et al. (2022) to achieve linear convergence in discrete time. A few recent works also considered the adaptation of classical convex optimization algorithms into the space of measures, such as the Mirror descent method (Ying, 2020), the Frank-Wolfe method (Kent et al., 2021), and the Bregman proximal gradient method (Chizat, 2021a).
|
| 65 |
+
|
| 66 |
+
Concurrently with and independently of our work, Chizat (2022) also analyzed the mean field Langevin dynamics using properties of the proximal Gibbs distribution $p_{q}$ ; while both works build upon the same observation, Chizat (2022) focused on the continuous-time convergence rate and studied the annealed dynamics, whereas we establish discrete-time guarantees for the noisy gradient descent algorithm and present a primal-dual viewpoint.
|
| 67 |
+
|
| 68 |
+
# 1.3 Notations
|
| 69 |
+
|
| 70 |
+
$\| \cdot \| _2$ denotes the Euclidean norm. Given a density function $q(\theta)$ on $\mathbb{R}^d$ , we write the expectation w.r.t. $q(\theta)\mathrm{d}\theta$ as $\mathbb{E}_{\theta \sim q}[\cdot ]$ or simply $\mathbb{E}_q[\cdot ]$ , $\mathbb{E}_{\theta}[\cdot ]$ when the random variable and distribution are obvious from the context; e.g. for a function $f:\mathbb{R}^d\to \mathbb{R}$ , we write $\mathbb{E}_q[f] = \int f(\theta)q(\theta)\mathrm{d}\theta$ when $f$ is integrable. KL stands for the Kullback-Leibler divergence: $\mathrm{KL}(q\| q')\stackrel {\mathrm{def}}{=}\int q(\theta)\log \left(\frac{q(\theta)}{q'(\theta)}\right)\mathrm{d}\theta$ . Let $\mathcal{P}$ be the space of probability density functions with respect to $\mathrm{d}\theta$ on $\mathbb{R}^d$ such that the entropy and second moment are well-defined.
|
| 71 |
+
|
| 72 |
+
# 2 PRELIMINARIES
|
| 73 |
+
|
| 74 |
+
The mean field Langevin dynamics is the target of our convergence analysis. In this section, we introduce this dynamics as well as the associated optimization problem. We also outline one major application of our convergence analysis, which is the optimization of mean field neural networks.
|
| 75 |
+
|
| 76 |
+
# 2.1 Problem Setup
|
| 77 |
+
|
| 78 |
+
Let $F:\mathcal{P}\to \mathbb{R}$ be a differentiable convex functional. That is, we suppose there is a functional $\frac{\delta F}{\delta q}:\mathcal{P}\times \mathbb{R}^d\ni (q,\theta)\mapsto \frac{\delta F}{\delta q} (q)(\theta)\in \mathbb{R}$ such that for any $q,q^{\prime}\in \mathcal{P}$
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
\left. \frac {\mathrm {d} F (q + \epsilon (q ^ {\prime} - q))}{\mathrm {d} \epsilon} \right| _ {\epsilon = 0} = \int \frac {\delta F}{\delta q} (q) (\theta) (q ^ {\prime} - q) (\theta) \mathrm {d} \theta
|
| 82 |
+
$$
|
| 83 |
+
|
| 84 |
+
and $F$ satisfies the convexity condition:
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
F \left(q ^ {\prime}\right) \geq F (q) + \int \frac {\delta F}{\delta q} (q) (\theta) \left(q ^ {\prime} - q\right) (\theta) \mathrm {d} \theta . \tag {3}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
We consider the minimization of an entropy regularized nonlinear functional:
|
| 91 |
+
|
| 92 |
+
$$
|
| 93 |
+
\min_{q \in \mathcal{P}} \left\{ \mathcal{L}(q) \stackrel{\text{def}}{=} F(q) + \lambda \mathbb{E}_{q}[\log q] \right\}. \tag{4}
|
| 94 |
+
$$
|
| 95 |
+
|
| 96 |
+
For $q \in \mathcal{P}$ , we next define an associated Gibbs distribution which plays a key role in our analysis: the proximal Gibbs distribution around $q$ in $\mathcal{P}$ (see Proposition 1).
|
| 97 |
+
|
| 98 |
+
Definition 1 (Proximal Gibbs distribution). We define $p_q(\theta)$ to be the Gibbs distribution with potential function $-\lambda^{-1}\delta F(q) / \delta q$ , i.e.,
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
p _ {q} (\theta) = \frac {\exp \left(- \frac {1}{\lambda} \frac {\delta F (q)}{\delta q} (\theta)\right)}{Z (q)}, \tag {5}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
where $Z(q)$ is the normalization constant.
|
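As a quick sanity check (immediate from the definition, though not stated in this form here), when $F$ is linear in $q$ the proximal Gibbs distribution no longer depends on $q$:

$$
F(q) = \mathbb{E}_q[f] \;\Longrightarrow\; \frac{\delta F}{\delta q}(q)(\theta) = f(\theta), \qquad p_q(\theta) = \frac{\exp(-f(\theta)/\lambda)}{\int \exp(-f(\theta')/\lambda)\,\mathrm{d}\theta'},
$$

so $p_q$ coincides with the minimizer of (4) in this case, recovering the entropy-regularized linear problem solved by the standard Langevin dynamics (cf. (2)).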
| 105 |
+
|
| 106 |
+
In this study, we provide basic convergence results for the general form of the problem (4) under the following assumptions, which will be specialized and verified for the mean field neural network introduced in the sequel.
|
| 107 |
+
|
| 108 |
+
Assumption 1. For a functional $F:\mathcal{P}\to \mathbb{R}$ and any $q\in \mathcal{P}$ , assume the functional derivative $\frac{\delta F}{\delta q} (q)(\theta)$ exists and is smooth in $\theta$ . Moreover, assume $|Z(q)| < \infty$ , $\frac{\delta F}{\delta q} (q)(\theta) = O(1 + \| \theta \|_2^2)$ uniformly over $\mathcal{P}$ , and $F$ is a convex functional, that is, (3) holds for any $q,q'\in \mathcal{P}$ .
|
| 109 |
+
|
| 110 |
+
Assumption 2 (Log-Sobolev inequality). Suppose there exists a constant $\alpha >0$ such that for any $q\in \mathcal{P}$ the probability distribution $p_q(\theta)\mathrm{d}\theta$ satisfies log-Sobolev inequality with constant $\alpha$ , that is, for any smooth function $g:\mathbb{R}^d\to \mathbb{R}$ , we have
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\mathbb {E} _ {p _ {q}} [ g ^ {2} \log g ^ {2} ] - \mathbb {E} _ {p _ {q}} [ g ^ {2} ] \log \mathbb {E} _ {p _ {q}} [ g ^ {2} ] \leq \frac {2}{\alpha} \mathbb {E} _ {p _ {q}} [ \| \nabla g \| _ {2} ^ {2} ].
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
Under Assumption 2, by setting $g = \sqrt{q / p_q}$ , we get
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
\mathrm {K L} (q \| p _ {q}) \leq \frac {1}{2 \alpha} \mathbb {E} _ {q} \left[ \left\| \nabla \log \frac {q}{p _ {q}} \right\| _ {2} ^ {2} \right]. \tag {6}
|
| 120 |
+
$$
|
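For completeness, the substitution behind (6) works out as follows: with $g = \sqrt{q/p_q}$ we have $\mathbb{E}_{p_q}[g^2] = 1$ and

$$
\mathbb{E}_{p_q}\big[g^2 \log g^2\big] = \int q \log \frac{q}{p_q}\,\mathrm{d}\theta = \mathrm{KL}(q\|p_q),
\qquad
\|\nabla g\|_2^2 = \frac{1}{4}\,\frac{q}{p_q}\,\Big\|\nabla \log \frac{q}{p_q}\Big\|_2^2 ,
$$

so the left-hand side of the log-Sobolev inequality equals $\mathrm{KL}(q\|p_q)$ while the right-hand side equals $\frac{1}{2\alpha}\mathbb{E}_q\big[\|\nabla \log (q/p_q)\|_2^2\big]$, which is exactly (6).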
| 121 |
+
|
| 122 |
+
# 2.2 Optimization Dynamics
|
| 123 |
+
|
| 124 |
+
To solve the problem (4), we consider the following (continuous-time) mean field Langevin dynamics:
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
\mathrm {d} \theta_ {t} = - \nabla \frac {\delta F}{\delta q} \left(q _ {t}\right) \left(\theta_ {t}\right) \mathrm {d} t + \sqrt {2 \lambda} \mathrm {d} W _ {t}, \tag {7}
|
| 128 |
+
$$
|
| 129 |
+
|
| 130 |
+
where $\theta_t \sim q_t(\theta) \mathrm{d}\theta$ and $\{W_t\}_{t \geq 0}$ is the Brownian motion in $\mathbb{R}^d$ with $W_0 = 0$ . Here, the gradient $\nabla$ of the functional derivative $\frac{\delta F(q)}{\delta q}(\theta)$ is applied with respect to $\theta$ . It is known that the distribution of $\theta_t$ following the dynamics (7) solves the nonlinear Fokker-Planck equation:
|
| 131 |
+
|
| 132 |
+
$$
|
| 133 |
+
\frac {\partial q _ {t}}{\partial t} = \nabla \cdot \left(q _ {t} \nabla \frac {\delta F}{\delta q} (q _ {t})\right) + \lambda \Delta q _ {t}. \tag {8}
|
| 134 |
+
$$
|
| 135 |
+
|
| 136 |
+
We here reformulate the equation (8) as follows:
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
\begin{array}{l} \frac {\partial q _ {t}}{\partial t} = \lambda \nabla \cdot \left(q _ {t} \nabla \log \exp \left(\frac {1}{\lambda} \frac {\delta F}{\delta q} (q _ {t})\right) + q _ {t} \nabla \log q _ {t}\right) \\ = \lambda \nabla \cdot \left(q _ {t} \nabla \log \frac {q _ {t}}{p _ {q _ {t}}}\right). \tag {9} \\ \end{array}
|
| 140 |
+
$$
|
| 141 |
+
|
| 142 |
+
As we will see later, this formulation involving $p_{q_t}$ will be more useful in the convergence analysis.
|
| 143 |
+
|
| 144 |
+
Moreover, we also consider the standard discretization of the above dynamics with step size $\eta > 0$ :
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
\theta^ {(k + 1)} = \theta^ {(k)} - \eta \nabla \frac {\delta F}{\delta q} \left(q ^ {(k)}\right) \left(\theta^ {(k)}\right) + \sqrt {2 \lambda \eta} \xi^ {(k)}, \tag {10}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+
where $\theta^{(k)}$ is a random variable following $q^{(k)}(\theta)\mathrm{d}\theta$ and $\xi^{(k)}\sim \mathcal{N}(0,I_d)$ . We present convergence rate analysis for both the continuous- and discrete-time algorithm.
|
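A minimal particle-based sketch of the update (10) is given below. Here $q^{(k)}$ is approximated by the empirical distribution of $M$ particles, and `grad_first_variation` is a user-supplied routine returning $\nabla \frac{\delta F}{\delta q}(q^{(k)})(\theta)$ at each particle; the finite-particle approximation, the NumPy implementation, and all names are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def mean_field_langevin_step(particles, grad_first_variation, eta, lam, rng):
    """One step of the noisy gradient descent update (10).

    particles: (M, d) array whose empirical distribution stands in for q^{(k)}.
    grad_first_variation: callable (M, d) -> (M, d), assumed to return
        grad_theta of dF/dq(q^{(k)}) evaluated at every particle.
    eta: step size; lam: entropic regularization strength lambda.
    """
    grads = grad_first_variation(particles)                  # (M, d)
    noise = rng.standard_normal(particles.shape)              # xi^{(k)} ~ N(0, I_d)
    return particles - eta * grads + np.sqrt(2.0 * lam * eta) * noise

def run_mean_field_langevin(particles, grad_first_variation, eta, lam, num_steps, seed=0):
    """Iterate (10) for num_steps steps starting from the given particle cloud."""
    rng = np.random.default_rng(seed)
    theta = np.array(particles, dtype=float)
    for _ in range(num_steps):
        theta = mean_field_langevin_step(theta, grad_first_variation, eta, lam, rng)
    return theta
```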
| 151 |
+
|
| 152 |
+
# 2.3 Mean Field Neural Networks
|
| 153 |
+
|
| 154 |
+
The main application of our theory is to provide quantitative convergence guarantees for risk minimization problems with the mean field neural network. Before formally defining the mean field limit, we first introduce the finite dimensional counterpart.
|
| 155 |
+
|
| 156 |
+
Finite dimensional case. Let $\mathcal{X}$ be the data space and $h_\theta : \mathcal{X} \to \mathbb{R}$ be a component of neural network, which corresponds to a single neuron with trainable parameter $\theta \in \mathbb{R}^d$ . Then, an $M$ -neuron network $h_\Theta$ (where $\Theta = \{\theta_r\}_{r=1}^M$ ) is defined as the average of these components as in (1). Let $\ell(z,y) : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a smooth and convex loss function in $z$ , such as the logistic loss and squared loss, and $\rho$ be the empirical or true data distribution. Then, the regularized risk of $h_\Theta$ is defined as
|
| 157 |
+
|
| 158 |
+
$$
|
| 159 |
+
\mathbb {E} _ {(X, Y) \sim \rho} [ \ell (h _ {\Theta} (X), Y) ] + \frac {\lambda^ {\prime}}{M} \sum_ {r = 1} ^ {M} r \left(\theta_ {r}\right), \tag {11}
|
| 160 |
+
$$
|
| 161 |
+
|
| 162 |
+
where $r(\theta)$ is a convex regularizer such as $\ell_2$ -penalty $r(\theta) = \| \theta \|_2^2$ , and $\lambda' > 0$ is regularization strength. Note that this formulation includes both empirical and expected risk minimization problems. To optimize the objective (11), we perform the gradient descent update with step size $\eta M > 0$ :
|
| 163 |
+
|
| 164 |
+
$$
|
| 165 |
+
g _ {r} ^ {(k)} = \mathbb {E} [ \partial_ {z} \ell (h _ {\Theta^ {(k)}} (X), Y) \partial_ {\theta_ {r}} h _ {\theta_ {r} ^ {(k)}} (X) ] + \lambda^ {\prime} \partial_ {\theta_ {r}} r (\theta_ {r} ^ {(k)}),
|
| 166 |
+
$$
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\theta_ {r} ^ {(k + 1)} = \theta_ {r} ^ {(k)} - \eta g _ {r} ^ {(k)}. \tag {12}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
Mean field limit. We now take the limit $M \to \infty$ and suppose $\theta_r$ follows a probability distribution $q(\theta) \mathrm{d}\theta$ . Then, $h_\Theta(x)$ converges to the mean field limit $h_q(x) = \mathbb{E}_{\theta \sim q}[h_\theta(x)]$ in which the density function $q$ is recognized as the parameter, and the objective (11) converges to the following convex functional,
|
| 173 |
+
|
| 174 |
+
$$
|
| 175 |
+
F (q) = \mathbb {E} _ {(X, Y)} [ \ell (h _ {q} (X), Y) ] + \lambda^ {\prime} \mathbb {E} _ {\theta \sim q} [ r (\theta) ]. \tag {13}
|
| 176 |
+
$$
|
| 177 |
+
|
| 178 |
+
In this case, the functional derivative $\frac{\delta F(q)}{\delta q}$ is
|
| 179 |
+
|
| 180 |
+
$$
|
| 181 |
+
\frac {\delta F (q)}{\delta q} (\theta) = \mathbb {E} _ {(X, Y)} [ \partial_ {z} \ell (h _ {q} (X), Y) h _ {\theta} (X) ] + \lambda^ {\prime} r (\theta).
|
| 182 |
+
$$
|
| 183 |
+
|
| 184 |
+
By noticing that $\nabla \frac{\delta F(q)}{\delta q}(\theta)$ is the mean field limit $(M \to \infty)$ of the step $g_r^{(k)}$ for the finite-dimensional model, we see that the discrete dynamics (10) is the noisy variant of the mean field limit of gradient descent (12). This motivates us to study mean field Langevin dynamics (7) and its discretization (10).
|
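As a concrete instance of the oracle used in the sketch after Eq. (10), the following computes $\nabla \frac{\delta F}{\delta q}(q)(\theta_r) = \mathbb{E}_{(X,Y)}\big[\partial_z \ell(h_q(X), Y)\, \partial_\theta h_{\theta_r}(X)\big] + \lambda' \nabla r(\theta_r)$ for the objective (13) with squared loss, $r(\theta) = \|\theta\|_2^2$, and a two-layer network $h_\theta(x) = a \tanh(w^\top x)$ with $\theta = (a, w)$; the specific parameterization and array layout are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

def mean_field_predict(particles, X):
    """h_q(x) approximated by (1/M) * sum_r a_r * tanh(w_r^T x)."""
    a, W = particles[:, 0], particles[:, 1:]                 # (M,), (M, d)
    return np.tanh(X @ W.T) @ a / len(particles)             # (n,)

def grad_first_variation_nn(particles, X, y, lam_prime):
    """grad_theta of dF/dq(q)(theta_r) for the squared loss 0.5 * (z - y)^2."""
    a, W = particles[:, 0], particles[:, 1:]
    act = np.tanh(X @ W.T)                                    # (n, M)
    residual = mean_field_predict(particles, X) - y           # d_z loss at h_q(x_i)
    grad_a = act.T @ residual / len(y)                        # (M,)
    grad_W = ((residual[:, None] * (1.0 - act**2) * a).T @ X) / len(y)  # (M, d)
    grads = np.concatenate([grad_a[:, None], grad_W], axis=1)
    return grads + 2.0 * lam_prime * particles                # gradient of lam' * ||theta||^2
```

With these two functions, calling `run_mean_field_langevin(particles, lambda p: grad_first_variation_nn(p, X, y, lam_prime), eta, lam, num_steps)` performs the noisy gradient descent of Section 2.2 on the particle system.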
| 185 |
+
|
| 186 |
+
Verification of assumptions. Under smoothness and boundedness conditions on $h_\theta$ , and for a smooth convex loss function $\ell$ with $\ell_2$ -penalty term $r(\theta) = \| \theta \|_2^2$ , one can easily verify that the objective of the mean field neural network satisfies Assumption 1. The convexity of $F$ immediately follows by taking the expectation $\mathbb{E}_{(X,Y)}$ of the following inequality:
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
\begin{array}{l} \ell \left(h _ {q} (X), Y\right) + \int \partial_ {z} \ell \left(h _ {q} (X), Y\right) h _ {\theta} (X) \left(q ^ {\prime} - q\right) (\theta) d \theta \\ = \ell \left(h _ {q} (X), Y\right) + \partial_ {z} \ell \left(h _ {q} (X), Y\right) \left(h _ {q ^ {\prime}} (X) - h _ {q} (X)\right) \\ \leq \ell \left(h _ {q ^ {\prime}} (X), Y\right). \\ \end{array}
|
| 190 |
+
$$
|
| 191 |
+
|
| 192 |
+
Moreover, with $r(\theta) = \|\theta\|_2^2$ , the uniform log-Sobolev inequality (Assumption 2) with constant $\alpha = \frac{2\lambda'}{\lambda \exp(O(\lambda^{-1}))}$ can be verified via a standard application of the LSI perturbation lemma (Holley and Stroock, 1987) (see Appendix A.1 for details), since $p_q$ is proportional to the Gibbs distribution specified as:
|
| 193 |
+
|
| 194 |
+
$$
|
| 195 |
+
\exp \left(- \frac {1}{\lambda} \mathbb {E} _ {(X, Y)} \left[ \partial_ {z} \ell \left(h _ {q} (X), Y\right) h _ {\theta} (X) \right] - \frac {\lambda^ {\prime}}{\lambda} r (\theta)\right). \tag {14}
|
| 196 |
+
$$
|
| 197 |
+
|
| 198 |
+
# 3 CONVERGENCE ANALYSIS
|
| 199 |
+
|
| 200 |
+
In this section, we present the convergence rate analysis of mean field Langevin dynamics in both continuous- and discrete-time settings. We remark that our proof strategy can be seen as a combination and extension of (i) convex analysis parallel to the finite-dimensional optimization setting, and (ii) convergence analysis for the linear Fokker-Planck equation (e.g., Vempala and Wibisono (2019)). As we will see, the proximal Gibbs distribution $p_{q}$ plays an important role in connecting these different techniques.
|
| 201 |
+
|
| 202 |
+
# 3.1 Basic Properties and Convexity
|
| 203 |
+
|
| 204 |
+
We first present a basic but important result that characterizes the role of $p_q$ in the convergence analysis. Note that the functional derivative of the negative entropy $\mathbb{E}_q[\log q]$ in the density function $q$ is $\log q$ (up to an additive constant); therefore, the optimality condition of the problem (4) is $\frac{\delta\mathcal{L}}{\delta q}(q) = \frac{\delta F}{\delta q}(q) + \lambda \log q = 0$ . This is to say, the optimal probability density function $q_*$ satisfies
|
| 205 |
+
|
| 206 |
+
$$
|
| 207 |
+
q _ {*} (\theta) = p _ {q _ {*}} (\theta) \propto \exp \left(- \frac {1}{\lambda} \frac {\delta F (q _ {*})}{\delta q} (\theta)\right). \tag {15}
|
| 208 |
+
$$
|
| 209 |
+
|
| 210 |
+
This fact was shown in Hu et al. (2019). Hence, we may interpret the divergence between probability density functions $q$ and $p_{q}$ as an optimization gap. Indeed, this intuition is confirmed by the following proposition, which can be established using standard convex analysis. In particular, the proof relies on the fact that the negative entropy acts as a strongly convex function with respect to KL-divergence.
|
| 211 |
+
|
| 212 |
+
Proposition 1. Under Assumption 1, we have the following three statements.
|
| 213 |
+
|
| 214 |
+
1. In the sense of functional on the space of probability density functions, the following equality holds.
|
| 215 |
+
|
| 216 |
+
$$
|
| 217 |
+
\frac {\delta \mathcal {L}}{\delta q} (q) = \lambda \frac {\delta}{\delta q ^ {\prime}} \mathrm {K L} (q ^ {\prime} \| p _ {q}) | _ {q ^ {\prime} = q} = \lambda \log \frac {q}{p _ {q}}.
|
| 218 |
+
$$
|
| 219 |
+
|
| 220 |
+
In other words, for any $g = p - p'$ ( $p, p' \in \mathcal{P}$ ),
|
| 221 |
+
|
| 222 |
+
$$
|
| 223 |
+
\int \frac {\delta \mathcal {L}}{\delta q} (q) (\theta) g (\theta) \mathrm {d} \theta = \int \lambda \log \left(\frac {q}{p _ {q}} (\theta)\right) g (\theta) \mathrm {d} \theta .
|
| 224 |
+
$$
|
| 225 |
+
|
| 226 |
+
2. For any probability distributions $q, q' \in \mathcal{P}$ , we have
|
| 227 |
+
|
| 228 |
+
$$
|
| 229 |
+
\mathcal {L} (q) + \int \frac {\delta \mathcal {L}}{\delta q} (q) (\theta) \left(q ^ {\prime} - q\right) (\theta) \mathrm {d} \theta + \lambda \mathrm {K L} \left(q ^ {\prime} \| q\right) \leq \mathcal {L} \left(q ^ {\prime}\right). \tag {16}
|
| 230 |
+
$$
|
| 231 |
+
|
| 232 |
+
Moreover, $p_q$ associated with $q \in \mathcal{P}$ is a minimizer of the left hand side of this inequality in $q' \in \mathcal{P}$ .
|
| 233 |
+
|
| 234 |
+
3. Let $q_*$ be an optimal solution of (4). Then, for any $q \in \mathcal{P}$ , we get
|
| 235 |
+
|
| 236 |
+
$$
|
| 237 |
+
\lambda \mathrm {K L} (q \| p _ {q}) \geq \mathcal {L} (q) - \mathcal {L} (q _ {*}) \geq \lambda \mathrm {K L} (q \| q _ {*}).
|
| 238 |
+
$$
|
| 239 |
+
|
| 240 |
+
Proof. (i) We start with the first statement.
|
| 241 |
+
|
| 242 |
+
$$
|
| 243 |
+
\begin{array}{l} \frac {\delta \mathcal {L}}{\delta q} (q) = \frac {\delta F}{\delta q} (q) + \lambda \log q \\ = - \lambda \log \exp \left(- \frac {1}{\lambda} \frac {\delta F}{\delta q} (q)\right) + \lambda \log q \\ = \lambda (\log q - \log p _ {q}) - \lambda \log Z (p _ {q}), \\ \end{array}
|
| 244 |
+
$$
|
| 245 |
+
|
| 246 |
+
where $Z(p_{q})$ is a normalization constant of $p_{q}$ . Moreover, we can see $\frac{\delta}{\delta q'} \mathrm{KL}(q' \| p_{q})|_{q'=q} = \log q - \log p_{q}$ .
|
| 247 |
+
|
| 248 |
+
(ii) From direct computation, for any $q,q^{\prime}\in \mathcal{P}$
|
| 249 |
+
|
| 250 |
+
$$
|
| 251 |
+
\begin{array}{l} \mathbb {E} _ {q ^ {\prime}} [ \log (q ^ {\prime}) ] = \mathbb {E} _ {q} [ \log (q) ] + \operatorname {K L} (q ^ {\prime} \| q) \\ + \int \frac {\delta}{\delta q} \mathbb {E} _ {q} [ \log (q) ] (\theta) \left(q ^ {\prime} - q\right) (\theta) \mathrm {d} \theta . \tag {17} \\ \end{array}
|
| 252 |
+
$$
|
| 253 |
+
|
| 254 |
+
By the convexity of $F$ and (17), we get
|
| 255 |
+
|
| 256 |
+
$$
|
| 257 |
+
\begin{array}{l} \mathcal {L} \left(q ^ {\prime}\right) = F \left(q ^ {\prime}\right) + \lambda \mathbb {E} _ {q ^ {\prime}} \left[ \log q ^ {\prime} \right] \\ \geq F (q) + \int \frac {\delta F}{\delta q} (q) (\theta) \left(q ^ {\prime} - q\right) (\theta) d \theta + \lambda \mathbb {E} _ {q ^ {\prime}} [ \log q ^ {\prime} ] \\ = \mathcal {L} (q) + \int \frac {\delta \mathcal {L}}{\delta q} (q) (\theta) \left(q ^ {\prime} - q\right) (\theta) \mathrm {d} \theta + \lambda \mathrm {K L} \left(q ^ {\prime} \| q\right). \\ \end{array}
|
| 258 |
+
$$
|
| 259 |
+
|
| 260 |
+
In addition, by taking the functional derivative of the left hand side of (16) in $q^{\prime}$ , we obtain the optimality condition of this functional as follows:
|
| 261 |
+
|
| 262 |
+
$$
|
| 263 |
+
0 = \frac {\delta \mathcal {L}}{\delta q} + \lambda (\log q ^ {\prime} - \log q) = - \lambda \log p _ {q} + \lambda \log q ^ {\prime}.
|
| 264 |
+
$$
|
| 265 |
+
|
| 266 |
+
Therefore, $q' = p_q$ is a minimizer of the left hand side of (16) as desired.
|
| 267 |
+
|
| 268 |
+
(iii) For the last statement, observe that minimizing both sides of (16) over $q' \in \mathcal{P}$ yields
|
| 269 |
+
|
| 270 |
+
$$
|
| 271 |
+
\begin{array}{l} \mathcal {L} (q _ {*}) \geq \mathcal {L} (q) + \int \frac {\delta \mathcal {L}}{\delta q} (q) (\theta) (p _ {q} - q) (\theta) d \theta + \lambda \mathrm {K L} (p _ {q} \| q) \\ = \mathcal {L} (q) + \lambda \int \log \frac {q}{p _ {q}} (\theta) (p _ {q} - q) (\theta) d \theta + \lambda \mathrm {K L} (p _ {q} \| q) \\ = \mathcal {L} (q) - \lambda \mathrm {K L} (q \| p _ {q}). \\ \end{array}
|
| 272 |
+
$$
|
| 273 |
+
|
| 274 |
+
Moreover, by (16) with $q = q_{*}$ and the optimality condition $\frac{\delta\mathcal{L}}{\delta q} (q_{*}) = 0$ , we get $\mathcal{L}(q) - \mathcal{L}(q_{*})\geq \lambda \mathrm{KL}(q\| q_{*})$. This finishes the proof.
|
| 275 |
+
|
| 276 |
+
We remark that the inequality (16) indicates that the functional $\mathcal{L}$ satisfies an analog of strong convexity with the proximal functional $\mathrm{KL}(q' \| q)$ , and in the third
|
| 277 |
+
|
| 278 |
+
statement in Proposition 1, this convexity plays a similar role as in finite dimensional convex analysis. In particular, inequalities $\lambda \mathrm{KL}(q\| p_q)\geq \mathcal{L}(q) - \mathcal{L}(q_*)$ and $\mathcal{L}(q) - \mathcal{L}(q_{*})\geq \lambda \mathrm{KL}(q\| q_{*})$ can be recognized as the counterparts of the Polyak-Lojasiewicz and quadratic growth inequalities, respectively (for details of these conditions see Charles and Papailopoulos (2018)). Following this analogy, $\mathrm{KL}(q\| p_q)$ and $\mathrm{KL}(q\| q_{*})$ act as the squared norm of gradient at $q$ and squared distance between $q$ and $q_{*}$ , respectively.
|
| 279 |
+
|
| 280 |
+
Finally, the third statement in Proposition 1 indicates that the divergence between $q$ and $p_q$ indeed measures the optimality gap, as expected from the optimality condition (15). Furthermore, we reveal that $p_q$ can be interpreted as a proximal point which minimizes the sum of linearization of $\mathcal{L}$ and the KL-divergence around $q$ , and that convergence of the optimization gap implies convergence to $q_*$ in the sense of KL-divergence.
|
| 281 |
+
|
| 282 |
+
# 3.2 Convergence Rate in Continuous Time
|
| 283 |
+
|
| 284 |
+
We now introduce the convergence rate analysis by utilizing the aforementioned results. We first show that the mean field Langevin dynamics (8) converges linearly to the optimal solution of (4) in continuous time under the log-Sobolev inequality.
|
| 285 |
+
|
| 286 |
+
Theorem 1. Let $\{q_t\}_{t\geq 0}$ be the evolution described by (8). Under Assumptions 1 and 2, we have for $t\geq 0$
|
| 287 |
+
|
| 288 |
+
$$
|
| 289 |
+
\mathcal {L} \left(q _ {t}\right) - \mathcal {L} \left(q _ {*}\right) \leq \exp (- 2 \alpha \lambda t) \left(\mathcal {L} \left(q _ {0}\right) - \mathcal {L} \left(q _ {*}\right)\right).
|
| 290 |
+
$$
|
| 291 |
+
|
| 292 |
+
Proof. From Proposition 1 and (9), we have
|
| 293 |
+
|
| 294 |
+
$$
|
| 295 |
+
\begin{array}{l} \frac {\mathrm {d}}{\mathrm {d} t} \left(\mathcal {L} \left(q _ {t}\right) - \mathcal {L} \left(q _ {*}\right)\right) \\ = \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \frac {\partial q _ {t}}{\partial t} (\theta) \mathrm {d} \theta \\ = \lambda \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \nabla \cdot \left(q _ {t} (\theta) \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta)\right) d \theta \\ = - \lambda \int q _ {t} (\theta) \nabla \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) ^ {\top} \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \mathrm {d} \theta \\ = - \lambda^ {2} \int q _ {t} (\theta) \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} \mathrm {d} \theta \\ \leq - 2 \alpha \lambda^ {2} \mathrm {K L} \left(q _ {t} \| p _ {q _ {t}}\right) \\ \leq - 2 \alpha \lambda \left(\mathcal {L} \left(q _ {t}\right) - \mathcal {L} \left(q _ {*}\right)\right). \\ \end{array}
|
| 296 |
+
$$
|
| 297 |
+
|
| 298 |
+
The statement then follows from a straightforward application of Gronwall's inequality.
|
| 299 |
+
|
| 300 |
+
As a corollary, we can also show the convergence of $\mathrm{KL}(q_t\| p_{q_t})$ in the following sense.
|
| 301 |
+
|
| 302 |
+
Corollary 1. Under the same setting as Theorem 1, we have for $t \geq 1$ ,
|
| 303 |
+
|
| 304 |
+
$$
|
| 305 |
+
\inf _ {s \in [ 0, t ]} \mathrm {K L} \left(q _ {s} \| p _ {q _ {s}}\right) \leq \frac {\exp (- 2 \alpha \lambda (t - 1))}{2 \alpha \lambda^ {2}} \left(\mathcal {L} \left(q _ {0}\right) - \mathcal {L} \left(q _ {*}\right)\right).
|
| 306 |
+
$$
|
| 307 |
+
|
| 308 |
+
# 3.3 Convergence Rate in Discrete Time
|
| 309 |
+
|
| 310 |
+
For the standard Langevin dynamics (i.e., linear Fokker-Planck equation), exponential convergence in continuous time often implies the same convergence up to certain error (depending on the step size) in discrete time. In this section we show that the same property also holds for the mean field Langevin dynamics (7). We provide a convergence rate analysis of the discrete-time dynamics (10) by adapting a version of the "one-step interpolation" argument presented in Vempala and Wibisono (2019). Since analysis of one single step of the dynamics (10) is the key, we adopt the following notations for conciseness. Let $\theta^{(k)}\sim q^{(k)}(\theta)\mathrm{d}\theta$ be a random variable that represents the current iteration, and let $\theta_t^{(k + 1)}$ be the next iterate of noisy gradient descent with step size $t > 0$ :
|
| 311 |
+
|
| 312 |
+
$$
|
| 313 |
+
\theta_ {t} ^ {(k + 1)} = \theta^ {(k)} - t \nabla \frac {\delta F}{\delta q} \left(q ^ {(k)}\right) \left(\theta^ {(k)}\right) + \sqrt {2 \lambda t} \xi^ {(k)}, \tag {18}
|
| 314 |
+
$$
|
| 315 |
+
|
| 316 |
+
where $\xi^{(k)}\sim \mathcal{N}(0,I_d)$. We denote the probability distribution of $\theta_t^{(k + 1)}$ by $q_{t}^{(k + 1)}(\theta)\mathrm{d}\theta$. Note that this step is equivalent to (10) when $t = \eta$, that is, $\theta_{\eta}^{(k + 1)} = \theta^{(k + 1)}$ and $q_{\eta}^{(k + 1)} = q^{(k + 1)}$. We define $\delta_{q^{(k)},t}$ as
|
| 317 |
+
|
| 318 |
+
$$
|
| 319 |
+
\delta_{q^{(k)},t} \stackrel{\text{def}}{=} \mathbb{E}_{\left(\theta^{(k)}, \theta_t^{(k+1)}\right)} \left\| \nabla \frac{\delta F}{\delta q}\left(q^{(k)}\right)\left(\theta^{(k)}\right) - \nabla \frac{\delta F}{\delta q}\left(q_t^{(k+1)}\right)\left(\theta_t^{(k+1)}\right) \right\|_2^2 ,
|
| 320 |
+
$$
|
| 321 |
+
|
| 322 |
+
where $\mathbb{E}_{(\theta^{(k)},\theta_t^{(k + 1)})}$ is the expectation over the joint distribution of $\theta^{(k)}$ and $\theta_t^{(k + 1)}$ . Note that the term $\delta_{q^{(k)},t}$ can be recognized as a discretization error which comes from the positive step size $t > 0$ , and this error usually decays to zero in common settings as $t\to 0$ . Thus, by carefully incorporating this error into the proof of Theorem 1, we can show the convergence of the discrete-time dynamics (10) up to a certain error proportional to a constant $\delta_{\eta}$ depending on $\eta$ . The complete proof is deferred to Appendix A.3.
|
| 323 |
+
|
| 324 |
+
Theorem 2. Let $\{\theta^{(k)}\}_{k = 0}^{\infty}$ be the iterations of random variables generated by the discrete-time dynamics (10) with the step size $\eta$ and $\{q^{(k)}\}_{k = 0}^{\infty}$ be the corresponding probability distributions. Suppose Assumptions 1 and 2 hold and there exists a constant $\delta_{\eta}$ such that $\delta_{q^{(k)},t} \leq \delta_{\eta}$ for any $0 < t \leq \eta$ and non-negative integer $k$. Then, it follows that
|
| 325 |
+
|
| 326 |
+
$$
|
| 327 |
+
\begin{array}{l} \mathcal {L} (q ^ {(k)}) - \mathcal {L} (q _ {*}) \leq \\ \frac {\delta_ {\eta}}{2 \alpha \lambda} + \exp (- \alpha \lambda \eta k) \left(\mathcal {L} (q ^ {(0)}) - \mathcal {L} (q _ {*})\right). \\ \end{array}
|
| 328 |
+
$$
|
| 329 |
+
|
| 330 |
+
Proof sketch. We first present the one-step analysis for iteration (18) with step size $\eta$ . Consider the stochastic differential equation:
|
| 331 |
+
|
| 332 |
+
$$
|
| 333 |
+
\mathrm {d} \theta_ {t} = - \nabla \frac {\delta F}{\delta q} (q ^ {(k)}) (\theta_ {0}) \mathrm {d} t + \sqrt {2 \lambda} \mathrm {d} W _ {t}, \tag {19}
|
| 334 |
+
$$
|
| 335 |
+
|
| 336 |
+
where $\theta_0 = \theta^{(k)}$ and $W_{t}$ is the Brownian motion in $\mathbb{R}^d$ with $W_{0} = 0$ . Then, (18) is the solution of this equation at time $t$ . We denote by $q_{0t}(\theta_0,\theta_t)$ the joint probability distribution of $(\theta_0,\theta_t)$ for time $t$ , and by $q_{t|0}$ , $q_{0|t}$ and $q_{0}$ , $q_{t}$ conditional and marginal distributions. That is, $q_{0} = q^{(k)}$ , $q_{t} = q_{t}^{(k + 1)}$ (i.e., $\theta_t\stackrel {\mathrm{d}}{=}\theta_t^{(k + 1)})$ , and
|
| 337 |
+
|
| 338 |
+
$$
|
| 339 |
+
q _ {0 t} (\theta_ {0}, \theta_ {t}) = q _ {0} (\theta_ {0}) q _ {t | 0} (\theta_ {t} | \theta_ {0}) = q _ {t} (\theta_ {t}) q _ {0 | t} (\theta_ {0} | \theta_ {t}).
|
| 340 |
+
$$
|
| 341 |
+
|
| 342 |
+
The continuity equation of $q_{t|0}$ conditioning on $\theta_0$ can be described as follows (see Section 7 of Vempala and Wibisono (2019) for details):
|
| 343 |
+
|
| 344 |
+
$$
|
| 345 |
+
\begin{array}{l} \frac {\partial q _ {t | 0} \left(\theta_ {t} \mid \theta_ {0}\right)}{\partial t} = \nabla \cdot \left(q _ {t | 0} \left(\theta_ {t} \mid \theta_ {0}\right) \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right)\right) \\ + \lambda \Delta q _ {t | 0} (\theta_ {t} | \theta_ {0}). \\ \end{array}
|
| 346 |
+
$$
|
| 347 |
+
|
| 348 |
+
Therefore, we obtain the following description of $q_{t}$ :
|
| 349 |
+
|
| 350 |
+
$$
|
| 351 |
+
\begin{array}{l} \frac {\partial q _ {t} \left(\theta_ {t}\right)}{\partial t} = \lambda \nabla \cdot \left(q _ {t} \left(\theta_ {t}\right) \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} \left(\theta_ {t}\right)\right) \\ + \nabla \cdot \left\{q _ {t} \left(\theta_ {t}\right) \left(\mathbb {E} _ {\theta_ {0} \mid \theta_ {t}} \left[ \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right) \right| \theta_ {t} \right] \right. \\ \left. \left. - \nabla \frac {\delta F}{\delta q} \left(q _ {t}\right) \left(\theta_ {t}\right)\right) \right\}, \tag {20} \\ \end{array}
|
| 352 |
+
$$
|
| 353 |
+
|
| 354 |
+
where $p_{q_t}(\cdot)\propto \exp \left(-\frac{1}{\lambda} \frac{\delta F}{\delta q} (q_t)(\cdot)\right)$ as in Definition 1. By Assumption 2 and (20), for $0\leq t\leq \eta$ we have
|
| 355 |
+
|
| 356 |
+
$$
|
| 357 |
+
\begin{array}{l} \frac {\mathrm {d} \mathcal {L}}{\mathrm {d} t} (q _ {t}) = \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \frac {\partial q _ {t}}{\partial t} (\theta) \mathrm {d} \theta \\ \leq - \frac {\lambda^ {2}}{2} \int q _ {t} (\theta) \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} \mathrm {d} \theta \\ + \frac {1}{2} \mathbb {E} _ {\left(\theta_ {0}, \theta\right) \sim q _ {0 t}} \left\| \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right) - \nabla \frac {\delta F}{\delta q} \left(q _ {t}\right) (\theta) \right\| _ {2} ^ {2} \\ \leq - \alpha \lambda (\mathcal {L} (q _ {t}) - \mathcal {L} (q _ {*})) + \frac {1}{2} \delta_ {\eta}, \\ \end{array}
|
| 358 |
+
$$
|
| 359 |
+
|
| 360 |
+
where we used $(\theta_0,\theta_t)\stackrel {\mathrm{d}}{=}(\theta^{(k)},\theta_t^{(k + 1)})$ to bound the last expectation by $\delta_{q^{(k)},t}\leq \delta_{\eta}$ . Noting $q_{\eta} = q^{(k + 1)}$ and $q_{0} = q^{(k)}$ , Gronwall's inequality yields
|
| 361 |
+
|
| 362 |
+
$$
|
| 363 |
+
\begin{array}{l} \mathcal {L} (q ^ {(k + 1)}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda} \\ \leq \exp (- \alpha \lambda \eta) \left(\mathcal {L} (q ^ {(k)}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda}\right). \\ \end{array}
|
| 364 |
+
$$
|
| 365 |
+
|
| 366 |
+
This reduction holds at every iteration of (19), which concludes the proof.
|
| 367 |
+
|
| 368 |
+
The proof indicates that the continuity equation (20) of the dynamics (19) associated with noisy gradient descent contains a discretization error term compared to the continuous-time counterpart (7). Usually, this error depends on the step size $\eta$ , and hence $\eta$ should be chosen sufficiently small to achieve the required optimization accuracy. Moreover, to derive a convergence rate, the relationship between $\eta$ and the discretization error should be explicitly characterized. The following lemma outlines this dependency in the case of mean field neural networks (13).
|
| 369 |
+
|
| 370 |
+
Lemma 1. Consider the loss minimization setting using mean field neural networks in Section 2.3 with $r(\theta) = \| \theta \|_2^2$ . Suppose that $\ell(\cdot, y)$ and $h(\cdot, x)$ are differentiable and sufficiently smooth, that is, there exist positive constants $C_1, \ldots, C_4$ such that $|\partial_z \ell(z, y)| \leq C_1$ , $|\partial_z \ell(z, y) - \partial_z \ell(z', y)| \leq C_2 |z - z'|$ , $\| \partial_\theta h_\theta(x) \|_2 \leq C_3$ , and $\| \partial_\theta h_\theta(x) - \partial_\theta h_{\theta'}(x) \|_2 \leq C_4 \| \theta - \theta' \|_2$ . Also suppose $2\lambda' \eta < 1$ and the following condition holds for the iterate $\theta^{(k)} \sim q^{(k)}(\theta) \mathrm{d}\theta$ .
|
| 371 |
+
|
| 372 |
+
$$
|
| 373 |
+
\mathbb {E} _ {\theta^ {(k)}} \left[ \| \theta^ {(k)} \| _ {2} ^ {2} \right] \leq \frac {\eta C _ {1} ^ {2} C _ {3} ^ {2} + 2 \lambda d}{2 \eta \lambda^ {\prime 2}}. \tag {21}
|
| 374 |
+
$$
|
| 375 |
+
|
| 376 |
+
Then, we get for any $0 < t \leq \eta$ ,
|
| 377 |
+
|
| 378 |
+
$$
|
| 379 |
+
\delta_ {q ^ {(k)}, t} \leq 4 0 \eta \left(C _ {2} ^ {2} C _ {3} ^ {4} + \left(C _ {1} C _ {4} + 2 \lambda^ {\prime}\right) ^ {2}\right) \left(\eta C _ {1} ^ {2} C _ {3} ^ {2} + \lambda d\right). \tag {22}
|
| 380 |
+
$$
|
| 381 |
+
|
| 382 |
+
In addition, the same bound as (21) also holds for the second moment $\mathbb{E}\big[\|\theta^{(k + 1)}\|_2^2\big]$ of the next iterate.
|
| 383 |
+
|
| 384 |
+
This is to say, $\delta_{q^{(k)},t} = O(\eta)$ for all $k$ as long as (21) holds for $k = 0$ . Therefore, in combination with Theorem 2, we arrive at a convergence rate guarantee for the discrete-time dynamics in optimizing mean field neural networks (up to certain error). Specifically, the following Corollary implies an iteration complexity of $O\left(\frac{1}{\epsilon\alpha^2\lambda^2}\log \frac{1}{\epsilon}\right)$ to achieve an $\epsilon$ -accurate solution.
|
| 385 |
+
|
| 386 |
+
Corollary 2. Consider the same setting as Lemma 1 and suppose Assumption 2 holds. Let $\{\theta^{(k)}\}_{k = 0}^{\infty}$ be the iterations of random variables generated by the discrete-time dynamics (10) with the step size $\eta = O(\epsilon \alpha \lambda)$ and $\{q^{(k)}\}_{k = 0}^{\infty}$ be the corresponding probability distributions. Then, if the condition (21) is satisfied for the initial iterate $\theta^{(0)}\sim q^{(0)}(\theta)\mathrm{d}\theta$, we know that for any step size satisfying $2\lambda^{\prime}\eta < 1$, the following statement holds for $k = 0,1,2,3,\dots$:
|
| 387 |
+
|
| 388 |
+
$$
|
| 389 |
+
\begin{array}{l} \mathcal {L} \left(q ^ {(k)}\right) - \mathcal {L} \left(q _ {*}\right) = \\ O (\epsilon) + \exp (- O \left(\epsilon \alpha^ {2} \lambda^ {2} k\right)) \left(\mathcal {L} \left(q ^ {(0)}\right) - \mathcal {L} \left(q _ {*}\right)\right). \tag {23} \\ \end{array}
|
| 390 |
+
$$
|
| 391 |
+
|
| 392 |
+
Finally, we remark that discretization error induced by finite-particle approximation can also be controlled via a direct application of Theorem 3 of Mei et al. (2018). However, such finite-particle error grows exponentially
|
| 393 |
+
|
| 394 |
+
with the time horizon, and thus is not negligible unless the exponent in the linear convergence (23) is sufficiently large. In future work, we intend to investigate conditions under which such exponential dependence can be avoided (e.g., as in Chen et al. (2020b)).
|
| 395 |
+
|
| 396 |
+
# 4 PRIMAL-DUAL VIEWPOINT
|
| 397 |
+
|
| 398 |
+
As seen in Section 3, the proximal Gibbs distribution $p_{q}$ plays an important role in our convergence rate analysis. In this section, we complement the previous results by presenting a primal and dual perspective of this proximal distribution in the (regularized) empirical risk minimization setting. Based on this connection, we show that the duality gap can be minimized by the mean field Langevin dynamics (7).
|
| 399 |
+
|
| 400 |
+
# 4.1 Primal-dual Problem
|
| 401 |
+
|
| 402 |
+
We first introduce a primal-dual formulation of the empirical risk minimization problem. For a training dataset $\{(x_i, y_i)\}_{i=1}^n$ and a loss function $\ell(z, y)$ that is differentiable and convex in $z$, we consider the minimization of the following regularized empirical risk:
|
| 403 |
+
|
| 404 |
+
$$
|
| 405 |
+
\mathcal {L} (q) = \frac {1}{n} \sum_ {i = 1} ^ {n} \ell \left(h _ {q} \left(x _ {i}\right), y _ {i}\right) + \lambda^ {\prime} \mathbb {E} _ {\theta \sim q} [ \| \theta \| _ {2} ^ {2} ] + \lambda \mathbb {E} _ {q} [ \log q ]. \tag {24}
|
| 406 |
+
$$
|
| 407 |
+
|
| 408 |
+
Note that this problem is a special case of (4) by setting $F(q) = \frac{1}{n}\sum_{i = 1}^{n}\ell (h_q(x_i),y_i) + \lambda '\mathbb{E}_{\theta \sim q}[\| \theta \| _2^2 ]$ . Write $\ell_{i}(z) = \ell (z,y_{i})$ and $\ell_i^* (\cdot)$ as its Fenchel conjugate, i.e.,
|
| 409 |
+
|
| 410 |
+
$$
|
| 411 |
+
\ell_{i}^{*}(z^{*}) = \sup_{z \in \mathbb{R}} \left\{ z z^{*} - \ell_{i}(z) \right\} \quad \text{for } z^{*} \in \mathbb{R}.
|
| 412 |
+
$$
|
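For instance, taking the squared loss in the form $\ell_i(z) = \frac{1}{2}(z - y_i)^2$ (the loss used in the regression experiment of Section 4.2, up to scaling), the supremum is attained at $z = y_i + z^*$ and

$$
\ell_i^*(z^*) = y_i z^* + \tfrac{1}{2}(z^*)^2 .
$$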
| 413 |
+
|
| 414 |
+
Also, for any given vector $g = \{g_i\}_{i=1}^n \in \mathbb{R}^n$ , we define
|
| 415 |
+
|
| 416 |
+
$$
|
| 417 |
+
q _ {g} (\theta) = \exp \left(- \frac {1}{\lambda} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} h _ {\theta} (x _ {i}) g _ {i} + \lambda^ {\prime} \| \theta \| _ {2} ^ {2}\right)\right).
|
| 418 |
+
$$
|
| 419 |
+
|
| 420 |
+
Then, the dual problem of (24) is defined as
|
| 421 |
+
|
| 422 |
+
$$
|
| 423 |
+
\max _ {g \in \mathbb {R} ^ {n}} \left\{\mathcal {D} (g) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \ell_ {i} ^ {*} (g _ {i}) - \lambda \log \int q _ {g} (\theta) \mathrm {d} \theta \right\}. \tag {25}
|
| 424 |
+
$$
|
| 425 |
+
|
| 426 |
+
The duality theorem (Rockafellar, 1970; Bauschke et al., 2011; Oko et al., 2022) guarantees the relationship $\mathcal{D}(g)\leq \mathcal{L}(q)$ for any $g\in \mathbb{R}^n$ and $q\in \mathcal{P}$ , and it is known that the duality gap $\mathcal{L}(q) - \mathcal{D}(g)$ vanishes at the solutions of (24) and (25) when they exist. In our problem setting, it is possible to establish a stronger and more precise result.
|
| 427 |
+
|
| 428 |
+
We denote $g_{q} = \{\partial_{z}\ell (z,y_{i})|_{z = h_{q}(x_{i})}\}_{i = 1}^{n}\in \mathbb{R}^{n}$ $(q\in \mathcal{P})$. The following theorem exactly characterizes the duality
|
| 429 |
+
|
| 430 |
+
gap $\mathcal{L}(q) - \mathcal{D}(g_q)$ between $q\in \mathcal{P}$ and $g_{q}\in \mathbb{R}^{n}$ via the proximal Gibbs distribution $p_q$.
|
| 431 |
+
|
| 432 |
+
Theorem 3 (Duality Theorem). Suppose $\ell(\cdot, y)$ is convex and differentiable. For any $q \in \mathcal{P}$ the duality gap between $q \in \mathcal{P}$ and $g_{q}$ of the problems (24) and (25) is
|
| 433 |
+
|
| 434 |
+
$$
|
| 435 |
+
0 \leq \mathcal {L} (q) - \mathcal {D} (g _ {q}) = \lambda \mathrm {K L} (q \| p _ {q}).
|
| 436 |
+
$$
|
| 437 |
+
|
| 438 |
+
We make the following observations. First, this theorem can be seen as a refinement of Proposition 1, in which $\lambda \mathrm{KL}(q\| p_q)$ upper bounds the optimality gap $\mathcal{L}(q) - \mathcal{L}(q_{*})$ in the general setting. Theorem 3 reveals that $\lambda \mathrm{KL}(q\| p_q)$ is exactly the duality gap $\mathcal{L}(q) - \mathcal{D}(g_q)$ for the empirical risk minimization problems. Second, because of the relationship $p_q(\theta) = \frac{q_{g_q}(\theta)}{\int q_{g_q}(\theta)\mathrm{d}\theta}$ , we notice that the proximal Gibbs distribution $p_q$ of $q$ can be seen as a "round trip" between the primal and dual spaces: $q\to g_q\to q_{g_q}\propto p_q$ ; the equation $\mathcal{L}(q) - \mathcal{D}(g_q) = \lambda \mathrm{KL}(q\| p_q)$ gives an interesting relationship among these variables. Third, combining Theorem 3 with the duality theorem (e.g., see Proposition 1 of Oko et al. (2022)), namely $\mathcal{D}(g)\leq \mathcal{L}(q)$ for any $q\in \mathcal{P}$ and $g\in \mathbb{R}^n$ , we see that $g_{q_*}$ is a solution of the dual problem (25).
|
| 439 |
+
|
| 440 |
+
Another interesting quantity to investigate is the primal objective $\mathcal{L}(p_q)$ of the proximal Gibbs distribution $p_q$ . The next theorem gives an upper bound on a duality gap between $p_q$ and a dual variable $g_q$ .
|
| 441 |
+
|
| 442 |
+
Theorem 4 (Second Duality Theorem). Suppose $\ell(\cdot, y)$ is convex and $C_2$ -smooth, that is, $|\partial_z \ell(z, y) - \partial_z \ell(z', y)| \leq C_2 |z - z'|$ , and $|h_\theta(x)| \leq B$ . Then for any $q \in \mathcal{P}$ , the duality gap between $p_q \in \mathcal{P}$ and $g_q$ of the problems (24) and (25) satisfies
|
| 443 |
+
|
| 444 |
+
$$
|
| 445 |
+
0 \leq \mathcal {L} (p _ {q}) - \mathcal {D} (g _ {q}) \leq (\lambda + 2 B ^ {2} C _ {2}) \mathrm {K L} (q \| p _ {q}).
|
| 446 |
+
$$
|
| 447 |
+
|
| 448 |
+
This theorem provides a new option for solving the problem (24). That is, after obtaining $q$ by the optimization, we can instead use $p_q$ as the solution, which may be efficiently approximated by sampling methods for Gibbs distributions.
|
| 449 |
+
|
| 450 |
+
# 4.2 Convergence of Duality Gap
|
| 451 |
+
|
| 452 |
+
Combining Theorem 3 with Corollary 1, we see that mean field Langevin dynamics solves both the primal and dual problems in the following sense.
|
| 453 |
+
|
| 454 |
+
Corollary 3. Consider the evolution $\{q_t\}_{t\geq 0}$ , which satisfies (8) for the problem (24) under the same settings as in Corollary 1 and Theorem 3. Let $\{g_{q_t}\}_{t\geq 0}$ be the
|
| 455 |
+
|
| 456 |
+
associated dynamics in $\mathbb{R}^n$ . Then for $t \geq 1$ ,
|
| 457 |
+
|
| 458 |
+
$$
|
| 459 |
+
\begin{array}{l} \inf _ {s \in [ 0, t ]} \left\{\mathcal {L} \left(q _ {s}\right) - \mathcal {D} \left(g _ {q _ {s}}\right) \right\} \\ \leq \frac {\exp (- 2 \alpha \lambda (t - 1))}{2 \alpha \lambda} \left(\mathcal {L} \left(q _ {0}\right) - \mathcal {L} \left(q _ {*}\right)\right). \\ \end{array}
|
| 460 |
+
$$
|
| 461 |
+
|
| 462 |
+
Thus, we can conclude that the mean field Langevin dynamics also solves the dual problem, that is, $\mathcal{D}(g_{q_t})$ also converges to $\mathcal{D}(g_{q_*})$ . Following the same reasoning, we can derive a corollary of Theorem 4 demonstrating the convergence of $\mathcal{L}(p_{q_t}) - \mathcal{D}(g_{q_t})$ .
|
| 463 |
+
|
| 464 |
+
Evaluation of duality gap. One benefit of this primal-dual formulation is that we can observe the optimization gap by computing the duality gap $\mathcal{L}(q) - \mathcal{D}(g)$ or $\mathcal{L}(p_q) - \mathcal{D}(g)$, without knowledge of the optimal value $\mathcal{L}(q_{*})$. In Figure 2 we empirically demonstrate the duality gap on a regression problem with the squared loss. We set $n = 1000, d = 5$, and consider a simple student-teacher setting, where the teacher model is a two-layer sigmoid network with orthogonal neurons, and the student model $h_\Theta$ is a two-layer mean field neural network of width $M = 1000$ with tanh activation. The student model is optimized by noisy gradient descent with $\eta = 0.01$, and we use the Langevin algorithm to obtain approximate samples from the proximal Gibbs distribution $p_q$. For the primal objective $\mathcal{L}$, we adopt the $k$-nearest neighbors estimator (Kozachenko and Leonenko, 1987) with $k = 10$ to estimate the entropy; for the dual objective $\mathcal{D}$, the approximation of the log integral term is described in Appendix B.1.
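For reference, one common form of the Kozachenko-Leonenko $k$-nearest-neighbor entropy estimator used for the primal objective is sketched below. This is an illustrative implementation applied to particle samples from $q$; constant conventions differ slightly across references, and the function name is ours.

```python
import numpy as np
from scipy.special import digamma, gammaln
from sklearn.neighbors import NearestNeighbors

def knn_entropy(samples, k=10):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (in nats).

    samples: (n, d) array of (assumed distinct) particles approximating q."""
    n, d = samples.shape
    # distance from each point to its k-th nearest neighbour (column 0 is the point itself)
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(samples).kneighbors(samples)
    eps = dist[:, -1]
    # log-volume of the d-dimensional unit ball
    log_c_d = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))
```

The negative entropy term $\lambda\,\mathbb{E}_q[\log q]$ in $\mathcal{L}(q)$ is then estimated as $-\lambda$ times this value.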
|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
Figure 2: Illustration of duality gap: two-layer tanh network optimizing the empirical squared error. We set $\lambda = \lambda' = 10^{-2}$ .
|
| 468 |
+
|
| 469 |
+
Observe that towards the end of training, the primal objectives ($\mathcal{L}(q)$ and $\mathcal{L}(p_q)$) and the dual objective ($\mathcal{D}(g_q)$) become close, which is consistent with Theorems 3 and 4.
|
| 470 |
+
|
| 471 |
+
# CONCLUSION
|
| 472 |
+
|
| 473 |
+
We established a quantitative global convergence guarantee for the mean field Langevin dynamics in both continuous and discrete time, by adapting convex optimization techniques to the space of measures in combination with standard analysis of the Langevin dynamics. Looking forward, an interesting future direction is to conduct the analysis under weaker isoperimetry conditions such as the Poincaré inequality, which
|
| 474 |
+
|
| 475 |
+
covers more general objectives. It is also important to refine our convergence result (e.g., exponential dependence on $1 / \lambda$ in the LSI constant) under additional structure of the learning problem. Another interesting direction is to explore applications of the mean field dynamics beyond the optimization of neural networks.
|
| 476 |
+
|
| 477 |
+
# Acknowledgment
|
| 478 |
+
|
| 479 |
+
AN was partially supported by JSPS KAKENHI (19K20337) and JST-PRESTO (JPMJPR1928). DW was partially supported by NSERC and LG Electronics. TS was partially supported by JSPS KAKENHI (18H03201), Japan Digital Design and JST CREST.
|
| 480 |
+
|
| 481 |
+
# References
|
| 482 |
+
|
| 483 |
+
Akiyama, S. and Suzuki, T. (2021). On learnability via gradient method for two-layer ReLU neural networks in the teacher-student setting. In Proceedings of International Conference on Machine Learning 38, pages 152-162.
|
| 484 |
+
Bakry, D. and Émery, M. (1985). Diffusions hypercontractives. In Séminaire de Probabilités XIX, volume 1123 of Lecture Notes in Mathematics. Springer.
|
| 485 |
+
Bakry, D., Gentil, I., and Ledoux, M. (2013). Analysis and geometry of Markov diffusion operators, volume 348. Springer Science & Business Media.
|
| 486 |
+
Bauschke, H. H., Combettes, P. L., et al. (2011). Convex analysis and monotone operator theory in Hilbert spaces, volume 408. Springer.
|
| 487 |
+
Bou-Rabee, N. and Eberle, A. (2021). Mixing time guarantees for unadjusted Hamiltonian Monte Carlo. arXiv e-prints, pages arXiv-2105.
|
| 488 |
+
Bou-Rabee, N. and Schuh, K. (2020). Convergence of unadjusted Hamiltonian Monte Carlo for mean-field models. arXiv preprint arXiv:2009.08735.
|
| 489 |
+
Charles, Z. and Papailiopoulos, D. (2018). Stability and generalization of learning algorithms that converge to global optima. In Proceedings of International Conference on Machine Learning 35, pages 745-754.
|
| 490 |
+
Chen, Z., Cao, Y., Gu, Q., and Zhang, T. (2020a). A generalized neural tangent kernel analysis for two-layer neural networks. arXiv preprint arXiv:2002.04026.
|
| 491 |
+
Chen, Z., Rotskoff, G. M., Bruna, J., and Vanden-Eijnden, E. (2020b). A dynamical central limit theorem for shallow neural networks. arXiv preprint arXiv:2008.09623.
|
| 492 |
+
Chizat, L. (2021a). Convergence rates of gradient methods for convex optimization in the space of measures. arXiv preprint arXiv:2105.08368.
|
| 493 |
+
Chizat, L. (2021b). Sparse optimization on measures with over-parameterized gradient descent. Mathematical Programming, pages 1-46.
|
| 494 |
+
Chizat, L. (2022). Mean-field Langevin dynamics: Exponential convergence and annealing. arXiv preprint arXiv:2202.01009.
|
| 495 |
+
Chizat, L. and Bach, F. (2018). On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems 31, pages 3040-3050.
|
| 496 |
+
|
| 497 |
+
Dalalyan, A. S. (2014). Theoretical guarantees for approximate sampling from smooth and log-concave densities. arXiv preprint arXiv:1412.7392.
|
| 498 |
+
Dalalyan, A. S. (2017). Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. arXiv preprint arXiv:1704.04752.
|
| 499 |
+
Durmus, A. and Moulines, E. (2017). Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. The Annals of Applied Probability, 27(3):1551-1587.
|
| 500 |
+
Eberle, A., Guillin, A., and Zimmer, R. (2019). Quantitative Harris-type theorems for diffusions and McKean-Vlasov processes. Transactions of the American Mathematical Society, 371(10):7135-7173.
|
| 501 |
+
Erdogdu, M. A. and Hosseinzadeh, R. (2020). On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness. arXiv preprint arXiv:2005.13097.
|
| 502 |
+
Erdogdu, M. A., Hosseinzadeh, R., and Zhang, M. S. (2021). Convergence of Langevin Monte Carlo in chi-squared and Rényi divergence.
|
| 503 |
+
Ghorbani, B., Mei, S., Misiakiewicz, T., and Montanari, A. (2019). Limitations of lazy training of two-layers neural network. In Advances in Neural Information Processing Systems 32, pages 9111-9121.
|
| 504 |
+
Guillin, A., Liu, W., Wu, L., and Zhang, C. (2019). Uniform Poincaré and logarithmic Sobolev inequalities for mean field particle systems. arXiv preprint arXiv:1909.07051.
|
| 505 |
+
Guillin, A., Liu, W., Wu, L., and Zhang, C. (2021). The kinetic Fokker-Planck equation with mean field interaction. Journal de Mathématiques Pures et Appliquées, 150:1-23.
|
| 506 |
+
Holley, R. and Stroock, D. (1987). Logarithmic Sobolev inequalities and stochastic Ising models. Journal of Statistical Physics, 46(5-6):1159-1194.
|
| 507 |
+
Hu, K., Ren, Z., Siska, D., and Szpruch, L. (2019). Mean-field Langevin dynamics and energy landscape of neural networks. arXiv preprint arXiv:1905.07769.
|
| 508 |
+
Jabir, J.-F., Šiška, D., and Szpruch, L. (2019). Meanfield neural odes via relaxed optimal control. arXiv preprint arXiv:1912.05475.
|
| 509 |
+
Jacot, A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems 31, pages 8580-8589.
|
| 510 |
+
|
| 511 |
+
Javanmard, A., Mondelli, M., and Montanari, A. (2019). Analysis of a two-layer neural network via displacement convexity. arXiv preprint arXiv:1901.01375.
|
| 512 |
+
Jordan, R., Kinderlehrer, D., and Otto, F. (1998). The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1-17.
|
| 513 |
+
Kazeykina, A., Ren, Z., Tan, X., and Yang, J. (2020). Ergodicity of the underdamped mean-field Langevin dynamics. arXiv preprint arXiv:2007.14660.
|
| 514 |
+
Kent, C., Blanchet, J., and Glynn, P. (2021). Frank-Wolfe methods in probability space. arXiv preprint arXiv:2105.05352.
|
| 515 |
+
Kozachenko, L. and Leonenko, N. N. (1987). Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9-16.
|
| 516 |
+
Li, X., Wu, Y., Mackey, L., and Erdogdu, M. A. (2019). Stochastic Runge-Kutta accelerates Langevin Monte Carlo and beyond. In Advances in Neural Information Processing Systems 32, pages 7748-7760.
|
| 517 |
+
Mei, S., Misiakiewicz, T., and Montanari, A. (2019). Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. arXiv preprint arXiv:1902.06015.
|
| 518 |
+
Mei, S., Montanari, A., and Nguyen, P.-M. (2018). A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665-E7671.
|
| 519 |
+
Menz, G. and Schlichting, A. (2014). Poincaré and logarithmic Sobolev inequalities by decomposition of the energy landscape. The Annals of Probability, 42(5):1809-1884.
|
| 520 |
+
Milstein, G. N. and Tretyakov, M. V. (2013). Stochastic numerics for mathematical physics. Springer Science & Business Media.
|
| 521 |
+
Monmarché, P. (2017). Long-time behaviour and propagation of chaos for mean field kinetic particles. Stochastic Processes and their Applications, 127(6):1721-1737.
|
| 522 |
+
Nitanda, A. and Suzuki, T. (2017). Stochastic particle gradient descent for infinite ensembles. arXiv preprint arXiv:1712.05438.
|
| 523 |
+
Nitanda, A., Wu, D., and Suzuki, T. (2021). Particle dual averaging: Optimization of mean field neural networks with global convergence rate analysis. In Advances in Neural Information Processing Systems 34.
|
| 524 |
+
|
| 525 |
+
Oko, K., Suzuki, T., Nitanda, A., and Wu, D. (2022). Particle stochastic dual coordinate ascent: Exponential convergent algorithm for mean field neural network optimization. In Proceedings of the 10th International Conference on Learning Representations.
|
| 526 |
+
Rockafellar, R. T. (1970). Convex Analysis. Princeton University Press, Princeton.
|
| 527 |
+
Rotskoff, G. M., Jelassi, S., Bruna, J., and Vanden-Eijnden, E. (2019). Global convergence of neuron birth-death dynamics. In Proceedings of International Conference on Machine Learning 36, pages 9689-9698.
|
| 528 |
+
Rotskoff, G. M. and Vanden-Eijnden, E. (2018). Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems 31, pages 7146-7155.
|
| 529 |
+
Sirignano, J. and Spiliopoulos, K. (2020). Mean field analysis of neural networks: A central limit theorem. Stochastic Processes and their Applications, 130(3):1820-1852.
|
| 530 |
+
Suzuki, T. (2019). Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. In Proceedings of the 7th International Conference on Learning Representations.
|
| 531 |
+
Vempala, S. and Wibisono, A. (2019). Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In Advances in Neural Information Processing Systems 32, pages 8094-8106.
|
| 532 |
+
Villani, C. (2009). Hypocoercivity. Memoirs of the American Mathematical Society, 202(950).
|
| 533 |
+
Wei, C., Lee, J. D., Liu, Q., and Ma, T. (2019). Regularization matters: Generalization and optimization of neural nets vs their induced kernel. In Advances in Neural Information Processing Systems 32, pages 9712-9724.
|
| 534 |
+
Wibisono, A. (2018). Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Proceedings of Conference on Learning Theory 31, pages 2093-3027.
|
| 535 |
+
Ying, L. (2020). Mirror descent algorithms for minimizing interacting free energy. Journal of Scientific Computing, 84(3):1-14.
|
| 536 |
+
|
| 537 |
+
# Table of Contents
|
| 538 |
+
|
| 539 |
+
# 1 INTRODUCTION 1
|
| 540 |
+
|
| 541 |
+
1.1 Contributions 2
|
| 542 |
+
1.2 Related Literature 2
|
| 543 |
+
1.3 Notations 3
|
| 544 |
+
|
| 545 |
+
# 2 PRELIMINARIES 3
|
| 546 |
+
|
| 547 |
+
2.1 Problem Setup 3
|
| 548 |
+
2.2 Optimization Dynamics 4
|
| 549 |
+
2.3 Mean Field Neural Networks 4
|
| 550 |
+
|
| 551 |
+
# 3 CONVERGENCE ANALYSIS 5
|
| 552 |
+
|
| 553 |
+
3.1 Basic Properties and Convexity 5
|
| 554 |
+
3.2 Convergence Rate in Continuous Time 6
|
| 555 |
+
3.3 Convergence Rate in Discrete Time 6
|
| 556 |
+
|
| 557 |
+
# 4 PRIMAL-DUAL VIEWPOINT 8
|
| 558 |
+
|
| 559 |
+
4.1 Primal-dual Problem 8
|
| 560 |
+
4.2 Convergence of Duality Gap 8
|
| 561 |
+
|
| 562 |
+
# A OMITTED PROOFS 13
|
| 563 |
+
|
| 564 |
+
A. 1 Preliminaries: Log-Sobolev Inequality 13
|
| 565 |
+
A. 2 Continuous Time Analysis 13
|
| 566 |
+
A. 3 Discrete Time Analysis 14
|
| 567 |
+
A. 4 Duality Theorems 16
|
| 568 |
+
|
| 569 |
+
# B ADDITIONAL DETAILS 17
|
| 570 |
+
|
| 571 |
+
B. 1 Computation of the Dual Objective 17
|
| 572 |
+
|
| 573 |
+
# Appendix: Convex Analysis of Mean Field Langevin Dynamics
|
| 574 |
+
|
| 575 |
+
# A OMITTED PROOFS
|
| 576 |
+
|
| 577 |
+
# A. 1 Preliminaries: Log-Sobolev Inequality
|
| 578 |
+
|
| 579 |
+
For the proximal Gibbs distribution (14) of the mean field neural network, we can verify that the log-Sobolev inequality (Assumption 2) holds uniformly over $q$ under the boundedness assumptions on $\partial_z\ell (z,y)$ and $h_\theta (x)$ and the $\ell_2$-regularization $r(\theta) = \| \theta \|_2^2$ by utilizing the following two facts. Specifically, these two facts indicate that a Gibbs distribution $p_q$ whose potential is the sum of a strongly convex function and a bounded perturbation satisfies the log-Sobolev inequality.
|
| 580 |
+
|
| 581 |
+
It is well-known that strongly log-concave densities satisfy the LSI with a dimension-free constant (up to the spectral norm of the covariance).
|
| 582 |
+
|
| 583 |
+
Example A (Bakry and Émery (1985)). Let $q \propto \exp(-f)$ be a probability density, where $f: \mathbb{R}^p \to \mathbb{R}$ is a smooth function. If there exists $c > 0$ such that $\nabla^2 f \succeq c I_p$, then $q(\theta) \mathrm{d}\theta$ satisfies the log-Sobolev inequality with constant $c$.
|
| 584 |
+
|
| 585 |
+
In addition, the log-Sobolev inequality is preserved under bounded perturbation, as originally shown in Holley and Stroock (1987).
|
| 586 |
+
|
| 587 |
+
Lemma A (Holley and Stroock (1987)). Let $q(\theta) \mathrm{d}\theta$ be a probability distribution on $\mathbb{R}^p$ satisfying the log-Sobolev inequality with a constant $\alpha$ . For a bounded function $B: \mathbb{R}^p \to \mathbb{R}$ , we define a probability distribution $q_B(\theta) \mathrm{d}\theta$ as follows:
|
| 588 |
+
|
| 589 |
+
$$
|
| 590 |
+
q _ {B} (\theta) \mathrm {d} \theta = \frac {\exp (B (\theta)) q (\theta)}{\mathbb {E} _ {q} [ \exp (B (\theta)) ]} \mathrm {d} \theta .
|
| 591 |
+
$$
|
| 592 |
+
|
| 593 |
+
Then, $q_B \mathrm{d}\theta$ satisfies the log-Sobolev inequality with a constant $\alpha / \exp(4 \| B \|_{\infty})$ .
|
| 594 |
+
|
| 595 |
+
In our case, if $|\partial_z\ell (z,y)|\leq C_1$ , $|h_{\theta}(x)|\leq C_5$ , and $r(\theta) = \| \theta \| _2^2$ , then the proximal Gibbs distribution (14) satisfies the log-Sobolev inequality with a constant $\alpha = \frac{2\lambda'}{\lambda\exp(4C_1C_5\lambda^{-1})}$ . We remark that the exponential dependence in the LSI constant may be unavoidable in the most general setting (Menz and Schlichting, 2014).
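As a quick illustration of how these two facts combine, here is a small helper (ours, not from the paper) that evaluates this constant numerically:

```python
import math

def lsi_constant(lam, lam_prime, C1, C5):
    """LSI constant of the proximal Gibbs distribution (14): Bakry-Emery for the
    Gaussian part exp(-(lam'/lam)||theta||^2) combined with Holley-Stroock for the
    bounded data-fit perturbation (sup-norm at most C1*C5/lam)."""
    c_gaussian = 2.0 * lam_prime / lam        # Hessian of (lam'/lam)||theta||^2 is (2*lam'/lam) I
    sup_B = C1 * C5 / lam                     # bound on the perturbation of the potential
    return c_gaussian / math.exp(4.0 * sup_B) # alpha = 2*lam' / (lam * exp(4*C1*C5/lam))

# e.g. lsi_constant(1e-2, 1e-2, 1.0, 1.0) is astronomically small, reflecting the
# exponential dependence on 1/lam noted above.
```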
|
| 596 |
+
|
| 597 |
+
# A. 2 Continuous Time Analysis
|
| 598 |
+
|
| 599 |
+
Proof of Corollary 1. In the same way as the proof of Theorem 1,
|
| 600 |
+
|
| 601 |
+
$$
|
| 602 |
+
2 \alpha \lambda^ {2} \mathrm {K L} \left(q _ {s} \| p _ {q _ {s}}\right) \leq - \frac {\mathrm {d}}{\mathrm {d} s} \left(\mathcal {L} \left(q _ {s}\right) - \mathcal {L} \left(q _ {*}\right)\right).
|
| 603 |
+
$$
|
| 604 |
+
|
| 605 |
+
By integrating this inequality over the interval $[t - 1, t]$ ($t \in \mathbb{N}$), we get
|
| 606 |
+
|
| 607 |
+
$$
|
| 608 |
+
\begin{array}{l} 2 \alpha \lambda^ {2} \int_ {t - 1} ^ {t} \mathrm {K L} (q _ {s} \| p _ {q _ {s}}) \mathrm {d} s \leq \mathcal {L} (q _ {t - 1}) - \mathcal {L} (q _ {*}) - (\mathcal {L} (q _ {t}) - \mathcal {L} (q _ {*})) \\ \leq \exp (- 2 \alpha \lambda (t - 1)) (\mathcal {L} (q _ {0}) - \mathcal {L} (q _ {*})). \\ \end{array}
|
| 609 |
+
$$
|
| 610 |
+
|
| 611 |
+
Therefore, there exists $s_t \in [t - 1, t]$ ( $t \in \mathbb{N}$ ) such that
|
| 612 |
+
|
| 613 |
+
$$
|
| 614 |
+
\operatorname {K L} \left(q _ {s _ {t}} \| p _ {q _ {s _ {t}}}\right) \leq \frac {\exp (- 2 \alpha \lambda (t - 1))}{2 \alpha \lambda^ {2}} \left(\mathcal {L} \left(q _ {0}\right) - \mathcal {L} \left(q _ {*}\right)\right).
|
| 615 |
+
$$
|
| 616 |
+
|
| 617 |
+
By taking the infimum of $\mathrm{KL}(q_s\| p_{q_s})$ over $[0,t]$ $(t\in \mathbb{N})$ , we have
|
| 618 |
+
|
| 619 |
+
$$
|
| 620 |
+
\inf _ {s \in [ 0, t ]} \mathrm {K L} (q _ {s} \| p _ {q _ {s}}) \leq \frac {\exp (- 2 \alpha \lambda (t - 1))}{2 \alpha \lambda^ {2}} (\mathcal {L} (q _ {0}) - \mathcal {L} (q _ {*})).
|
| 621 |
+
$$
|
| 622 |
+
|
| 623 |
+
# A. 3 Discrete Time Analysis
|
| 624 |
+
|
| 625 |
+
Proof of Theorem 2. We first present the one step analysis for the iteration (18) with the step size $\eta$ . Let us consider the stochastic differential equation:
|
| 626 |
+
|
| 627 |
+
$$
|
| 628 |
+
\mathrm {d} \theta_ {t} = - \nabla \frac {\delta F}{\delta q} \left(q ^ {(k)}\right) \left(\theta_ {0}\right) \mathrm {d} t + \sqrt {2 \lambda} \mathrm {d} W _ {t}, \tag {26}
|
| 629 |
+
$$
|
| 630 |
+
|
| 631 |
+
where $\theta_0 = \theta^{(k)}$ and $W_{t}$ is the Brownian motion in $\mathbb{R}^d$ with $W_{0} = 0$ . Then, the step (18) is the solution of this equation at time $t$ . We denote by $q_{0t}(\theta_0,\theta_t)$ the joint probability distribution of $(\theta_0,\theta_t)$ for time $t$ , and by $q_{t|0}$ , $q_{0|t}$ and $q_{0}$ , $q_{t}$ conditional and marginal distributions. That is, it holds that $q_{0} = q^{(k)}$ , $q_{t} = q_{t}^{(k + 1)}$ (i.e., $\theta_t\stackrel {\mathrm{d}}{=}\theta_t^{(k + 1)})$ , and
|
| 632 |
+
|
| 633 |
+
$$
|
| 634 |
+
q _ {0 t} (\theta_ {0}, \theta_ {t}) = q _ {0} (\theta_ {0}) q _ {t | 0} (\theta_ {t} | \theta_ {0}) = q _ {t} (\theta_ {t}) q _ {0 | t} (\theta_ {0} | \theta_ {t}).
|
| 635 |
+
$$
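For intuition, one full step ($t = \eta$) of this update can be written as the following finite-particle sketch. This is an illustrative implementation under assumed specifics (squared loss, user-supplied network `h` and its parameter gradient `dh_dtheta`), not code from the paper.

```python
import numpy as np

def noisy_gd_step(particles, X, y, h, dh_dtheta, eta, lam, lam_prime, rng):
    """One noisy gradient descent step for M particles approximating q^{(k)}.

    particles: (M, p) float array; h(theta, x) -> scalar; dh_dtheta(theta, x) -> (p,) array.
    Uses the squared loss, so d/dz ell(z, y) = z - y (an illustrative choice)."""
    M, p = particles.shape
    # mean field prediction h_q(x): average of h_theta(x) over the particles
    preds = np.array([np.mean([h(th, x) for th in particles]) for x in X])
    residual = preds - np.asarray(y, dtype=float)
    new_particles = np.empty_like(particles)
    for m, th in enumerate(particles):
        # gradient of dF/dq(q^{(k)}) at theta: data-fit term plus l2-regularization term
        grad = np.mean([r * dh_dtheta(th, x) for r, x in zip(residual, X)], axis=0) \
               + 2.0 * lam_prime * th
        new_particles[m] = th - eta * grad + np.sqrt(2.0 * lam * eta) * rng.standard_normal(p)
    return new_particles
```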
|
| 636 |
+
|
| 637 |
+
The continuity equation of $q_{t|0}$ conditioned on $\theta_0$ is given as (see Section 7 of Vempala and Wibisono (2019) for details):
|
| 638 |
+
|
| 639 |
+
$$
|
| 640 |
+
\frac {\partial q _ {t | 0} (\theta_ {t} | \theta_ {0})}{\partial t} = \nabla \cdot \left(q _ {t | 0} (\theta_ {t} | \theta_ {0}) \nabla \frac {\delta F}{\delta q} (q _ {0}) (\theta_ {0})\right) + \lambda \Delta q _ {t | 0} (\theta_ {t} | \theta_ {0}).
|
| 641 |
+
$$
|
| 642 |
+
|
| 643 |
+
Therefore, we obtain the continuity equation of $q_{t}$ :
|
| 644 |
+
|
| 645 |
+
$$
|
| 646 |
+
\begin{array}{l} \frac {\partial q _ {t} \left(\theta_ {t}\right)}{\partial t} = \int \frac {\partial q _ {t | 0} \left(\theta_ {t} \mid \theta_ {0}\right)}{\partial t} q _ {0} \left(\theta_ {0}\right) \mathrm {d} \theta_ {0} \\ = \int \left(\nabla \cdot \left(q _ {0 t} (\theta_ {0}, \theta_ {t}) \nabla \frac {\delta F}{\delta q} (q _ {0}) (\theta_ {0})\right) + \lambda \Delta q _ {0 t} (\theta_ {0}, \theta_ {t})\right) d \theta_ {0} \\ = \nabla \cdot \left(q _ {t} \left(\theta_ {t}\right) \int q _ {0 | t} \left(\theta_ {0} \mid \theta_ {t}\right) \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right) \mathrm {d} \theta_ {0}\right) + \lambda \Delta q _ {t} \left(\theta_ {t}\right) \\ = \nabla \cdot \left(q _ {t} (\theta_ {t}) \left(\mathbb {E} _ {\theta_ {0} | \theta_ {t}} \left[ \nabla \frac {\delta F}{\delta q} (q _ {0}) (\theta_ {0}) \mid \theta_ {t} \right] + \lambda \nabla \log q _ {t} (\theta_ {t})\right)\right) \\ = \lambda \nabla \cdot \left(q _ {t} \left(\theta_ {t}\right) \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} \left(\theta_ {t}\right)\right) \\ + \nabla \cdot \left(q _ {t} \left(\theta_ {t}\right) \left(\mathbb {E} _ {\theta_ {0} \mid \theta_ {t}} \left[ \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right) \mid \theta_ {t} \right] - \nabla \frac {\delta F}{\delta q} \left(q _ {t}\right) \left(\theta_ {t}\right)\right)\right), \tag {27} \\ \end{array}
|
| 647 |
+
$$
|
| 648 |
+
|
| 649 |
+
where $p_{q_t}(\cdot)\propto \exp \left(-\frac{1}{\lambda}\frac{\delta F}{\delta q} (q_t)(\cdot)\right)$ . For simplicity, we write $\delta_t(\cdot) = \mathbb{E}_{\theta_0\sim q_{0|t}}\left[\nabla \frac{\delta F}{\delta q} (q_0)(\theta_0)\big|\theta_t = \cdot \right] - \nabla \frac{\delta F}{\delta q} (q_t)(\cdot)$. By Assumption 2 and (27), for $0\leq t\leq \eta$, we have
|
| 650 |
+
|
| 651 |
+
$$
|
| 652 |
+
\begin{array}{l} \frac {\mathrm {d} \mathcal {L}}{\mathrm {d} t} (q _ {t}) = \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \frac {\partial q _ {t}}{\partial t} (\theta) \mathrm {d} \theta \\ = \lambda \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \nabla \cdot \left(q _ {t} (\theta) \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta)\right) d \theta \\ + \int \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) \nabla \cdot (q _ {t} (\theta) \delta_ {t} (\theta)) d \theta \\ = - \lambda \int q _ {t} (\theta) \nabla \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) ^ {\top} \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) d \theta \\ - \int q _ {t} (\theta) \nabla \frac {\delta \mathcal {L}}{\delta q} (q _ {t}) (\theta) ^ {\top} \delta_ {t} (\theta) d \theta \\ = - \lambda^ {2} \int q _ {t} (\theta) \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} \mathrm {d} \theta \\ - \int q _ {0 t} (\theta_ {0}, \theta) \lambda \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) ^ {\top} \left(\nabla \frac {\delta F}{\delta q} (q _ {0}) (\theta_ {0}) - \nabla \frac {\delta F}{\delta q} (q _ {t}) (\theta)\right) d \theta_ {0} d \theta \\ \end{array}
|
| 653 |
+
$$
|
| 654 |
+
|
| 655 |
+
$$
|
| 656 |
+
\begin{array}{l} \leq - \lambda^ {2} \int q _ {t} (\theta) \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} \mathrm {d} \theta \\ + \frac {1}{2} \int q _ {0 t} (\theta_ {0}, \theta) \left(\lambda^ {2} \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} + \left\| \nabla \frac {\delta F}{\delta q} (q _ {0}) (\theta_ {0}) - \nabla \frac {\delta F}{\delta q} (q _ {t}) (\theta) \right\| _ {2} ^ {2}\right) d \theta_ {0} d \theta \\ \leq - \frac {\lambda^ {2}}{2} \int q _ {t} (\theta) \left\| \nabla \log \frac {q _ {t}}{p _ {q _ {t}}} (\theta) \right\| _ {2} ^ {2} \mathrm {d} \theta \\ + \frac {1}{2} \mathbb {E} _ {\left(\theta_ {0}, \theta\right) \sim q _ {0 t}} \left[ \left\| \nabla \frac {\delta F}{\delta q} \left(q _ {0}\right) \left(\theta_ {0}\right) - \nabla \frac {\delta F}{\delta q} \left(q _ {t}\right) (\theta) \right\| _ {2} ^ {2} \right] \\ \leq - \alpha \lambda^ {2} \mathrm {K L} \left(q _ {t} \| p _ {q _ {t}}\right) + \frac {1}{2} \delta_ {q _ {0}, t} \\ \leq - \alpha \lambda \left(\mathcal {L} \left(q _ {t}\right) - \mathcal {L} \left(q _ {*}\right)\right) + \frac {1}{2} \delta_ {\eta}, \\ \end{array}
|
| 657 |
+
$$
|
| 658 |
+
|
| 659 |
+
where we used $(\theta_0,\theta_t)\stackrel {\mathrm{d}}{=}(\theta^{(k)},\theta_t^{(k + 1)})$ to bound the last expectation by $\delta_{q^{(k)},t}\leq \delta_{\eta}$ . Thus, for $0\leq t\leq \eta$ , we get
|
| 660 |
+
|
| 661 |
+
$$
|
| 662 |
+
\frac {\mathrm {d}}{\mathrm {d} t} \left(\mathcal {L} (q _ {t}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda}\right) \leq - \alpha \lambda \left(\mathcal {L} (q _ {t}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda}\right).
|
| 663 |
+
$$
|
| 664 |
+
|
| 665 |
+
Noting $q_{\eta} = q^{(k + 1)}$ and $q_0 = q^{(k)}$, Grönwall's inequality leads to
|
| 666 |
+
|
| 667 |
+
$$
|
| 668 |
+
\mathcal {L} (q ^ {(k + 1)}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda} \leq \exp (- \alpha \lambda \eta) \left(\mathcal {L} (q ^ {(k)}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda}\right).
|
| 669 |
+
$$
|
| 670 |
+
|
| 671 |
+
This reduction holds at every iteration of (26). Hence, we arrive at the desired result,
|
| 672 |
+
|
| 673 |
+
$$
|
| 674 |
+
\mathcal {L} (q ^ {(k)}) - \mathcal {L} (q _ {*}) \leq \frac {\delta_ {\eta}}{2 \alpha \lambda} + \exp (- \alpha \lambda \eta k) \left(\mathcal {L} (q ^ {(0)}) - \mathcal {L} (q _ {*}) - \frac {\delta_ {\eta}}{2 \alpha \lambda}\right).
|
| 675 |
+
$$
|
| 676 |
+
|
| 677 |
+
Proof of Lemma 1. For notational simplicity, we use $\theta, \theta', q, q'$ to represent $\theta^{(k)}, \theta_t^{(k+1)}, q^{(k)}, q_t^{(k+1)}$ appearing in $\delta_{q^{(k)},t}$. Recalling the definition of $F$ in Section 2.3, we have
|
| 678 |
+
|
| 679 |
+
$$
|
| 680 |
+
\begin{array}{l} \left\| \nabla \frac {\delta F}{\delta q} (q) (\theta) - \nabla \frac {\delta F}{\delta q} \left(q ^ {\prime}\right) \left(\theta^ {\prime}\right) \right\| _ {2} \leq \mathbb {E} _ {(X, Y)} \left[ \| \partial_ {z} \ell \left(h _ {q} (X), Y\right) \partial_ {\theta} h _ {\theta} (X) - \partial_ {z} \ell \left(h _ {q ^ {\prime}} (X), Y\right) \partial_ {\theta} h _ {\theta^ {\prime}} (X) \| _ {2} \right] \\ + 2 \lambda^ {\prime} \| \theta - \theta^ {\prime} \| _ {2} \\ \leq \mathbb {E} _ {(X, Y)} \left[ \left\| \left(\partial_ {z} \ell \left(h _ {q} (X), Y\right) - \partial_ {z} \ell \left(h _ {q ^ {\prime}} (X), Y\right)\right) \partial_ {\theta} h _ {\theta} (X) \right\| _ {2} \right] \\ + \mathbb {E} _ {(X, Y)} \left[ \| \partial_ {z} \ell \left(h _ {q ^ {\prime}} (X), Y\right) \left(\partial_ {\theta} h _ {\theta} (X) - \partial_ {\theta} h _ {\theta^ {\prime}} (X)\right) \| _ {2} \right] \\ + 2 \lambda^ {\prime} \| \theta - \theta^ {\prime} \| _ {2} \\ \leq C _ {2} C _ {3} \mathbb {E} _ {X} \left[ \left| h _ {q} (X) - h _ {q ^ {\prime}} (X) \right| \right] + \left(C _ {1} C _ {4} + 2 \lambda^ {\prime}\right) \| \theta - \theta^ {\prime} \| _ {2}. \tag {28} \\ \end{array}
|
| 681 |
+
$$
|
| 682 |
+
|
| 683 |
+
The expectation of $\| \theta -\theta^{\prime}\|_{2}^{2}$ can be bounded as follows:
|
| 684 |
+
|
| 685 |
+
$$
|
| 686 |
+
\begin{array}{l} \mathbb {E} _ {\left(\theta , \theta^ {\prime}\right)} \left[ \| \theta - \theta^ {\prime} \| _ {2} ^ {2} \right] = \mathbb {E} _ {\left(\theta , \xi\right)} \left[ \left\| t \nabla \frac {\delta F}{\delta q} (q) (\theta) - \sqrt {2 \lambda t} \xi \right\| _ {2} ^ {2} \right] \\ \leq 2 t ^ {2} \mathbb {E} _ {\theta} \left[ \left\| \nabla \frac {\delta F}{\delta q} (q) (\theta) \right\| _ {2} ^ {2} \right] + 4 \lambda t \mathbb {E} _ {\xi} \left[ \| \xi \| _ {2} ^ {2} \right] \\ \leq 4 t ^ {2} \mathbb {E} _ {\theta} \left[ \left\| \mathbb {E} _ {(X, Y)} \left[ \partial_ {z} \ell \left(h _ {q} (X), Y\right) \partial_ {\theta} h _ {\theta} (X) \right] \right\| _ {2} ^ {2} + 4 \lambda^ {\prime 2} \| \theta \| _ {2} ^ {2} \right] + 4 \lambda t d \\ \leq 4 t ^ {2} C _ {1} ^ {2} C _ {3} ^ {2} + 1 6 t ^ {2} \lambda^ {\prime 2} \mathbb {E} _ {\theta} \left[ \| \theta \| _ {2} ^ {2} \right] + 4 \lambda t d \\ \end{array}
|
| 687 |
+
$$
|
| 688 |
+
|
| 689 |
+
$$
|
| 690 |
+
\leq 2 0 \eta \left(\eta C _ {1} ^ {2} C _ {3} ^ {2} + \lambda d\right). \tag {29}
|
| 691 |
+
$$
|
| 692 |
+
|
| 693 |
+
Moreover, $|h_q(x) - h_{q'}(x)|$ can be bounded as follows:
|
| 694 |
+
|
| 695 |
+
$$
|
| 696 |
+
\begin{array}{l} \left| h _ {q} (x) - h _ {q ^ {\prime}} (x) \right| = \left| \mathbb {E} _ {\left(\theta , \theta^ {\prime}\right)} \left[ h _ {\theta} (x) - h _ {\theta^ {\prime}} (x) \right] \right| \\ \leq C _ {3} \mathbb {E} _ {\left(\theta , \theta^ {\prime}\right)} \left[ \left\| \theta - \theta^ {\prime} \right\| _ {2} \right] \\ \leq C _ {3} \sqrt {\mathbb {E} _ {(\theta , \theta^ {\prime})} \left[ \| \theta - \theta^ {\prime} \| _ {2} ^ {2} \right]}. \tag {30} \\ \end{array}
|
| 697 |
+
$$
|
| 698 |
+
|
| 699 |
+
Therefore, we get by (28), (29), and (30),
|
| 700 |
+
|
| 701 |
+
$$
|
| 702 |
+
\begin{array}{l} \mathbb {E} _ {(\theta , \theta^ {\prime})} \left[ \left\| \nabla \frac {\delta F}{\delta q} (q) (\theta) - \nabla \frac {\delta F}{\delta q} (q ^ {\prime}) (\theta^ {\prime}) \right\| _ {2} ^ {2} \right] \leq 2 C _ {2} ^ {2} C _ {3} ^ {2} (\mathbb {E} _ {X} [ | h _ {q} (X) - h _ {q ^ {\prime}} (X) | ]) ^ {2} \\ + 2 \left(C _ {1} C _ {4} + 2 \lambda^ {\prime}\right) ^ {2} \mathbb {E} _ {\left(\theta , \theta^ {\prime}\right)} \left[ \left\| \theta - \theta^ {\prime} \right\| _ {2} ^ {2} \right] \\ \leq 2 \left(C _ {2} ^ {2} C _ {3} ^ {4} + \left(C _ {1} C _ {4} + 2 \lambda^ {\prime}\right) ^ {2}\right) \mathbb {E} _ {\left(\theta , \theta^ {\prime}\right)} \left[ \| \theta - \theta^ {\prime} \| _ {2} ^ {2} \right] \\ \leq 4 0 \eta \left(C _ {2} ^ {2} C _ {3} ^ {4} + \left(C _ {1} C _ {4} + 2 \lambda^ {\prime}\right) ^ {2}\right) \left(\eta C _ {1} ^ {2} C _ {3} ^ {2} + \lambda d\right). \\ \end{array}
|
| 703 |
+
$$
|
| 704 |
+
|
| 705 |
+
Finally, we show that the second moment of $\| \theta^{(k + 1)}\|_2$ satisfies the same bound as that of $\| \theta^{(k)}\|_2$. Using the inequality $(a + b)^{2}\leq (1 + \gamma)a^{2} + \left(1 + \frac{1}{\gamma}\right)b^{2}$ with $\gamma = \frac{2\eta\lambda'}{1 - 2\eta\lambda'}$, we get
|
| 706 |
+
|
| 707 |
+
$$
|
| 708 |
+
\begin{array}{l} \mathbb {E} _ {\theta^ {(k + 1)}} \left[ \| \theta^ {(k + 1)} \| _ {2} ^ {2} \right] = \mathbb {E} _ {(\theta , \xi)} \left[ \left\| (1 - 2 \lambda^ {\prime} \eta) \theta - \eta \mathbb {E} _ {(X, Y)} \left[ \partial_ {z} \ell \left(h _ {q} (X), Y\right) \partial_ {\theta} h _ {\theta} (X) \right] + \sqrt {2 \lambda \eta} \xi \right\| _ {2} ^ {2} \right] \\ \leq \mathbb {E} _ {(\theta , \xi)} \left[ \left((1 - 2 \lambda^ {\prime} \eta) \| \theta \| _ {2} + \eta C _ {1} C _ {3} + \sqrt {2 \lambda \eta} \| \xi \| _ {2}\right) ^ {2} \right] \\ = \mathbb {E} _ {(\theta , \xi)} \left[ (1 + \gamma) (1 - 2 \lambda^ {\prime} \eta) ^ {2} \| \theta \| _ {2} ^ {2} + \left(1 + \frac {1}{\gamma}\right) (\eta C _ {1} C _ {3} + \sqrt {2 \lambda \eta} \| \xi \| _ {2}) ^ {2} \right] \\ \leq \left(1 - 2 \lambda^ {\prime} \eta\right) \mathbb {E} _ {\theta} \left[ \| \theta \| _ {2} ^ {2} \right] + \frac {1}{\lambda^ {\prime}} \left(\eta C _ {1} ^ {2} C _ {3} ^ {2} + 2 \lambda \mathbb {E} _ {\xi} \left[ \| \xi \| _ {2} ^ {2} \right]\right) \\ \leq \left(1 - 2 \lambda^ {\prime} \eta\right) \frac {\eta C _ {1} ^ {2} C _ {3} ^ {2} + 2 \lambda d}{2 \eta \lambda^ {\prime 2}} + \frac {1}{\lambda^ {\prime}} \left(\eta C _ {1} ^ {2} C _ {3} ^ {2} + 2 \lambda d\right) \\ = \frac {\eta C _ {1} ^ {2} C _ {3} ^ {2} + 2 \lambda d}{2 \eta \lambda^ {\prime 2}}. \\ \end{array}
|
| 709 |
+
$$
|
| 710 |
+
|
| 711 |
+
# A. 4 Duality Theorems
|
| 712 |
+
|
| 713 |
+
Proof of Theorem 3. It is well known that $\nabla \ell_{i}$ and $\nabla \ell_{i}^{*}$ are inverse mappings of each other. Hence, we know that $\nabla \ell_{i}^{*}(g_{q,i}) = \nabla \ell_{i}^{*}(\nabla \ell_{i}(h_{q}(x_{i}))) = h_{q}(x_{i})$ and
|
| 714 |
+
|
| 715 |
+
$$
|
| 716 |
+
\ell_ {i} ^ {*} \left(g _ {q, i}\right) = \sup _ {z \in \mathbb {R}} \left\{z g _ {q, i} - \ell_ {i} (z) \right\} = h _ {q} \left(x _ {i}\right) g _ {q, i} - \ell_ {i} \left(h _ {q} \left(x _ {i}\right)\right). \tag {31}
|
| 717 |
+
$$
|
| 718 |
+
|
| 719 |
+
Recalling the definitions of $p_q$, $g_q$, and $q_g$, we see that $q_{g_q} \propto p_q$, and hence
|
| 720 |
+
|
| 721 |
+
$$
|
| 722 |
+
\operatorname {K L} (q \| p _ {q}) = \mathbb {E} _ {\theta \sim q} \left[ \log \frac {q}{p _ {q}} (\theta) \right] = \mathbb {E} _ {\theta \sim q} \left[ \log q (\theta) - \log q _ {g _ {q}} (\theta) \right] + \log \int q _ {g _ {q}} (\theta) d \theta . \tag {32}
|
| 723 |
+
$$
|
| 724 |
+
|
| 725 |
+
Combining (31) and (32), we get for any $q \in \mathcal{P}$ ,
|
| 726 |
+
|
| 727 |
+
$$
|
| 728 |
+
\mathcal {D} (g _ {q}) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \ell_ {i} ^ {*} (g _ {q, i}) - \lambda \log \int q _ {g _ {q}} (\theta) \mathrm {d} \theta
|
| 729 |
+
$$
|
| 730 |
+
|
| 731 |
+
$$
|
| 732 |
+
\begin{array}{l} = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\ell_ {i} \left(h _ {q} \left(x _ {i}\right)\right) - h _ {q} \left(x _ {i}\right) g _ {q, i}\right) - \lambda \mathrm {K L} (q \| p _ {q}) + \lambda \mathbb {E} _ {\theta \sim q} [ \log q (\theta) - \log q _ {g _ {q}} (\theta) ] \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\ell_ {i} \left(h _ {q} \left(x _ {i}\right)\right) - h _ {q} \left(x _ {i}\right) g _ {q, i}\right) - \lambda \mathrm {K L} (q \| p _ {q}) + \lambda \mathbb {E} _ {\theta \sim q} [ \log q (\theta) ] + \mathbb {E} _ {\theta \sim q} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} h _ {\theta} \left(x _ {i}\right) g _ {q, i} + \lambda^ {\prime} \| \theta \| _ {2} ^ {2} \right] \\ = \mathcal {L} (q) - \lambda \mathrm {K L} (q \| p _ {q}). \\ \end{array}
|
| 733 |
+
$$
|
| 734 |
+
|
| 735 |
+
This concludes the proof.
|
| 736 |
+
|
| 737 |
+
Proof of Theorem 4. From direct computation, we get for $q \in \mathcal{P}$
|
| 738 |
+
|
| 739 |
+
$$
|
| 740 |
+
\begin{array}{l} \frac {\delta \mathcal {L}}{\delta q ^ {\prime}} (q ^ {\prime}) | _ {q ^ {\prime} = p _ {q}} (\theta) = \frac {1}{n} \sum_ {i = 1} ^ {n} \partial_ {z} \ell \left(h _ {p _ {q}} \left(x _ {i}\right), y _ {i}\right) h _ {\theta} \left(x _ {i}\right) + \lambda^ {\prime} \| \theta \| _ {2} ^ {2} + \lambda \log p _ {q} (\theta) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\partial_ {z} \ell \left(h _ {p _ {q}} \left(x _ {i}\right), y _ {i}\right) - \partial_ {z} \ell \left(h _ {q} \left(x _ {i}\right), y _ {i}\right)\right) h _ {\theta} \left(x _ {i}\right) + \mathrm{const}. \\ \end{array}
|
| 741 |
+
$$
|
| 742 |
+
|
| 743 |
+
Hence, we have
|
| 744 |
+
|
| 745 |
+
$$
|
| 746 |
+
\begin{array}{l} \left| \int \frac {\delta \mathcal {L}}{\delta q ^ {\prime}} (q ^ {\prime}) | _ {q ^ {\prime} = p _ {q}} (\theta) (q - p _ {q}) (\theta) \mathrm {d} \theta \right| = \left| \int \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\partial_ {z} \ell \left(h _ {p _ {q}} \left(x _ {i}\right), y _ {i}\right) - \partial_ {z} \ell \left(h _ {q} \left(x _ {i}\right), y _ {i}\right)\right) h _ {\theta} \left(x _ {i}\right) (q - p _ {q}) (\theta) \mathrm {d} \theta \right| \\ = \left| \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\partial_ {z} \ell \left(h _ {p _ {q}} \left(x _ {i}\right), y _ {i}\right) - \partial_ {z} \ell \left(h _ {q} \left(x _ {i}\right), y _ {i}\right)\right) \left(h _ {q} \left(x _ {i}\right) - h _ {p _ {q}} \left(x _ {i}\right)\right) \right| \\ \leq \frac {C _ {2}}{n} \sum_ {i = 1} ^ {n} \left| h _ {q} \left(x _ {i}\right) - h _ {p _ {q}} \left(x _ {i}\right) \right| ^ {2} \\ \leq B ^ {2} C _ {2} \| q - p _ {q} \| _ {L _ {1} (\mathrm {d} \theta)} ^ {2} \\ \leq 2 B ^ {2} C _ {2} \mathrm {K L} (q \| p _ {q}), \\ \end{array}
|
| 747 |
+
$$
|
| 748 |
+
|
| 749 |
+
where we used Pinsker's inequality for the last inequality.
|
| 750 |
+
|
| 751 |
+
By the convexity of $\mathcal{L}$ , we get
|
| 752 |
+
|
| 753 |
+
$$
|
| 754 |
+
\begin{array}{l} \mathcal {L} (p _ {q}) - \mathcal {D} (g _ {q}) - \lambda \mathrm {K L} (q \| p _ {q}) = \mathcal {L} (p _ {q}) - \mathcal {L} (q) \\ \leq - \int \frac {\delta \mathcal {L}}{\delta q ^ {\prime}} \left(q ^ {\prime}\right) | _ {q ^ {\prime} = p _ {q}} (\theta) (q - p _ {q}) (\theta) d \theta \\ \leq 2 B ^ {2} C _ {2} \mathrm {K L} (q \| p _ {q}). \\ \end{array}
|
| 755 |
+
$$
|
| 756 |
+
|
| 757 |
+
Therefore, we obtain $\mathcal{L}(p_q) - \mathcal{D}(g_q)\leq (\lambda +2B^2 C_2)\mathrm{KL}(q\| p_q)$, which concludes the proof.
|
| 758 |
+
|
| 759 |
+
# B ADDITIONAL DETAILS
|
| 760 |
+
|
| 761 |
+
# B. 1 Computation of the Dual Objective
|
| 762 |
+
|
| 763 |
+
We briefly outline the estimation of the dual objective (25), which consists of the sum of Fenchel conjugate functions and the normalization term as follows:
|
| 764 |
+
|
| 765 |
+
$$
|
| 766 |
+
\mathcal {D} (g) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \ell_ {i} ^ {*} (g _ {i}) - \lambda \log \int q _ {g} (\theta) \mathrm {d} \theta ,
|
| 767 |
+
$$
|
| 768 |
+
|
| 769 |
+
where $q_{g}(\theta) = \exp \left(-\frac{1}{\lambda}\left(\frac{1}{n}\sum_{i=1}^{n}h_{\theta}(x_{i})g_{i} + \lambda^{\prime}\|\theta\|_{2}^{2}\right)\right)$ .
|
| 770 |
+
|
| 771 |
+
The Fenchel conjugate $\ell_i^*$ can be explicitly described for typical loss functions. For instance, for the squared loss function $\ell_i(z) = 0.5(z - y_i)^2$ , its Fenchel conjugate is $\ell_i^*(g) = 0.5g^2 + gy_i$ .
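As a quick, purely illustrative sanity check of this conjugate pair, one can compare the closed form against a grid maximization of the defining supremum (the numbers below are arbitrary):

```python
import numpy as np

# sup_z { z*g - 0.5*(z - y)**2 } should equal 0.5*g**2 + g*y
y, g = 1.3, -0.7
z = np.linspace(-10.0, 10.0, 200_001)
numerical = np.max(z * g - 0.5 * (z - y) ** 2)
closed_form = 0.5 * g ** 2 + g * y
print(numerical, closed_form)  # the two values agree up to the grid resolution
```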
|
| 772 |
+
|
| 773 |
+
Direct computation of the normalization term of $q_{g}$ is difficult in general, but it can be efficiently estimated by the following procedure. First, we reformulate this term as follows:
|
| 774 |
+
|
| 775 |
+
$$
|
| 776 |
+
\begin{array}{l} \int q _ {g} (\theta) \mathrm {d} \theta = \int \exp \left(- \frac {1}{\lambda} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} h _ {\theta} (x _ {i}) g _ {i} + \lambda^ {\prime} \| \theta \| _ {2} ^ {2}\right)\right) \mathrm {d} \theta \\ = Z \int \exp \left(- \frac {1}{\lambda n} \sum_ {i = 1} ^ {n} h _ {\theta} \left(x _ {i}\right) g _ {i}\right) \frac {\exp \left(- \frac {\lambda^ {\prime}}{\lambda} \| \theta \| _ {2} ^ {2}\right)}{Z} d \theta , \\ \end{array}
|
| 777 |
+
$$
|
| 778 |
+
|
| 779 |
+
where $Z$ is the normalization constant of the Gaussian distribution appearing above, that is, $Z = \int \exp \left(-\frac{\lambda'}{\lambda} \| \theta \|_2^2\right) \mathrm{d}\theta$. Hence, we can estimate the normalization term of $q_g$ through an expectation with respect to this Gaussian distribution together with the value of $Z$. The expectation can be approximated using samples drawn from the Gaussian distribution proportional to $\exp \left(-\frac{\lambda'}{\lambda} \| \theta \|_2^2\right)$, whose normalization constant is exactly $Z = \left(\frac{\pi \lambda}{\lambda'}\right)^{d/2}$, and the same Gaussian samples can be reused across evaluations to reduce the computational cost. Note that the normalization constant of the proximal Gibbs distribution $p_q$ can be approximated via the same procedure.
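A minimal sketch of this procedure for the squared loss is given below; the interface (a per-parameter prediction function `h`, arrays `X`, `y`, `g`) is an assumption made for illustration, not the paper's implementation.

```python
import numpy as np

def dual_objective(g, X, y, h, lam, lam_prime, d, n_samples=10_000, rng=None):
    """Monte Carlo estimate of D(g) for the squared loss, following the procedure above.

    h(theta, x) -> scalar output of a single neuron; X is an iterable of n inputs,
    and g, y are length-n arrays of dual variables and targets."""
    rng = np.random.default_rng() if rng is None else rng
    g, y = np.asarray(g, dtype=float), np.asarray(y, dtype=float)
    # Fenchel conjugates of the squared losses: ell_i^*(g_i) = 0.5 g_i^2 + g_i y_i
    conj_term = np.mean(0.5 * g ** 2 + g * y)
    # Gaussian proportional to exp(-(lam'/lam)||theta||^2), i.e. N(0, lam/(2 lam') I)
    log_Z = (d / 2.0) * np.log(np.pi * lam / lam_prime)
    thetas = rng.normal(scale=np.sqrt(lam / (2.0 * lam_prime)), size=(n_samples, d))
    # (1/n) sum_i h_theta(x_i) g_i for each sampled theta
    inner = np.array([np.mean([h(th, x) * gi for x, gi in zip(X, g)]) for th in thetas])
    log_integral = log_Z + np.log(np.mean(np.exp(-inner / lam)))
    return -conj_term - lam * log_integral
```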
|
2201.10xxx/2201.10469/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0fdcc3a884ff41262a123bd16a1abf732ae60cedbf425348dba735075d887f26
|
| 3 |
+
size 975137
|
2201.10xxx/2201.10469/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.10xxx/2201.10474/9b32b03e-2491-4098-8a04-75205aef7f7c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d4eb3980cd6cd9a944385b377974749e255959f36a78d383c941dcebc5a87fb7
|
| 3 |
+
size 596615
|
2201.10xxx/2201.10474/full.md
ADDED
|
@@ -0,0 +1,433 @@
|
| 1 |
+
# Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
|
| 2 |
+
|
| 3 |
+
Suchin Gururangan† Dallas Card‡ Sarah K. Dreier¶ Emily K. Gade§
|
| 4 |
+
Leroy Z. Wang† Zeyu Wang† Luke Zettlemoyer† Noah A. Smith†
|
| 5 |
+
†University of Washington † University of Michigan ‡ University of New Mexico
|
| 6 |
+
♣Emory University ♣ Allen Institute for AI
|
| 7 |
+
{sg01, zwan4, lsz, nasmith}@cs.washington.edu dalc@umich.edu
|
| 8 |
+
skdreier@unm.edu emily.gade@emory.edu lryw@uw.edu
|
| 9 |
+
|
| 10 |
+
# Abstract
|
| 11 |
+
|
| 12 |
+
Language models increasingly rely on massive web dumps for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and newswire often serve as anchors for automatically selecting web text most suitable for language modeling, a process typically referred to as quality filtering. Using a new dataset of U.S. high school newspaper articles—written by students from across the country—we investigate whose language is preferred by the quality filter used for GPT-3. We find that newspapers from larger schools, located in wealthier, educated, and urban ZIP codes are more likely to be classified as high quality. We then demonstrate that the filter's measurement of quality is unaligned with other sensible metrics, such as factuality or literary acclaim. We argue that privileging any corpus as high quality entails a language ideology, and more care is needed to construct training corpora for language models, with better transparency and justification for the inclusion or exclusion of various texts.
|
| 13 |
+
|
| 14 |
+
# 1 Introduction
|
| 15 |
+
|
| 16 |
+
The language models central to modern NLP are trained on large Internet corpora, typically gathered from community resources (e.g., Wikipedia; Liu et al. 2019) or web dumps (e.g., WebText, Common Crawl; Radford et al. 2019, Brown et al. 2020). The selection of texts impacts every research or deployed NLP system that builds on these models. Yet there is rarely any explicit justification for why various texts were included.
|
| 17 |
+
|
| 18 |
+
Web dumps like Common Crawl offer the promise of more diverse text than what is available in curated resources. However, much of the web consists of frequently replicated boilerplate (e.g., privacy policies), code (e.g., HTML and JavaScript), pornography, hate speech, and more. Automated approaches, typically referred to as
|
| 19 |
+
|
| 20 |
+
quality filters, $^{1}$ are often applied in an effort to remove this undesirable content from training data. These filters include code removers (Gao et al., 2020), heuristics (Rae et al., 2021), stopwords (Raffel et al., 2020), and classifiers (Brown et al., 2020; Wenzek et al., 2020).
|
| 21 |
+
|
| 22 |
+
Although quality filtering is often treated as a relatively neutral preprocessing step, it necessarily implies a value judgment: which data is assumed to be of sufficiently high quality to be included in the training corpus? More concretely, when a quality filter is a classifier trained on instances assumed to be of high (and low) quality, the selection of those examples will impact the language model and any downstream technology that uses it. Many filters use Wikipedia, books, and newswire to represent high quality text. But what texts are excluded as a result? Because natural language varies with social and demographic variables (Rickford, 1985; Eckert, 1989; Labov, 2006; Blodgett et al., 2016; Hovy and Yang, 2021; Lucy and Bamman, 2021, inter alia), we can also ask whose language will be excluded.
|
| 23 |
+
|
| 24 |
+
We begin with a summary of the handful of data sources used to construct training corpora for many language models and assumed to be of high quality (§2). The systematic authorship biases in these datasets motivate the study that follows, in which we replicate the quality filter from Brown et al. (2020). We apply this filter to a new dataset of U.S. high school newspapers, augmented (via ZIP codes and counties) with demographic data from the U.S. Census and the National Center for Education Statistics (§3). We demonstrate that the filter has strong topical and stylistic preferences, and favors text from authors who originate from regions with better educational attainment, urban centers, larger schools, and higher valued homes.
|
| 25 |
+
|
| 26 |
+
In sociolinguistics, the term language ideology refers to common (but often unspoken) presuppositions, beliefs, or reflections about language that justify its social use and structure (Craft et al., 2020). Our analysis begins to characterize the language ideology encoded in the quality filter used by Brown et al. (2020), a representative of a wider set of filtering methods. We also observe in §4 that the filter is unaligned with other notions of quality familiar from human endeavors: factuality ratings for news sources, standardized test scores, and literary awards. Of course, these institutions hold their own language ideologies. We argue that when constructing a corpus, one cannot avoid adopting some language ideology; the language ideology which is appropriate depends on the goals of the work, and one language ideology may conflict with another. In short, there is no truly general-purpose corpus.
|
| 27 |
+
|
| 28 |
+
Our code and analysis are publicly available.2
|
| 29 |
+
|
| 30 |
+
# 2 Motivation: Data Sources
|
| 31 |
+
|
| 32 |
+
Across the many language models recently reported in the literature, the same small group of datasets has been routinely used as training corpora: Wikipedia, collections of books, and popular online articles (§A.1). These data are often treated as exemplars of high quality text (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019; Raffel et al., 2020; Brown et al., 2020). Although these datasets include text from many sources, extensive research suggests that the voices they represent are drawn from a relatively small, biased sample of the population, over-representing authors from hegemonic social positions.
|
| 33 |
+
|
| 34 |
+
Wikipedia Wikipedia serves as a backbone for language models because of its scale, ease of use, permissive license, and goal of providing comprehensive coverage of human knowledge. However, although anyone can edit Wikipedia content, not everyone does. In practice, there are significant biases in Wikipedia authorship, content, and perspectives. For instance, despite efforts by Wikimedia, the site has been unable to resolve a persistent gender imbalance among its editors (Huang, 2013; Meta-wiki, 2018). This imbalance is reflected in who gets written about, and how (Bamman and Smith, 2014; Graells-Garrido et al., 2015; Wagner et al., 2015). There is also a pervasive urban bias; editors are less likely to come from rural areas, and
|
| 35 |
+
|
| 36 |
+
coverage of these areas in Wikipedia tends to be more limited (Mandiberg, 2020). Although coverage in English Wikipedia is not limited to those places where English is a majority language, an Anglo-American perspective dominates coverage.3 Lastly, a relatively small number of people are responsible for most of the content (Panciera et al., 2009; Matei and Britt, 2017). Wikipedia is thus less representative of language of the population than one might expect given its size and design.
|
| 37 |
+
|
| 38 |
+
Books Language models are also frequently trained on book corpora. BERT (Devlin et al., 2019) used the Toronto BookCorpus (Zhu et al., 2015), which consists of 7,185 self-published novels, a dataset criticized for copyright violation, poor quality control, imbalanced representation, and lack of documentation (Bandy and Vincent, 2021).
|
| 39 |
+
|
| 40 |
+
GPT-3 (Brown et al., 2020) and The Pile (Gao et al., 2020) both use much larger corpora of books (although the former do not identify the source of this data). However, the Pile's books (also called Books3) are not a random selection. Rather, they appear to be drawn from a torrent file containing hundreds of thousands of copyrighted eBooks.
|
| 41 |
+
|
| 42 |
+
Books3 is deserving of a more thorough investigation, but preliminary analyses reveal that the most prevalent authors in the corpus are American and British writers, especially of romance, mystery, and children's books (e.g., L. Ron Hubbard, Danielle Steel, etc.). This pattern should be considered against the background of the American book publishing industry, which has been widely criticized as homogeneous (Lee & Low Books, 2020).<sup>4</sup>
|
| 43 |
+
|
| 44 |
+
News and Other Popular Internet Content Radford et al. (2019) scrape text from the websites featured in popular Reddit submissions (i.e., those that received at least three upvotes) to construct the training data for GPT-2. As the original corpus is unavailable, we analyze its open-source replica, OpenWebText (Gokaslan and Cohen, 2019).
|
| 45 |
+
|
| 46 |
+
We do not expect the corpus to represent a wide range of language variation. Reddit users are mostly male, younger, and lean liberal, which influences the types of content shared on the platform.[5]
|
| 47 |
+
|
| 48 |
+
<table><tr><td>URL Domain</td><td># Docs</td><td>% of Total Docs</td></tr><tr><td>bbc.co.uk</td><td>116K</td><td>1.50%</td></tr><tr><td>theguardian.com</td><td>115K</td><td>1.50%</td></tr><tr><td>washingtonpost.com</td><td>89K</td><td>1.20%</td></tr><tr><td>nytimes.com</td><td>88K</td><td>1.10%</td></tr><tr><td>reuters.com</td><td>79K</td><td>1.10%</td></tr><tr><td>huffingtonpost.com</td><td>72K</td><td>0.96%</td></tr><tr><td>cnn.com</td><td>70K</td><td>0.93%</td></tr><tr><td>cbc.ca</td><td>67K</td><td>0.89%</td></tr><tr><td>dailymail.co.uk</td><td>58K</td><td>0.77%</td></tr><tr><td>go.com</td><td>48K</td><td>0.63%</td></tr></table>
|
| 49 |
+
|
| 50 |
+
Table 1: The most popular top-level URL domains in OpenWebText. Mainstream news forms the overwhelming majority of content in the dataset. Overall, just $1\%$ of the top-level URL domains in OpenWebText contribute $75\%$ of the total documents in the corpus.
|
| 51 |
+
|
| 52 |
+
Viral media on the Internet assume similar characteristics; they tend to elicit awe, anger, or anxiety (Berger and Milkman, 2012), validate group identities (Gaudette et al., 2021), and disseminate from users with authority (Weismueller et al., 2022).
|
| 53 |
+
|
| 54 |
+
Indeed, we find that $1\%$ of the 311K unique top-level domains in OpenWebText contribute $75\%$ of documents in the corpus (Table 1). The most common websites in OpenWebText are internationally circulating British and American news outlets (e.g., BBC, The New York Times, The Washington Post, The Guardian), blogging platforms (e.g., Tumblr, Blogspot, or Medium), sports content (e.g., ESPN, SBNation), and tech news (e.g., TechCrunch, Wired). As expected, these links tend to appear on the most highly trafficked subreddits (e.g., /r/politics, /r/worldnews, /r/news).
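This concentration statistic is straightforward to reproduce from the document URLs; the sketch below assumes a hypothetical list `urls` with one URL per document (the exact metadata format of OpenWebText distributions varies, so treat this as illustrative):

```python
from collections import Counter
from urllib.parse import urlparse

def top_domain_share(urls, top_frac=0.01):
    """Fraction of documents contributed by the most frequent `top_frac` of domains."""
    def domain(u):
        host = urlparse(u).netloc.lower()
        return host[4:] if host.startswith("www.") else host
    counts = Counter(domain(u) for u in urls)
    k = max(1, int(top_frac * len(counts)))
    top_docs = sum(c for _, c in counts.most_common(k))
    return top_docs / sum(counts.values())
```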
|
| 55 |
+
|
| 56 |
+
These data are likely dominated by formal writing styles. Among news organizations, the adherence to slowly evolving style guides expresses specific linguistic standards (Frole et al., 2020) and even geopolitical interests (Vultee, 2012), which encourage rules about language use that can reinforce gender norms and racial hierarchies (DiNicola, 1994; Bien-Aime, 2016).
|
| 57 |
+
|
| 58 |
+
In general, a relatively homogeneous set of authors writes the majority of newswire (Grieco, 2018). Researchers find a striking lack of diversity in newsrooms and newspaper leadership. This may be compounded by the economic hardships aspiring journalists must incur, which act as a filter
|
| 59 |
+
|
| 60 |
+
for who can afford to be employed in newsrooms.
|
| 61 |
+
|
| 62 |
+
Summary Authors from specific, relatively powerful social positions produce a disproportionate amount of text in the core data sources of existing language models. These text sources favor privileged segments of the English-speaking population, including men, white populations, communities of higher socio-economic status, and those harboring American and Western European historical, geopolitical, and cultural perspectives. By contrast, these corpora tend to be less inclusive of the voices of women and members of marginalized groups. Alternative perspectives, including those of people from rural areas, non-dominant gender, sexual, or racial identities, and counter-hegemonic vantage points, are less likely to be included, and thus less likely to influence models trained on this data.
|
| 63 |
+
|
| 64 |
+
Although formal, streamlined content like news or Wikipedia articles may seem like desirable sources for high quality content, not all writing styles or substantive topics that might be relevant to language technologies and their user communities are represented in the resulting corpora. When deployed, many of the technologies using language models trained on these data will face language that—despite being less formal, professional, or carefully edited—is no less high quality and is essential to the communicative lives of the people who use it.
|
| 65 |
+
|
| 66 |
+
# 3 Measuring the Language Ideology of the GPT-3 Quality Filter
|
| 67 |
+
|
| 68 |
+
Empirically evaluating the full distribution of authors in the data sources from §2 is difficult, due to their size, as well as their lack of metadata about each document's authors. We instead curate a new dataset of U.S. high school newspaper articles that varies both topically and along demographic variables that can be resolved using ZIP codes. Although we do not directly consider individual authors of these articles, this dataset is useful, in that it can be associated with extensive metadata at the level of individual newspapers. We then analyze the behavior of a (replicated) quality filter on text from this dataset and discuss its implications.
|
| 69 |
+
|
| 70 |
+
# 3.1 U.S. SCHOOL NEWS
|
| 71 |
+
|
| 72 |
+
Background Many U.S. schools produce a newspaper to give students journalism experience, to
|
| 73 |
+
|
| 74 |
+
|
| 75 |
+
|
| 76 |
+
report on local news, to comment on national or global events, and to publish school-related material (e.g., announcements, campus life, student interviews, sports or honor rolls; Gibson, 1961). The substantive content of school newspapers varies considerably, possibly due to their local audiences. Because a school's access to resources is shaped by local income levels (Betts et al., 2000) and tied to student achievement (Greenwald et al., 1996), we expect schools in wealthier areas (relative to poorer areas) to produce newspaper content that is more similar to the formal, professional texts that a quality filter is likely to classify as high quality.
|
| 77 |
+
|
| 78 |
+
Collection We collect articles from English-language U.S. school newspapers that used a common WordPress template.<sup>8</sup> After retrieving 2483 schools that use this template, we scrape 1.95M articles from their respective newspaper sites (more details in §A.2). We retrieve article categories by extracting them from the article URL slugs. We then resolve each school to its ZIP code using the Google Maps Place API.<sup>9</sup> We restrict our dataset to articles from U.S. high schools. We only consider articles from 2010–2019, remove pages under the video, photo, or multimedia categories, and remove schools that have fewer than 100 articles (which tend to contain scraping errors). The final corpus includes 910K articles, from 1410 schools, located in 1329 ZIP codes (552 counties) dispersed across all U.S. states (plus D.C.).
Limitations Our corpus is neither a random nor a representative sample of U.S. school newspapers. Instead, it represents schools that had sufficient Internet access, that elected to use a particular website template, and that maintain websites with retrievable archived content. The lack of representation in school newspaper leadership positions may influence which students contribute content to school newspapers (Chen et al., 2021). Educators also likely shape some articles, at least in part (though we expect them to be similarly affected by resource constraints). Finally, much of the content in these articles is specific to student concerns (e.g., sports, school events, campus culture), and the writing is, by definition, amateur. Nevertheless, because the corpus captures a wide range of content and geographical areas, it allows us to evaluate how a quality filter handles real-world language variation within a particular domain.
Using text from school newspapers introduces privacy concerns, especially since authors and subjects are minors. We therefore use this data only for evaluation purposes, and do not train (or release) any models on this data, or any raw text from the corpus. We do, however, release a Datasheet (Gebru et al., 2021) which documents the dataset's general characteristics and curation procedure (§A.2).
# 3.2 The GPT-3 Quality Filter
To investigate how quality correlates with various attributes of a newspaper, we re-implement the Brown et al. (2020) quality filter based on the description provided in the paper. The filter is a binary logistic regression classifier trained (using n-gram features) to distinguish between reference corpora (Books3, Wikipedia, and OpenWebText) and a random sample of Common Crawl.
We replicate the filter as closely as possible using scikit-learn (Pedregosa et al., 2011). To create the training data for the classifier, we sample 80M whitespace-separated tokens from each of OpenWebText, Wikipedia, and Books3 for the positive class, and 240M whitespace-separated tokens of a September 2019 Common Crawl snapshot for the negative class. We download the Common Crawl snapshot using code provided by Wenzek et al. (2020). We perform a 100-trial random hyperparameter search, fixing only the hashing vectorizer and basic whitespace tokenization, following the implementation in Brown et al. (2020). See the search space and final hyperparameters of our replicated filter in §A.3. Our final classifier achieves $90.4\%$ $F_{1}$ ($91.7\%$ accuracy) on a set of 60M test tokens (30M held-out tokens from each class, or 72K documents from the negative class and 33K from the positive class). We release code for training the quality filter and a demo of the trained filter.[10,11] We apply the quality filter to the U.S. SCHOOL NEWS data, computing a quality score per document, which we denote $P(\text{high quality})$.
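For concreteness, a minimal sketch of such a classifier is shown below. The documents here are tiny placeholders; the feature settings loosely follow the best assignment reported in Table 6 (hashing vectorizer, whitespace tokenization, unigrams and bigrams, L1-regularized logistic regression), but this is only an illustrative sketch, not the released implementation.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder documents; the real positive class is sampled from OpenWebText,
# Wikipedia, and Books3, and the negative class from a Common Crawl snapshot.
positive_docs = [
    "The committee published a detailed report on regional economic policy.",
    "The encyclopedia entry describes the history of the observatory.",
]
negative_docs = [
    "click here 4 FREE stuff!!! best deals u will ever see",
    "lol this page is just a bunch of random links tbh",
]
docs = positive_docs + negative_docs
labels = [1] * len(positive_docs) + [0] * len(negative_docs)

quality_filter = make_pipeline(
    # Hashing avoids storing a vocabulary; \S+ gives whitespace tokenization.
    HashingVectorizer(token_pattern=r"\S+", ngram_range=(1, 2)),
    LogisticRegression(penalty="l1", solver="liblinear"),
)
quality_filter.fit(docs, labels)

# P(high quality) for a new document.
score = quality_filter.predict_proba(["A student opinion piece about prom."])[0, 1]
print(f"P(high quality) = {score:.3f}")
```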
# 3.3 Document-Level Analysis
We first explore document-level preferences of the filter.


Figure 1: Scraped school articles tend to be considered lower quality by the GPT-3 quality filter than general newswire (histogram built from 10K random documents from each domain). This finding is consistent across a variety of categories, and more significant for certain ones (e.g., school announcements).
The GPT-3 quality filter is more likely to classify high school newspaper articles as low quality, compared to general newswire (Figure 1).<sup>12</sup> This is unsurprising, since the training data for the GPT-3 quality filter included texts by professional journalists. §A.4 shows a random sample of text from the dataset with high and low quality scores, illustrating differences in style and formality.
More notably, controlling for article category (e.g., opinion pieces), we find that the GPT-3 quality filter has topical and stylistic preferences (discovered through exploratory data analysis). For topical features, we train a topic model (via Latent Dirichlet Allocation; Blei et al. 2003) with 10 topics over opinion pieces using scikit-learn. We also consider whether documents contain first, second, or third person pronouns, and the length of the document. We then combine these features in a regression model to assess the effect of particular attributes on the quality score of a document, while controlling for others.
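A minimal sketch of this feature-extraction and regression step is shown below, using synthetic placeholder documents and scores; the preprocessing, feature names, and toy data are illustrative assumptions rather than the exact pipeline behind Table 2.

```python
import re
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Synthetic stand-in corpus and scores; the real analysis uses the 10K opinion
# pieces and their P(high quality) values from the replicated filter.
rng = np.random.default_rng(0)
base = [
    "I think prom should be moved to May because everyone I know agrees.",
    "The election results surprised many voters across the county.",
    "Our team won the game after the players practiced all season.",
    "The new movie has a strong plot and memorable characters.",
]
opinion_docs = [" ".join(rng.choice(base, size=rng.integers(2, 6))) for _ in range(200)]
quality = rng.uniform(0, 1, size=200)

# Topic proportions via LDA over bag-of-words counts.
counts = CountVectorizer(stop_words="english").fit_transform(opinion_docs)
topics = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(counts)

df = pd.DataFrame(topics, columns=[f"topic_{k}" for k in range(10)])
df["quality"] = quality
df["first_second_person"] = [int(bool(re.search(r"\b(i|we|you)\b", d, re.I))) for d in opinion_docs]
df["third_person"] = [int(bool(re.search(r"\b(he|she|they)\b", d, re.I))) for d in opinion_docs]
df["log_tokens"] = [np.log2(len(d.split())) for d in opinion_docs]

# Regress quality on topic proportions (omitting topic_0 to avoid a saturated
# model, as in Table 2) plus the stylistic features.
formula = ("quality ~ " + " + ".join(f"topic_{k}" for k in range(1, 10))
           + " + first_second_person + third_person + log_tokens")
print(smf.ols(formula, data=df).fit().summary())
```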
The results of our regression are displayed in Table 2. We find that certain topics have quite large effect sizes (see §A.5 for the distribution of quality scores per topic).
Dependent variable: $P$ (high quality)
Number of observations: 10K opinion articles
<table><tr><td>Feature</td><td>Coefficient</td></tr><tr><td>Intercept</td><td>0.471***</td></tr><tr><td>Topic 5 (christmas, dress, holiday)</td><td>-0.056***</td></tr><tr><td>Topic 2 (school, college, year)</td><td>-0.037***</td></tr><tr><td>Topic 6 (student, school, class)</td><td>-0.004</td></tr><tr><td>Topic 1 (people, just, like)</td><td>0.003</td></tr><tr><td>Topic 7 (movie, film, movies)</td><td>0.062***</td></tr><tr><td>Topic 3 (music, album, song)</td><td>0.113***</td></tr><tr><td>Topic 4 (people, women, media)</td><td>0.197***</td></tr><tr><td>Topic 9 (game, team, players)</td><td>0.246***</td></tr><tr><td>Topic 8 (Trump, president, election)</td><td>0.346***</td></tr><tr><td>Presence of first/second person pronoun</td><td>-0.054***</td></tr><tr><td>Presence of third person pronoun</td><td>0.024</td></tr><tr><td>log2(Number of tokens)</td><td>0.088***</td></tr><tr><td>R²</td><td>0.336</td></tr><tr><td>adj. R²</td><td>0.336</td></tr></table>
Table 2: Regression of the quality score of an opinion piece in the U.S. SCHOOL NEWS dataset, on document features. We observe that political and sports-related topics, the lack of first and second person pronouns, and longer document lengths are associated with higher quality scores. We omit Topic 0 (food, restaurant, eat) to avoid a saturated model. See §A.5 for the distribution of quality scores per topic. *$p < 0.05$, **$p < 0.01$, ***$p < 0.001$.
For example, documents entirely about Trump and the presidential election have quality scores 35 percentage points higher, on average, whereas documents about sports are 25 percentage points higher (relative to the omitted topic about food). Stylistically, the presence of first- or second-person pronouns in a document decreases the quality score by 5 percentage points, while a doubling of the number of tokens in a document increases the quality score by 9 percentage points.
# 3.4 Demographic Analysis
Next, we examine whether the GPT-3 quality filter prefers language from certain demographic groups over others. We first check raw correlations between average quality scores (per newspaper) and features of interest. As in §3.3, we then combine the features in a regression model.
Demographic Features As we note in §3.1, we expect a priori that content from schools located in wealthier, more educated, and urban areas of the U.S. will tend to have higher quality scores, relative to poorer, less educated, rural areas. Therefore, we consider demographic features that correspond to class, rural/urban divides, and school resources.
For each school, we retrieve 2017-2018 school-level demographic data from the National Center for Education Statistics (NCES). These include the number of students, student:teacher ratio, and indicators for charter, private, and magnet schools. We also retrieve the latest ZIP code- and county-level demographic data from the 2020 U.S. Census. To measure the wealth of the corresponding ZIP code, we use median home values, and for educational attainment we use the percentage of college-educated adults. We also use Census data on the percent of rural population by county. Finally, we consider local political leanings, operationalized by GOP vote share in the 2016 Presidential election, using county-level data from the MIT Election Lab. We display full descriptions of features in our demographic analysis in §A.6.
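The joining of these sources happens at different levels (school, ZIP code, county). A small sketch of that merge logic is shown below; the tables, column names, and values are made up for illustration and do not reflect the released data schema.

```python
import pandas as pd

# Tiny illustrative tables; the real inputs are NCES school records, Census
# ZIP- and county-level tables, and MIT Election Lab county returns.
schools = pd.DataFrame({
    "school_id": [1, 2],
    "zip": ["12345", "67890"],
    "county_fips": ["53033", "30063"],
    "n_students": [1400, 620],
})
census_zip = pd.DataFrame({
    "zip": ["12345", "67890"],
    "median_home_value": [760000, 310000],  # placeholder values
    "pct_bachelors": [0.71, 0.45],
})
census_county = pd.DataFrame({
    "county_fips": ["53033", "30063"],
    "pct_rural": [0.03, 0.28],
})
avg_quality = pd.DataFrame({
    "school_id": [1, 2],
    "avg_quality": [0.41, 0.33],  # placeholder averages of P(high quality)
})

merged = (
    schools
    .merge(census_zip, on="zip", how="left")              # ZIP-level wealth, education
    .merge(census_county, on="county_fips", how="left")   # county-level rurality
    .merge(avg_quality, on="school_id", how="inner")      # per-school quality scores
)
print(merged)
```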
Correlation Analysis To inform the variables we include in our regressions, we explore correlations between variables of interest and the average quality score of a school newspaper. Our analyses in Figure 2 suggest that our initial hypotheses hold: schools in wealthier, urban, and more educated ZIP codes, as well as those in Democrat-leaning counties, tend to have higher quality scores.
Data Preprocessing Here, we use schools as the unit of analysis, and consider the average quality score assigned to a school's articles as the dependent variable. We only include those schools that could be matched to the NCES database, dropping schools that are missing school size, as well as those located in ZIP codes with \$1M or greater median home value, due to a census artifact. Missing values for other features are imputed with the median value of that feature for the corresponding ZIP code, or (if necessary) county or state. For regressions, we log-transform school size, student:teacher ratio, and home values, using raw values for other features, to preserve interpretability. Our regression dataset includes 968 high schools, in 926 ZIP codes across 354 counties. All linear regressions are implemented with the statsmodels API.$^{17}$ We release this anonymized dataset to support reproducibility.$^{18}$
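A minimal sketch of the Table 3 specification with statsmodels is shown below, using synthetic per-school rows; the column names and generated values are assumptions for illustration, not the released dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the per-school table (968 rows in the real analysis).
rng = np.random.default_rng(0)
n = 968
df = pd.DataFrame({
    "avg_quality": rng.uniform(0.1, 0.7, n),
    "pct_rural": rng.uniform(0, 1, n),
    "pct_bachelors": rng.uniform(0.1, 0.8, n),
    "median_home_value": rng.uniform(8e4, 9e5, n),
    "n_students": rng.integers(100, 3000, n),
    "student_teacher_ratio": rng.uniform(8, 30, n),
    "is_public": rng.integers(0, 2, n),
    "is_magnet": rng.integers(0, 2, n),
    "is_charter": rng.integers(0, 2, n),
})

# Mirror the Table 3 specification: log2-transform school size, student:teacher
# ratio, and home values, and keep the remaining features on their raw scales.
model = smf.ols(
    "avg_quality ~ pct_rural + pct_bachelors"
    " + np.log2(median_home_value) + np.log2(n_students)"
    " + np.log2(student_teacher_ratio) + is_public + is_magnet + is_charter",
    data=df,
).fit()
print(model.summary())
```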



Figure 2: Scatter plots displaying correlations of select demographic features of a school's ZIP code or county with its average $P$ (high quality).

Regression Analysis Because the variables identified above are correlated with each other, we use regression to estimate the effect of certain factors while controlling for others, with results shown in Table 3. Overall, home values, parental education, school size, public school status, and urban locations all show significant positive associations with quality scores. Thus, even controlling for financial resources, parental education, and other factors, articles from rural schools are still scored as significantly lower quality than those from urban schools.
Nevertheless, the effects, considered individually, are relatively modest. A 14 percentage point increase in percent urban population or a 17 percentage point increase in parental education (percent of adults with college degrees) corresponds to a 1 percentage point increase in average quality score, as does a doubling of home values or a quadrupling of school size. Average quality scores associated with public schools are 1.5 percentage points higher than private schools, controlling for other factors. Coefficients for charter schools, magnet schools, and student:teacher ratio are all sensible, though none are significant. Altogether, the combined effects of all these factors account for large differences in quality scores between wealthy, urban, educated locations, and poorer, rural, and less educated parts of the country.
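These magnitudes follow directly from the Table 3 coefficients; a short check of that arithmetic, using only the reported coefficients, is sketched below.

```python
# Coefficients from Table 3 (dependent variable: average P(high quality)).
beta_rural = -0.069         # per unit (0-1) change in % rural
beta_bachelors = 0.059      # per unit (0-1) change in % adults with a bachelor's degree
beta_log2_home = 0.010      # per doubling of median home value
beta_log2_students = 0.006  # per doubling of school size

print(0.01 / -beta_rural)     # ~0.145: a ~14 pp shift from rural to urban gives +1 pp quality
print(0.01 / beta_bachelors)  # ~0.169: a ~17 pp rise in college-educated adults gives +1 pp
print(1 * beta_log2_home)     # 0.010: one doubling of home values gives ~1 pp
print(2 * beta_log2_students) # 0.012: quadrupling school size (two doublings) gives ~1 pp
```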
Dependent variable: $P$ (high quality) Observations: 968 schools
<table><tr><td>Feature</td><td>Coefficient</td></tr><tr><td>Intercept</td><td>0.076</td></tr><tr><td>% Rural</td><td>-0.069***</td></tr><tr><td>% Adults ≥ Bachelor Deg.</td><td>0.059**</td></tr><tr><td>log2(Median Home Value)</td><td>0.010*</td></tr><tr><td>log2(Number of students)</td><td>0.006*</td></tr><tr><td>log2(Student:Teacher ratio)</td><td>-0.007</td></tr><tr><td>Is Public</td><td>0.015*</td></tr><tr><td>Is Magnet</td><td>0.013</td></tr><tr><td>Is Charter</td><td>0.033</td></tr><tr><td>R²</td><td>0.140</td></tr><tr><td>adj.R²</td><td>0.133</td></tr></table>
Table 3: Regression of the average $P$ (high quality) of a school in the U.S. SCHOOL NEWS dataset, on demographic variables. We observe that larger schools in educated, urban, and wealthy areas of the U.S. tend to be scored higher by the GPT-3 quality filter. See §A.6 for more information on these features. *$p < 0.05$, **$p < 0.01$, ***$p < 0.001$.
Summary and Limitations This analysis reveals an unintended consequence of the GPT-3 quality filter: by attempting to exclude text that is less like mainstream news and Wikipedia, the filter reinforces a language ideology that text from authors of wealthy, urban, and educated backgrounds is more valuable for inclusion in language model training data. These implicit preferences align with the attributes of the authors who dominate the corpora from §2, which the filter considers to be high quality.
Although most of the above findings are robust to alternate model specifications, the model ultimately accounts for only a relatively small amount of variance in quality scores. In addition, most of our features are taken from a single point in time, and do not account for changing demographics over the period 2010–2019. Data errors could also arise from how datasets were aligned (based on school name and ZIP code). These findings may not generalize to other domains (e.g., social media), and inclusion of additional features could affect these findings. For additional models which include vote share and racial demographics taken from NCES data, see §A.7.
# 4 Alignment with Other Notions of Quality
The GPT-3 quality filter purports to judge the quality of text. Humans, on the other hand, frequently judge the quality of text without the use of automated systems. In this section, we consider three forms of human evaluations: institutional awards to select books, fact-checkers' designated factuality of news outlets, and standardized test essays evaluated by human graders. How well does the behavior of the GPT-3 quality filter map onto these other notions of quality?
# 4.1 Data
Factually (Un)reliable News To analyze the correspondence between the GPT-3 quality filter and news factuality, we use the list provided by Baly et al. (2018) to identify a set of popular news sources from a broad range of factuality ratings and political leanings.[19] Using Newspaper3k,[20] we scrape and score 9.9K and 7.7K articles from high and low factuality news outlets, respectively.
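A minimal sketch of this scraping step with Newspaper3k is shown below; the URL is a placeholder, and the real pipeline iterates over article URLs from the outlets in the Baly et al. (2018) list before passing each article's text to the quality filter.

```python
from newspaper import Article  # pip install newspaper3k

def fetch_article_text(url):
    """Download and parse one news article, returning its body text."""
    article = Article(url)
    article.download()
    article.parse()
    return article.text

# Placeholder URL: substitute a real article URL from one of the listed outlets;
# the returned text is then scored with the replicated quality filter.
if __name__ == "__main__":
    text = fetch_article_text("https://example.com/some-news-article")
    print(text[:200])
```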
Essay Exams Next, to analyze the correspondence between the GPT-3 quality filter and essay scores, we collect and score 12.1K participant essays from the Test Of English as a Foreign Language (TOEFL) exam, a widely used English language proficiency test (Blanchard et al., 2013). The TOEFL exam responses include official scores from exam readers, as well as each essay's prompt.
Award-Winning Literature Finally, to analyze the correspondence between the GPT-3 quality filter and literary awards, we select and score books from Books3 and the Gutenberg corpus (Brooke et al., 2015) that have won a Pulitzer Prize in various categories. We collected these data by scraping the publicly available list of recipients.[21]
# 4.2 Results
If the filter aligns with news factuality, we would expect articles from factually reliable sources to be rated as higher quality than those from factually unreliable ones. However, we find no difference in the quality distribution between articles from high and low factuality news sources ($p = 0.085$, two-sample Kolmogorov–Smirnov test; Figure 3). Many factually unreliable news articles are considered high quality by the filter (§A.8).
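A minimal sketch of this distributional comparison is shown below, with synthetic placeholder scores standing in for the per-article $P$ (high quality) values.

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder quality scores; the real comparison uses P(high quality) for the
# 9.9K high-factuality and 7.7K low-factuality articles.
rng = np.random.default_rng(0)
high_factuality_scores = rng.beta(2, 3, size=9900)
low_factuality_scores = rng.beta(2, 3, size=7700)

# Two-sample Kolmogorov-Smirnov test: do the two score distributions differ?
stat, p_value = ks_2samp(high_factuality_scores, low_factuality_scores)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```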

Figure 3: There is no difference in quality scores between articles written by news sources of high and low factual reliability.

Figure 4: Among works that have won a Pulitzer Prize, the quality filter tends to favor nonfiction and longer fictional forms, disfavoring poetry and dramatic plays.
Turning to the TOEFL exam responses, we would expect that if the filter agrees with essay scores, higher-scoring essays would receive higher quality scores. While essay scores are weakly correlated with quality scores (Pearson $r = 0.12$, $p < 0.001$), Table 4 demonstrates that the essay's prompt is far more predictive of the essay's quality designation. For example, essays responding to a prompt ($P4$) which asks participants to describe "...whether advertisements make products seem much better than they really are" are much less likely to be filtered than essays responding to all other prompts, including $P6$, which asks participants to describe "...whether it is best to travel in a group" (see §A.9 for more details). The latter prompt tends to invoke personal experiences in the responses.
Finally, if the filter aligns with literary awards, we would expect that most Pulitzer-Prize winning books would achieve high quality scores. On the contrary, quality scores vary heavily based on the genre (Figure 4). Poetry and drama are less favored by the filter relative to non-fiction, fiction, and even fan fiction (from the BookCorpus; Zhu et al. 2015).
Dependent variable: $P$ (high quality) Observations: 12.1K TOEFL exams
<table><tr><td>Feature</td><td>Coefficient</td></tr><tr><td>Intercept</td><td>0.0631***</td></tr><tr><td>Low score</td><td>-0.0414</td></tr><tr><td>High score</td><td>0.0339</td></tr><tr><td>Prompt 7</td><td>-0.0283***</td></tr><tr><td>Prompt 6</td><td>-0.0204***</td></tr><tr><td>Prompt 2</td><td>0.0068***</td></tr><tr><td>Prompt 8</td><td>0.0346***</td></tr><tr><td>Prompt 3</td><td>0.0880***</td></tr><tr><td>Prompt 5</td><td>0.1470***</td></tr><tr><td>Prompt 4</td><td>0.6745***</td></tr><tr><td>R²</td><td>0.712</td></tr><tr><td>adj. R²</td><td>0.711</td></tr></table>
Table 4: Regression of the quality of a TOEFL exam essay on its assigned score and prompt. While we observe some relationship between the score an essay receives and its quality score, the essay prompts themselves have significantly higher effect sizes. The highest quality essays come from Prompt 4, which asks participants to discuss products and advertisements. See §A.9 for visualizations of distributions of quality across prompts and scores. *$p < 0.05$, **$p < 0.01$, ***$p < 0.001$.
Summary Our analysis demonstrates that the GPT-3 quality filter conflicts with other standards of text quality. Of course, even the alternative standards we compare here are subject to their own language ideologies. Readers are more likely to trust news as factual if its political position aligns with their own (Mitchell et al., 2018). English-language teaching pedagogies are rooted in ideologies about well-spokenness (Vanegas et al., 2016). Literary awards favor white and male authors.[22] In general, any designation of text as high quality is subjective and influenced by sociopolitical context.
# 5 Discussion
The above sections have demonstrated that automated filtering of text to build language modeling corpora may lead to counterintuitive or undesirable exclusion of sources. Because of the variety of use cases for language models and the broad range of text that could be appropriate for certain tasks, we suggest that there is no simple, universal standard for what should be considered high quality text. Indeed, there is a long history of privileging some people's spoken language as better or more "correct" than others. Researchers and practitioners of NLP who are aware of this history have the option to be intentional in their design of systems that, however implicitly, risk excluding the language of underprivileged identities or communities.
Some amount of selection in building corpora is inevitable. It is not possible to collect a uniform random sample of all written utterances. However, our findings suggest that current selection methods are, for many purposes, flawed. Future work into alternative filtering criteria could be paired with investigations into the unintended consequences of their assumptions.
We do not believe that there is likely to be a single solution to this challenge. Indeed, the text that is best suited for training a model may depend on the application of that model. At a minimum, however, the NLP community could more carefully consider and clearly document the criteria by which text is being selected for inclusion. NLP practitioners could also be explicit about the reasons for using certain sources, even if those reasons are related to availability or empirical performance. A collection of tests could also be deployed (and improved over time), to give a clear understanding of the implications of different choices of filters.
More generally, we echo calls in the literature for more thoughtful and inclusive data collection (Jo and Gebru, 2020; Bender et al., 2021; Tanweer et al., 2021). This could include, but is not limited to: a) intentionally curating data from people and viewpoints that are not otherwise well represented; b) including a greater diversity of genres; c) more nuanced or intentional exclusion criteria; d) more thorough interrogation of what text is being excluded; e) developing standard checks for prominent biases in inclusion; and f) abandoning the notion of a general-purpose corpus.
# 6 Ethical Considerations & Limitations
Our U.S. SCHOOL NEWS dataset comes with many limitations, as described in §3.1. For example, the dataset contains sampling biases (e.g., it depends on use of a specific publication template), and the ZIP codes and counties are not uniformly spread across U.S. states. In general, our dataset likely captures neither the least resourced schools (which may not have access to online resources) in the United States, nor the wealthiest ones (who may have their own publication platforms). However, we speculate that an expanded corpus, which included writings from these schools, would demonstrate a continuation of the trends we report in this paper.
While the text in our dataset varies considerably along topical, stylistic, and demographic variables, it is a niche domain; the text is a specific genre meant for local student consumption, its authors are U.S. students, and it thus primarily represents U.S.-centric cultural and political perspectives. We acknowledge that we also perpetuate some of the biases we identify, especially by working with English language text from the United States. We hope future work will extend this study of language ideologies to multilingual settings, other textual domains, and different sets of authors.
With respect to demographic variables, we merge census demographics with school-level data via ZIP codes or counties, which are imperfect identifiers of a school, since ZIP codes (and counties) may include multiple schools of varying resource levels. Moreover, tracking demographic variables and other author metadata, if deployed at scale, implies a certain level of invasive surveillance (Brayne, 2017). Future work may explore how to maintain the rights of authors as data subjects and producers while mapping demographic representation in large corpora.
Finally, we did not seek consent from authors to scrape their articles. The ethical and legal norms around scraping public-facing web data, especially those produced by minors, are still in flux (Fiesler et al., 2020), and may not align with user perceptions of what constitutes fair use of online communications (Williams et al., 2017). For these reasons, we do not release the corpus of school newspaper articles, and only use it for analysis and evaluation. We only make available a dataset of demographic variables and quality scores per school, to support reproducibility.
# 7 Related Work
Language Ideologies Language ideologies have been widely explored in the sociolinguistics literature (Gal and Irvine, 1995; Rosa and Flores, 2017; Craft et al., 2020, inter alia). An ideology that promotes the inherent correctness, clarity, and objectivity of certain language varieties over others is a mechanism for linguistic discrimination (Craft et al., 2020; Gal, 2016; MacSwan, 2020; Rickford and King, 2016). A salient example of such discrimination is the stigmatization of second language speakers of English (Lindemann, 2005).
Language ideologies have an important, but often unacknowledged, influence on the development of NLP technologies (Blodgett et al., 2020). For example, an ideology that distinguishes between standard and non-standard language variations surfaces in text normalization tasks (van der Goot et al., 2021), which tend to strip documents of pragmatic nuance (Baldwin and Chai, 2011) and social signals (Nguyen et al., 2021). Language on the Internet has been historically treated as a noisy variant of English, even though lexical variation on the Internet is highly communicative of social signals (Eisenstein, 2013), and varies considerably along demographic variables (Eisenstein et al., 2014) and community membership (Lucy and Bamman, 2021). Language ideologies also surface in tools for toxicity detection; for example, the classification behavior of the PERSPECTIVE API (a popular hate speech detector) aligns with the attitudes of conservative, white, female annotators, who tend to perceive African-American dialects as more toxic (Sap et al., 2021). In this work, we examine the language ideology encoded in a widely used quality filter for text data selection.
Critiques of Laissez-Faire Data Collection We provide empirical evidence that laissez-faire data collection (i.e., filtering large web data sources) leads to data homogeneity (Bender et al., 2021). As an alternative to laissez-faire collection, Jo and Gebru (2020) recommend drawing on institutional archival practices. However, we note that language ideologies are also prevalent (and may not be explicit) in institutional archives, which, for example, have preferred colonial voices over colonized ones when documenting historical events (Trouillot, 1995; Decker, 2013).
Other Quality Filters Other definitions of text quality are used to create pretraining datasets, some of which do not rely on the datasets from §2. However, all techniques adopt language ideologies of what constitutes high quality text. Bad-word filtering, which removes documents that contain certain blocklisted words, disproportionately excludes language about and by minority groups (Dodge et al., 2021). Filtering Internet content for popularity (Radford et al., 2019) leads to data homogeneity based on the characteristics of viral media and the composition of userbases in online forums (§2). Even lightweight filters (Aghajanyan et al., 2021; Rae et al., 2021) put more emphasis on features like document length than on factuality when determining what makes a document high quality. Any filtering method requires transparent justification and recognition of tradeoffs.
Downstream Behavior The behavior of language systems aligns with what we would expect from a language ideology that favors training data written by a narrow, powerful sector of society. For example, dialogue agents perform significantly worse when engaging in conversations about race (Schlesinger et al., 2018) and with minority dialects of English (Mengesha et al., 2021). GPT-3 frequently resorts to stereotypes when minority groups are mentioned in its prompt (Abid et al., 2021; Blodgett, 2021). GPT-3 is also prone to producing hate speech (Gehman et al., 2020) and misinformation (McGuffie and Newhouse, 2020), which we would expect if its quality filter fails to distinguish the factual reliability of news sources in its training data (§4). Concurrent to this work, Gao (2021) shows that aggressive data filtering with the GPT-3 quality filter degrades downstream task performance. A closer analysis of how the language ideologies in data selection lead to certain model behaviors is a rich area for future work.
# 8 Conclusion
Using a new dataset of U.S. school newspapers, we find that the conventional, automated valuation of Wikipedia, newswire, books, and popular Internet content as references for high quality text implicitly favors content written by authors from larger schools in wealthier, educated, urban areas of the United States. Adopting this language ideology for text data selection leads to implicit, yet systematic and as-yet undocumented, inequalities in terms of whose language is more likely to be included in training corpora. Although no single action will solve this complicated issue, data curators and researchers could be more intentional about curating text from underrepresented authors and groups, gathering sources from multiple genres and writing styles, and documenting their curation procedures and possible sources of exclusion.
# Acknowledgments
This paper benefited from thoughtful feedback from a number of people: Emily M. Bender, Amber Boydstun, Timnit Gebru, Eun Seo Jo, Kelvin Luu, Lucy Li, Julian Michael, Amandalynne Paullada, Katharina Reinecke, Swabha Swayamdipta, Kelly Wright, and Kaitlyn Zhou.
# References
Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, and Luke Zettlemoyer. 2021. HTLM: Hyper-text pre-training and prompting of language models. arXiv, abs/2107.06955.
Gabriel Arana. 2018. Decades of failure. *Columbia Journalism Review*.
Tyler Baldwin and Joyce Chai. 2011. Beyond normalization: Pragmatics of word form in text messages. In Proceedings of 5th International Joint Conference on Natural Language Processing.
Ramy Baly, Georgi Karadzhov, Dimitar Alexandrov, James Glass, and Preslav Nakov. 2018. Predicting factuality of reporting and bias of news media sources. In Proceedings of EMNLP.
David Bamman and Noah A Smith. 2014. Unsupervised discovery of biographical structure from text. Transactions of the Association for Computational Linguistics, 2:363-376.
Jack Bandy and Nicholas Vincent. 2021. Addressing "documentation debt" in machine learning: A retrospective datasheet for BookCorpus. In NeurIPS.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of FAccT.
Jonah Berger and Katherine L. Milkman. 2012. What makes online content viral? Journal of Marketing Research, 49(2):192-205.
Julian R. Betts, Kim S. Reuben, and Anne Danenberg. 2000. Equal Resources, Equal Outcomes? The Distribution of School Resources and Student Achievement in California. Public Policy Institute of California.
Steve Bien-Aime. 2016. AP stylebook normalizes sports as a male space. Newspaper Research Journal, 37(1):44-57.
Daniel Blanchard, Joel R. Tetreault, Derrick Higgins, A. Cahill, and Martin Chodorow. 2013. TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013:15.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.
Su Lin Blodgett. 2021. Sociolinguistically Driven Approaches for Just Natural Language Processing. Ph.D. thesis, University of Massachusetts Amherst.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of EMNLP.
Sarah Brayne. 2017. Big data surveillance: The case of policing. American Sociological Review, 82(5):977-1008.
Julian Brooke, Adam Hammond, and Graeme Hirst. 2015. GutenTag: an NLP-driven tool for digital humanities research in the Project Gutenberg corpus. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv, abs/2005.14165.
Janice Kai Chen, Ilena Peng, Jasen Lo, Trisha Ahmed, Simon J. Levien, and Devan Karp. 2021. Voices investigation: Few black, latinx students are editors of top college newspapers. AAJA Voices.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In ICLR.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale.
Justin T. Craft, Kelly E. Wright, Rachel Elizabeth Weissler, and Robin M. Queen. 2020. Language and discrimination: Generating meaning, perceiving identities, and discriminating outcomes. Annual Review of Linguistics, 6(1):389-407.
Stephanie Decker. 2013. The silence of the archives: business history, post-colonialism and archival ethnography. Management & Organizational History, 8(2):155-173.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL.
Robert DiNicola. 1994. Teaching journalistic style with the AP stylebook: Beyond fussy rules and dogma of 'correctness'. The Journalism Educator, 49(2):64-70.
Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, and Matt Gardner. 2021. Documenting the english colossal clean crawled corpus. arXiv, abs/2104.08758.
Penelope Eckert. 1989. Jocks and burnouts: Social categories and identity in the high school. Teachers college press.
Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of NAACL, pages 359-369.
Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2014. Diffusion of lexical change in social media. PLoS ONE, 9.
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.
Casey Fiesler, Nathan Beard, and Brian Keegan. 2020. No robots, spiders, or scrapers: Legal and ethical regulation of data collection methods in social media terms of service. In Proceedings of ICWSM.
Paula Frole, Anna Jo Bratton, Jeff McMillan, Pia Sarkar, Jerry Schwartz, and Raghuram Vadarevu. 2020. The Associated Press stylebook 2020-2022. The Associated Press.
Susan Gal. 2016. Sociolinguistic differentiation, page 113-136. Cambridge University Press.
Susan Gal and Judith T. Irvine. 1995. The boundaries of languages and disciplines: How ideologies construct difference. Social Research, 62(4):967-1001.
Leo Gao. 2021. An empirical exploration in quality filtering of text data. arXiv, abs/2109.00698.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800Gb dataset of diverse text for language modeling. arXiv, abs/2101.00027.
Tiana Gaudette, Ryan Scrivens, Garth Davies, and Richard Frank. 2021. Upvoting extremism: Collective identity formation and the extreme right on reddit. New Media & Society, 23(12):3491-3508.
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daume III, and Kate Crawford. 2021. Datasheets for datasets. Communications of the ACM, 64(12):86-92.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2020.
Joyce Still Gibson. 1961. A study of the status of high school newspapers in the Virginia public schools. Master's thesis, University of Richmond.
Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText corpus.
Eduardo Graells-Garrido, Mounia Lalmas, and Filippo Menczer. 2015. First women, second sex: Gender bias in Wikipedia. In Proceedings of the 26th ACM conference on hypertext & social media.
Rob Greenwald, Larry V. Hedges, and Richard D. Laine. 1996. The effect of school resources on student achievement. Review of Educational Research, 66(3):361-396.
Elizabeth Grieco. 2018. Newsroom employees are less diverse than U.S. workers overall. Pew Research Center. [online; accessed 2022-01-22].
Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In Proceedings of NAACL.
Keira Huang. 2013. Wikipedia fails to bridge gender gap. South China Morning Post. [online; accessed 2022-01-11].
Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: Strategies for collecting sociocultural data in machine learning. Proceedings of FAccT.
Paresh Kharya and Ali Alvi. 2021. Using deepspeed and megatron to train megatron-turing nlg 530b, the world's largest and most powerful generative language model. [online; accessed 2022-01-20].
William Labov. 2006. The Social Stratification of English in New York City, 2 edition. Cambridge University Press.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Lee & Low Books. 2020. Where is the diversity in publishing? The 2019 diversity baseline survey results. [online; accessed 2021-11-24].
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Stephanie Lindemann. 2005. Who speaks "broken English"? US undergraduates' perceptions of nonnative English. International Journal of Applied Linguistics, 15(2):187-212.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv, abs/1907.11692.
Li Lucy and David Bamman. 2021. Characterizing English Variation across Social Media Communities with BERT. Transactions of the Association for Computational Linguistics, 9:538-556.
Jeff MacSwan. 2020. Academic English as standard language ideology: A renewed research agenda for asset-based language education. *Language Teaching Research*, 24(1):28-36.
Michael Mandiberg. 2020. Mapping wikipedia. The Atlantic. [online; accessed 2021-11-24].
Sorin Adam Matei and Brian C. Britt. 2017. Structural Differentiation in Social Media. Springer International Publishing.
Kris McGuffie and Alex Newhouse. 2020. The radicalization risks of GPT-3 and advanced neural language models. arXiv, abs/2009.06807.
Zion Mengesha, Courtney Heldreth, Michal Lahav, Juliana Sublewski, and Elyse Tuennerman. 2021. "I don't think these devices are very culturally sensitive."—Impact of automated speech recognition errors on African Americans. Frontiers in Artificial Intelligence, 4:169.
Meta-wiki. 2018. Community insights/2018 report/contributors. [online; accessed 2012-11-24].
Amy Mitchell, Jeffrey Gottfried, Michael Barthel, and Nami Sumida. 2018. Can Americans tell factual from opinion statements in the news? Pew Research Center's Journalism Project. [online; accessed 2022-01-22].
Dong Nguyen, Laura Rosseel, and Jack Grieve. 2021. On learning and representing social meaning in NLP: A sociolinguistic perspective. In Proceedings of NAACL.
Katherine Panciera, Aaron Halfaker, and Loren Terveen. 2009. Wikipedians are born, not made: A study of power editors on wikipedia. In Proceedings of the ACM 2009 International Conference on Supporting Group Work.
Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(85):2825-2830.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. [online; accessed 2022-01-22].
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. [online; accessed 2022-01-22].
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Jason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorraine Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv, abs/2112.11446.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Sean F Reardon and Ann Owens. 2014. 60 years after Brown: Trends and consequences of school segregation. Annual Review of Sociology, 40:199-218.
John R. Rickford. 1985. Ethnicity as a sociolinguistic boundary. American Speech, 60(2):99-125.
John R. Rickford and Sharese King. 2016. Language and linguistics on trial: Hearing rachel jeantel (and other vernacular speakers) in the courtroom and beyond. Language, 92(4):948-988.
Jonathan Rosa and Nelson Flores. 2017. Unsettling race and language: Toward a raciolinguistic perspective. Language in Society, 46(5):621-647.
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2021. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. arXiv, abs/2111.07997.
Dante J. Scala and Kenneth M. Johnson. 2017. Political polarization along the rural-urban continuum? the geography of the presidential vote, 2000-2016. The ANNALS of the American Academy of Political and Social Science, 672(1):162-184.
Ari Schlesinger, Kenton P. O'Hara, and Alex S. Taylor. 2018. Let's talk about race: Identity, chatbots, and AI. In Proceedings of CHI.
Anissa Tanweer, Emily Kalah Gade, PM Krafft, and Sarah K Dreier. 2021. Why the data revolution needs qualitative thinking. Harvard Data Science Review.
Michel-Rolph Trouillot. 1995. Silencing the past: Power and the production of history. Beacon Press.
Rob van der Goot, Alan Ramponi, Arkaitz Zubiaga, Barbara Plank, Benjamin Muller, Iñaki San Vicente Roncal, Nikola Ljubesić, Özlem Çetinoğlu, Rahmad Mahendra, Talha Çolakoglu, Timothy Baldwin, Tommaso Caselli, and Wladimir Sidorenko. 2021. MultiLexNorm: A shared task on multilingual lexical normalization. In Proceedings of the Seventh Workshop on Noisy User-generated Text.
Marlon Vanegas, Juan Restrepo, Yurley Zapata, Giovany Rodríguez, Luis Cardona, and Cristian Muñoz. 2016. Linguistic discrimination in an English language teaching program: Voices of the invisible others. Ikala, Revista de Lenguaje y Cultura, 21.
Fred Vultee. 2012. A paleontology of style. Journalism Practice, 6(4):450-464.
Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. It's a man's Wikipedia? Assessing gender inequality in an online encyclopedia. In Proceedings of the AAAI conference on web and social media.
Jason Weismueller, Paul Harrigan, Kristof Coussement, and Tina Tessitore. 2022. What makes people share political content on social media? The role of emotion, authority and ideology. Computers in Human Behavior, 129:107150.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of LREC.
Matthew L Williams, Pete Burnap, and Luke Sloan. 2017. Towards an ethical framework for publishing Twitter data in social research: Taking into account users' views, online context and algorithmic estimation. Sociology, 51(6):1149-1168.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of ICCV.
# A Appendix
# A.1 Language Model Training Corpora
We display a list of popular language modeling corpora in Table 5.
# A.2 Datasheet
Our datasheet for the U.S. SCHOOL NEWS dataset can be found here: https://bit.ly/3rLrrmwV.
# A.3 Quality Filter Hyperparameters
We display the hyperparameters of our logistic regression classifier (reproduction of the filter developed by Brown et al. 2020) in Table 6.
# A.4 Example Articles
We display example articles and their quality scores in the U.S. SCHOOL NEWS dataset in Table 11.
# A.5 Topic Modeling
See the quality distribution among topics for 10K opinion pieces in Figure 5.
# A.6 Demographic Features
We display a table of features we use in our demographic regression model in Table 7.
# A.7 Additional Regressions
Here we include regression results from two models with additional covariates.
We first consider race as a possible omitted variable, given the extent of school segregation in the U.S. (Reardon and Owens, 2014). NCES data provides the distribution of students by race for each school, using a particular set of racial categories, which comes with obvious limitations. Nevertheless, we use the raw percentage scores provided as additional covariates in this model as a validity check. We exclude the Native and Pacific Islander categories, due to imbalanced data and geographic concentration, as well as the white category, to avoid a saturated model.
As shown in Table 8, the findings are nearly identical to the results in the main paper, with the exception that home values are no longer significant. The only racial category that shows a significant effect is Asian. However, we note a positive correlation between percentage of Asian students and median home values (Pearson $r = 0.32$ , $p < 0.001$ ), suggesting that the variable for percentage of Asian students may be partially absorbing the effect of our measure of wealth.
Table 9 shows the results for an alternate model which includes $\%$ GOP vote share in the 2016 election. Once again, the results are very similar to the results in the main paper, although there is a strong (and significant) negative association between GOP vote share and quality scores, whereas the measures of home values and percent rural are no longer significant.
The results for this model exemplify the difficulty of working with highly correlated variables. Given the strong association between GOP voters and rural areas, GOP vote share serves as an effective proxy for other variables of interest. However, because the results of the 2016 Presidential election were likely somewhat idiosyncratic, and because we find wealth and geography to be a more plausible explanation for differences in student writing than political preferences among their parents, we opt for the model without GOP vote share in the main paper.
# A.8 Low Factuality News Considered High Quality
We display example low factuality news articles that are assigned high quality scores by the GPT-3 quality filter in Table 12.
# A.9 TOEFL Exam Responses
We display the distribution of quality scores against prompts and essay scores in the TOEFL exam dataset in Figure 6. We display the prompts of this dataset in Table 10.
<table><tr><td>Model</td><td>Pretraining Data Sources</td><td>Citation</td></tr><tr><td>ELMo</td><td>1B Word benchmark</td><td>(Peters et al., 2018)</td></tr><tr><td>GPT-1</td><td>BookCorpus</td><td>(Radford et al., 2018)</td></tr><tr><td>GPT-2</td><td>WebText</td><td>(Radford et al., 2019)</td></tr><tr><td>BERT</td><td>BookCorpus + Wikipedia</td><td>(Devlin et al., 2019)</td></tr><tr><td>RoBERTa</td><td>BookCorpus + Wikipedia + CC-news + OpenWebText + Stories</td><td>(Liu et al., 2019)</td></tr><tr><td>XL-Net</td><td>BookCorpus + Wikipedia + Giga5 + ClueWeb 2012-B + Common Crawl</td><td>(Yang et al., 2019)</td></tr><tr><td>ALBERT</td><td>BERT, RoBERTa, and XL-net's data sources</td><td>(Lan et al., 2020)</td></tr><tr><td>T5</td><td>Common Crawl (filtered)</td><td>(Raffel et al., 2020)</td></tr><tr><td>XLM-R</td><td>Common Crawl (filtered)</td><td>(Conneau et al., 2020)</td></tr><tr><td>BART</td><td>BookCorpus + Wikipedia</td><td>(Lewis et al., 2020)</td></tr><tr><td>GPT-3</td><td>Wikipedia + Books + WebText (expanded) + Common Crawl (filtered)</td><td>(Brown et al., 2020)</td></tr><tr><td>ELECTRA</td><td>BookCorpus + Wikipedia + Giga5 + ClueWeb 2012-B + Common Crawl</td><td>(Clark et al., 2020)</td></tr><tr><td>Megatron-Turing NLG</td><td>The Pile + Common Crawl (filtered) + RealNews + Stories</td><td>(Kharya and Alvi, 2021)</td></tr><tr><td>Switch-C</td><td>Common Crawl (filtered)</td><td>(Fedus et al., 2021)</td></tr><tr><td>Gopher</td><td>MassiveWeb + Books + Common Crawl (filtered) + News + GitHub + Wikipedia</td><td>(Rae et al., 2021)</td></tr></table>
Table 5: Overview of recent language models and their training corpora. All studies tend to draw from the same core data sources: Wikipedia, Books, News, or filtered web dumps.
<table><tr><td>Computing Infrastructure</td><td>56 Intel Xeon CPU Cores</td></tr><tr><td>Number of search trials</td><td>100</td></tr><tr><td>Search strategy</td><td>uniform sampling</td></tr><tr><td>Best validation F1</td><td>90.4</td></tr></table>
<table><tr><td>Hyperparameter</td><td>Search space</td><td>Best assignment</td></tr><tr><td>regularization</td><td>choice[L1, L2]</td><td>L1</td></tr><tr><td>C</td><td>uniform(float[0, 1]</td><td>0.977778</td></tr><tr><td>solver</td><td>64</td><td>liblinear</td></tr><tr><td>tol</td><td>loguniform(float[10e-5, 10e-3]</td><td>0.000816</td></tr><tr><td>ngram range</td><td>choice["1 2", "1 3", "2 3"]</td><td>"1 2"</td></tr><tr><td>random state</td><td>uniform-int[0, 100000]</td><td>44555</td></tr><tr><td>tokenization</td><td>whitespace</td><td>whitespace</td></tr><tr><td>vectorization</td><td>hashing</td><td>hashing</td></tr><tr><td>remove stopwords</td><td>choice[Yes, No]</td><td>No</td></tr></table>
Table 6: Hyperparameter search space and best assignments for our re-implementation of the GPT-3 quality filter.
Top words per topic (as labeled in Figure 5, ordered as in the figure):
Topic 5: christmas dress holiday day thanksgiving dance prom halloween year wear
Topic 2: school college year high senior seniors students class time classes
Topic 0: food restaurant eat pizza menu chicken coffee meal foods cheese
Topic 6: students school student teachers class high classes time schools teacher
Topic 1: people just like life time don know day things ve
Topic 7: movie film movies characters story character plot films marvel book
Topic 3: album music song songs band lyrics sound listen like artists
Topic 4: people women media world new social states gun country like
Topic 9: game team players games season sports football teams play athletes
Topic 8: trump president election vote political clinton country obama people donald

Figure 5: Considering 10K opinion pieces in U.S. SCHOOL NEWS, we observe that the GPT-3 quality filter prefers topics that are more prevalent in Wikipedia or newswire.
<table><tr><td>Feature</td><td>Description</td><td>Level</td><td>Source</td></tr><tr><td>Is Charter</td><td>Is the school a charter school?</td><td>School</td><td>NCES database</td></tr><tr><td>Is Private</td><td>Is the school a private school?</td><td>School</td><td>NCES database</td></tr><tr><td>Is Magnet</td><td>Is the school a magnet school?</td><td>School</td><td>NCES database</td></tr><tr><td>% Black Students</td><td>% students who identify as Black</td><td>School</td><td>NCES database</td></tr><tr><td>% Asian Students</td><td>% students who identify as Asian</td><td>School</td><td>NCES database</td></tr><tr><td>% Mixed Students</td><td>% students who identify as Mixed race</td><td>School</td><td>NCES database</td></tr><tr><td>% Hispanic Students</td><td>% students who identify as Hispanic</td><td>School</td><td>NCES database</td></tr><tr><td>Student:Teacher</td><td>Student-teacher ratio</td><td>School</td><td>NCES database</td></tr><tr><td>School Size</td><td>Total number of students</td><td>School</td><td>NCES database</td></tr><tr><td>Median Home Value</td><td>Median home value</td><td>ZIP code</td><td>Census</td></tr><tr><td>% Adults ≥ Bachelor Deg.</td><td>% adults (≥ 25 years old) with at least a bachelor's degree</td><td>ZIP code</td><td>Census</td></tr><tr><td>% Rural</td><td>Percent of a county population living in a rural area</td><td>County</td><td>Census</td></tr><tr><td>% 2016 GOP Vote</td><td>Republican vote share in the 2016 presidential election</td><td>County</td><td>MIT Election Lab</td></tr></table>
Table 7: Description of features we include in our demographic analyses.
Dependent variable: $P$ (high quality). Observations: 968 schools.
<table><tr><td>Feature</td><td>Coefficient</td></tr><tr><td>Intercept</td><td>0.134</td></tr><tr><td>% Rural</td><td>-0.073***</td></tr><tr><td>% Adults ≥ Bachelor Deg.</td><td>0.049*</td></tr><tr><td>log2(Median Home Value)</td><td>0.007</td></tr><tr><td>log2(Number of students)</td><td>0.005*</td></tr><tr><td>log2(Student:Teacher ratio)</td><td>-0.008</td></tr><tr><td>Is Public</td><td>0.020*</td></tr><tr><td>Is Magnet</td><td>0.013</td></tr><tr><td>Is Charter</td><td>0.035*</td></tr><tr><td>% Asian Students</td><td>0.081**</td></tr><tr><td>% Mixed Students</td><td>0.051</td></tr><tr><td>% Black Students</td><td>-0.009</td></tr><tr><td>% Hispanic Students</td><td>-0.020</td></tr><tr><td>R²</td><td>0.152</td></tr><tr><td>adj.R²</td><td>0.142</td></tr></table>
Table 8: Regression of the average $P$ (high quality) of a school in the U.S. SCHOOL NEWS dataset, on demographic variables. As in the main paper, larger schools in educated and urban areas of the U.S. tend to be scored higher by the GPT-3 quality filter. Asian is the only categorical race variable that shows a significant association (using data and categories taken directly from NCES). The association with home values is no longer significant, plausibly explained by a correlation between a higher proportion of Asian students and higher median home values. See §A.6 for more information on these features. $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$.
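A regression of this form can be run with statsmodels, as sketched below. The DataFrame here is random placeholder data standing in for the 968-school table, and the column names are assumptions rather than the paper's actual variable names.

```python
# Sketch of the school-level OLS regression in Table 8, assuming statsmodels.
# The DataFrame is random placeholder data, not the actual 968-school table.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50  # placeholder sample size; the paper uses 968 schools
schools = pd.DataFrame({
    "p_high_quality": rng.uniform(0, 1, n),   # average filter score per school
    "pct_rural": rng.uniform(0, 1, n),
    "pct_bachelor": rng.uniform(0, 1, n),
    "median_home_value": rng.uniform(1e5, 1e6, n),
    "n_students": rng.integers(100, 3000, n),
    "student_teacher": rng.uniform(10, 25, n),
    "is_public": rng.integers(0, 2, n),
    "is_magnet": rng.integers(0, 2, n),
    "is_charter": rng.integers(0, 2, n),
    "pct_asian": rng.uniform(0, 1, n),
    "pct_mixed": rng.uniform(0, 1, n),
    "pct_black": rng.uniform(0, 1, n),
    "pct_hispanic": rng.uniform(0, 1, n),
})

model = smf.ols(
    "p_high_quality ~ pct_rural + pct_bachelor + np.log2(median_home_value)"
    " + np.log2(n_students) + np.log2(student_teacher) + is_public"
    " + is_magnet + is_charter + pct_asian + pct_mixed + pct_black"
    " + pct_hispanic",
    data=schools,
).fit()
print(model.summary())  # coefficients, p-values, and R^2 as reported in Table 8
```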
Dependent variable: $P$ (high quality). Observations: 968 schools.
<table><tr><td>Feature</td><td>Coefficient</td></tr><tr><td>Intercept</td><td>0.248**</td></tr><tr><td>% Rural</td><td>-0.021</td></tr><tr><td>% Adults ≥ Bachelor Deg.</td><td>0.067**</td></tr><tr><td>log2(Median Home Value)</td><td>0.003</td></tr><tr><td>log2(Number of students)</td><td>0.006**</td></tr><tr><td>log2(Student:Teacher ratio)</td><td>-0.007</td></tr><tr><td>Is Public</td><td>0.017*</td></tr><tr><td>Is Magnet</td><td>0.009</td></tr><tr><td>Is Charter</td><td>0.027</td></tr><tr><td>% GOP vote share</td><td>-0.114***</td></tr><tr><td>R²</td><td>0.164</td></tr><tr><td>adj.R²</td><td>0.157</td></tr></table>
Table 9: Regression of the average $P$ (high quality) of a school in the U.S. SCHOOL NEWS dataset, on demographic variables, including % 2016 GOP Vote. We observe that including the political leaning of the county tends to wash out other variables, likely because partisan voting correlates heavily with other effects, like the urban/rural divide (Scala and Johnson, 2017). The only other covariates that stay significant are school size, parental education, and public (as opposed to private) schools. $^{*}p < 0.05$ , $^{**}p < 0.01$ , $^{***}p < 0.001$ .


Figure 6: TOEFL exam score is weakly correlated with quality score across prompts (Pearson correlation; $r = 0.12 \pm 0.05$ , $p \approx 0$ ; top), but the essay prompt seems to be a much stronger indicator of quality scores than the exam scores are (bottom).
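The correlation reported in this caption can be computed as in the minimal sketch below, assuming SciPy; the two score arrays are placeholders for the per-essay TOEFL and quality-filter scores.

```python
# Sketch of the Pearson correlation from Figure 6, assuming SciPy.
# The two arrays are placeholders for per-essay TOEFL and filter scores.
import numpy as np
from scipy.stats import pearsonr

toefl_scores = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 3.5, 4.0])
quality_scores = np.array([0.05, 0.08, 0.10, 0.16, 0.22, 0.07, 0.12])

r, p_value = pearsonr(toefl_scores, quality_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```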
<table><tr><td>ID</td><td>Text</td><td>P(high quality)</td></tr><tr><td>P7</td><td>It is more important for students to understand ideas and concepts than it is for them to learn facts.</td><td>0.04</td></tr><tr><td>P6</td><td>The best way to travel is in a group led by a tour guide.</td><td>0.05</td></tr><tr><td>P1</td><td>It is better to have broad knowledge of many academic subjects than to specialize in one specific subject.</td><td>0.07</td></tr><tr><td>P2</td><td>Young people enjoy life more than older people do.</td><td>0.08</td></tr><tr><td>P8</td><td>Successful people try new things and take risks rather than only doing what they already know how to do well.</td><td>0.10</td></tr><tr><td>P3</td><td>Young people nowadays do not give enough time to helping their communities.</td><td>0.16</td></tr><tr><td>P5</td><td>In twenty years, there will be fewer cars in use than there are today.</td><td>0.22</td></tr><tr><td>P4</td><td>Most advertisements make products seem much better than they really are.</td><td>0.74</td></tr></table>
Table 10: TOEFL prompt IDs and their text, ordered by the quality score assigned by the GPT-3 quality filter.
<table><tr><td>Category: Student-Life<br>P(high quality) = 0.001</td></tr><tr><td>As our seniors count down their final days until graduation, we will be featuring them each day. [REDACTED], what are your plans after graduation? To attend [REDACTED] in the fall and get my basics. Then attend the [REDACTED] program. What is your favorite high school memory? My crazy, obnoxious and silly 5th hour English with [REDACTED]. What advice do you have for underclassmen? Pay attention, stay awake (I suggest lots of coffee), and turn in your dang work! You can do it, keep your head up because you are almost there!</td></tr><tr><td>Category: News<br>P(high quality) = 0.99</td></tr><tr><td>On Monday, September 3rd, Colin Kaepernick, the American football star who started the “take a knee” national anthem protest against police brutality and racial inequality, was named the new face of Nike’s “Just Do It” 30th-anniversary campaign. Shortly after, social media exploded with both positive and negative feedback from people all over the United States. As football season ramps back up, this advertisement and the message behind it keeps the NFL Anthem kneeling protest in the spotlight.</td></tr></table>
Table 11: Examples of high school newspaper articles from U.S. SCHOOL NEWS. Many articles in the student-life and similar categories that are rated low quality have very different styles from the documents rated high quality.
<table><tr><td>Article from http://en-volve.com<br>P(high quality) = 0.93</td></tr><tr><td>The German government has effectively began the process of eliminating the unvaccinated by starving them to death by pushing grocery stories to ban unvaccinated residents from buying essential food items...The pressure on the unvaccinated grows and grows!...</td></tr><tr><td>Article from http://www.censored.news<br>P(high quality) = 0.98</td></tr><tr><td>The provisional number of births in the U.S. was 3,605,201 in 2020. That is the lowest number of births in the United States since 1979, according to the Centers for Disease Control. 2020 also had the lowest fertility rate since the government started tracking births in 1902. And don’t blame the so-called “pandemic.”...we’re learning in 2021 that intelligent people succumb to government psy-ops. But critical thinkers understood immediately that something was very wrong with all the COVID-19 stuff. Plus many among the global elite continually and openly gloat about their desire to cull the masses. Bill Gates isn’t even coy about his desires...</td></tr></table>
Table 12: Examples of news articles from low-factuality sources (as identified by MediaBiasFactCheck.com) that are rated high quality by the GPT-3 quality filter but contain COVID disinformation.
2201.10xxx/2201.10474/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:295992befee3bf93106d58a46ca7fe404bfe1f2105671437c9a646b64e8f1056
size 928464
2201.10xxx/2201.10474/layout.json
ADDED
The diff for this file is too large to render.
See raw diff

2201.10xxx/2201.10488/1e872e11-1f22-46ed-ad12-c2cf99c7dcb3_content_list.json
ADDED

The diff for this file is too large to render.
See raw diff