Add Batch df900803-103b-444a-b359-04487c317fa2
This view is limited to 50 files because it contains too many changes; see the raw diff for the complete change set.
- .gitattributes +64 -0
- 2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_content_list.json +0 -0
- 2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_model.json +0 -0
- 2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_origin.pdf +3 -0
- 2302.09xxx/2302.09419/full.md +0 -0
- 2302.09xxx/2302.09419/images.zip +3 -0
- 2302.09xxx/2302.09419/layout.json +0 -0
- 2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_content_list.json +1494 -0
- 2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_model.json +2070 -0
- 2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_origin.pdf +3 -0
- 2302.09xxx/2302.09432/full.md +277 -0
- 2302.09xxx/2302.09432/images.zip +3 -0
- 2302.09xxx/2302.09432/layout.json +0 -0
- 2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_content_list.json +0 -0
- 2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_model.json +0 -0
- 2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_origin.pdf +3 -0
- 2302.09xxx/2302.09450/full.md +507 -0
- 2302.09xxx/2302.09450/images.zip +3 -0
- 2302.09xxx/2302.09450/layout.json +0 -0
- 2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_content_list.json +0 -0
- 2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_model.json +0 -0
- 2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_origin.pdf +3 -0
- 2302.09xxx/2302.09462/full.md +476 -0
- 2302.09xxx/2302.09462/images.zip +3 -0
- 2302.09xxx/2302.09462/layout.json +0 -0
- 2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_content_list.json +1915 -0
- 2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_model.json +0 -0
- 2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_origin.pdf +3 -0
- 2302.09xxx/2302.09465/full.md +407 -0
- 2302.09xxx/2302.09465/images.zip +3 -0
- 2302.09xxx/2302.09465/layout.json +0 -0
- 2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_content_list.json +0 -0
- 2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_model.json +0 -0
- 2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_origin.pdf +3 -0
- 2302.09xxx/2302.09466/full.md +0 -0
- 2302.09xxx/2302.09466/images.zip +3 -0
- 2302.09xxx/2302.09466/layout.json +0 -0
- 2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_content_list.json +1287 -0
- 2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_model.json +1945 -0
- 2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_origin.pdf +3 -0
- 2302.09xxx/2302.09479/full.md +289 -0
- 2302.09xxx/2302.09479/images.zip +3 -0
- 2302.09xxx/2302.09479/layout.json +0 -0
- 2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_content_list.json +0 -0
- 2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_model.json +0 -0
- 2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_origin.pdf +3 -0
- 2302.09xxx/2302.09483/full.md +480 -0
- 2302.09xxx/2302.09483/images.zip +3 -0
- 2302.09xxx/2302.09483/layout.json +0 -0
- 2302.09xxx/2302.09491/43edad10-6281-465f-a84f-96b7449b2f43_content_list.json +0 -0
.gitattributes
CHANGED

@@ -10711,3 +10711,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2302.12xxx/2302.12231/671026f6-161a-4e1d-982f-2febc138cb47_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2302.13xxx/2302.13814/721f0ec8-cbe9-44ba-a2a1-ccade66080ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2302.14xxx/2302.14829/0bfe5842-d177-4436-adfa-95f60deab87e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09491/43edad10-6281-465f-a84f-96b7449b2f43_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09547/8ffb3dd6-70b9-4f9d-bda4-1a4261aa1e0b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09587/e6a25b87-e4c1-4847-b957-bf448bb5d015_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09598/2fb1e87f-9589-4a3e-8221-a8555d3eb4ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09606/453b0f8b-9741-4e64-bb91-9f77d1db87d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09650/00c3feee-2481-48f9-8b42-979a1850e464_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09664/2695698f-499c-417b-a9ab-73c294492806_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09746/fa8e1a20-98bc-4e98-96cc-8f316b28a78c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09778/80bce07d-211e-4e50-a095-99080b752a2c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09808/788c1a26-b362-488d-930b-6a184e544bd9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09814/2e65362b-10bd-453f-94f0-6656f79621d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09818/110d0e1b-a5d8-40ab-a367-31399b1912a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09831/8c046a65-5683-4cc0-957b-28d3b1dfe784_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09880/b1505bb2-9d13-40cf-a715-69074cf26a72_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09899/c9a18184-cfbe-40ae-a541-858573892d35_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09923/7ed0b061-583f-41c9-af9c-7f0109dede15_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09xxx/2302.09999/755184de-d6b1-435f-be72-8dbe767e9f60_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10025/00c046c7-c2a3-4048-a7a5-a74e615e7edc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10035/f569d62b-7e81-4ac5-87b6-bc1d7407bc4b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10109/6c003613-a1e6-444b-b9a0-a1bdea749291_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10121/61fb3654-0873-440b-a172-9863b279a7de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10149/f930b241-d197-4644-9735-2a9a9329487b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10166/216adce4-ed78-4d32-8c4e-d93b13da331a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10174/06172c46-de2a-4c29-bc13-0ce25ba987bd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10198/c272435d-5384-4570-ba38-1302a80b1840_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10205/b452df4e-e12a-40d8-8b2e-5952fb0fc22b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10249/5cc87cef-6a90-4f2a-b6d5-72d02abea70b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10322/8b4d07cf-7d10-44ae-95b6-7e6db975f9ad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10326/f971fceb-eafd-4573-b41a-784607ccbac6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10329/86ea4765-5d58-44ee-8033-afa7484ba7b7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10352/2544f6dc-31ab-4d1d-a23e-e4e66fc37418_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10371/d5e47aaa-24af-48e9-b361-87a53295e6f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10395/7c738ca1-6a90-45a3-8fac-025a425c72de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10407/95ee98b9-0fed-44ce-ba10-dcba36346538_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10416/fa503605-dc5c-4fbf-88b0-c67b4b4396b0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10417/0a7cb115-2b50-44c4-96e1-cb7c2496023e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10418/f6559790-94b8-4e23-a008-0a7422ad2c4a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10429/188bfa58-cd8f-40f0-9e9e-bb7a547a3c68_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10447/0f20d088-3bdb-4ff5-be88-b0261244b60b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10484/e8705855-f498-45bb-b85b-70d6a9a58407_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10511/5cb8a6b4-b146-46f5-91d6-3bde0e6945ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10512/8cb3dce6-1fb1-403a-9fa2-414dfe8df8f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10586/eea255d2-596c-46b6-a6dd-864f005bea59_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10632/f0f1d0ab-43dd-4fbe-be03-1130a58e8858_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10637/01a39361-1119-49b4-bf40-bdfbfda8a53a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10663/348c6efe-9ae2-480d-bf9e-947f46578646_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10668/f32a0cef-1ea4-44ba-810a-28581397cda2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10671/41389a21-2c3a-4c0a-a5a2-f5bff0d7db3b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10685/1508495c-712d-4ead-b8fe-d2c401d13194_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10717/43aa8b90-88e5-42b2-8474-4c3f8e17bebf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10724/09c79327-dcea-49a6-8eb6-7060e30058ff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.10xxx/2302.10918/304857a6-aadd-42ce-bd8e-bcad46bbe920_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.11xxx/2302.11382/f3695e67-cc53-4922-98b5-b7ae1a98be33_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.13xxx/2302.13795/54ad6ccb-d990-4602-969b-252923b66789_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2304.12xxx/2304.12298/58dad9f2-97e5-430b-b204-730656bc9687_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2306.17xxx/2306.17582/86610415-ddd6-45d5-93cc-88c3698b4d54_origin.pdf filter=lfs diff=lfs merge=lfs -text
2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_content_list.json
ADDED

The diff for this file is too large to render.
2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_model.json
ADDED

The diff for this file is too large to render.
2302.09xxx/2302.09419/69b7a39b-3ff5-496a-a625-b891ca3060de_origin.pdf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:548a61d82a43743353796702084cd3eddf811ecd06bc0b5d9f16f9751f574e89
+size 5550347
2302.09xxx/2302.09419/full.md
ADDED

The diff for this file is too large to render.
2302.09xxx/2302.09419/images.zip
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54926d31e62cc53aba41dcbd8bf58d036bd079adc344b9a488d9c53dd3af7da9
+size 2655255
2302.09xxx/2302.09419/layout.json
ADDED

The diff for this file is too large to render.
2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_content_list.json
ADDED

@@ -0,0 +1,1494 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
142,
|
| 8 |
+
78,
|
| 9 |
+
855,
|
| 10 |
+
118
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Dakuan Lu $^{1}$ , Hengkui Wu $^{3*}$ , Jiaqing Liang $^{2}$ , Yipei Xu $^{1}$ , Qianyu He $^{1}$ , Yipeng Geng $^{3}$ , Mengkun Han $^{3}$ , Yingsi Xin $^{3}$ , Yanghua Xiao $^{1*}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
211,
|
| 19 |
+
124,
|
| 20 |
+
791,
|
| 21 |
+
158
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University",
|
| 28 |
+
"bbox": [
|
| 29 |
+
132,
|
| 30 |
+
159,
|
| 31 |
+
870,
|
| 32 |
+
175
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{2}$ School of Data Science, Fudan University",
|
| 39 |
+
"bbox": [
|
| 40 |
+
327,
|
| 41 |
+
175,
|
| 42 |
+
675,
|
| 43 |
+
192
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "$^{3}$ SuperSymmetry Technologies",
|
| 50 |
+
"bbox": [
|
| 51 |
+
374,
|
| 52 |
+
192,
|
| 53 |
+
628,
|
| 54 |
+
208
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "{ludakuan1234, l.j.q.light, xuyipei000, abbey4799} $@$ gmail.com,",
|
| 61 |
+
"bbox": [
|
| 62 |
+
238,
|
| 63 |
+
209,
|
| 64 |
+
764,
|
| 65 |
+
225
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "{ypgeng, mkhan, ysxin, hkwu} @ssymmetry.com, shawyh@fudan.edu.cn",
|
| 72 |
+
"bbox": [
|
| 73 |
+
206,
|
| 74 |
+
225,
|
| 75 |
+
796,
|
| 76 |
+
242
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "Abstract",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
260,
|
| 86 |
+
252,
|
| 87 |
+
339,
|
| 88 |
+
266
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at https://github.com/ssymmetry/ BBT-FinCUGE-Applications. Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project.",
|
| 95 |
+
"bbox": [
|
| 96 |
+
141,
|
| 97 |
+
279,
|
| 98 |
+
460,
|
| 99 |
+
634
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "1 Introduction",
|
| 106 |
+
"text_level": 1,
|
| 107 |
+
"bbox": [
|
| 108 |
+
114,
|
| 109 |
+
646,
|
| 110 |
+
258,
|
| 111 |
+
662
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Pre-trained language models(PLMs), such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019), have led to great performance boosts across many NLP tasks. Despite the excellent performance of pre-trained language models (PLMs) on a large number of NLP tasks, their performance is often affected when applied to domain-specific texts that exhibit significant differences from general text in terms of word usage, syntax, and writing style (Gururangan et al., 2020; Gu et al., 2021). To address this issue, Gururangan et al. (2020) proposed that continuing to pre-train a general PLM on target domain corpora and task-relevant texts can effectively improve its performance on",
|
| 118 |
+
"bbox": [
|
| 119 |
+
112,
|
| 120 |
+
671,
|
| 121 |
+
489,
|
| 122 |
+
898
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "domain-specific tasks, while Gu et al. (2021) further suggested that pre-training domain-specific PLMs from scratch with a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific pre-trained language models have emerged in some domains, such as BioBERT (Peng et al., 2019a) and PubMedBERT (Gu et al., 2021) in the biomedicine field, which have been utilized for practical tasks like entity and relation extraction.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
507,
|
| 131 |
+
253,
|
| 132 |
+
884,
|
| 133 |
+
413
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "We collect all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table 2, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and improve the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT (Hou et al., 2020) and Mengzi-BERT-base-fin (Zhang et al., 2021). However, these models are based on the BERT-base model, have a single architecture type, and a parameter count (around 110 million) that is outdated and unable to meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version.",
|
| 140 |
+
"bbox": [
|
| 141 |
+
507,
|
| 142 |
+
416,
|
| 143 |
+
884,
|
| 144 |
+
738
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with high entity knowledge understanding and memorization capabilities. Although studies have shown that pre-trained PLMs on large-scale corpora already have some entity knowledge understanding and memorization capabilities, there are still some shortcomings. To address this issue, many studies have used knowledge-enhanced pretraining methods to improve PLMs' understanding and memorization of entity knowledge. However,",
|
| 151 |
+
"bbox": [
|
| 152 |
+
507,
|
| 153 |
+
741,
|
| 154 |
+
885,
|
| 155 |
+
919
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "aside_text",
|
| 161 |
+
"text": "arXiv:2302.09432v2 [cs.CL] 26 Feb 2023",
|
| 162 |
+
"bbox": [
|
| 163 |
+
21,
|
| 164 |
+
309,
|
| 165 |
+
60,
|
| 166 |
+
725
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "page_footnote",
|
| 172 |
+
"text": "*Corresponding author.",
|
| 173 |
+
"bbox": [
|
| 174 |
+
141,
|
| 175 |
+
904,
|
| 176 |
+
289,
|
| 177 |
+
917
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 0
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pretraining method based on the T5 model's text-to-text paradigm.",
|
| 184 |
+
"bbox": [
|
| 185 |
+
112,
|
| 186 |
+
84,
|
| 187 |
+
489,
|
| 188 |
+
181
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 1
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"text": "In addition, another challenge faced by Chinese financial NLP is the lack of corpus. The scale and diversity of corpora play an essential role in language model pre-training (Xu et al., 2020; Raffel et al., 2019; Gao et al., 2020). However, existing Chinese financial corpora are small in scale, poor in diversity, and not open, as shown in Table 1. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus needs to cover. To this end, we first collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in Table 2. According to the source distribution of these tasks, we have determined the range of text types we need to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus with about 300 GB of raw text, which draws on five different sources covering most text sources of Chinese financial NLP tasks to enhance its diversity.",
"bbox": [115, 185, 489, 505],
"page_idx": 1
},
{
"type": "text",
"text": "The widespread use of benchmark evaluations is a key driving force that has greatly improved and rapidly iterated PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), while the general benchmark evaluation for Chinese PLMs is CLUE (Xu et al., 2020). Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain.",
"bbox": [115, 512, 489, 800],
"page_idx": 1
},
{
"type": "text",
"text": "To address this issue and promote research in the financial domain, we propose CFLEB, the Chinese Financial Language Understanding and Generation Evaluation Benchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty,",
"bbox": [112, 806, 489, 919],
"page_idx": 1
},
{
"type": "text",
"text": "and more importantly, emphasize challenges that arise in real-world scenarios.",
"bbox": [507, 84, 880, 115],
"page_idx": 1
},
{
"type": "text",
"text": "Our contributions are summarized as follows:",
"bbox": [527, 116, 867, 131],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training.",
"- We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus.",
"- We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain."
],
"bbox": [531, 143, 882, 294],
"page_idx": 1
},
{
"type": "text",
"text": "2 Related Work",
"text_level": 1,
"bbox": [509, 307, 665, 323],
"page_idx": 1
},
{
"type": "text",
"text": "2.1 Domain-specific PLMs and Corpora",
"text_level": 1,
"bbox": [507, 334, 838, 350],
"page_idx": 1
},
{
"type": "text",
"text": "PLMs have achieved state-of-the-art performance in many NLP tasks (Devlin et al., 2018; Raffel et al., 2019; Liu et al., 2019). However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution from general to specific domains (Gururangan et al., 2020; Gu et al., 2021). To better adapt a language model to a target domain, pre-training on the corpus of the target domain is proposed (Gururangan et al., 2020). For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch results in substantial gains over continual pre-training of general-domain language models (Gu et al., 2021). Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora.",
"bbox": [507, 355, 884, 627],
"page_idx": 1
},
{
"type": "text",
"text": "In the field of financial NLP, domain-specific pre-trained language models (PLMs) have demonstrated their superiority over general-domain PLMs. For instance, Araci (2019) and Yang et al. (2020) pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, Hou et al. (2020) pre-trained BERT on Chinese financial news, analysis reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, Zhang et al. (2021) pre-trained the Chinese PLM Mengzi on a 20GB financial corpus and demonstrated its effectiveness on multiple downstream tasks.",
"bbox": [507, 629, 884, 885],
"page_idx": 1
},
{
"type": "text",
"text": "Table 1 summarizes the characteristics of typical PLMs and their corpora in the financial domain. It",
"bbox": [507, 887, 882, 917],
"page_idx": 1
},
{
"type": "text",
"text": "can be observed that both the scale of our model and corpus exceed existing works.",
"bbox": [112, 84, 487, 116],
"page_idx": 2
},
{
"type": "text",
"text": "2.2 Knowledge Enhanced Pre-training",
"text_level": 1,
"bbox": [112, 127, 431, 142],
"page_idx": 2
},
{
"type": "text",
"text": "Although PLMs can acquire rich linguistic knowledge from pretraining on large-scale corpora, many studies have shown that PLMs still have shortcomings in entity knowledge understanding and memory, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed (Yang et al., 2021). Therefore, PLMs can benefit from knowledge-enhanced pretraining methods that strengthen entity knowledge understanding and memory.",
"bbox": [112, 147, 489, 307],
"page_idx": 2
},
{
"type": "text",
"text": "For example, Ernie (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn existing entity knowledge from the corpus, without addressing the issues of sparse and long-tailed distribution of entity knowledge in the corpus.",
"bbox": [112, 309, 489, 451],
"page_idx": 2
},
{
"type": "text",
"text": "Ernie 3.0, introduced by Sun et al. (2021), incorporates the universal knowledge-text prediction (UKTP) task. This task involves a pair of triples from a knowledge graph and their corresponding sentences from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence, and determine the semantic relationship between them.",
"bbox": [112, 454, 489, 613],
"page_idx": 2
},
{
"type": "text",
"text": "The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision has a certain amount of noise, which means that the relation in the triple may not necessarily appear in the sentence (Smirnova and Cudré-Mauroux, 2018). Therefore, only masking the relation and predicting it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for the BERT-like model.",
"bbox": [112, 614, 489, 804],
"page_idx": 2
},
{
"type": "text",
"text": "To our knowledge, there is currently a gap in knowledge enhancement pre-training methods available for T5-like models.",
"bbox": [112, 808, 487, 854],
"page_idx": 2
},
{
"type": "text",
"text": "2.3 Domain-specific NLP Benchmarks",
"text_level": 1,
"bbox": [112, 866, 430, 881],
"page_idx": 2
},
{
"type": "text",
"text": "Various domain-specific NLP benchmarks have been proposed to compare the ability of different",
"bbox": [112, 887, 487, 917],
"page_idx": 2
},
{
"type": "text",
"text": "methods in modeling text from specific domains in a fair manner. The BLUE benchmark (Peng et al., 2019b) evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark (Gu et al., 2021) further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, FLUE (Shah et al., 2022) is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. In contrast, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works.",
"bbox": [507, 84, 884, 374],
"page_idx": 2
},
{
"type": "text",
"text": "3 The Corpus: BBT-FinCorpus",
"text_level": 1,
"bbox": [507, 391, 798, 407],
"page_idx": 2
},
{
"type": "text",
"text": "We build BBT-FinCorpus, the largest corpus in the Chinese financial domain, to obtain a superior pre-trained language model. Section 3.1 covers how we decided on the corpus contents. We collected, refined, and sorted the corpus to finally obtain BBT-FinCorpus, as elaborated in Section 3.3.",
"bbox": [507, 420, 882, 514],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 Coverage Confirmation of the Corpus",
"text_level": 1,
"bbox": [507, 533, 852, 549],
"page_idx": 2
},
{
"type": "text",
"text": "We believe that, since the purpose of domain pretraining is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table 2.",
"bbox": [507, 557, 882, 749],
"page_idx": 2
},
{
"type": "text",
"text": "It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance<sup>1</sup>, Tencent Finance<sup>2</sup>, Phoenix Finance<sup>3</sup>,",
"bbox": [507, 752, 884, 864],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "<sup>1</sup>https://finance.sina.com.cn/",
"bbox": [529, 878, 752, 891],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "<sup>2</sup>https://new.qq.com/ch/finance/",
"bbox": [532, 891, 766, 904],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "<sup>3</sup>https://finance.ifeng.com/",
"bbox": [532, 904, 736, 917],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/9d9cc4916d8ddc2d9a7271ced917e15f5f5763c2beeb605402320bb493683f0a.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>PLM</td><td>Size</td><td>Corpus Size</td><td>Corpus Sources</td></tr><tr><td>FinBERT (Araci, 2019)</td><td>110M</td><td>29M words</td><td>News filtered by financial keywords</td></tr><tr><td>FinBERT (Yang et al., 2020)</td><td>110M</td><td>4.9B tokens</td><td>Corporate Reports, Earnings Call Transcripts, Analyst Reports</td></tr><tr><td>FinBERT (Hou et al., 2020)</td><td>110M</td><td>3B tokens</td><td>News, Analyse reports, Company announcements and Encyclopedias</td></tr><tr><td>Mengzi-BERT-base-fin (Zhang et al., 2021)</td><td>110M</td><td>20GB file</td><td>News, Analyse reports, Company announcements</td></tr><tr><td>BBT-FinT5 (ours)</td><td>220M, 1B</td><td>80B tokens</td><td>Corporate Reports, Analyst Reports, Social media and Financial News</td></tr></table>",
"bbox": [115, 80, 892, 247],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/f509ce237ac5a3232554de21b56c50416bf0786713c8d28933b99d6ed0a0a978.jpg",
"table_caption": [
"Table 1: Typical financial PLMs and their corpora."
],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td>Text Source</td><td>Open State</td><td>Practicality</td></tr><tr><td>DuEE-fin (Han et al., 2022)</td><td>Financial news, Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>FinRE (Li et al., 2019)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>Announcement information extraction (Tianchi, 2018)</td><td>Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>Discovery of new entities in Internet finance (Datafountain, 2019)</td><td>Social media</td><td>Unspecified</td><td>Low</td></tr><tr><td>Announcement information extraction (Bien-data, 2019)</td><td>Company announcement</td><td>Unspecified</td><td>High</td></tr><tr><td>Construction of financial knowledge graph (Bien-data, 2020b)</td><td>Analyse report</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Event causality extraction (Biendata, 2021)</td><td>Financial news</td><td>Unspecified</td><td>Low</td></tr><tr><td>Financial NL2SQL (Biendata, 2022a)</td><td>Data query sentence</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2022b)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2020a)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>FinNL (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinNA (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinFE (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr><tr><td>FinNSP (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr></table>",
"bbox": [132, 282, 863, 586],
"page_idx": 3
},
{
"type": "text",
"text": "Table 2: Chinese financial datasets we collected, with their open source status and practicality scores",
"bbox": [156, 594, 838, 609],
"page_idx": 3
},
{
"type": "text",
"text": "$36\\mathrm{Kr}^{4}$ and Huxiu $^{5}$ . For company announcements and research reports, we chose Eastmoney $^{6}$ for crawling. For social media, we chose the two largest financial social media platforms on the Chinese Internet, Guba $^{7}$ and Xueqiu $^{8}$ , for crawling.",
"bbox": [112, 633, 489, 715],
"page_idx": 3
},
{
"type": "text",
"text": "3.2 Crawling and Filtering of the Corpus",
"text_level": 1,
"bbox": [112, 738, 453, 755],
"page_idx": 3
},
{
"type": "text",
"text": "We used a proxy-based distributed crawler to crawl public web pages. We filtered the web pages using a series of rules (Raffel et al., 2019; Yuan et al., 2021).",
"bbox": [112, 765, 489, 829],
"page_idx": 3
},
{
"type": "text",
"text": "3.3 Description of the Corpus",
"text_level": 1,
"bbox": [507, 634, 759, 651],
"page_idx": 3
},
{
"type": "text",
"text": "After crawling, cleaning, and processing, we obtained the FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language materials:",
"bbox": [507, 655, 885, 720],
"page_idx": 3
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Corporate announcements. These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB.",
"- Research reports. These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries,"
],
"bbox": [531, 731, 885, 919],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>4</sup>https://36kr.com/",
"bbox": [134, 851, 275, 866],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>5</sup>https://www.huxiu.com/",
"bbox": [136, 866, 310, 878],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>6</sup>https://www.eastmoney.com/",
"bbox": [136, 879, 341, 891],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>7</sup>https://guba.eastmoney.com/",
"bbox": [136, 891, 349, 904],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>8</sup>https://xueqiu.com/",
"bbox": [136, 904, 287, 917],
"page_idx": 3
},
{
"type": "text",
"text": "and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB.",
"bbox": [149, 84, 487, 179],
"page_idx": 4
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Financial news. These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB.",
"- Social media. These are the posts from all stockholders and bloggers published on stock bar and Xueqiu website over the past twenty years. After cleaning, the total size of the resulting text is about 120GB."
],
"bbox": [136, 190, 487, 376],
"page_idx": 4
},
{
"type": "text",
"text": "The corpus from the above five sources basically covers all types of texts in the common Chinese financial NLP.",
"bbox": [112, 385, 487, 432],
"page_idx": 4
},
{
"type": "text",
"text": "4 The Large PLM: BBT-FinT5",
"text_level": 1,
"bbox": [112, 444, 401, 462],
"page_idx": 4
},
{
"type": "text",
"text": "To enhance the performance of the Chinese financial NLP baseline and foster the growth of the open-source community in this domain, we introduce the FinT5 model. This model's architecture and pre-training tasks are consistent with the T5 (Raffel et al., 2019) model and are pre-trained on BBT-FinCorpus (refer to Section 3). We chose this model for its robust performance on many general benchmarks and compatibility with understanding and generating tasks based on the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that the FinT5 model significantly outperforms T5 trained on the general corpus.",
"bbox": [112, 469, 489, 694],
"page_idx": 4
},
{
"type": "text",
"text": "In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge enhancement pre-training method that we propose for the T5 model, which is based on triple masking.",
"bbox": [112, 695, 489, 791],
"page_idx": 4
},
{
"type": "text",
"text": "4.1 Pre-training Model Architecture and Task",
"text_level": 1,
"bbox": [112, 801, 450, 832],
"page_idx": 4
},
{
"type": "text",
"text": "Raffel et al. (2019) model all NLP tasks in a text-to-text format, which enables the use of a unified network architecture, training approach, and loss function to handle all NLP tasks, promoting transfer learning in the NLP field. Building upon this,",
"bbox": [112, 838, 490, 920],
"page_idx": 4
},
{
"type": "text",
"text": "they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained using MLM. Specifically, T5 utilizes the span mask method proposed by SpanBERT (Joshi et al., 2020), randomly masking contiguous spans covering $15\\%$ of a sentence rather than independent tokens.",
"bbox": [507, 84, 884, 197],
"page_idx": 4
},
{
"type": "text",
"text": "4.2 Pre-training Acceleration",
"text_level": 1,
"bbox": [507, 206, 756, 223],
"page_idx": 4
},
{
"type": "text",
"text": "We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed (Rasley et al., 2020) to accelerate the pre-training process. In particular, we found that using the BFLOAT16 (Kalamkar et al., 2019) half-precision floating-point format for optimization can effectively solve the problem of gradient overflow that occurs in the training process with FP16 half-precision floating-point format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. Kalamkar et al. (2019) pointed out that in the training of deep neural networks, the value range (i.e., exponent range) of the floating-point numbers used to represent each parameter in the network is more important for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as the FP32 format to represent the same exponent range as the FP32 format, at the cost of having three fewer mantissa bits than the FP16 format. Extensive experiments have shown that this trade-off makes the BFLOAT16 format as fast and memory-efficient as the FP16 format while having training stability and performance close to that of the FP32 format.",
"bbox": [507, 228, 885, 630],
"page_idx": 4
},
{
"type": "text",
"text": "4.3 Knowledge Enhancement Pre-training Method Based on Triple Masking",
"text_level": 1,
"bbox": [507, 640, 858, 673],
"page_idx": 4
},
{
"type": "text",
"text": "We propose a knowledge enhancement pre-training method based on triple masking (KETM).",
"bbox": [507, 677, 880, 709],
"page_idx": 4
},
{
"type": "text",
"text": "First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple.",
"bbox": [507, 709, 884, 838],
"page_idx": 4
},
{
"type": "text",
"text": "Next, for a sentence and its contained triple, we concatenate the triple at the beginning of the sentence. For the triple part, we randomly mask one element, and for the sentence part, we randomly mask $15\\%$ of a random-length span. Finally,",
"bbox": [507, 839, 885, 920],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/6289cbb4121017657fcd0ef9f27e0c52f91bea24b8faf05837f5c392b7d8b5d1.jpg",
"image_caption": [
"Figure 1: Knowledge enhancement pre-training method based on triple masking (KETM)"
],
"image_footnote": [],
"bbox": [159, 99, 843, 280],
"page_idx": 5
},
{
"type": "text",
"text": "we input the masked triple and sentence into the model and require the model to predict the masked element, as shown in Figure 1. The model is trained to fill the masked element in the triple based on the two unmasked elements in the triple and the partially masked sentence, which helps the model better understand and memorize entity-related knowledge.",
"bbox": [112, 338, 490, 467],
"page_idx": 5
},
{
"type": "text",
"text": "5 The Benchmark: BBT-CFLEB",
"text_level": 1,
"bbox": [112, 480, 413, 494],
"page_idx": 5
},
{
"type": "text",
"text": "In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks.",
"bbox": [112, 506, 489, 570],
"page_idx": 5
},
{
"type": "text",
"text": "5.1 Task Selection",
"text_level": 1,
"bbox": [112, 581, 273, 594],
"page_idx": 5
},
{
"type": "text",
"text": "We propose that for domain-specific NLP evaluation benchmarks, special attention should be paid to their practicality, especially in a high-value field such as finance, so that they better reflect the model's ability in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to evaluate the practicality of each task and gave a low, medium, or high practicality rating, only selecting tasks with a high practicality rating as candidate tasks. In addition, we only kept tasks with a clear open-source statement as candidate tasks. Finally, we selected the six tasks for BBT-CFLEB listed in Table 2.",
"bbox": [112, 602, 489, 810],
"page_idx": 5
},
{
"type": "text",
"text": "5.2 Task Introduction",
"text_level": 1,
"bbox": [112, 822, 302, 835],
"page_idx": 5
},
{
"type": "text",
"text": "CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows:",
"bbox": [112, 843, 487, 891],
"page_idx": 5
},
{
"type": "text",
"text": "- FinNL, a financial news classification dataset.",
"bbox": [134, 903, 489, 917],
"page_idx": 5
},
{
"type": "text",
"text": "Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.",
"bbox": [542, 338, 884, 434],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge (Lin, 2004). The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles.",
"- FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles.",
"- FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text into negative-neutral-positive categories, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.",
"- FinQA, a financial news announcement event question-answering dataset, derived from the DuEE-fin (Han et al., 2022) dataset. Given"
],
"bbox": [531, 448, 884,
+
917
|
| 994 |
+
],
|
| 995 |
+
"page_idx": 5
|
| 996 |
+
},
|
| 997 |
+
{
|
| 998 |
+
"type": "table",
|
| 999 |
+
"img_path": "images/c404511673079653a3b7297fe4678d623c819c1daae95254e6861b96b61d2d76.jpg",
|
| 1000 |
+
"table_caption": [],
|
| 1001 |
+
"table_footnote": [],
|
| 1002 |
+
"table_body": "<table><tr><td>Task Name</td><td>Introduction</td><td>Data</td><td>Evaluation</td></tr><tr><td>FinNL</td><td>Multi-label classification of financial news</td><td>8000/1000/1000</td><td>F1-score</td></tr><tr><td>FinNA</td><td>Generation of summaries for financial news</td><td>24000/3000/3000</td><td>Rouge</td></tr><tr><td>FinRE</td><td>Entity relation classification for financial news</td><td>7454/1489/3727</td><td>F1-score</td></tr><tr><td>FinFE</td><td>Sentiment classification of financial social media text</td><td>8000/1000/1000</td><td>Accuracy</td></tr><tr><td>FinQA</td><td>Question-answering for financial news/events</td><td>16000/2000/2000</td><td>F1-score</td></tr><tr><td>FinNSP</td><td>Detection of negative messages and entities in financial news</td><td>4800/600/600</td><td>F1-score</td></tr></table>",
|
| 1003 |
+
"bbox": [
|
| 1004 |
+
163,
|
| 1005 |
+
80,
|
| 1006 |
+
833,
|
| 1007 |
+
294
|
| 1008 |
+
],
|
| 1009 |
+
"page_idx": 6
|
| 1010 |
+
},
|
| 1011 |
+
{
|
| 1012 |
+
"type": "text",
|
| 1013 |
+
"text": "Table 3: Summary of CFLEB tasks.",
|
| 1014 |
+
"bbox": [
|
| 1015 |
+
374,
|
| 1016 |
+
304,
|
| 1017 |
+
620,
|
| 1018 |
+
319
|
| 1019 |
+
],
|
| 1020 |
+
"page_idx": 6
|
| 1021 |
+
},
|
| 1022 |
+
{
|
| 1023 |
+
"type": "text",
|
| 1024 |
+
"text": "financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles.",
|
| 1025 |
+
"bbox": [
|
| 1026 |
+
147,
|
| 1027 |
+
344,
|
| 1028 |
+
487,
|
| 1029 |
+
470
|
| 1030 |
+
],
|
| 1031 |
+
"page_idx": 6
|
| 1032 |
+
},
|
| 1033 |
+
{
|
| 1034 |
+
"type": "text",
|
| 1035 |
+
"text": "- FinNSP, a financial negative news and its subject determination dataset. Given financial news or social media text and entities mentioned in the text, the model needs to determine if the text contains negative news related to any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles.",
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
136,
|
| 1038 |
+
483,
|
| 1039 |
+
489,
|
| 1040 |
+
659
|
| 1041 |
+
],
|
| 1042 |
+
"page_idx": 6
|
| 1043 |
+
},
|
| 1044 |
+
{
|
| 1045 |
+
"type": "text",
|
| 1046 |
+
"text": "5.3 Leaderboard Introduction",
|
| 1047 |
+
"text_level": 1,
|
| 1048 |
+
"bbox": [
|
| 1049 |
+
112,
|
| 1050 |
+
671,
|
| 1051 |
+
369,
|
| 1052 |
+
686
|
| 1053 |
+
],
|
| 1054 |
+
"page_idx": 6
|
| 1055 |
+
},
|
| 1056 |
+
{
|
| 1057 |
+
"type": "text",
|
| 1058 |
+
"text": "We have organized the tasks into multiple leaderboards according to different ability requirements (Xu et al., 2020), so that researchers can observe the model's ability rankings from different perspectives. The leaderboards of FinCUGE are as follows:",
|
| 1059 |
+
"bbox": [
|
| 1060 |
+
112,
|
| 1061 |
+
693,
|
| 1062 |
+
489,
|
| 1063 |
+
788
|
| 1064 |
+
],
|
| 1065 |
+
"page_idx": 6
|
| 1066 |
+
},
|
| 1067 |
+
{
|
| 1068 |
+
"type": "list",
|
| 1069 |
+
"sub_type": "text",
|
| 1070 |
+
"list_items": [
|
| 1071 |
+
"- Overall leaderboard: includes all six tasks.",
|
| 1072 |
+
"- Understanding ability leaderboard: includes four language comprehension tasks, FinNL, FinRE, FinFE, and FinNSP.",
|
| 1073 |
+
"- Generation ability leaderboard: includes two language generation tasks, FinNA and FinQA."
|
| 1074 |
+
],
|
| 1075 |
+
"bbox": [
|
| 1076 |
+
136,
|
| 1077 |
+
800,
|
| 1078 |
+
487,
|
| 1079 |
+
917
|
| 1080 |
+
],
|
| 1081 |
+
"page_idx": 6
|
| 1082 |
+
},
|
| 1083 |
+
{
|
| 1084 |
+
"type": "text",
|
| 1085 |
+
"text": "6 Experiments",
|
| 1086 |
+
"text_level": 1,
|
| 1087 |
+
"bbox": [
|
| 1088 |
+
507,
|
| 1089 |
+
344,
|
| 1090 |
+
655,
|
| 1091 |
+
360
|
| 1092 |
+
],
|
| 1093 |
+
"page_idx": 6
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "text",
|
| 1097 |
+
"text": "In this section, we first introduces the basic settings of the experiment, including the basic information of the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. Then we conduct sufficient experimental and comparative analysis to validate the effectiveness of the proposed model and method.",
|
| 1098 |
+
"bbox": [
|
| 1099 |
+
507,
|
| 1100 |
+
370,
|
| 1101 |
+
884,
|
| 1102 |
+
482
|
| 1103 |
+
],
|
| 1104 |
+
"page_idx": 6
|
| 1105 |
+
},
|
| 1106 |
+
{
|
| 1107 |
+
"type": "text",
|
| 1108 |
+
"text": "6.1 Experiments Setup",
|
| 1109 |
+
"text_level": 1,
|
| 1110 |
+
"bbox": [
|
| 1111 |
+
507,
|
| 1112 |
+
495,
|
| 1113 |
+
705,
|
| 1114 |
+
511
|
| 1115 |
+
],
|
| 1116 |
+
"page_idx": 6
|
| 1117 |
+
},
|
| 1118 |
+
{
|
| 1119 |
+
"type": "text",
|
| 1120 |
+
"text": "6.1.1 Pre-trained Language Models",
|
| 1121 |
+
"text_level": 1,
|
| 1122 |
+
"bbox": [
|
| 1123 |
+
507,
|
| 1124 |
+
516,
|
| 1125 |
+
805,
|
| 1126 |
+
532
|
| 1127 |
+
],
|
| 1128 |
+
"page_idx": 6
|
| 1129 |
+
},
|
| 1130 |
+
{
|
| 1131 |
+
"type": "text",
|
| 1132 |
+
"text": "The models participating in the comparative experiment of this section include:",
|
| 1133 |
+
"bbox": [
|
| 1134 |
+
507,
|
| 1135 |
+
536,
|
| 1136 |
+
884,
|
| 1137 |
+
567
|
| 1138 |
+
],
|
| 1139 |
+
"page_idx": 6
|
| 1140 |
+
},
|
| 1141 |
+
{
|
| 1142 |
+
"type": "list",
|
| 1143 |
+
"sub_type": "text",
|
| 1144 |
+
"list_items": [
|
| 1145 |
+
"- GPT2-base (Zhao et al., 2019). A Chinese GPT2 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020).",
|
| 1146 |
+
"- T5-base (Zhao et al., 2019). A Chinese T5 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020).",
|
| 1147 |
+
"- FinBERT (Hou et al., 2020). A Chinese BERT for the financial domain released by Hou et al. (2020).",
|
| 1148 |
+
"- Mengzi-BERT-base-fin (Zhang et al., 2021). A Chinese BERT for the financial domain released by Zhang et al. (2021).",
|
| 1149 |
+
"- FinT5-base. Our Chinese pre-trained language model for the financial domain, pretrained on our financial corpus, FinCorpus. Its model architecture, parameter size, and"
|
| 1150 |
+
],
|
| 1151 |
+
"bbox": [
|
| 1152 |
+
531,
|
| 1153 |
+
580,
|
| 1154 |
+
884,
|
| 1155 |
+
917
|
| 1156 |
+
],
|
| 1157 |
+
"page_idx": 6
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "table",
|
| 1161 |
+
"img_path": "images/f9f65dfa52182145f47968935e9bfe846dcb7d525c7b68c3a2ba1b457b619b07.jpg",
|
| 1162 |
+
"table_caption": [],
|
| 1163 |
+
"table_footnote": [],
|
| 1164 |
+
"table_body": "<table><tr><td>PLMs</td><td>FinFE</td><td>FinNL</td><td>FinNSP</td><td>FinRE</td><td>Un.Avg.</td><td>FinNA</td><td>FinQA</td><td>Ge.Avg.</td><td>Avg.</td></tr><tr><td>GPT2-base</td><td>79.05</td><td>84.09</td><td>91.30</td><td>36.37</td><td>72.70</td><td>44.19</td><td>75.22</td><td>59.71</td><td>68.37</td></tr><tr><td>T5-base</td><td>79.40</td><td>87.48</td><td>95.43</td><td>54.93</td><td>79.56</td><td>48.54</td><td>83.58</td><td>66.06</td><td>74.89</td></tr><tr><td>FinBERT-base</td><td>79.45</td><td>84.69</td><td>69.01</td><td>55.33</td><td>72.37</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Mengzi-BERT-base-fin</td><td>79.50</td><td>85.88</td><td>71.72</td><td>58.25</td><td>73.59</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BBT-FinT5-base</td><td>80.19</td><td>87.55</td><td>94.50</td><td>60.62</td><td>80.21</td><td>50.06</td><td>84.82</td><td>67.44</td><td>76.29</td></tr><tr><td>BBT-FinT5-base-KE</td><td>79.43</td><td>87.77</td><td>95.05</td><td>61.79</td><td>80.26</td><td>51.36</td><td>85.66</td><td>68.51</td><td>76.84</td></tr><tr><td>BBT-FinT5-large</td><td>80.24</td><td>88.44</td><td>94.54</td><td>61.88</td><td>81.78</td><td>51.42</td><td>85.95</td><td>68.69</td><td>77.07</td></tr></table>",
|
| 1165 |
+
"bbox": [
|
| 1166 |
+
119,
|
| 1167 |
+
80,
|
| 1168 |
+
885,
|
| 1169 |
+
200
|
| 1170 |
+
],
|
| 1171 |
+
"page_idx": 7
|
| 1172 |
+
},
|
| 1173 |
+
{
|
| 1174 |
+
"type": "text",
|
| 1175 |
+
"text": "Table 4: Results of BBT-CFLEB from different PLMs.",
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
310,
|
| 1178 |
+
209,
|
| 1179 |
+
684,
|
| 1180 |
+
223
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 7
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "text",
|
| 1186 |
+
"text": "pre-training hyperparameters are the same as T5-v1.1-base.",
|
| 1187 |
+
"bbox": [
|
| 1188 |
+
149,
|
| 1189 |
+
250,
|
| 1190 |
+
485,
|
| 1191 |
+
280
|
| 1192 |
+
],
|
| 1193 |
+
"page_idx": 7
|
| 1194 |
+
},
|
| 1195 |
+
{
|
| 1196 |
+
"type": "list",
|
| 1197 |
+
"sub_type": "text",
|
| 1198 |
+
"list_items": [
|
| 1199 |
+
"- FinT5-base-KE. Knowledge-enhanced version of FinT5-base, enhanced by KETM method using CN-DBPedia (Xu et al., 2017) knowledge graph.",
|
| 1200 |
+
"- FinT5-large. Our proposed Chinese pretrained language model for the financial domain, with a total of about 1 billion model parameters, and the pre-training hyperparameters are the same as T5-base."
|
| 1201 |
+
],
|
| 1202 |
+
"bbox": [
|
| 1203 |
+
136,
|
| 1204 |
+
294,
|
| 1205 |
+
487,
|
| 1206 |
+
448
|
| 1207 |
+
],
|
| 1208 |
+
"page_idx": 7
|
| 1209 |
+
},
|
| 1210 |
+
{
|
| 1211 |
+
"type": "text",
|
| 1212 |
+
"text": "6.1.2 Fine-tuning",
|
| 1213 |
+
"text_level": 1,
|
| 1214 |
+
"bbox": [
|
| 1215 |
+
112,
|
| 1216 |
+
463,
|
| 1217 |
+
265,
|
| 1218 |
+
478
|
| 1219 |
+
],
|
| 1220 |
+
"page_idx": 7
|
| 1221 |
+
},
|
| 1222 |
+
{
|
| 1223 |
+
"type": "text",
|
| 1224 |
+
"text": "For generative models (GPT, T5), we evaluated all six datasets by modeling all tasks as text-to-text. For BERT-based models, we evaluated them on four language understanding tasks: FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for all tasks.",
|
| 1225 |
+
"bbox": [
|
| 1226 |
+
112,
|
| 1227 |
+
482,
|
| 1228 |
+
487,
|
| 1229 |
+
580
|
| 1230 |
+
],
|
| 1231 |
+
"page_idx": 7
|
| 1232 |
+
},
|
| 1233 |
+
{
|
| 1234 |
+
"type": "text",
|
| 1235 |
+
"text": "6.2 Experiment 1: Comparison of Pre-trained Model Architectures",
|
| 1236 |
+
"text_level": 1,
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
112,
|
| 1239 |
+
590,
|
| 1240 |
+
416,
|
| 1241 |
+
621
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 7
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "text",
|
| 1247 |
+
"text": "For the two models in the general domain, GPT2-base and T5-base, their pre-training corpora, hyperparameters, and training volume are all the same, but their average scores differ significantly, with T5-base significantly outperforming GPT2-base, as shown in Table 4. This difference is mainly due to the differences in the architectures, parameter sizes, and pre-training methods of the T5 and GPT models. This performance confirms the correctness of our choice of the T5 model.",
|
| 1248 |
+
"bbox": [
|
| 1249 |
+
112,
|
| 1250 |
+
627,
|
| 1251 |
+
487,
|
| 1252 |
+
788
|
| 1253 |
+
],
|
| 1254 |
+
"page_idx": 7
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "text",
|
| 1258 |
+
"text": "6.3 Experiment 2: Effectiveness of Domain Pre-training",
|
| 1259 |
+
"text_level": 1,
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
112,
|
| 1262 |
+
801,
|
| 1263 |
+
467,
|
| 1264 |
+
832
|
| 1265 |
+
],
|
| 1266 |
+
"page_idx": 7
|
| 1267 |
+
},
|
| 1268 |
+
{
|
| 1269 |
+
"type": "text",
|
| 1270 |
+
"text": "As shown in Table 4, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
112,
|
| 1273 |
+
839,
|
| 1274 |
+
489,
|
| 1275 |
+
919
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 7
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "text",
|
| 1281 |
+
"text": "effectiveness of domain pre-training and the effectiveness of FinCorpus.",
|
| 1282 |
+
"bbox": [
|
| 1283 |
+
507,
|
| 1284 |
+
250,
|
| 1285 |
+
882,
|
| 1286 |
+
281
|
| 1287 |
+
],
|
| 1288 |
+
"page_idx": 7
|
| 1289 |
+
},
|
| 1290 |
+
{
|
| 1291 |
+
"type": "text",
|
| 1292 |
+
"text": "6.4 Experiment 3: Superiority Compared to Existing Models in the domain",
|
| 1293 |
+
"text_level": 1,
|
| 1294 |
+
"bbox": [
|
| 1295 |
+
507,
|
| 1296 |
+
294,
|
| 1297 |
+
870,
|
| 1298 |
+
325
|
| 1299 |
+
],
|
| 1300 |
+
"page_idx": 7
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "text",
|
| 1304 |
+
"text": "As shown in Table 4, in the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain.",
|
| 1305 |
+
"bbox": [
|
| 1306 |
+
507,
|
| 1307 |
+
331,
|
| 1308 |
+
882,
|
| 1309 |
+
411
|
| 1310 |
+
],
|
| 1311 |
+
"page_idx": 7
|
| 1312 |
+
},
|
| 1313 |
+
{
|
| 1314 |
+
"type": "text",
|
| 1315 |
+
"text": "6.5 Experiment 4: Effectiveness of KETM",
|
| 1316 |
+
"text_level": 1,
|
| 1317 |
+
"bbox": [
|
| 1318 |
+
507,
|
| 1319 |
+
423,
|
| 1320 |
+
855,
|
| 1321 |
+
438
|
| 1322 |
+
],
|
| 1323 |
+
"page_idx": 7
|
| 1324 |
+
},
|
| 1325 |
+
{
|
| 1326 |
+
"type": "text",
|
| 1327 |
+
"text": "As shown in Table 4, by comparing FinT5-base-ke with FinT5-base, it can be seen that the knowledge-enhanced text modeling method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising the performance on other tasks, thus proving the effectiveness of the KETM method.",
|
| 1328 |
+
"bbox": [
|
| 1329 |
+
507,
|
| 1330 |
+
444,
|
| 1331 |
+
882,
|
| 1332 |
+
571
|
| 1333 |
+
],
|
| 1334 |
+
"page_idx": 7
|
| 1335 |
+
},
|
| 1336 |
+
{
|
| 1337 |
+
"type": "text",
|
| 1338 |
+
"text": "6.6 Experiment 5: Effectiveness of parameter scaling up",
|
| 1339 |
+
"text_level": 1,
|
| 1340 |
+
"bbox": [
|
| 1341 |
+
507,
|
| 1342 |
+
585,
|
| 1343 |
+
880,
|
| 1344 |
+
617
|
| 1345 |
+
],
|
| 1346 |
+
"page_idx": 7
|
| 1347 |
+
},
|
| 1348 |
+
{
|
| 1349 |
+
"type": "text",
|
| 1350 |
+
"text": "As shown in Table 4, the performance comparison between FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of parameter scaling up.",
|
| 1351 |
+
"bbox": [
|
| 1352 |
+
507,
|
| 1353 |
+
621,
|
| 1354 |
+
880,
|
| 1355 |
+
719
|
| 1356 |
+
],
|
| 1357 |
+
"page_idx": 7
|
| 1358 |
+
},
|
| 1359 |
+
{
|
| 1360 |
+
"type": "text",
|
| 1361 |
+
"text": "7 Conclusion",
|
| 1362 |
+
"text_level": 1,
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
507,
|
| 1365 |
+
732,
|
| 1366 |
+
640,
|
| 1367 |
+
746
|
| 1368 |
+
],
|
| 1369 |
+
"page_idx": 7
|
| 1370 |
+
},
|
| 1371 |
+
{
|
| 1372 |
+
"type": "text",
|
| 1373 |
+
"text": "In this article, we introduced three new contributions to the domain of NLP in the context of Chinese finance. We created the largest open-source corpus for this domain, called FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. To enhance our pre-training method, we developed a unique knowledge-based approach called KETM,",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
507,
|
| 1376 |
+
758,
|
| 1377 |
+
884,
|
| 1378 |
+
919
|
| 1379 |
+
],
|
| 1380 |
+
"page_idx": 7
|
| 1381 |
+
},
|
| 1382 |
+
{
|
| 1383 |
+
"type": "text",
|
| 1384 |
+
"text": "which was effective. We also created a benchmark to evaluate the understanding and generation capabilities of language models, called CFLEB. We believe domain benchmarks should prioritize practicality to better reflect how improvements in language models in academia can benefit the real world. Our future work includes expanding FinCorpus and FinT5 and exploring multilingual and multimodal applications.",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
112,
|
| 1387 |
+
84,
|
| 1388 |
+
489,
|
| 1389 |
+
230
|
| 1390 |
+
],
|
| 1391 |
+
"page_idx": 8
|
| 1392 |
+
},
|
| 1393 |
+
{
|
| 1394 |
+
"type": "text",
|
| 1395 |
+
"text": "References",
|
| 1396 |
+
"text_level": 1,
|
| 1397 |
+
"bbox": [
|
| 1398 |
+
510,
|
| 1399 |
+
83,
|
| 1400 |
+
608,
|
| 1401 |
+
98
|
| 1402 |
+
],
|
| 1403 |
+
"page_idx": 8
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "list",
|
| 1407 |
+
"sub_type": "ref_text",
|
| 1408 |
+
"list_items": [
|
| 1409 |
+
"Dogu Araci. 2019. Finbert: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063.",
|
| 1410 |
+
"Biendata. 2019. Ccks 2019 extraction of public company announcement information.",
|
| 1411 |
+
"Biendata. 2020a. Ccks 2020: Cross-class few-shot transfer event extraction for financial domain.",
|
| 1412 |
+
"Biendata. 2020b. Ccks 2020: Evaluation of automated construction techniques for financial knowledge graph based on ontology.",
|
| 1413 |
+
"Biendata. 2021. Ccks 2021: Event relation extraction for financial texts (part ii) - extraction of causal relationships between events.",
|
| 1414 |
+
"Biendata. 2022a. Ccks2022: Evaluation of nl2sql for financial domain.",
|
| 1415 |
+
"Biendata. 2022b. Ccks2022: Few-shot event extraction for financial domain.",
|
| 1416 |
+
"Datafountain. 2019. Discovery of new entities in internet finance.",
|
| 1417 |
+
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
|
| 1418 |
+
"Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.",
|
| 1419 |
+
"Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23.",
|
| 1420 |
+
"Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.",
|
| 1421 |
+
"Cuiyun Han, Jinchuan Zhang, Xinyu Li, Guojin Xu, Weihua Peng, and Zengfeng Zeng. 2022. Due-fin: A large-scale dataset for document-level event extraction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 172-183. Springer.",
|
| 1422 |
+
"Panpan Hou, Mengchao Zhang, Zhibing Fu, and Yu Li. 2020. Finbert. https://github.com/valuesimplex/FinBERT. GitHub repository, commit: ec1b14b96de9bdd5217abba1d197428cf00ddaa6."
|
| 1423 |
+
],
|
| 1424 |
+
"bbox": [
|
| 1425 |
+
510,
|
| 1426 |
+
107,
|
| 1427 |
+
884,
|
| 1428 |
+
917
|
| 1429 |
+
],
|
| 1430 |
+
"page_idx": 8
|
| 1431 |
+
},
|
| 1432 |
+
{
|
| 1433 |
+
"type": "list",
|
| 1434 |
+
"sub_type": "ref_text",
|
| 1435 |
+
"list_items": [
|
| 1436 |
+
"Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
|
| 1437 |
+
"Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. 2019. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322.",
|
| 1438 |
+
"Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. 2019. Chinese relation extraction with multi-grained information and external linguistic knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4377-4386, Florence, Italy. Association for Computational Linguistics.",
|
| 1439 |
+
"Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
|
| 1440 |
+
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
|
| 1441 |
+
"Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019a. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy. Association for Computational Linguistics.",
|
| 1442 |
+
"Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019b. Transfer learning in biomedical natural language processing: an evaluation of bert and elmo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474.",
|
| 1443 |
+
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints.",
|
| 1444 |
+
"Jeff Rasley, Samyam Rajbhandari, Olatunjri Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.",
|
| 1445 |
+
"Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, and Diyi Yang. 2022. When flue meets flang: Benchmarks and large pre-trained language model for financial domain. arXiv preprint arXiv:2211.00083."
|
| 1446 |
+
],
|
| 1447 |
+
"bbox": [
|
| 1448 |
+
115,
|
| 1449 |
+
85,
|
| 1450 |
+
489,
|
| 1451 |
+
917
|
| 1452 |
+
],
|
| 1453 |
+
"page_idx": 9
|
| 1454 |
+
},
|
| 1455 |
+
{
|
| 1456 |
+
"type": "list",
|
| 1457 |
+
"sub_type": "ref_text",
|
| 1458 |
+
"list_items": [
|
| 1459 |
+
"Alisa Smirnova and Philippe Cudre-Mauroux. 2018. Relation extraction using distant supervision: A survey. ACM Computing Surveys (CSUR), 51(5):1-35.",
|
| 1460 |
+
"Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, et al. 2021. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv preprint arXiv:2107.02137.",
|
| 1461 |
+
"Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.",
|
| 1462 |
+
"Tianchi. 2018. The dataset for extracting announcement information of a-share listed companies.",
|
| 1463 |
+
"Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32.",
|
| 1464 |
+
"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
|
| 1465 |
+
"Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. Cn-dbpedia: A never-ending chinese knowledge extraction system. In Advances in Artificial Intelligence: From Theory to Practice: 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2017, Arras, France, June 27-30, 2017, Proceedings, Part II, pages 428-438. Springer.",
|
| 1466 |
+
"Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020. Clue: A chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.",
|
| 1467 |
+
"Jian Yang, Gang Xiao, Yulong Shen, Wei Jiang, Xinyu Hu, Ying Zhang, and Jinghui Peng. 2021. A survey of knowledge enhanced pre-trained models. arXiv preprint arXiv:2110.00269.",
|
| 1468 |
+
"Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. Finbert: A pretrained language model for financial communications. arXiv preprint arXiv:2006.08097.",
|
| 1469 |
+
"Sha Yuan, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. 2021. Wudaocorpora: A super large-scale chinese corpora for pre-training language models. AI Open, 2:65-68."
|
| 1470 |
+
],
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
510,
|
| 1473 |
+
85,
|
| 1474 |
+
882,
|
| 1475 |
+
917
|
| 1476 |
+
],
|
| 1477 |
+
"page_idx": 9
|
| 1478 |
+
},
|
| 1479 |
+
{
|
| 1480 |
+
"type": "list",
|
| 1481 |
+
"sub_type": "ref_text",
|
| 1482 |
+
"list_items": [
|
| 1483 |
+
"Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, and Ming Zhou. 2021. Mengzi: Towards lightweight yet ingenious pre-trained models for chinese. arXiv preprint arXiv:2110.06696.",
|
| 1484 |
+
"Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. Uer: An open-source toolkit for pretraining models. EMNLP-IJCNLP 2019, page 241."
|
| 1485 |
+
],
|
| 1486 |
+
"bbox": [
|
| 1487 |
+
115,
|
| 1488 |
+
85,
|
| 1489 |
+
489,
|
| 1490 |
+
214
|
| 1491 |
+
],
|
| 1492 |
+
"page_idx": 10
|
| 1493 |
+
}
|
| 1494 |
+
]
|
2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_model.json
ADDED
|
@@ -0,0 +1,2070 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
  [
    {"type": "aside_text", "bbox": [0.023, 0.31, 0.061, 0.726], "angle": 270, "content": "arXiv:2302.09432v2 [cs.CL] 26 Feb 2023"},
    {"type": "title", "bbox": [0.143, 0.079, 0.856, 0.12], "angle": 0, "content": "BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark"},
    {"type": "text", "bbox": [0.212, 0.125, 0.793, 0.159], "angle": 0, "content": "Dakuan Lu\\(^{1}\\), Hengkui Wu\\(^{3*}\\), Jiaqing Liang\\(^{2}\\), Yipei Xu\\(^{1}\\), Qianyu He\\(^{1}\\), Yipeng Geng\\(^{3}\\), Mengkun Han\\(^{3}\\), Yingsi Xin\\(^{3}\\), Yanghua Xiao\\(^{1*}\\)"},
    {"type": "text", "bbox": [0.133, 0.16, 0.871, 0.177], "angle": 0, "content": "\\(^{1}\\)Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University"},
    {"type": "text", "bbox": [0.328, 0.177, 0.676, 0.193], "angle": 0, "content": "\\(^{2}\\)School of Data Science, Fudan University"},
    {"type": "text", "bbox": [0.376, 0.193, 0.629, 0.209], "angle": 0, "content": "\\(^{3}\\)SuperSymmetry Technologies"},
    {"type": "text", "bbox": [0.24, 0.21, 0.765, 0.226], "angle": 0, "content": "{ludakuan1234, l.j.q.light, xuyipei000, abbey4799} \\(@\\) gmail.com,"},
    {"type": "text", "bbox": [0.207, 0.227, 0.798, 0.243], "angle": 0, "content": "{ypgeng, mkhan, ysxin, hkwu} @ssymmetry.com, shawyh@fudan.edu.cn"},
    {"type": "title", "bbox": [0.261, 0.253, 0.341, 0.267], "angle": 0, "content": "Abstract"},
    {"type": "text", "bbox": [0.142, 0.28, 0.461, 0.636], "angle": 0, "content": "To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at https://github.com/ssymmetry/ BBT-FinCUGE-Applications. Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project."},
    {"type": "title", "bbox": [0.115, 0.648, 0.26, 0.663], "angle": 0, "content": "1 Introduction"},
    {"type": "text", "bbox": [0.113, 0.673, 0.49, 0.899], "angle": 0, "content": "Pre-trained language models(PLMs), such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019), have led to great performance boosts across many NLP tasks. Despite the excellent performance of pre-trained language models (PLMs) on a large number of NLP tasks, their performance is often affected when applied to domain-specific texts that exhibit significant differences from general text in terms of word usage, syntax, and writing style (Gururangan et al., 2020; Gu et al., 2021). To address this issue, Gururangan et al. (2020) proposed that continuing to pre-train a general PLM on target domain corpora and task-relevant texts can effectively improve its performance on"},
    {"type": "text", "bbox": [0.508, 0.254, 0.885, 0.414], "angle": 0, "content": "domain-specific tasks, while Gu et al. (2021) further suggested that pre-training domain-specific PLMs from scratch with a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific pre-trained language models have emerged in some domains, such as BioBERT (Peng et al., 2019a) and PubMedBERT (Gu et al., 2021) in the biomedicine field, which have been utilized for practical tasks like entity and relation extraction."},
    {"type": "text", "bbox": [0.508, 0.417, 0.885, 0.739], "angle": 0, "content": "We collect all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table 2, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and improve the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT (Hou et al., 2020) and Mengzi-BERT-base-fin (Zhang et al., 2021). However, these models are based on the BERT-base model, have a single architecture type, and a parameter count (around 110 million) that is outdated and unable to meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version."},
    {"type": "text", "bbox": [0.508, 0.743, 0.887, 0.92], "angle": 0, "content": "Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with high entity knowledge understanding and memorization capabilities. Although studies have shown that pre-trained PLMs on large-scale corpora already have some entity knowledge understanding and memorization capabilities, there are still some shortcomings. To address this issue, many studies have used knowledge-enhanced pretraining methods to improve PLMs' understanding and memorization of entity knowledge. However,"},
    {"type": "page_footnote", "bbox": [0.142, 0.905, 0.29, 0.919], "angle": 0, "content": "*Corresponding author."}
  ],
  [
    {"type": "text", "bbox": [0.113, 0.085, 0.49, 0.182], "angle": 0, "content": "these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pretraining method based on the T5 model's text-to-text paradigm."},
    {"type": "text", "bbox": [0.117, 0.186, 0.49, 0.506], "angle": 0, "content": "In addition, another challenge faced by Chinese financial NLP is the lack of corpus. The scale and diversity of corpora play an essential role in language model pre-training (Xu et al., 2020; Raffel et al., 2019; Gao et al., 2020). However, existing Chinese financial corpora are small in scale, poor in diversity and not open, as can be shown in Table 1. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus needs to cover. To this end, we first collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in the Table 2. According to the source distribution of these tasks, we have determined the range of text types we need to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus with about 300 GB raw text, which consists of five different sources to enhance its diversity covering most text sources of Chinese financial NLP tasks."},
    {"type": "text", "bbox": [0.117, 0.513, 0.49, 0.801], "angle": 0, "content": "The widespread use of benchmark evaluations is a key driving force that has greatly improved and rapidly iterated PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), while the general benchmark evaluation for Chinese PLMs is CLUE (Xu et al., 2020). Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain."},
    {"type": "text", "bbox": [0.113, 0.807, 0.49, 0.92], "angle": 0, "content": "To address this issue and promote research in the financial domain, we propose CFLEB, the Chinese Financial Language Understanding and Generation Evaluation Benchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty,"},
    {"type": "text", "bbox": [0.509, 0.085, 0.882, 0.116], "angle": 0, "content": "and more importantly, emphasize challenges that arise in real-world scenarios."},
    {"type": "text", "bbox": [0.528, 0.117, 0.868, 0.132], "angle": 0, "content": "Our contributions are summarized as follows:"},
    {"type": "text", "bbox": [0.532, 0.145, 0.884, 0.193], "angle": 0, "content": "- We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training."},
    {"type": "text", "bbox": [0.532, 0.205, 0.881, 0.237], "angle": 0, "content": "- We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus."},
    {"type": "text", "bbox": [0.532, 0.248, 0.881, 0.296], "angle": 0, "content": "- We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain."},
    {"type": "list", "bbox": [0.532, 0.145, 0.884, 0.296], "angle": 0, "content": null},
    {"type": "title", "bbox": [0.51, 0.309, 0.666, 0.324], "angle": 0, "content": "2 Related Work"},
    {"type": "title", "bbox": [0.509, 0.335, 0.84, 0.351], "angle": 0, "content": "2.1 Domain-specific PLMs and Corpora"},
    {"type": "text", "bbox": [0.508, 0.356, 0.885, 0.629], "angle": 0, "content": "PLMs have achieved state-of-the-art performance in many NLP tasks (Devlin et al., 2018; Raffel et al., 2019; Liu et al., 2019). However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution from general to specific domains (Gururangan et al., 2020; Gu et al., 2021). To better adapt a language model to a target domain, pre-training on the corpus of the target domain is proposed (Gururangan et al., 2020). For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch results in substantial gains over continual pre-training of general-domain language models (Gu et al., 2021). Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora."},
    {"type": "text", "bbox": [0.508, 0.63, 0.885, 0.886], "angle": 0, "content": "In the field of financial NLP, domain-specific pre-trained language models (PLMs) have demonstrated their superiority over general-domain PLMs. For instance, Araci (2019) and Yang et al. (2020) pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, Hou et al. (2020) pre-trained BERT on Chinese financial news, analysis reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, Zhang et al. (2021) pre-trained the Chinese PLM Mengzi on a 20GB financial corpus and demonstrated its effectiveness on multiple downstream tasks."},
    {"type": "text", "bbox": [0.509, 0.888, 0.884, 0.919], "angle": 0, "content": "Table 1 summarizes the characteristics of typical PLMs and their corpora in the financial domain. It"}
  ],
  [
    {"type": "text", "bbox": [0.114, 0.085, 0.488, 0.117], "angle": 0, "content": "can be observed that both the scale of our model and corpus exceed existing works."},
    {"type": "title", "bbox": [0.114, 0.128, 0.433, 0.143], "angle": 0, "content": "2.2 Knowledge Enhanced Pre-training"},
    {"type": "text", "bbox": [0.113, 0.148, 0.49, 0.308], "angle": 0, "content": "Although PLMs can acquire rich linguistic knowledge from pretraining on large-scale corpora, many studies have shown that PLMs still have shortcomings in entity knowledge understanding and memory, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed (Yang et al., 2021). Therefore, PLMs can benefit from knowledge-enhanced pretraining methods that strengthen entity knowledge understanding and memory."},
    {"type": "text", "bbox": [0.113, 0.31, 0.49, 0.452], "angle": 0, "content": "For example, Ernie (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn existing entity knowledge from the corpus, without addressing the issues of sparse and long-tailed distribution of entity knowledge in the corpus."},
    {"type": "text", "bbox": [0.113, 0.455, 0.49, 0.614], "angle": 0, "content": "Ernie 3.0, introduced by Sun et al. (2021), incorporates the universal knowledge-text prediction (UKTP) task. This task involves a pair of triples from a knowledge graph and their corresponding sentences from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence, and determine the semantic relationship between them."},
    {"type": "text", "bbox": [0.113, 0.615, 0.49, 0.806], "angle": 0, "content": "The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision has a certain amount of noise, which means that the relation in the triple may not necessarily appear in the sentence (Smirnova and Cudré-Mauroux, 2018). Therefore, only masking the relation and predicting it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for the BERT-like model."},
    {"type": "text", "bbox": [0.114, 0.809, 0.489, 0.855], "angle": 0, "content": "To our knowledge, there is currently a gap in knowledge enhancement pre-training methods available for T5-like models."},
    {"type": "title", "bbox": [0.114, 0.867, 0.431, 0.882], "angle": 0, "content": "2.3 Domain-specific NLP Benchmarks"},
    {"type": "text", "bbox": [0.113, 0.888, 0.489, 0.919], "angle": 0, "content": "Various domain-specific NLP benchmarks have been proposed to compare the ability of different"},
    {"type": "text", "bbox": [0.508, 0.085, 0.885, 0.375], "angle": 0, "content": "methods in modeling text from specific domains in a fair manner. The BLUE benchmark (Peng et al., 2019b) evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark (Gu et al., 2021) further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, the FLUE (Shah et al., 2022) is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. However, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works."},
    {"type": "title", "bbox": [0.509, 0.392, 0.799, 0.409], "angle": 0, "content": "3 The Corpus: BBT-FinCorpus"},
    {"type": "text", "bbox": [0.508, 0.421, 0.884, 0.516], "angle": 0, "content": "We build FinCorpus, the biggest corpus of Chinese financial domain to get a superior pre-trained language model. Section 3.1 covers how we decided on the corpus contents. We collected, refined and sorted the corpus to finally obtain the FinCorpus, as elaborated in Section 3.3."},
    {"type": "title", "bbox": [0.509, 0.535, 0.853, 0.55], "angle": 0, "content": "3.1 Coverage Confirmation of the Corpus"},
    {"type": "text", "bbox": [0.508, 0.558, 0.884, 0.75], "angle": 0, "content": "We believe that, since the purpose of domain pretraining is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table 2."},
    {"type": "text", "bbox": [0.508, 0.753, 0.885, 0.865], "angle": 0, "content": "It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance<sup>1</sup>, Tencent Finance<sup>2</sup>, Phoenix Finance<sup>3</sup>,"},
    {"type": "page_footnote", "bbox": [0.531, 0.879, 0.753, 0.892], "angle": 0, "content": "<https://finance.sina.com.cn/"},
    {"type": "page_footnote", "bbox": [0.533, 0.892, 0.767, 0.905], "angle": 0, "content": "<sup>2</sup>https://new.qq.com/ch/finance/"},
    {"type": "page_footnote", "bbox": [0.533, 0.905, 0.737, 0.918], "angle": 0, "content": "<sup>3</sup>https://finance.ifeng.com/"},
    {"type": "list", "bbox": [0.531, 0.879, 0.767, 0.918], "angle": 0, "content": null}
  ],
  [
    {"type": "table", "bbox": [0.117, 0.081, 0.894, 0.248], "angle": 0, "content": "<table><tr><td>PLM</td><td>Size</td><td>Corpus Size</td><td>Corpus Sources</td></tr><tr><td>FinBERT (Araci, 2019)</td><td>110M</td><td>29M words</td><td>News filtered by financial keywords</td></tr><tr><td>FinBERT (Yang et al., 2020)</td><td>110M</td><td>4.9B tokens</td><td>Corporate Reports, Earnings Call Transcripts, Analyst Reports</td></tr><tr><td>FinBERT (Hou et al., 2020)</td><td>110M</td><td>3B tokens</td><td>News, Analyse reports, Company announcements and Encyclopedias</td></tr><tr><td>Mengzi-BERT-base-fin (Zhang et al., 2021)</td><td>110M</td><td>20GB file</td><td>News, Analyse reports, Company announcements</td></tr><tr><td>BBT-FinT5 (ours)</td><td>220M, 1B</td><td>80B tokens</td><td>Corporate Reports, Analyst Reports, Social media and Financial News</td></tr></table>"},
|
| 570 |
+
{
|
| 571 |
+
"type": "table_caption",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.326,
|
| 574 |
+
0.257,
|
| 575 |
+
0.672,
|
| 576 |
+
0.273
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "Table 1: Typical financial PLMs and their corpora."
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "table",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.134,
|
| 585 |
+
0.284,
|
| 586 |
+
0.864,
|
| 587 |
+
0.587
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "<table><tr><td>Dataset</td><td>Text Source</td><td>Open State</td><td>Practicality</td></tr><tr><td>DuEE-fin (Han et al., 2022)</td><td>Financial news, Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>FinRE (Li et al., 2019)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>Announcement information extraction (Tianchi, 2018)</td><td>Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>Discovery of new entities in Internet finance (Datafountain, 2019)</td><td>Social media</td><td>Unspecified</td><td>Low</td></tr><tr><td>Announcement information extraction (Bien-data, 2019)</td><td>Company announcement</td><td>Unspecified</td><td>High</td></tr><tr><td>Construction of financial knowledge graph (Bien-data, 2020b)</td><td>Analyse report</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Event causality extraction (Biendata, 2021)</td><td>Financial news</td><td>Unspecified</td><td>Low</td></tr><tr><td>Financial NL2SQL (Biendata, 2022a)</td><td>Data query sentence</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2022b)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2020a)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>FinNL (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinNA (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinFE (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr><tr><td>FinNSP (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr></table>"
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "table_caption",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.157,
|
| 596 |
+
0.595,
|
| 597 |
+
0.84,
|
| 598 |
+
0.61
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "Table 2: Chinese financial datasets we collected, with their open source status and practicality scores"
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.113,
|
| 607 |
+
0.634,
|
| 608 |
+
0.49,
|
| 609 |
+
0.716
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": "\\(36\\mathrm{Kr}^{4}\\) and Huxiu \\(^{5}\\). For company announcements and research reports, we chose Eastmoney \\(^{6}\\) for crawling. For social media, we chose the two largest financial social media platforms on the Chinese Internet, Guba \\(^{7}\\) and Xueqiu \\(^{8}\\), for crawling."
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "title",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.114,
|
| 618 |
+
0.739,
|
| 619 |
+
0.455,
|
| 620 |
+
0.756
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": "3.2 Crawling and Filtering of the Corpus"
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.113,
|
| 629 |
+
0.766,
|
| 630 |
+
0.49,
|
| 631 |
+
0.83
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "We used a proxy-based distributed crawler to crawl public web pages. We filtered the web pages using a series of rules (Raffel et al., 2019; Yuan et al., 2021)."
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "title",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.509,
|
| 640 |
+
0.635,
|
| 641 |
+
0.761,
|
| 642 |
+
0.652
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "3.3 Description of the Corpus"
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.508,
|
| 651 |
+
0.656,
|
| 652 |
+
0.886,
|
| 653 |
+
0.721
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "After crawling, cleaning, and processing, we obtained the FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language materials:"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "text",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.532,
|
| 662 |
+
0.732,
|
| 663 |
+
0.886,
|
| 664 |
+
0.844
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "- Corporate announcements. These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB."
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.532,
|
| 673 |
+
0.855,
|
| 674 |
+
0.887,
|
| 675 |
+
0.92
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "- Research reports. These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries,"
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "list",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.532,
|
| 684 |
+
0.732,
|
| 685 |
+
0.887,
|
| 686 |
+
0.92
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": null
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "page_footnote",
|
| 693 |
+
"bbox": [
|
| 694 |
+
0.136,
|
| 695 |
+
0.852,
|
| 696 |
+
0.276,
|
| 697 |
+
0.867
|
| 698 |
+
],
|
| 699 |
+
"angle": 0,
|
| 700 |
+
"content": "4https://36kr.com/"
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "page_footnote",
|
| 704 |
+
"bbox": [
|
| 705 |
+
0.137,
|
| 706 |
+
0.867,
|
| 707 |
+
0.312,
|
| 708 |
+
0.879
|
| 709 |
+
],
|
| 710 |
+
"angle": 0,
|
| 711 |
+
"content": "<sup>5</sup>https://www.huxiu.com/"
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "page_footnote",
|
| 715 |
+
"bbox": [
|
| 716 |
+
0.137,
|
| 717 |
+
0.88,
|
| 718 |
+
0.342,
|
| 719 |
+
0.892
|
| 720 |
+
],
|
| 721 |
+
"angle": 0,
|
| 722 |
+
"content": "<sup>6</sup>https://www.eastmoney.com/"
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "page_footnote",
|
| 726 |
+
"bbox": [
|
| 727 |
+
0.137,
|
| 728 |
+
0.892,
|
| 729 |
+
0.35,
|
| 730 |
+
0.905
|
| 731 |
+
],
|
| 732 |
+
"angle": 0,
|
| 733 |
+
"content": "<sup>7</sup>https://guba.eastmoney.com/"
|
| 734 |
+
},
|
| 735 |
+
{
|
| 736 |
+
"type": "page_footnote",
|
| 737 |
+
"bbox": [
|
| 738 |
+
0.137,
|
| 739 |
+
0.905,
|
| 740 |
+
0.288,
|
| 741 |
+
0.918
|
| 742 |
+
],
|
| 743 |
+
"angle": 0,
|
| 744 |
+
"content": "<sup>8</sup>https://xueqiu.com/"
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "list",
|
| 748 |
+
"bbox": [
|
| 749 |
+
0.136,
|
| 750 |
+
0.852,
|
| 751 |
+
0.35,
|
| 752 |
+
0.918
|
| 753 |
+
],
|
| 754 |
+
"angle": 0,
|
| 755 |
+
"content": null
|
| 756 |
+
}
|
| 757 |
+
],
|
| 758 |
+
[
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"bbox": [
|
| 762 |
+
0.15,
|
| 763 |
+
0.085,
|
| 764 |
+
0.488,
|
| 765 |
+
0.18
|
| 766 |
+
],
|
| 767 |
+
"angle": 0,
|
| 768 |
+
"content": "and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB."
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"bbox": [
|
| 773 |
+
0.137,
|
| 774 |
+
0.191,
|
| 775 |
+
0.488,
|
| 776 |
+
0.288
|
| 777 |
+
],
|
| 778 |
+
"angle": 0,
|
| 779 |
+
"content": "- Financial news. These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB."
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"bbox": [
|
| 784 |
+
0.137,
|
| 785 |
+
0.297,
|
| 786 |
+
0.488,
|
| 787 |
+
0.378
|
| 788 |
+
],
|
| 789 |
+
"angle": 0,
|
| 790 |
+
"content": "- Social media. These are the posts from all stockholders and bloggers published on stock bar and Xueqiu website over the past twenty years. After cleaning, the total size of the resulting text is about 120GB."
|
| 791 |
+
},
|
| 792 |
+
{
|
| 793 |
+
"type": "list",
|
| 794 |
+
"bbox": [
|
| 795 |
+
0.137,
|
| 796 |
+
0.191,
|
| 797 |
+
0.488,
|
| 798 |
+
0.378
|
| 799 |
+
],
|
| 800 |
+
"angle": 0,
|
| 801 |
+
"content": null
|
| 802 |
+
},
|
| 803 |
+
{
|
| 804 |
+
"type": "text",
|
| 805 |
+
"bbox": [
|
| 806 |
+
0.113,
|
| 807 |
+
0.386,
|
| 808 |
+
0.489,
|
| 809 |
+
0.434
|
| 810 |
+
],
|
| 811 |
+
"angle": 0,
|
| 812 |
+
"content": "The corpus from the above five sources basically covers all types of texts in the common Chinese financial NLP."
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "title",
|
| 816 |
+
"bbox": [
|
| 817 |
+
0.114,
|
| 818 |
+
0.445,
|
| 819 |
+
0.402,
|
| 820 |
+
0.463
|
| 821 |
+
],
|
| 822 |
+
"angle": 0,
|
| 823 |
+
"content": "4 The Large PLM: BBT-FinT5"
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.113,
|
| 829 |
+
0.47,
|
| 830 |
+
0.49,
|
| 831 |
+
0.695
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": "To enhance the performance of the Chinese financial NLP baseline and foster the growth of the open-source community in this domain, we introduce the FinT5 model. This model's architecture and pre-training tasks are consistent with the T5 (Raffel et al., 2019) model and are pre-trained on BBT-FinCorpus (refer to Section 3). We chose this model for its robust performance on many general benchmarks and compatibility with understanding and generating tasks based on the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that the FinT5 model significantly outperforms T5 trained on the general corpus."
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "text",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.113,
|
| 840 |
+
0.696,
|
| 841 |
+
0.49,
|
| 842 |
+
0.793
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge enhancement pre-training method that we propose for the T5 model, which is based on triple masking."
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "title",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.114,
|
| 851 |
+
0.802,
|
| 852 |
+
0.452,
|
| 853 |
+
0.833
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "4.1 Pre-training Model Architecture and Task"
|
| 857 |
+
},
|
| 858 |
+
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"bbox": [
|
| 861 |
+
0.113,
|
| 862 |
+
0.839,
|
| 863 |
+
0.491,
|
| 864 |
+
0.921
|
| 865 |
+
],
|
| 866 |
+
"angle": 0,
|
| 867 |
+
"content": "Raffel et al. (2019) model all NLP tasks in a text-to-text format which enable the use of a unified network architecture, training approach, and loss function to handle all NLP tasks, promoting transfer learning in the NLP field. Building upon this,"
|
| 868 |
+
},
|
| 869 |
+
{
|
| 870 |
+
"type": "text",
|
| 871 |
+
"bbox": [
|
| 872 |
+
0.508,
|
| 873 |
+
0.085,
|
| 874 |
+
0.885,
|
| 875 |
+
0.198
|
| 876 |
+
],
|
| 877 |
+
"angle": 0,
|
| 878 |
+
"content": "they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained using MLM. Specifically, T5 utilizes the span mask method proposed by SpanBERT (Joshi et al., 2020), randomly masking \\(15\\%\\) contiguous spans within a sentence rather than independent tokens."
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "title",
|
| 882 |
+
"bbox": [
|
| 883 |
+
0.509,
|
| 884 |
+
0.208,
|
| 885 |
+
0.757,
|
| 886 |
+
0.224
|
| 887 |
+
],
|
| 888 |
+
"angle": 0,
|
| 889 |
+
"content": "4.2 Pre-training Acceleration"
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "text",
|
| 893 |
+
"bbox": [
|
| 894 |
+
0.508,
|
| 895 |
+
0.229,
|
| 896 |
+
0.886,
|
| 897 |
+
0.631
|
| 898 |
+
],
|
| 899 |
+
"angle": 0,
|
| 900 |
+
"content": "We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed (Rasley et al., 2020) to accelerate the pre-training process. In particular, we found that using the BFLOAT16 (Kalamkar et al., 2019) half-precision floating-point format for optimization can effectively solve the problem of gradient overflow that occurs in the training process with FP16 half-precision floating-point format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. Kalamkar et al. (2019) pointed out that in the training of deep neural networks, the value range (i.e., exponent range) of the floating-point numbers used to represent each parameter in the network is more important for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as the FP32 format to represent the same exponent range as the FP32 format, at the cost of having three fewer mantissa bits than the FP16 format. Extensive experiments have shown that this trade-off makes the BFLOAT16 format as fast and memory-efficient as the FP16 format while having training stability and performance close to that of the FP32 format."
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "title",
|
| 904 |
+
"bbox": [
|
| 905 |
+
0.509,
|
| 906 |
+
0.642,
|
| 907 |
+
0.859,
|
| 908 |
+
0.674
|
| 909 |
+
],
|
| 910 |
+
"angle": 0,
|
| 911 |
+
"content": "4.3 Knowledge Enhancement Pre-training Method Based on Triple Masking"
|
| 912 |
+
},
|
| 913 |
+
{
|
| 914 |
+
"type": "text",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.508,
|
| 917 |
+
0.678,
|
| 918 |
+
0.882,
|
| 919 |
+
0.71
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "We propose a knowledge enhancement pre-training method based on triple masking (KETM)."
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "text",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.508,
|
| 928 |
+
0.711,
|
| 929 |
+
0.885,
|
| 930 |
+
0.839
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple."
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "text",
|
| 937 |
+
"bbox": [
|
| 938 |
+
0.508,
|
| 939 |
+
0.84,
|
| 940 |
+
0.887,
|
| 941 |
+
0.921
|
| 942 |
+
],
|
| 943 |
+
"angle": 0,
|
| 944 |
+
"content": "Next, for a sentence and its contained triple, we concatenate the triple at the beginning of the sentence. For the triple part, we randomly mask one element, and for the sentence part, we randomly mask \\(15\\%\\) of a random-length span. Finally,"
|
| 945 |
+
}
|
| 946 |
+
],
|
| 947 |
+
[
|
| 948 |
+
{
|
| 949 |
+
"type": "image",
|
| 950 |
+
"bbox": [
|
| 951 |
+
0.16,
|
| 952 |
+
0.1,
|
| 953 |
+
0.844,
|
| 954 |
+
0.281
|
| 955 |
+
],
|
| 956 |
+
"angle": 0,
|
| 957 |
+
"content": null
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "image_caption",
|
| 961 |
+
"bbox": [
|
| 962 |
+
0.195,
|
| 963 |
+
0.3,
|
| 964 |
+
0.802,
|
| 965 |
+
0.315
|
| 966 |
+
],
|
| 967 |
+
"angle": 0,
|
| 968 |
+
"content": "Figure 1: Knowledge enhancement pre-training method based on triple masking (KETM)"
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"bbox": [
|
| 973 |
+
0.113,
|
| 974 |
+
0.34,
|
| 975 |
+
0.491,
|
| 976 |
+
0.468
|
| 977 |
+
],
|
| 978 |
+
"angle": 0,
|
| 979 |
+
"content": "we input the masked triple and sentence into the model and require the model to predict the masked element, as shown in the Figure 1. The model is trained to fill the masked element in the triple based on the two unmasked elements in the triple and the partially masked sentence, which helps the model better understand and memorize entity-related knowledge."
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"type": "title",
|
| 983 |
+
"bbox": [
|
| 984 |
+
0.114,
|
| 985 |
+
0.481,
|
| 986 |
+
0.414,
|
| 987 |
+
0.495
|
| 988 |
+
],
|
| 989 |
+
"angle": 0,
|
| 990 |
+
"content": "5 The Benchmark: BBT-CFLEB"
|
| 991 |
+
},
|
| 992 |
+
{
|
| 993 |
+
"type": "text",
|
| 994 |
+
"bbox": [
|
| 995 |
+
0.113,
|
| 996 |
+
0.507,
|
| 997 |
+
0.49,
|
| 998 |
+
0.571
|
| 999 |
+
],
|
| 1000 |
+
"angle": 0,
|
| 1001 |
+
"content": "In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks."
|
| 1002 |
+
},
|
| 1003 |
+
{
|
| 1004 |
+
"type": "title",
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
0.114,
|
| 1007 |
+
0.582,
|
| 1008 |
+
0.274,
|
| 1009 |
+
0.595
|
| 1010 |
+
],
|
| 1011 |
+
"angle": 0,
|
| 1012 |
+
"content": "5.1 Task Selection"
|
| 1013 |
+
},
|
| 1014 |
+
{
|
| 1015 |
+
"type": "text",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
0.113,
|
| 1018 |
+
0.603,
|
| 1019 |
+
0.49,
|
| 1020 |
+
0.812
|
| 1021 |
+
],
|
| 1022 |
+
"angle": 0,
|
| 1023 |
+
"content": "We propose that for domain-specific NLP evaluation benchmarks, special attention should be paid to their practicality, especially for the financially valuable field, to better reflect the model's ability in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to evaluate the practicality of each task and gave a low, medium, or high practicality rating, only selecting tasks with a high practicality rating as candidate tasks. In addition, we only kept tasks with a clear open-source statement as candidate tasks. Finally, we selected six tasks for BBT-CFLEB in Table 2."
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "title",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
0.114,
|
| 1029 |
+
0.823,
|
| 1030 |
+
0.304,
|
| 1031 |
+
0.837
|
| 1032 |
+
],
|
| 1033 |
+
"angle": 0,
|
| 1034 |
+
"content": "5.2 Task Introduction"
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "text",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.113,
|
| 1040 |
+
0.844,
|
| 1041 |
+
0.489,
|
| 1042 |
+
0.892
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows:"
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "text",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.136,
|
| 1051 |
+
0.904,
|
| 1052 |
+
0.49,
|
| 1053 |
+
0.919
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "- FinNL, a financial news classification dataset."
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "text",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.544,
|
| 1062 |
+
0.34,
|
| 1063 |
+
0.885,
|
| 1064 |
+
0.435
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": "Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles."
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "text",
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
0.532,
|
| 1073 |
+
0.449,
|
| 1074 |
+
0.884,
|
| 1075 |
+
0.56
|
| 1076 |
+
],
|
| 1077 |
+
"angle": 0,
|
| 1078 |
+
"content": "- FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge (Lin, 2004). The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles."
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"bbox": [
|
| 1083 |
+
0.532,
|
| 1084 |
+
0.574,
|
| 1085 |
+
0.884,
|
| 1086 |
+
0.717
|
| 1087 |
+
],
|
| 1088 |
+
"angle": 0,
|
| 1089 |
+
"content": "- FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles."
|
| 1090 |
+
},
|
| 1091 |
+
{
|
| 1092 |
+
"type": "text",
|
| 1093 |
+
"bbox": [
|
| 1094 |
+
0.532,
|
| 1095 |
+
0.731,
|
| 1096 |
+
0.885,
|
| 1097 |
+
0.858
|
| 1098 |
+
],
|
| 1099 |
+
"angle": 0,
|
| 1100 |
+
"content": "- FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text into negative-neutral-positive categories, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles."
|
| 1101 |
+
},
|
| 1102 |
+
{
|
| 1103 |
+
"type": "text",
|
| 1104 |
+
"bbox": [
|
| 1105 |
+
0.532,
|
| 1106 |
+
0.872,
|
| 1107 |
+
0.884,
|
| 1108 |
+
0.919
|
| 1109 |
+
],
|
| 1110 |
+
"angle": 0,
|
| 1111 |
+
"content": "- FinQA, a financial news announcement event question-answering dataset, derived from the DuEE-fin (Han et al., 2022) dataset. Given"
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "list",
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
0.532,
|
| 1117 |
+
0.449,
|
| 1118 |
+
0.885,
|
| 1119 |
+
0.919
|
| 1120 |
+
],
|
| 1121 |
+
"angle": 0,
|
| 1122 |
+
"content": null
|
| 1123 |
+
}
|
| 1124 |
+
],
|
| 1125 |
+
[
|
| 1126 |
+
{
|
| 1127 |
+
"type": "table",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.164,
|
| 1130 |
+
0.082,
|
| 1131 |
+
0.835,
|
| 1132 |
+
0.296
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "<table><tr><td>Task Name</td><td>Introduction</td><td>Data</td><td>Evaluation</td></tr><tr><td>FinNL</td><td>Multi-label classification of financial news</td><td>8000/1000/1000</td><td>F1-score</td></tr><tr><td>FinNA</td><td>Generation of summaries for financial news</td><td>24000/3000/3000</td><td>Rouge</td></tr><tr><td>FinRE</td><td>Entity relation classification for financial news</td><td>7454/1489/3727</td><td>F1-score</td></tr><tr><td>FinFE</td><td>Sentiment classification of financial social media text</td><td>8000/1000/1000</td><td>Accuracy</td></tr><tr><td>FinQA</td><td>Question-answering for financial news/events</td><td>16000/2000/2000</td><td>F1-score</td></tr><tr><td>FinNSP</td><td>Detection of negative messages and entities in financial news</td><td>4800/600/600</td><td>F1-score</td></tr></table>"
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "table_caption",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
0.375,
|
| 1141 |
+
0.305,
|
| 1142 |
+
0.621,
|
| 1143 |
+
0.32
|
| 1144 |
+
],
|
| 1145 |
+
"angle": 0,
|
| 1146 |
+
"content": "Table 3: Summary of CFLEB tasks."
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "text",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
0.149,
|
| 1152 |
+
0.345,
|
| 1153 |
+
0.489,
|
| 1154 |
+
0.472
|
| 1155 |
+
],
|
| 1156 |
+
"angle": 0,
|
| 1157 |
+
"content": "financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles."
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "text",
|
| 1161 |
+
"bbox": [
|
| 1162 |
+
0.137,
|
| 1163 |
+
0.485,
|
| 1164 |
+
0.49,
|
| 1165 |
+
0.66
|
| 1166 |
+
],
|
| 1167 |
+
"angle": 0,
|
| 1168 |
+
"content": "- FinNSP, a financial negative news and its subject determination dataset. Given financial news or social media text and entities mentioned in the text, the model needs to determine if the text contains negative news related to any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles."
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "title",
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
0.114,
|
| 1174 |
+
0.673,
|
| 1175 |
+
0.37,
|
| 1176 |
+
0.687
|
| 1177 |
+
],
|
| 1178 |
+
"angle": 0,
|
| 1179 |
+
"content": "5.3 Leaderboard Introduction"
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "text",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
0.113,
|
| 1185 |
+
0.694,
|
| 1186 |
+
0.49,
|
| 1187 |
+
0.789
|
| 1188 |
+
],
|
| 1189 |
+
"angle": 0,
|
| 1190 |
+
"content": "We have organized the tasks into multiple leaderboards according to different ability requirements (Xu et al., 2020), so that researchers can observe the model's ability rankings from different perspectives. The leaderboards of FinCUGE are as follows:"
|
| 1191 |
+
},
|
| 1192 |
+
{
|
| 1193 |
+
"type": "text",
|
| 1194 |
+
"bbox": [
|
| 1195 |
+
0.137,
|
| 1196 |
+
0.801,
|
| 1197 |
+
0.468,
|
| 1198 |
+
0.816
|
| 1199 |
+
],
|
| 1200 |
+
"angle": 0,
|
| 1201 |
+
"content": "- Overall leaderboard: includes all six tasks."
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "text",
|
| 1205 |
+
"bbox": [
|
| 1206 |
+
0.137,
|
| 1207 |
+
0.828,
|
| 1208 |
+
0.488,
|
| 1209 |
+
0.875
|
| 1210 |
+
],
|
| 1211 |
+
"angle": 0,
|
| 1212 |
+
"content": "- Understanding ability leaderboard: includes four language comprehension tasks, FinNL, FinRE, FinFE, and FinNSP."
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "text",
|
| 1216 |
+
"bbox": [
|
| 1217 |
+
0.137,
|
| 1218 |
+
0.888,
|
| 1219 |
+
0.489,
|
| 1220 |
+
0.919
|
| 1221 |
+
],
|
| 1222 |
+
"angle": 0,
|
| 1223 |
+
"content": "- Generation ability leaderboard: includes two language generation tasks, FinNA and FinQA."
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "list",
|
| 1227 |
+
"bbox": [
|
| 1228 |
+
0.137,
|
| 1229 |
+
0.801,
|
| 1230 |
+
0.489,
|
| 1231 |
+
0.919
|
| 1232 |
+
],
|
| 1233 |
+
"angle": 0,
|
| 1234 |
+
"content": null
|
| 1235 |
+
},
|
| 1236 |
+
{
|
| 1237 |
+
"type": "title",
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
0.509,
|
| 1240 |
+
0.345,
|
| 1241 |
+
0.657,
|
| 1242 |
+
0.361
|
| 1243 |
+
],
|
| 1244 |
+
"angle": 0,
|
| 1245 |
+
"content": "6 Experiments"
|
| 1246 |
+
},
|
| 1247 |
+
{
|
| 1248 |
+
"type": "text",
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
0.508,
|
| 1251 |
+
0.371,
|
| 1252 |
+
0.885,
|
| 1253 |
+
0.483
|
| 1254 |
+
],
|
| 1255 |
+
"angle": 0,
|
| 1256 |
+
"content": "In this section, we first introduces the basic settings of the experiment, including the basic information of the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. Then we conduct sufficient experimental and comparative analysis to validate the effectiveness of the proposed model and method."
|
| 1257 |
+
},
|
| 1258 |
+
{
|
| 1259 |
+
"type": "title",
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
0.509,
|
| 1262 |
+
0.496,
|
| 1263 |
+
0.707,
|
| 1264 |
+
0.512
|
| 1265 |
+
],
|
| 1266 |
+
"angle": 0,
|
| 1267 |
+
"content": "6.1 Experiments Setup"
|
| 1268 |
+
},
|
| 1269 |
+
{
|
| 1270 |
+
"type": "title",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
0.509,
|
| 1273 |
+
0.517,
|
| 1274 |
+
0.806,
|
| 1275 |
+
0.533
|
| 1276 |
+
],
|
| 1277 |
+
"angle": 0,
|
| 1278 |
+
"content": "6.1.1 Pre-trained Language Models"
|
| 1279 |
+
},
|
| 1280 |
+
{
|
| 1281 |
+
"type": "text",
|
| 1282 |
+
"bbox": [
|
| 1283 |
+
0.508,
|
| 1284 |
+
0.537,
|
| 1285 |
+
0.885,
|
| 1286 |
+
0.568
|
| 1287 |
+
],
|
| 1288 |
+
"angle": 0,
|
| 1289 |
+
"content": "The models participating in the comparative experiment of this section include:"
|
| 1290 |
+
},
|
| 1291 |
+
{
|
| 1292 |
+
"type": "text",
|
| 1293 |
+
"bbox": [
|
| 1294 |
+
0.532,
|
| 1295 |
+
0.581,
|
| 1296 |
+
+0.884, 0.645], "angle": 0, "content": "- GPT2-base (Zhao et al., 2019). A Chinese GPT2 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020)."},
+{"type": "text", "bbox": [0.532, 0.658, 0.885, 0.722], "angle": 0, "content": "- T5-base (Zhao et al., 2019). A Chinese T5 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020)."},
+{"type": "text", "bbox": [0.532, 0.734, 0.885, 0.782], "angle": 0, "content": "- FinBERT (Hou et al., 2020). A Chinese BERT for the financial domain released by Hou et al. (2020)."},
+{"type": "text", "bbox": [0.532, 0.795, 0.885, 0.843], "angle": 0, "content": "- Mengzi-BERT-base-fin (Zhang et al., 2021). A Chinese BERT for the financial domain released by Zhang et al. (2021)."},
+{"type": "text", "bbox": [0.532, 0.855, 0.885, 0.919], "angle": 0, "content": "- FinT5-base. Our Chinese pre-trained language model for the financial domain, pretrained on our financial corpus, FinCorpus. Its model architecture, parameter size, and"},
+{"type": "list", "bbox": [0.532, 0.581, 0.885, 0.919], "angle": 0, "content": null}
+],
+[
+{"type": "table", "bbox": [0.12, 0.081, 0.887, 0.201], "angle": 0, "content": "<table><tr><td>PLMs</td><td>FinFE</td><td>FinNL</td><td>FinNSP</td><td>FinRE</td><td>Un.Avg.</td><td>FinNA</td><td>FinQA</td><td>Ge.Avg.</td><td>Avg.</td></tr><tr><td>GPT2-base</td><td>79.05</td><td>84.09</td><td>91.30</td><td>36.37</td><td>72.70</td><td>44.19</td><td>75.22</td><td>59.71</td><td>68.37</td></tr><tr><td>T5-base</td><td>79.40</td><td>87.48</td><td>95.43</td><td>54.93</td><td>79.56</td><td>48.54</td><td>83.58</td><td>66.06</td><td>74.89</td></tr><tr><td>FinBERT-base</td><td>79.45</td><td>84.69</td><td>69.01</td><td>55.33</td><td>72.37</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Mengzi-BERT-base-fin</td><td>79.50</td><td>85.88</td><td>71.72</td><td>58.25</td><td>73.59</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BBT-FinT5-base</td><td>80.19</td><td>87.55</td><td>94.50</td><td>60.62</td><td>80.21</td><td>50.06</td><td>84.82</td><td>67.44</td><td>76.29</td></tr><tr><td>BBT-FinT5-base-KE</td><td>79.43</td><td>87.77</td><td>95.05</td><td>61.79</td><td>80.26</td><td>51.36</td><td>85.66</td><td>68.51</td><td>76.84</td></tr><tr><td>BBT-FinT5-large</td><td>80.24</td><td>88.44</td><td>94.54</td><td>61.88</td><td>81.78</td><td>51.42</td><td>85.95</td><td>68.69</td><td>77.07</td></tr></table>"},
+{"type": "table_caption", "bbox": [0.312, 0.21, 0.685, 0.224], "angle": 0, "content": "Table 4: Results of BBT-CFLEB from different PLMs."},
+{"type": "text", "bbox": [0.15, 0.251, 0.486, 0.281], "angle": 0, "content": "pre-training hyperparameters are the same as T5-v1.1-base."},
+{"type": "text", "bbox": [0.137, 0.295, 0.486, 0.358], "angle": 0, "content": "- FinT5-base-KE. Knowledge-enhanced version of FinT5-base, enhanced by KETM method using CN-DBPedia (Xu et al., 2017) knowledge graph."},
+{"type": "text", "bbox": [0.137, 0.371, 0.488, 0.449], "angle": 0, "content": "- FinT5-large. Our proposed Chinese pretrained language model for the financial domain, with a total of about 1 billion model parameters, and the pre-training hyperparameters are the same as T5-base."},
+{"type": "list", "bbox": [0.137, 0.295, 0.488, 0.449], "angle": 0, "content": null},
+{"type": "title", "bbox": [0.114, 0.464, 0.267, 0.479], "angle": 0, "content": "6.1.2 Fine-tuning"},
+{"type": "text", "bbox": [0.113, 0.483, 0.489, 0.581], "angle": 0, "content": "For generative models (GPT, T5), we evaluated all six datasets by modeling all tasks as text-to-text. For BERT-based models, we evaluated them on four language understanding tasks: FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for all tasks."},
+{"type": "title", "bbox": [0.114, 0.592, 0.418, 0.623], "angle": 0, "content": "6.2 Experiment 1: Comparison of Pre-trained Model Architectures"},
+{"type": "text", "bbox": [0.113, 0.629, 0.489, 0.789], "angle": 0, "content": "For the two models in the general domain, GPT2-base and T5-base, their pre-training corpora, hyperparameters, and training volume are all the same, but their average scores differ significantly, with T5-base significantly outperforming GPT2-base, as shown in Table 4. This difference is mainly due to the differences in the architectures, parameter sizes, and pre-training methods of the T5 and GPT models. This performance confirms the correctness of our choice of the T5 model."},
+{"type": "title", "bbox": [0.114, 0.802, 0.468, 0.833], "angle": 0, "content": "6.3 Experiment 2: Effectiveness of Domain Pre-training"},
+{"type": "text", "bbox": [0.113, 0.84, 0.49, 0.92], "angle": 0, "content": "As shown in Table 4, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the"},
+{"type": "text", "bbox": [0.508, 0.251, 0.884, 0.282], "angle": 0, "content": "effectiveness of domain pre-training and the effectiveness of FinCorpus."},
+{"type": "title", "bbox": [0.508, 0.295, 0.872, 0.326], "angle": 0, "content": "6.4 Experiment 3: Superiority Compared to Existing Models in the domain"},
+{"type": "text", "bbox": [0.508, 0.332, 0.884, 0.412], "angle": 0, "content": "As shown in Table 4, in the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain."},
+{"type": "title", "bbox": [0.508, 0.424, 0.857, 0.439], "angle": 0, "content": "6.5 Experiment 4: Effectiveness of KETM"},
+{"type": "text", "bbox": [0.508, 0.445, 0.884, 0.573], "angle": 0, "content": "As shown in Table 4, by comparing FinT5-base-ke with FinT5-base, it can be seen that the knowledge-enhanced text modeling method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising the performance on other tasks, thus proving the effectiveness of the KETM method."},
+{"type": "title", "bbox": [0.508, 0.586, 0.882, 0.618], "angle": 0, "content": "6.6 Experiment 5: Effectiveness of parameter scaling up"},
+{"type": "text", "bbox": [0.508, 0.623, 0.882, 0.72], "angle": 0, "content": "As shown in Table 4, the performance comparison between FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of parameter scaling up."},
+{"type": "title", "bbox": [0.509, 0.733, 0.642, 0.747], "angle": 0, "content": "7 Conclusion"},
+{"type": "text", "bbox": [0.508, 0.759, 0.885, 0.92], "angle": 0, "content": "In this article, we introduced three new contributions to the domain of NLP in the context of Chinese finance. We created the largest open-source corpus for this domain, called FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. To enhance our pre-training method, we developed a unique knowledge-based approach called KETM,"}
+],
+[
+{"type": "text", "bbox": [0.113, 0.085, 0.49, 0.231], "angle": 0, "content": "which was effective. We also created a benchmark to evaluate the understanding and generation capabilities of language models, called CFLEB. We believe domain benchmarks should prioritize practicality to better reflect how improvements in language models in academia can benefit the real world. Our future work includes expanding FinCorpus and FinT5 and exploring multilingual and multimodal applications."},
+{"type": "title", "bbox": [0.511, 0.084, 0.61, 0.099], "angle": 0, "content": "References"},
+{"type": "ref_text", "bbox": [0.511, 0.108, 0.885, 0.148], "angle": 0, "content": "Dogu Araci. 2019. Finbert: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063."},
+{"type": "ref_text", "bbox": [0.511, 0.159, 0.885, 0.187], "angle": 0, "content": "Biendata. 2019. Ccks 2019 extraction of public company announcement information."},
+{"type": "ref_text", "bbox": [0.511, 0.198, 0.885, 0.224], "angle": 0, "content": "Biendata. 2020a. Ccks 2020: Cross-class few-shot transfer event extraction for financial domain."},
+{"type": "ref_text", "bbox": [0.511, 0.236, 0.885, 0.276], "angle": 0, "content": "Biendata. 2020b. Ccks 2020: Evaluation of automated construction techniques for financial knowledge graph based on ontology."},
+{"type": "ref_text", "bbox": [0.511, 0.287, 0.885, 0.327], "angle": 0, "content": "Biendata. 2021. Ccks 2021: Event relation extraction for financial texts (part ii) - extraction of causal relationships between events."},
+{"type": "ref_text", "bbox": [0.511, 0.338, 0.885, 0.365], "angle": 0, "content": "Biendata. 2022a. Ccks2022: Evaluation of nl2sql for financial domain."},
+{"type": "ref_text", "bbox": [0.511, 0.377, 0.885, 0.403], "angle": 0, "content": "Biendata. 2022b. Ccks2022: Few-shot event extraction for financial domain."},
+{"type": "ref_text", "bbox": [0.511, 0.415, 0.885, 0.441], "angle": 0, "content": "Datafountain. 2019. Discovery of new entities in internet finance."},
+{"type": "ref_text", "bbox": [0.511, 0.453, 0.885, 0.507], "angle": 0, "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805."},
+{"type": "ref_text", "bbox": [0.511, 0.517, 0.885, 0.584], "angle": 0, "content": "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027."},
+{"type": "ref_text", "bbox": [0.511, 0.595, 0.885, 0.674], "angle": 0, "content": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23."},
+{"type": "ref_text", "bbox": [0.511, 0.685, 0.885, 0.751], "angle": 0, "content": "Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964."},
+{"type": "ref_text", "bbox": [0.511, 0.763, 0.885, 0.842], "angle": 0, "content": "Cuiyun Han, Jinchuan Zhang, Xinyu Li, Guojin Xu, Weihua Peng, and Zengfeng Zeng. 2022. Due-fin: A large-scale dataset for document-level event extraction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 172-183. Springer."},
+{"type": "ref_text", "bbox": [0.511, 0.853, 0.885, 0.919], "angle": 0, "content": "Panpan Hou, Mengchao Zhang, Zhibing Fu, and Yu Li. 2020. Finbert. https://github.com/valuesimplex/FinBERT. GitHub repository, commit: ec1b14b96de9bdd5217abba1d197428cf00ddaa6."},
+{"type": "list", "bbox": [0.511, 0.108, 0.885, 0.919], "angle": 0, "content": null}
+],
+[
+{"type": "ref_text", "bbox": [0.117, 0.086, 0.49, 0.152], "angle": 0, "content": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77."},
+{"type": "ref_text", "bbox": [0.117, 0.163, 0.488, 0.241], "angle": 0, "content": "Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. 2019. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322."},
+{"type": "ref_text", "bbox": [0.117, 0.252, 0.488, 0.344], "angle": 0, "content": "Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. 2019. Chinese relation extraction with multi-grained information and external linguistic knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4377-4386, Florence, Italy. Association for Computational Linguistics."},
+{"type": "ref_text", "bbox": [0.117, 0.355, 0.486, 0.395], "angle": 0, "content": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81."},
+{"type": "ref_text", "bbox": [0.117, 0.405, 0.488, 0.471], "angle": 0, "content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692."},
+{"type": "ref_text", "bbox": [0.117, 0.481, 0.488, 0.573], "angle": 0, "content": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019a. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy. Association for Computational Linguistics."},
+{"type": "ref_text", "bbox": [0.117, 0.584, 0.488, 0.649], "angle": 0, "content": "Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019b. Transfer learning in biomedical natural language processing: an evaluation of bert and elmo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474."},
+{"type": "ref_text", "bbox": [0.117, 0.66, 0.488, 0.727], "angle": 0, "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints."},
+{"type": "ref_text", "bbox": [0.117, 0.737, 0.488, 0.828], "angle": 0, "content": "Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506."},
+{"type": "ref_text", "bbox": [0.117, 0.84, 0.488, 0.918], "angle": 0, "content": "Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, and Diyi Yang. 2022. When flue meets flang: Benchmarks and large pre-trained language model for financial domain. arXiv preprint arXiv:2211.00083."},
+{"type": "list", "bbox": [0.117, 0.086, 0.49, 0.918], "angle": 0, "content": null},
+{"type": "ref_text", "bbox": [0.513, 0.086, 0.883, 0.126], "angle": 0, "content": "Alisa Smirnova and Philippe Cudre-Mauroux. 2018. Relation extraction using distant supervision: A survey. ACM Computing Surveys (CSUR), 51(5):1-35."},
+{"type": "ref_text", "bbox": [0.512, 0.138, 0.883, 0.217], "angle": 0, "content": "Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, et al. 2021. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv preprint arXiv:2107.02137."},
+{"type": "ref_text", "bbox": [0.512, 0.229, 0.883, 0.295], "angle": 0, "content": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223."},
+{"type": "ref_text", "bbox": [0.512, 0.307, 0.883, 0.334], "angle": 0, "content": "Tianchi. 2018. The dataset for extracting announcement information of a-share listed companies."},
+{"type": "ref_text", "bbox": [0.512, 0.346, 0.883, 0.424], "angle": 0, "content": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32."},
+{"type": "ref_text", "bbox": [0.512, 0.437, 0.883, 0.502], "angle": 0, "content": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461."},
+{"type": "ref_text", "bbox": [0.512, 0.515, 0.883, 0.633], "angle": 0, "content": "Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. Cn-dbpedia: A never-ending chinese knowledge extraction system. In Advances in Artificial Intelligence: From Theory to Practice: 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2017, Arras, France, June 27-30, 2017, Proceedings, Part II, pages 428-438. Springer."},
+{"type": "ref_text", "bbox": [0.512, 0.645, 0.883, 0.71], "angle": 0, "content": "Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020. Clue: A chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986."},
+{"type": "ref_text", "bbox": [0.512, 0.723, 0.883, 0.776], "angle": 0, "content": "Jian Yang, Gang Xiao, Yulong Shen, Wei Jiang, Xinyu Hu, Ying Zhang, and Jinghui Peng. 2021. A survey of knowledge enhanced pre-trained models. arXiv preprint arXiv:2110.00269."},
+{"type": "ref_text", "bbox": [0.512, 0.787, 0.883, 0.84], "angle": 0, "content": "Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. Finbert: A pretrained language model for financial communications. arXiv preprint arXiv:2006.08097."},
+{"type": "ref_text", "bbox": [0.512, 0.852, 0.883, 0.918], "angle": 0, "content": "Sha Yuan, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. 2021. Wudaocorpora: A super large-scale chinese corpora for pre-training language models. AI Open, 2:65-68."},
+{"type": "list", "bbox": [0.512, 0.086, 0.883, 0.918], "angle": 0, "content": null}
+],
+[
+{"type": "ref_text", "bbox": [0.117, 0.086, 0.49, 0.152], "angle": 0, "content": "Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, and Ming Zhou. 2021. Mengzi: Towards lightweight yet ingenious pre-trained models for chinese. arXiv preprint arXiv:2110.06696."},
+{"type": "ref_text", "bbox": [0.117, 0.162, 0.49, 0.215], "angle": 0, "content": "Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. Uer: An open-source toolkit for pretraining models. EMNLP-IJCNLP 2019, page 241."},
+{"type": "list", "bbox": [0.117, 0.086, 0.49, 0.215], "angle": 0, "content": null}
+]
+]
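The layout JSON above includes Table 4 with aggregate columns Un.Avg. (over the four understanding tasks), Ge.Avg. (over the two generation tasks), and Avg. Assuming these are plain arithmetic means over the per-task scores (an assumption that reproduces the GPT2-base row exactly; other rows may round or weight differently), the aggregates can be sanity-checked as follows. The variable names are ours, not from the dataset:

```python
# Sanity-check the aggregate columns of Table 4 for the GPT2-base row,
# assuming each aggregate is a plain arithmetic mean of the task scores.
understanding = {"FinFE": 79.05, "FinNL": 84.09, "FinNSP": 91.30, "FinRE": 36.37}
generation = {"FinNA": 44.19, "FinQA": 75.22}

# Mean over the four understanding tasks (reported Un.Avg.: 72.70).
un_avg = sum(understanding.values()) / len(understanding)
# Mean over the two generation tasks (reported Ge.Avg.: 59.71).
ge_avg = sum(generation.values()) / len(generation)
# Mean over all six tasks (reported Avg.: 68.37).
avg = (sum(understanding.values()) + sum(generation.values())) / 6
```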
2302.09xxx/2302.09432/039b9646-4ab7-4a60-98c4-95ced53dbcad_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c14c32482a769cb9774b5d24de4d3739040184f80fd9cb440ba2fd92eb955a21
+size 238124
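The three added lines above are a Git LFS pointer file: the large PDF itself is stored out of band, and the repository tracks only this small key-value stub (`version`, `oid`, `size`). Its format is simple enough to parse directly; a minimal sketch, using the pointer content shown above:

```python
# Parse a Git LFS pointer file: one "key value" pair per line
# (version, oid, size), as in the stub added by this commit.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:c14c32482a769cb9774b5d24de4d3739040184f80fd9cb440ba2fd92eb955a21
size 238124
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each non-empty line on the first space into a key/value pair."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

fields = parse_lfs_pointer(POINTER)
```

The `size` field is the byte count of the real file, matching the `+3 -0` stub diffs for every binary artifact in this commit.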
2302.09xxx/2302.09432/full.md
ADDED
@@ -0,0 +1,277 @@
| 1 |
+
# BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark
|
| 2 |
+
|
| 3 |
+
Dakuan Lu $^{1}$ , Hengkui Wu $^{3*}$ , Jiaqing Liang $^{2}$ , Yipei Xu $^{1}$ , Qianyu He $^{1}$ , Yipeng Geng $^{3}$ , Mengkun Han $^{3}$ , Yingsi Xin $^{3}$ , Yanghua Xiao $^{1*}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
|
| 6 |
+
|
| 7 |
+
$^{2}$ School of Data Science, Fudan University
|
| 8 |
+
|
| 9 |
+
$^{3}$ SuperSymmetry Technologies
|
| 10 |
+
|
| 11 |
+
{ludakuan1234, l.j.q.light, xuyipei000, abbey4799} $@$ gmail.com,
|
| 12 |
+
|
| 13 |
+
{ypgeng, mkhan, ysxin, hkwu} @ssymmetry.com, shawyh@fudan.edu.cn
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at https://github.com/ssymmetry/ BBT-FinCUGE-Applications. Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
Pre-trained language models(PLMs), such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019), have led to great performance boosts across many NLP tasks. Despite the excellent performance of pre-trained language models (PLMs) on a large number of NLP tasks, their performance is often affected when applied to domain-specific texts that exhibit significant differences from general text in terms of word usage, syntax, and writing style (Gururangan et al., 2020; Gu et al., 2021). To address this issue, Gururangan et al. (2020) proposed that continuing to pre-train a general PLM on target domain corpora and task-relevant texts can effectively improve its performance on
|
| 22 |
+
|
| 23 |
+
domain-specific tasks, while Gu et al. (2021) further suggested that pre-training domain-specific PLMs from scratch with a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific pre-trained language models have emerged in some domains, such as BioBERT (Peng et al., 2019a) and PubMedBERT (Gu et al., 2021) in the biomedicine field, which have been utilized for practical tasks like entity and relation extraction.
We collected all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarize them in Table 2, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and raise the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT (Hou et al., 2020) and Mengzi-BERT-base-fin (Zhang et al., 2021). However, these models are all based on BERT-base, offer only a single architecture type, and their parameter count (around 110 million) can no longer meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version.
Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with strong capabilities for understanding and memorizing entity knowledge. Although studies have shown that PLMs pre-trained on large-scale corpora already possess some of these capabilities, shortcomings remain. To address this issue, many studies have used knowledge-enhanced pre-training methods to improve PLMs' understanding and memorization of entity knowledge. However,
these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pretraining method based on the T5 model's text-to-text paradigm.
In addition, another challenge faced by Chinese financial NLP is the scarcity of corpora. The scale and diversity of corpora play an essential role in language model pre-training (Xu et al., 2020; Raffel et al., 2019; Gao et al., 2020). However, existing Chinese financial corpora are small in scale, poor in diversity and not openly available, as shown in Table 1. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus must cover. To this end, we collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in Table 2. Based on the source distribution of these tasks, we determined the range of text types to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus with about 300GB of raw text, drawn from four different sources to enhance its diversity and cover most text sources of Chinese financial NLP tasks.
The widespread use of benchmark evaluations is a key driving force that has greatly improved and rapidly iterated PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), while the general benchmark evaluation for Chinese PLMs is CLUE (Xu et al., 2020). Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain.
To address this issue and promote research in the financial domain, we propose CFLEB, the Chinese Financial Language Understanding and Generation Evaluation Benchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty,
and more importantly, emphasize challenges that arise in real-world scenarios.
Our contributions are summarized as follows:
- We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training.
- We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus.
- We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain.
# 2 Related Work
# 2.1 Domain-specific PLMs and Corpora
PLMs have achieved state-of-the-art performance in many NLP tasks (Devlin et al., 2018; Raffel et al., 2019; Liu et al., 2019). However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution from general to specific domains (Gururangan et al., 2020; Gu et al., 2021). To better adapt a language model to a target domain, pre-training on the corpus of the target domain is proposed (Gururangan et al., 2020). For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch results in substantial gains over continual pre-training of general-domain language models (Gu et al., 2021). Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora.
In the field of financial NLP, domain-specific pre-trained language models (PLMs) have demonstrated their superiority over general-domain PLMs. For instance, Araci (2019) and Yang et al. (2020) pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, Hou et al. (2020) pre-trained BERT on Chinese financial news, analysis reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, Zhang et al. (2021) pre-trained the Chinese PLM Mengzi on a 20GB financial corpus and demonstrated its effectiveness on multiple downstream tasks.
Table 1 summarizes the characteristics of typical PLMs and their corpora in the financial domain. It can be observed that both our model and our corpus exceed existing works in scale.
# 2.2 Knowledge Enhanced Pre-training
Although PLMs can acquire rich linguistic knowledge from pretraining on large-scale corpora, many studies have shown that PLMs still have shortcomings in entity knowledge understanding and memory, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed (Yang et al., 2021). Therefore, PLMs can benefit from knowledge-enhanced pretraining methods that strengthen entity knowledge understanding and memory.
For example, Ernie (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn existing entity knowledge from the corpus, without addressing the issues of sparse and long-tailed distribution of entity knowledge in the corpus.
Ernie 3.0, introduced by Sun et al. (2021), incorporates the universal knowledge-text prediction (UKTP) task. This task involves a pair of triples from a knowledge graph and their corresponding sentences from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence, and determine the semantic relationship between them.
The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision has a certain amount of noise, which means that the relation in the triple may not necessarily appear in the sentence (Smirnova and Cudré-Mauroux, 2018). Therefore, only masking the relation and predicting it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for the BERT-like model.
To our knowledge, there is currently a gap in knowledge enhancement pre-training methods available for T5-like models.
# 2.3 Domain-specific NLP Benchmarks
Various domain-specific NLP benchmarks have been proposed to compare the ability of different
methods in modeling text from specific domains in a fair manner. The BLUE benchmark (Peng et al., 2019b) evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark (Gu et al., 2021) further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, the FLUE (Shah et al., 2022) is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. However, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works.
# 3 The Corpus: BBT-FinCorpus
We build FinCorpus, the largest corpus in the Chinese financial domain, to obtain a superior pre-trained language model. Section 3.1 covers how we decided on the corpus contents, Section 3.2 describes how we crawled and filtered the raw text, and Section 3.3 elaborates on the resulting FinCorpus.
# 3.1 Coverage Confirmation of the Corpus
We believe that, since the purpose of domain pretraining is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table 2.
It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance<sup>1</sup>, Tencent Finance<sup>2</sup>, Phoenix Finance<sup>3</sup>, 36Kr<sup>4</sup> and Huxiu<sup>5</sup>. For company announcements and research reports, we chose Eastmoney<sup>6</sup> for crawling. For social media, we chose the two largest financial social media platforms on the Chinese Internet, Guba<sup>7</sup> and Xueqiu<sup>8</sup>, for crawling.

<table><tr><td>PLM</td><td>Size</td><td>Corpus Size</td><td>Corpus Sources</td></tr><tr><td>FinBERT (Araci, 2019)</td><td>110M</td><td>29M words</td><td>News filtered by financial keywords</td></tr><tr><td>FinBERT (Yang et al., 2020)</td><td>110M</td><td>4.9B tokens</td><td>Corporate Reports, Earnings Call Transcripts, Analyst Reports</td></tr><tr><td>FinBERT (Hou et al., 2020)</td><td>110M</td><td>3B tokens</td><td>News, Analysis reports, Company announcements and Encyclopedias</td></tr><tr><td>Mengzi-BERT-base-fin (Zhang et al., 2021)</td><td>110M</td><td>20GB file</td><td>News, Analysis reports, Company announcements</td></tr><tr><td>BBT-FinT5 (ours)</td><td>220M, 1B</td><td>80B tokens</td><td>Corporate Reports, Analyst Reports, Social media and Financial News</td></tr></table>

Table 1: Typical financial PLMs and their corpora.

<table><tr><td>Dataset</td><td>Text Source</td><td>Open State</td><td>Practicality</td></tr><tr><td>DuEE-fin (Han et al., 2022)</td><td>Financial news, Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>FinRE (Li et al., 2019)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>Announcement information extraction (Tianchi, 2018)</td><td>Company announcement</td><td>Yes</td><td>High</td></tr><tr><td>Discovery of new entities in Internet finance (Datafountain, 2019)</td><td>Social media</td><td>Unspecified</td><td>Low</td></tr><tr><td>Announcement information extraction (Biendata, 2019)</td><td>Company announcement</td><td>Unspecified</td><td>High</td></tr><tr><td>Construction of financial knowledge graph (Biendata, 2020b)</td><td>Analysis report</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Event causality extraction (Biendata, 2021)</td><td>Financial news</td><td>Unspecified</td><td>Low</td></tr><tr><td>Financial NL2SQL (Biendata, 2022a)</td><td>Data query sentence</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2022b)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>Few-shot event extraction (Biendata, 2020a)</td><td>Financial news</td><td>Unspecified</td><td>Medium</td></tr><tr><td>FinNL (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinNA (ours)</td><td>Financial news</td><td>Yes</td><td>High</td></tr><tr><td>FinFE (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr><tr><td>FinNSP (ours)</td><td>Social media</td><td>Yes</td><td>High</td></tr></table>

Table 2: Chinese financial datasets we collected, with their open-source status and practicality scores.
# 3.2 Crawling and Filtering of the Corpus
We used a proxy-based distributed crawler to crawl public web pages. We filtered the web pages using a series of rules (Raffel et al., 2019; Yuan et al., 2021).
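The filtering rules are not enumerated here; a minimal sketch of the kind of heuristic page filters used in C4 (Raffel et al., 2019) and similar corpora might look as follows. The thresholds and bad-phrase list are our own illustrative assumptions, not the exact rules used for BBT-FinCorpus:

```python
import re

# Illustrative thresholds (assumed, not the paper's actual values).
MIN_LENGTH = 50          # drop very short pages
MAX_SYMBOL_RATIO = 0.3   # drop pages dominated by non-text symbols
BAD_PHRASES = ("javascript", "404 not found", "lorem ipsum")

def keep_page(text: str) -> bool:
    """Return True if a crawled page passes the cleaning rules."""
    if len(text) < MIN_LENGTH:
        return False
    lowered = text.lower()
    if any(phrase in lowered for phrase in BAD_PHRASES):
        return False
    # Ratio of characters that are neither word characters nor whitespace.
    symbols = len(re.findall(r"[^\w\s]", text, flags=re.UNICODE))
    if symbols / max(len(text), 1) > MAX_SYMBOL_RATIO:
        return False
    return True
```

In practice such rules are applied page by page during crawling, before deduplication and format conversion.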
# 3.3 Description of the Corpus
After crawling, cleaning, and processing, we obtained the FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language materials:
- Corporate announcements. These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB.
- Research reports. These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries,
and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB.
- Financial news. These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB.
- Social media. These are the posts from stockholders and bloggers published on Guba and Xueqiu over the past twenty years. After cleaning, the total size of the resulting text is about 120GB.
The corpus from the above four sources covers most of the text types involved in common Chinese financial NLP tasks.
# 4 The Large PLM: BBT-FinT5
To enhance the performance of the Chinese financial NLP baseline and foster the growth of the open-source community in this domain, we introduce the FinT5 model. Its architecture and pre-training tasks are consistent with T5 (Raffel et al., 2019), and it is pre-trained on BBT-FinCorpus (see Section 3). We chose this architecture for its robust performance on many general benchmarks and its compatibility with both understanding and generation tasks under the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that FinT5 significantly outperforms T5 trained on a general corpus.
In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge enhancement pre-training method that we propose for the T5 model, which is based on triple masking.
# 4.1 Pre-training Model Architecture and Task
Raffel et al. (2019) model all NLP tasks in a text-to-text format, which enables the use of a unified network architecture, training approach, and loss function to handle all NLP tasks, promoting transfer learning in the NLP field. Building upon this,
they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained using MLM. Specifically, T5 utilizes the span-mask method proposed by SpanBERT (Joshi et al., 2020), masking $15\%$ of the tokens in a sentence as contiguous spans rather than as independent tokens.
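As an illustration, a simplified version of this span-corruption objective can be sketched as follows. Sentinel naming follows T5's `<extra_id_n>` convention; span lengths are fixed here for clarity, whereas T5 samples them:

```python
import random

def span_corrupt(tokens, mask_ratio=0.15, span_len=3, seed=0):
    """Simplified T5-style span corruption: mask ~mask_ratio of the
    tokens as contiguous spans, replacing each span with one sentinel.
    Returns (corrupted_input, target) token lists."""
    rng = random.Random(seed)
    n_to_mask = max(1, round(len(tokens) * mask_ratio))
    starts = []
    attempts = 0
    while n_to_mask > 0 and attempts < 100:
        attempts += 1
        s = rng.randrange(0, len(tokens) - span_len + 1)
        if any(abs(s - t) < span_len for t in starts):
            continue  # keep spans non-overlapping
        starts.append(s)
        n_to_mask -= span_len
    starts.sort()
    inp, tgt, sid, i = [], [], 0, 0
    while i < len(tokens):
        if starts and i == starts[0]:
            inp.append(f"<extra_id_{sid}>")   # sentinel replaces the span
            tgt.append(f"<extra_id_{sid}>")   # target restores the span
            tgt.extend(tokens[i:i + span_len])
            sid += 1
            i += span_len
            starts.pop(0)
        else:
            inp.append(tokens[i])
            i += 1
    return inp, tgt
```

The decoder thus only regenerates the masked spans rather than the whole sentence, which keeps targets short.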
# 4.2 Pre-training Acceleration
We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed (Rasley et al., 2020) to accelerate the pre-training process. In particular, we found that using the BFLOAT16 (Kalamkar et al., 2019) half-precision floating-point format for optimization can effectively solve the problem of gradient overflow that occurs in the training process with FP16 half-precision floating-point format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. Kalamkar et al. (2019) pointed out that in the training of deep neural networks, the value range (i.e., exponent range) of the floating-point numbers used to represent each parameter in the network is more important for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as the FP32 format to represent the same exponent range as the FP32 format, at the cost of having three fewer mantissa bits than the FP16 format. Extensive experiments have shown that this trade-off makes the BFLOAT16 format as fast and memory-efficient as the FP16 format while having training stability and performance close to that of the FP32 format.
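The trade-off can be seen directly from the bit layouts: the largest finite value of a format depends almost entirely on its exponent width. A small self-contained calculation (standard IEEE-754-style biasing; written by us for illustration):

```python
def max_finite(exp_bits: int, mantissa_bits: int) -> float:
    """Largest finite value of a binary floating-point format with the
    given exponent and stored-mantissa widths (IEEE-754-style bias;
    the all-ones exponent is reserved for inf/NaN)."""
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 2) - bias
    return (2 - 2 ** -mantissa_bits) * 2.0 ** max_exp

fp16_max = max_finite(5, 10)   # 65504.0: large gradients overflow easily
bf16_max = max_finite(8, 7)    # ~3.39e38: same 8-bit exponent range as FP32
fp32_max = max_finite(8, 23)   # ~3.40e38
```

BFLOAT16 thus gives up three mantissa bits relative to FP16 in exchange for the full FP32 exponent range, which is why loss/gradient scaling becomes unnecessary.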
# 4.3 Knowledge Enhancement Pre-training Method Based on Triple Masking
We propose a knowledge enhancement pre-training method based on triple masking (KETM).
First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple.
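This alignment step can be sketched in a few lines. We use a simple string-containment check; real entity matching may be more involved:

```python
def align_triples(triples, sentences):
    """Distant supervision: pair each (head, relation, tail) triple with
    every sentence that mentions both the head and the tail entity."""
    pairs = []
    for head, relation, tail in triples:
        for sentence in sentences:
            if head in sentence and tail in sentence:
                pairs.append(((head, relation, tail), sentence))
    return pairs
```

As noted in Section 2.2, such alignment is noisy: a sentence containing both entities does not always express the relation, which motivates masking entities as well as relations.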
Next, for a sentence and its contained triple, we concatenate the triple at the beginning of the sentence. For the triple part, we randomly mask one element, and for the sentence part, we randomly mask a random-length span covering $15\%$ of the sentence. Finally, we input the masked triple and sentence into the model and require the model to predict the masked elements, as shown in Figure 1. The model is trained to fill the masked element of the triple based on its two unmasked elements and the partially masked sentence, which helps the model better understand and memorize entity-related knowledge.

Figure 1: Knowledge enhancement pre-training method based on triple masking (KETM)
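A sketch of how one KETM training example might be assembled. The sentinel tokens and separator follow common T5 conventions; the exact formatting used for BBT-FinT5 is our assumption:

```python
import random

def ketm_example(triple, sentence, mask_ratio=0.15, seed=0):
    """Build one KETM (source, target) pair: prepend the triple to its
    distantly supervised sentence, mask one triple element with one
    sentinel and a random sentence span with another."""
    rng = random.Random(seed)
    elems = list(triple)
    idx = rng.randrange(3)                 # mask one of head/relation/tail
    masked_elem = elems[idx]
    elems[idx] = "<extra_id_0>"

    span = max(1, round(len(sentence) * mask_ratio))
    start = rng.randrange(0, len(sentence) - span + 1)
    masked_span = sentence[start:start + span]
    masked_sent = sentence[:start] + "<extra_id_1>" + sentence[start + span:]

    source = " ".join(elems) + " [SEP] " + masked_sent
    target = f"<extra_id_0> {masked_elem} <extra_id_1> {masked_span}"
    return source, target
```

Because both a triple element and a sentence span are masked, the model must use the remaining triple elements and context jointly, rather than copying the answer from one side.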
# 5 The Benchmark: BBT-CFLEB
In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks.
# 5.1 Task Selection
We propose that domain-specific NLP evaluation benchmarks should pay special attention to practicality, especially in a field of such practical value as finance, so as to better reflect a model's ability in real applications. We therefore use a practicality score to measure the tasks we collect. Specifically, we invited financial experts to rate the practicality of each task as low, medium, or high, and only selected tasks with a high rating as candidates. In addition, we only kept tasks with a clear open-source statement. Finally, we selected the six tasks for BBT-CFLEB shown in Table 2.
# 5.2 Task Introduction
CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows:
- FinNL, a financial news classification dataset.
Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.
- FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge (Lin, 2004). The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles.
- FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles.
- FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text into negative-neutral-positive categories, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.
- FinQA, a financial news announcement event question-answering dataset, derived from the DuEE-fin (Han et al., 2022) dataset. Given financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles.
- FinNSP, a financial negative news and its subject determination dataset. Given financial news or social media text and entities mentioned in the text, the model needs to determine if the text contains negative news related to any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles.

<table><tr><td>Task Name</td><td>Introduction</td><td>Data</td><td>Evaluation</td></tr><tr><td>FinNL</td><td>Multi-label classification of financial news</td><td>8000/1000/1000</td><td>F1-score</td></tr><tr><td>FinNA</td><td>Generation of summaries for financial news</td><td>24000/3000/3000</td><td>Rouge</td></tr><tr><td>FinRE</td><td>Entity relation classification for financial news</td><td>7454/1489/3727</td><td>F1-score</td></tr><tr><td>FinFE</td><td>Sentiment classification of financial social media text</td><td>8000/1000/1000</td><td>Accuracy</td></tr><tr><td>FinQA</td><td>Question-answering for financial news/events</td><td>16000/2000/2000</td><td>F1-score</td></tr><tr><td>FinNSP</td><td>Detection of negative messages and entities in financial news</td><td>4800/600/600</td><td>F1-score</td></tr></table>

Table 3: Summary of CFLEB tasks.
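For the classification-style tasks scored with F1, a micro-averaged multi-label F1 of the kind used for FinNL can be computed as follows. This is a plain sketch; the official scoring scripts may differ in detail:

```python
def micro_f1(golds, preds):
    """Micro-averaged F1 over multi-label predictions. Each element of
    golds/preds is the collection of labels for one example."""
    tp = fp = fn = 0
    for g, p in zip(golds, preds):
        g, p = set(g), set(p)
        tp += len(g & p)   # labels predicted and correct
        fp += len(p - g)   # labels predicted but wrong
        fn += len(g - p)   # labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```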
# 5.3 Leaderboard Introduction
We have organized the tasks into multiple leaderboards according to different ability requirements (Xu et al., 2020), so that researchers can observe model rankings from different perspectives. The leaderboards of CFLEB are as follows:
- Overall leaderboard: includes all six tasks.
- Understanding ability leaderboard: includes four language comprehension tasks, FinNL, FinRE, FinFE, and FinNSP.
- Generation ability leaderboard: includes two language generation tasks, FinNA and FinQA.
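The leaderboards above aggregate per-task scores as plain averages, which can be sketched as follows. The exact weighting of the overall score is our assumption (here, the mean over all six tasks), and the numbers in the test are placeholders, not results from Table 4:

```python
UNDERSTANDING = ["FinNL", "FinRE", "FinFE", "FinNSP"]
GENERATION = ["FinNA", "FinQA"]

def leaderboard(scores: dict) -> dict:
    """Compute the three leaderboard entries from per-task scores."""
    un = sum(scores[t] for t in UNDERSTANDING) / len(UNDERSTANDING)
    ge = sum(scores[t] for t in GENERATION) / len(GENERATION)
    overall = sum(scores[t] for t in UNDERSTANDING + GENERATION) / 6
    return {"Un.Avg.": un, "Ge.Avg.": ge, "Avg.": overall}
```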
# 6 Experiments
In this section, we first introduce the basic settings of the experiments, including the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. We then conduct extensive experiments and comparative analysis to validate the effectiveness of the proposed model and methods.
# 6.1 Experiments Setup
# 6.1.1 Pre-trained Language Models
The models participating in the comparative experiment of this section include:
- GPT2-base (Zhao et al., 2019). A Chinese GPT2 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020).
- T5-base (Zhao et al., 2019). A Chinese T5 released by Zhao et al. (2019). Pretrained using the general corpus CLUECorpusSmall (Xu et al., 2020).
- FinBERT (Hou et al., 2020). A Chinese BERT for the financial domain released by Hou et al. (2020).
- Mengzi-BERT-base-fin (Zhang et al., 2021). A Chinese BERT for the financial domain released by Zhang et al. (2021).
- FinT5-base. Our Chinese pre-trained language model for the financial domain, pretrained on our financial corpus, FinCorpus. Its model architecture, parameter size, and pre-training hyperparameters are the same as T5-v1.1-base.
- FinT5-base-KE. Knowledge-enhanced version of FinT5-base, enhanced by the KETM method using the CN-DBPedia (Xu et al., 2017) knowledge graph.
- FinT5-large. Our proposed Chinese pre-trained language model for the financial domain, with a total of about 1 billion model parameters; its pre-training hyperparameters are the same as T5-base.

<table><tr><td>PLMs</td><td>FinFE</td><td>FinNL</td><td>FinNSP</td><td>FinRE</td><td>Un.Avg.</td><td>FinNA</td><td>FinQA</td><td>Ge.Avg.</td><td>Avg.</td></tr><tr><td>GPT2-base</td><td>79.05</td><td>84.09</td><td>91.30</td><td>36.37</td><td>72.70</td><td>44.19</td><td>75.22</td><td>59.71</td><td>68.37</td></tr><tr><td>T5-base</td><td>79.40</td><td>87.48</td><td>95.43</td><td>54.93</td><td>79.56</td><td>48.54</td><td>83.58</td><td>66.06</td><td>74.89</td></tr><tr><td>FinBERT-base</td><td>79.45</td><td>84.69</td><td>69.01</td><td>55.33</td><td>72.37</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Mengzi-BERT-base-fin</td><td>79.50</td><td>85.88</td><td>71.72</td><td>58.25</td><td>73.59</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BBT-FinT5-base</td><td>80.19</td><td>87.55</td><td>94.50</td><td>60.62</td><td>80.21</td><td>50.06</td><td>84.82</td><td>67.44</td><td>76.29</td></tr><tr><td>BBT-FinT5-base-KE</td><td>79.43</td><td>87.77</td><td>95.05</td><td>61.79</td><td>80.26</td><td>51.36</td><td>85.66</td><td>68.51</td><td>76.84</td></tr><tr><td>BBT-FinT5-large</td><td>80.24</td><td>88.44</td><td>94.54</td><td>61.88</td><td>81.78</td><td>51.42</td><td>85.95</td><td>68.69</td><td>77.07</td></tr></table>

Table 4: Results of BBT-CFLEB from different PLMs.
# 6.1.2 Fine-tuning
For generative models (GPT, T5), we evaluated all six datasets by modeling all tasks as text-to-text. For BERT-based models, we evaluated them on four language understanding tasks: FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for all tasks.
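Casting the understanding tasks to text-to-text for the generative models might look like the following. The prompt wording and label verbalizers are our illustrative assumptions, not the paper's exact templates:

```python
def to_text2text(task: str, example: dict) -> tuple:
    """Map a labeled example to a (source, target) text pair."""
    if task == "FinFE":   # sentiment: text -> label word
        labels = {0: "负面", 1: "中性", 2: "正面"}
        return "情感分类：" + example["text"], labels[example["label"]]
    if task == "FinNA":   # summarization: article -> summary
        return "新闻摘要：" + example["text"], example["summary"]
    if task == "FinRE":   # relation extraction: text + entity pair -> relation
        src = ("关系抽取：" + example["text"]
               + " 头实体：" + example["head"]
               + " 尾实体：" + example["tail"])
        return src, example["relation"]
    raise ValueError(f"no template for task {task!r}")
```

With every task in this shape, one seq2seq fine-tuning loop and one decoding routine serve all six datasets.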
# 6.2 Experiment 1: Comparison of Pre-trained Model Architectures
For the two general-domain models, GPT2-base and T5-base, the pre-training corpora, hyperparameters, and training volume are the same, but their average scores differ significantly, with T5-base clearly outperforming GPT2-base, as shown in Table 4. This difference is mainly due to the differences between the T5 and GPT models in architecture, parameter size, and pre-training method. This result supports our choice of the T5 model.
# 6.3 Experiment 2: Effectiveness of Domain Pre-training
As shown in Table 4, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the
effectiveness of domain pre-training and the effectiveness of FinCorpus.
# 6.4 Experiment 3: Superiority Compared to Existing Models in the domain
As shown in Table 4, in the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain.
# 6.5 Experiment 4: Effectiveness of KETM
As shown in Table 4, comparing FinT5-base-KE with FinT5-base shows that the KETM method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising performance on the other tasks, thus proving the effectiveness of the KETM method.
# 6.6 Experiment 5: Effectiveness of parameter scaling up
As shown in Table 4, the performance comparison between FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of parameter scaling up.
# 7 Conclusion
In this article, we introduced three new contributions to Chinese financial NLP. We created the largest open-source corpus for this domain, FinCorpus, a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. To enhance our pre-training, we developed a knowledge-enhanced method called KETM,
|
| 233 |
+
|
| 234 |
+
which was effective. We also created a benchmark to evaluate the understanding and generation capabilities of language models, called CFLEB. We believe domain benchmarks should prioritize practicality to better reflect how improvements in language models in academia can benefit the real world. Our future work includes expanding FinCorpus and FinT5 and exploring multilingual and multimodal applications.
# References
Dogu Araci. 2019. FinBERT: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063.

Biendata. 2019. CCKS 2019: Extraction of public company announcement information.

Biendata. 2020a. CCKS 2020: Cross-class few-shot transfer event extraction for financial domain.

Biendata. 2020b. CCKS 2020: Evaluation of automated construction techniques for financial knowledge graph based on ontology.

Biendata. 2021. CCKS 2021: Event relation extraction for financial texts (Part II) - extraction of causal relationships between events.

Biendata. 2022a. CCKS 2022: Evaluation of NL2SQL for financial domain.

Biendata. 2022b. CCKS 2022: Few-shot event extraction for financial domain.

DataFountain. 2019. Discovery of new entities in internet finance.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.

Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.

Cuiyun Han, Jinchuan Zhang, Xinyu Li, Guojin Xu, Weihua Peng, and Zengfeng Zeng. 2022. DuEE-Fin: A large-scale dataset for document-level event extraction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 172-183. Springer.

Panpan Hou, Mengchao Zhang, Zhibing Fu, and Yu Li. 2020. FinBERT. https://github.com/valuesimplex/FinBERT. GitHub repository, commit ec1b14b96de9bdd5217abba1d197428cf00ddaa6.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.

Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. 2019. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322.

Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. 2019. Chinese relation extraction with multi-grained information and external linguistic knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4377-4386, Florence, Italy. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019a. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy. Association for Computational Linguistics.

Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019b. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.

Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, and Diyi Yang. 2022. When FLUE meets FLANG: Benchmarks and large pre-trained language model for financial domain. arXiv preprint arXiv:2211.00083.

Alisa Smirnova and Philippe Cudré-Mauroux. 2018. Relation extraction using distant supervision: A survey. ACM Computing Surveys (CSUR), 51(5):1-35.

Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, et al. 2021. ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv preprint arXiv:2107.02137.

Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.

Tianchi. 2018. The dataset for extracting announcement information of A-share listed companies.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems, 32.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.

Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. CN-DBpedia: A never-ending Chinese knowledge extraction system. In Advances in Artificial Intelligence: From Theory to Practice: 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2017, Arras, France, June 27-30, 2017, Proceedings, Part II, pages 428-438. Springer.

Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020. CLUE: A Chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.

Jian Yang, Gang Xiao, Yulong Shen, Wei Jiang, Xinyu Hu, Ying Zhang, and Jinghui Peng. 2021. A survey of knowledge enhanced pre-trained models. arXiv preprint arXiv:2110.00269.

Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. FinBERT: A pretrained language model for financial communications. arXiv preprint arXiv:2006.08097.

Sha Yuan, Hanyu Zhao, Zhengxiao Du, Ming Ding, Xiao Liu, Yukuo Cen, Xu Zou, Zhilin Yang, and Jie Tang. 2021. WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models. AI Open, 2:65-68.

Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, and Ming Zhou. 2021. Mengzi: Towards lightweight yet ingenious pre-trained models for Chinese. arXiv preprint arXiv:2110.06696.

Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. UER: An open-source toolkit for pretraining models. EMNLP-IJCNLP 2019, page 241.
2302.09xxx/2302.09432/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21a2b24ad795aa9f94b4d64df68b0f3752b9d76395ca283d4079f7fe98ada96b
size 380752
2302.09xxx/2302.09432/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_model.json
ADDED
The diff for this file is too large to render. See raw diff
2302.09xxx/2302.09450/f3d9b8ca-dcdd-4963-b41f-e68fad6bce7b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2967edb0395df5133754d6eb7ab8ec9c1d5052099525b67140814ae4ef7325e
size 5510696
2302.09xxx/2302.09450/full.md
ADDED
@@ -0,0 +1,507 @@
# Robust and Versatile Bipedal Jumping Control through Reinforcement Learning
Zhongyu Li $^{1}$ , Xue Bin Peng $^{2}$ , Pieter Abbeel $^{1}$ , Sergey Levine $^{1}$ , Glen Berseth $^{3,4}$ , and Koushil Sreenath $^{1}$
$^{1}$University of California, Berkeley, $^{2}$Simon Fraser University, $^{3}$Université de Montréal, $^{4}$Mila
Email: zhongyu_li@berkeley.edu, xbpeng@sfu.ca, pabbeel@cs.berkeley.edu, svlevine@eecs.berkeley.edu, glen.berseth@mila.quebec, koushils@berkeley.edu

[Figure 1: three photo sequences, panels (a), (b), and (c)]

Fig. 1: Representative dynamic jumping maneuvers performed by a bipedal robot Cassie using the proposed goal-conditioned control policies. From left to right: (a) the robot jumps over $1.4\mathrm{m}$ and lands at the given target; (b) the robot jumps to a target that is $0.88\mathrm{m}$ in front of the robot and $0.44\mathrm{m}$ above the ground; and (c) the robot jumps in place while turning $55^{\circ}$ with a command to turn $60^{\circ}$ in place. The policies are trained in simulation and deployed on the hardware without further tuning. Video is at: https://youtu.be/aAPSZ2QFB-E.
Abstract—This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world. We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions. To improve performance on these challenging tasks, we develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history. In order to train a versatile jumping policy, we utilize a multi-stage training scheme that includes different training stages for different objectives. After multi-stage training, the policy can be directly transferred to a real bipedal Cassie robot. Training on different tasks and exploring more diverse scenarios lead to highly robust policies that can exploit the diverse set of learned maneuvers to recover from perturbations or poor landings during real-world deployment. Such robustness in the proposed policy enables Cassie to succeed in completing a variety of challenging jump tasks in the real world, such as standing long jumps, jumping onto elevated platforms, and multi-axes jumps.
# I. INTRODUCTION
One question that has been lingering since the first creation of bipedal robots is: how can we enable such complex robots to traverse complex environments using agile and robust maneuvers [23, 53]? For example, creating agile controllers that enable bipedal robots to jump over a given distance or onto different elevations can enable greater mobility in unstructured environments. However, jumping is a challenging skill to control for bipedal robots. During a standing jump, the robot needs to push its body off the ground and break contact, leap into a flight phase where the robot is underactuated, and then make contact again when it lands on its legs. During landing, the robot needs to not only recover from the large impact impulse, but also stick the landing and remain standing. All of these events occur within a very short amount of time (typically less than 2 seconds). These events lead to a hybrid system that switches between modes with different contact configurations (e.g., taking-off, flight, and landing). Planning and controlling such discontinuous dynamics, especially for a high-dimensional, nonlinear, and underactuated bipedal robot, presents a very challenging task [52]. This challenge is further compounded when the robot needs to accurately land on a given target, where the robot will have to produce the precise translational and angular momentum at take-off in order to land at the desired location [73].
Model-based optimal control (OC) frameworks have made notable progress in controlling bipedal robots, including jumping, but they depend on carefully crafted models of the robot and the complex contact dynamics [30, 39, 57]. These methods typically require manual design of task-specific control structures [9, 20, 76], and simplified dynamics models, which provides only a coarse approximation of the robot's full-order dynamics. Furthermore, pre-defined or pre-computed contact sequences are often needed to reduce online optimization complexity [46, 47, 71]. Such limitations have restricted previous model-based methods to only performing a fixed hop on torque-controlled bipedal robots like Cassie [71, 72]. By leveraging the robot's full-order dynamics, model-free reinforcement learning (RL) has shown some success in highly-dynamic locomotion control on quadrupedal robots [26, 42, 49]. However, compared to quadrupeds that are inherently
TABLE I: Benchmark with previous work tackling jumping control using optimal control (OC) or reinforcement learning (RL) on the bipedal robot Cassie in the real world.
<table><tr><td rowspan="2"></td><td rowspan="2">Controlled Landing Pose</td><td rowspan="2">Apex Foot Clearance</td><td rowspan="2">Longest Flight Phase</td><td colspan="5">Maximum Leap Distance</td></tr><tr><td>Forward</td><td>Backward</td><td>Lateral</td><td>Turning</td><td>Elevation</td></tr><tr><td>Aperiodic Hop by OC [71]</td><td>No</td><td>0.18m</td><td>0.42s</td><td>~0.5m*</td><td>0m</td><td>0m</td><td>0°</td><td>0m</td></tr><tr><td>Aperiodic Hop by OC [72]</td><td>No</td><td>0.15m</td><td>0.33s*</td><td>~0.3m*</td><td>0m</td><td>0m</td><td>0°</td><td>0m</td></tr><tr><td>Periodic Hop by RL [60]</td><td>No</td><td>~0.16m*</td><td>0.33s*</td><td>~0.5m*</td><td>0m</td><td>0m</td><td>0°</td><td>~0.15m*</td></tr><tr><td>Ours (Aperiodic Jump by RL)</td><td>Yes</td><td>0.47m</td><td>0.58s</td><td>1.4m</td><td>-0.3m</td><td>±0.3m</td><td>±55°</td><td>0.44m</td></tr></table>
\*Not provided in the paper and the listed value is roughly estimated based on the comparison with the background environment in the accompanying video.
more stable, RL-based methods still struggle when applied to bipedal robots for more dynamic and aggressive maneuvers in the real world. Given that the most relevant prior RL-based system is only able to perform periodic hopping on Cassie [60], it remains an open question how more dynamic bipedal skills can be achieved in the real world, such as standing long jumps that could be more challenging than periodic motions [20, Sec. I].
# A. Objective of this Paper
In this paper, our objective is to explore the possibility of creating a robust and versatile jumping controller for a bipedal robot that enables it to land at different target locations, as shown in Fig. 1, and to validate the advantages brought by learning different maneuvers using reinforcement learning. We refer to a skill as a kind of legged locomotion method, such as walking, jumping, or running. We further denote a task as using a locomotion skill to accomplish a goal. For instance, one task could be walking (skill) at a desired speed (goal), or, as in this work, jumping (skill) to a target configuration (goal). We use the term goal-conditioned to refer to a policy that can perform a variety of jumping tasks, such as jumping over various desired distances and/or directions, conditioned on the given goal. We hypothesize that, by exploring different jumping tasks, a versatile jumping controller can be more robust, as it allows the robot to leverage more diverse learned tasks to maintain stability during dynamic maneuvers. For example, in order to recover from a large ground impulse on landing, the robot can quickly switch to another learned task, such as a hop to a different location, which can allow for a more graceful recovery than simply continuing with the original behavior.
# B. Contributions
The core contribution of this work is the development of the first system that enables a life-sized torque-controlled bipedal robot to perform versatile jumping maneuvers with controlled landing locations in the real world. The robot is first trained in simulation with reinforcement learning and domain randomization. Training does not require an explicit contact sequence, and the learning algorithm automatically develops different contact sequences for different jumping goals. In order to successfully transfer the learned skill for such dynamic maneuvers from simulation to the real world, we utilize two new design decisions. First, we present a new policy structure that encodes the robot's long-term Input-Output (I/O) history while also providing direct access to a short-term I/O history. By training the model in an end-to-end manner, we show that such a structure can outperform previously proposed architectures (Fig. 5). Second, we demonstrate that training the controller for a diverse range of goals improves the robustness of the controller by using the maneuvers learned from different jumping tasks to recover from unstable states. We show that this robustness cannot be easily obtained by training only for a single jumping goal (Fig. 6). Combining these two techniques, we enable the bipedal Cassie robot to perform (1) several aggressive jumps, such as a long jump (1.4m ahead) and a high jump onto a 0.44m-tall platform (Fig. 1), (2) various agile multi-axes bipedal jumps (Fig. 7, Fig. 9), and (3) to utilize diverse learned jumping maneuvers to recover from external perturbations or impacts (Fig. 9), in the real world. We hope this paper can serve as a step forward for enabling more diverse, dynamic, and robust legged locomotion skills.
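The long-term plus short-term I/O history described above can be pictured as a simple double-buffering scheme. The sketch below is an illustrative assumption (class name, window lengths, and data layout are not taken from the paper's code):

```python
from collections import deque

# Sketch of maintaining the two input/output (I/O) histories the policy
# consumes: a long-term history fed to an encoder, plus a short-term
# history the policy can access directly. Window lengths are assumptions.
class IOHistory:
    def __init__(self, long_len=100, short_len=4):
        self.long = deque(maxlen=long_len)    # long-term I/O window
        self.short = deque(maxlen=short_len)  # short-term I/O window

    def push(self, observation, action):
        # One I/O pair per control step; both windows see every step.
        step = (tuple(observation), tuple(action))
        self.long.append(step)
        self.short.append(step)

    def policy_inputs(self):
        # The encoder consumes the long history; the policy also
        # receives the raw short history alongside the encoding.
        return list(self.long), list(self.short)

h = IOHistory(long_len=5, short_len=2)
for t in range(10):
    h.push([t], [t * 0.1])
long_h, short_h = h.policy_inputs()
assert len(long_h) == 5 and len(short_h) == 2
```

The point of the structure is that the short window preserves exact recent measurements while the long window summarizes slower dynamics; how each window is encoded into network features is left open here.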
# II. RELATED WORK
Prior work tackling dynamic locomotion skills such as jumping with legged robots can be broadly categorized as model-based optimal control (OC) and model-free reinforcement learning (RL). Table I compares our work with the most related prior efforts on the bipedal robot Cassie.
1) Model-based optimal control for legged jumping: Prior model-based methods for legged jumping control usually build up a layered optimization scheme, which includes offline trajectory optimization with detailed models of the robot's dynamics and ground contacts [10, 12, 46, 63], and online controllers that leverage simplified models of the robot's dynamics [45, 55, 65, 72]. In order to optimize trajectories for jumping, which needs to switch among modes with different underlying dynamics, there are two commonly employed solutions: (i) relying on human-specified contact sequences [9, 19, 29, 47, 71], which is not scalable to different jump distances and/or directions, or (ii) leveraging contact-implicit optimization [8, 14, 33, 52, 77] which plans through contacts to avoid breaking the trajectory or using computationally expensive mixed-integer programming [1, 11, 12]. However, due to the computational challenges of optimization, both of the above-mentioned methods are still limited to offline computation for legged robots. As we show in this work, by training with different jumping goals offline, Cassie can automatically generate the appropriate contact sequences during online execution for achieving robust jumping.
In the case of controllers for aperiodic dynamic jumps, many previous efforts required a separate landing controller to stabilize the robot from the large landing impacts [20, 25, 46, 71]. However, this approach usually requires a contact estimator and needs to fuse the noisy robot proprioceptive measurements to estimate the contact states, which on its own could be a challenging problem [22, 38, 51]. Furthermore, while there are a few prior attempts addressing precise jumping control on low-dimensional single-legged robots [66, 73] and a quadrupedal robot [45], mostly in simulation [45, 66], most prior work on bipedal jumping only focuses on single vertical jumps with the landing location not controlled [20, 65, 71, 72]. In this work, the proposed jumping controller demonstrates the capacity to control the landing pose of the bipedal robot without position feedback or explicit contact estimation.
2) Model-free RL for legged locomotion control: In recent years, we have seen exciting progress in using deep RL to learn locomotion controllers for quadrupedal robots [5, 16, 34, 40] and bipedal robots [7, 37, 54, 59, 70, 75] in the real world. Since it is challenging in general to learn a single policy with RL to perform various tasks [28], many prior works focus on learning a single-goal policy [6, 42, 49, 74] for legged robots, such as just forward walking at a constant speed [16, 31, 69]. There have been efforts to obtain more versatile policies, such as walking at different velocities using different gaits, while following different commands [17, 18, 35, 54], which requires more extensive tuning due to the lack of a gait prior. Providing the robot with different reference motions for different goals can be helpful, but requires additional parameterization of the reference motions (e.g., a gait library) [3, 24, 27, 37], policy distillation [70], or a motion prior [15, 50, 67]. There is also a line of research to explicitly provide contact sequences for legged robots [4, 41, 58, 60]. However, such methods are prescriptive and provide little opportunity for the robot to deviate from the contact plan, limiting the flexibility with which it can respond to perturbations. In this work, we show that a versatile policy can enhance the robustness of a jumping policy by intelligently employing a variety of learned tasks to react to perturbations.
3) Sim-to-real transfer for legged robots: To tackle sim-to-real transfer for RL-based methods, some works have sought to train policies directly in the real world [21, 62, 68], but most of the prior work, especially for dynamic skills, leverages a simulator to train the legged robot with extensive dynamics randomization [48] and then zero-shot transfer to the real world [16, 31, 34, 37, 60] or finetune with real-world data [27, 49, 61]. Since performing rollouts on the hardware of human-scale bipedal robots is expensive, we use the zero-shot transfer method. In order to realize this, there are two widely-adopted techniques: (i) end-to-end training of a policy by providing the robot with a proprioceptive short-term history [16, 24, 37] or long-term history [48, 58, 59]; (ii) teacher-student training that first obtains a teacher policy with privileged information of the environment by RL, then uses this policy to supervise the training of a student policy that only has access to onboard-available observations [18, 26, 31, 34, 40, 75], which shows advantages over the end-to-end training method [31, 32]. However, here we show that, for the dynamic control of bipedal robots, by training the robot in an end-to-end way with a newly-proposed policy structure, we can realize better learning performance than the teacher-student method, which separates the training process and results in increased training time and data.
# III. BACKGROUND AND PRELIMINARIES
In this section, we provide a brief introduction to our experimental platform, Cassie, and to the background of goal-conditioned reinforcement learning.
# A. Floating-base Model of Cassie
We use Cassie as the experimental platform in this work. Cassie (see Fig. 1) is a life-sized bipedal robot, around 1.1 meters tall and weighing $31\,\mathrm{kg}$. It is a dynamic and underactuated system, with 5 actuated motors (abduction $q_{1}$, rotation $q_{2}$, thigh $q_{3}$, knee $q_{4}$, and toe $q_{7}$) and 2 passive joints (shin $q_{5}$ and tarsus $q_{6}$) connected by leaf springs on its left (L) and right (R) legs. We denote the motor positions as $\mathbf{q}_m = [q_{1,2,3,4,7}^{L / R}] \in \mathbb{R}^{10}$. The 6 degree-of-freedom (DoF) floating base (pelvis) can be represented with translational positions (sagittal $q_{x}$, lateral $q_{y}$, vertical $q_{z}$) and rotational positions (roll $q_{\psi}$, pitch $q_{\theta}$, and yaw $q_{\phi}$). In total, the robot has 20 DoFs $\mathbf{q} \in \mathbb{R}^{20}$. For more details about Cassie's configuration, we refer readers to [71, Fig. 2]. The observable joint positions on Cassie are denoted as $\mathbf{q}^o = [q_{\psi,\theta,\phi}, \mathbf{q}_m, \dot{q}_{x,y,z}, \dot{\mathbf{q}}_m] \in \mathbb{R}^{26}$, which can be obtained from onboard joint encoders and IMUs, while the base linear velocity $\dot{q}_{x,y,z}$ is estimated with an EKF [70].
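As a sanity check on the dimensions above, the observable state $\mathbf{q}^o \in \mathbb{R}^{26}$ is a concatenation of four blocks. The helper below is purely illustrative (names are ours, not the authors' code):

```python
# Sketch of assembling Cassie's observable state
# q^o = [q_psi, q_theta, q_phi, q_m (10), qdot_{x,y,z} (3), qdot_m (10)],
# i.e., 26 dimensions in total. Variable names are illustrative assumptions.

def observable_state(base_rpy, motor_pos, base_lin_vel, motor_vel):
    assert len(base_rpy) == 3       # roll/pitch/yaw from the onboard IMU
    assert len(motor_pos) == 10     # 5 actuated motors per leg, from encoders
    assert len(base_lin_vel) == 3   # base linear velocity estimated by an EKF
    assert len(motor_vel) == 10     # motor velocities
    return list(base_rpy) + list(motor_pos) + list(base_lin_vel) + list(motor_vel)

q_o = observable_state([0.0] * 3, [0.0] * 10, [0.0] * 3, [0.0] * 10)
assert len(q_o) == 26
```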
# B. RL Background and Goal-Conditioned Policy
We formulate the locomotion control problem as a Markov decision process (MDP). At each timestep $t$, the agent (i.e., the robot) observes the environment state $\mathbf{s}_t$, and the policy $\pi$ produces a distribution over the actions, $\pi(\mathbf{a}_t|\mathbf{s}_t)$, conditioned on the state. The agent then executes the action $\mathbf{a}_t$ sampled from the policy, interacts with the environment, observes the environment's new state $\mathbf{s}_{t+1}$, and receives a reward $r_t$. The objective of RL is to maximize the expected accumulated reward (return) the agent receives over the course of an episode, $\mathbb{E}[\Sigma_{t=0}^{T}\gamma^t r_t]$, where $\gamma$ is a discount factor and $T$ is the episode length. In order to obtain a policy that can accomplish different goals, we provide a goal $\mathbf{c}$ which parameterizes the task, and the policy $\pi(\mathbf{a}_t|\mathbf{s}_t,\mathbf{c})$ is then also conditioned on the given goal $\mathbf{c}$ to perform different tasks.
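For a finite episode, the return objective above can be evaluated with a standard backward recursion $G_t = r_t + \gamma G_{t+1}$; a minimal sketch, not tied to any particular RL library:

```python
# Discounted return G_0 = sum_t gamma^t r_t for one finite episode,
# computed backwards as G_t = r_t + gamma * G_{t+1}. Illustrative only.

def discounted_return(rewards, gamma=0.99):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
assert discounted_return([1.0, 1.0, 1.0], gamma=0.5) == 1.75
```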
Task Parameterization: In our jumping task, the goal $\mathbf{c}$ specifies target commands for a desired jump $\mathbf{c} = [c_x, c_y, c_z, c_\phi]$ , which consists of the target location $c_{x,y}$ on the horizontal plane, elevation $c_z$ in the vertical direction, and turning direction $c_\phi$ after the agent lands, calculated based on the robot's pose before the jump, i.e., in the local frame of robot's starting pose. Please note that the change in elevation $c_z$ is defined as the change of the floor height, instead of the change of the robot's base height.
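Since the goal is expressed in the local frame of the robot's pose before the jump, a world-frame landing target has to be rotated into that frame. The helper below is an assumed illustration of that transform, not code from the paper:

```python
import math

# Illustrative: express a desired landing target in the local frame of the
# robot's starting pose, yielding the goal c = [c_x, c_y, c_z, c_phi].
# The function and its arguments are assumptions made for this sketch.

def goal_in_local_frame(start_xy, start_yaw, target_xy, target_floor_dz, target_yaw):
    dx = target_xy[0] - start_xy[0]
    dy = target_xy[1] - start_xy[1]
    c, s = math.cos(-start_yaw), math.sin(-start_yaw)  # rotate world -> start frame
    c_x = c * dx - s * dy           # sagittal offset in the start frame
    c_y = s * dx + c * dy           # lateral offset in the start frame
    c_z = target_floor_dz           # change in floor height, not base height
    c_phi = target_yaw - start_yaw  # relative turning command
    return [c_x, c_y, c_z, c_phi]

# A 1.4 m forward jump commanded from a robot facing along +x:
assert goal_in_local_frame((0, 0), 0.0, (1.4, 0.0), 0.0, 0.0) == [1.4, 0.0, 0.0, 0.0]
```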
# IV. MULTI-STAGE TRAINING FOR VERSATILE JUMPS
We now describe our multi-stage training framework for developing goal-conditioned jumping policies. The training environment is developed in a simulation of Cassie using MuJoCo [13, 64].
Fig. 2: The schematic for training the robot to perform versatile jumping skills in the real world, starting with a reference motion from a single jumping animation. The framework consists of three stages. In the first stage, we train the robot from scratch to imitate the animation while performing a single jump. Once the robot reliably achieves this single goal, we randomize the goal (landing at different locations with different turning directions/elevations) assigned to the robot in each training episode. After these two stages of training, we extensively randomize the dynamics properties of the simulated environment in order to improve the robot's robustness during the zero-shot transfer from sim to real.
|
| 87 |
+
|
| 88 |
+
# A. Overview of the Multi-Stage Training Schematic
Our goal is to develop a locomotion control policy for jumping skills that can perform targeted jumps to different locations. However, due to the challenging nature of jumping, it is difficult to directly train a policy to perform a large variety of jumps from scratch: we observed that doing so tends to lead the robot to adopt very conservative behaviors, or even to fail to learn to jump at all. Therefore, we use a multi-stage training scheme that consists of 3 stages, as illustrated in Fig. 2: (1) single-goal training, (2) multi-goal training, and (3) dynamics randomization. All stages of training are performed in simulation, but as we show in our experiments, the resulting models can then be directly deployed on a real Cassie robot. In Stage 1, the model is trained on a single goal $\mathbf{c}$, i.e., jumping in place. This stage results in a policy that is trained from scratch to specialize in a single task in simulation. Next, in Stage 2, the goal $\mathbf{c}$ is randomized every episode to train the robot to jump to different targets. In this stage, the focus is primarily on performing the commanded task in the simulated environment. Finally, in Stage 3, we introduce extensive domain randomization of the simulation environment, while also randomizing the goal, in order to improve the robustness and generalization of the policy for sim-to-real transfer. At each stage, the reward and episode designs of the MDP may vary in order to produce more effective policies for the objectives of the given stage.
In the rest of this section, we focus on the details of Stages 2 and 3 of training, which share the same reward and episode design and cover both multi-goal training and domain randomization. Stage 1 training, which differs in the choice of hyperparameters, is detailed in Appendix A.
# B. Reference Motion
To initialize the training process, we provide a single jumping reference motion. The reference motion is a human-authored animation of Cassie jumping in place, created in a 3D creation suite [36], as presented in Fig. 2. This animated jump has an apex foot height of $0.5\mathrm{m}$, an apex pelvis height of $1.1\mathrm{m}$, and a timespan $T_{J}$ of 1.66 seconds (i.e., $T_{J} = 1.66$). This reference motion is only a kinematically-feasible motion for the agent and is not optimized to be dynamically feasible. After the end of the jumping animation, the reference motion is set to a fixed standing pose for the robot.

TABLE II: The components of the reward $r_t$, which is a weighted summation of the listed items. The weight of each term is scheduled based on the jumping phase and training stage.

<table><tr><td rowspan="3">Reward Component $\mathbf{r}$</td><td colspan="4">Weight $\mathbf{w}$</td></tr><tr><td colspan="2">Stage 1</td><td colspan="2">Stages 2, 3</td></tr><tr><td>$t \le T_J$</td><td>$t > T_J$</td><td>$t \le T_J$</td><td>$t > T_J$</td></tr><tr><td colspan="5">Reference Motion Tracking</td></tr><tr><td>Motion position: $r(\mathbf{q}_m, \mathbf{q}_m^r(t))$</td><td>15</td><td>15</td><td>7.5</td><td>15</td></tr><tr><td>Pelvis height: $r(q_z, q_z^r(t) + c_z)$</td><td>5</td><td>5</td><td>3</td><td>3</td></tr><tr><td>Foot height: $r(e_z, e_z^r(t) + c_z)$</td><td>10</td><td>10</td><td>10</td><td>10</td></tr><tr><td colspan="5">Task Completion</td></tr><tr><td>Pelvis position: $r(q_{x,y}, c_{x,y})$</td><td>12.5</td><td>12.5</td><td>15</td><td>15</td></tr><tr><td>Pelvis velocity: $r(\dot{q}_{x,y}, \dot{q}_{x,y}^d)$</td><td>0</td><td>3</td><td>12.5</td><td>12.5</td></tr><tr><td>Orientation: $r(q_{\psi,\theta,\phi}, [0,0,c_\phi])$</td><td>12.5</td><td>12.5</td><td>10</td><td>10</td></tr><tr><td>Angular rate: $r(\dot{q}_{\psi,\theta,\phi}, [0,0,\dot{q}_\phi^d])$</td><td>3</td><td>3</td><td>10</td><td>10</td></tr><tr><td colspan="5">Smoothing</td></tr><tr><td>Ground impact: $r(F_z, 0)$</td><td>5</td><td>0</td><td>10</td><td>0</td></tr><tr><td>Torque consumption: $r(\tau, 0)$</td><td>3</td><td>3</td><td>3</td><td>15</td></tr><tr><td>Motor velocity: $r(\dot{\mathbf{q}}_m, 0)$</td><td>0</td><td>15</td><td>0</td><td>25</td></tr><tr><td>Joint acceleration: $r(\ddot{\mathbf{q}}, 0)$</td><td>3</td><td>10</td><td>0</td><td>5</td></tr><tr><td>Change of action: $r(\mathbf{a}_t, \mathbf{a}_{t+1})$</td><td>0</td><td>0</td><td>10</td><td>10</td></tr></table>
# C. Reward
The design of the reward function is important to encourage the robot to jump with agility. We further split the reward within a stage into two phases: before landing $(t \leq T_J)$ and after $(t > T_J)$. The reward needs to vary between these phases because the desired behavior of the robot is different: performing an aggressive jump versus standing stationary.
We here define a function:

$$
r(u, v) = \exp(-\alpha \|u - v\|_2^2) \tag{1}
$$
where $r(u,v) \in (0,1]$ defines a reward component that encourages the two vectors $u$ and $v$ to be as close as possible, and $\alpha > 0$ is a scaling factor that balances the units. The reward $r_t$ the agent receives at each timestep is a weighted summation of different components, $r_t = (\mathbf{w} / \|\mathbf{w}\|_1)^T\mathbf{r} \in [0,1]$. The component vector $\mathbf{r}$ and weight vector $\mathbf{w}$ are detailed in Table II. The reward consists of three main groups of components: reference motion tracking, task completion, and smoothing terms.
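Eq. (1) and the normalized weighted sum can be sketched in a few lines (function names are ours, not from the paper's code):

```python
import math

def r(u, v, alpha=1.0):
    """Eq. (1): r(u, v) = exp(-alpha * ||u - v||_2^2), always in (0, 1]."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-alpha * sq_dist)

def total_reward(components, weights):
    """r_t = (w / ||w||_1)^T r, which stays in [0, 1] since each r_i <= 1."""
    w_sum = sum(abs(w) for w in weights)
    return sum(w / w_sum * c for w, c in zip(weights, components))

# Perfect tracking of every component yields the maximum reward of 1.
print(total_reward([r([0.0], [0.0]), r([1.0, 2.0], [1.0, 2.0])], [15, 5]))  # -> 1.0
```

Because the weights are normalized by their 1-norm and each component is bounded by 1, the per-timestep reward is automatically bounded, which keeps returns comparable across reward schedules.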
The agent is encouraged to track the reference motor positions by $r(\mathbf{q}_m, \mathbf{q}_m^r(t))$, the pelvis height by $r(q_z, q_z^r(t) + c_z)$, and the foot height by $r(e_z, e_z^r(t) + c_z)$ at the current timestep $t$. However, as recorded in Table II, the reference motion tracking terms have a relatively small weight during multi-goal training because we want the agent to discover diverse maneuvers, such as jumping to different locations, for which the jumping-in-place reference motion may no longer be reasonable.
The task completion reward, on the contrary, is designed to dominate the others during multi-goal training. We first include $r(q_{x,y},c_{x,y})$ and $r(q_{\psi,\theta,\phi},[0,0,c_{\phi}])$ to encourage the agent to reach the desired location and orientation and to stay there after it lands in order to accomplish the assigned task $\mathbf{c}$. Furthermore, pelvis linear velocity tracking $r(\dot{q}_{x,y},\dot{q}_{x,y}^{d})$ and angular rate tracking $r(\dot{q}_{\psi,\theta,\phi},[0,0,\dot{q}_{\phi}^{d}])$ are introduced to shape the sparse task reward, where $\dot{q}_{x,y}^{d} = c_{x,y} / T_{J}$ and $\dot{q}_{\phi}^{d} = c_{\phi} / T_{J}$. Moreover, although the task does not include the pelvis roll and pitch angles $q_{\psi,\theta}$, driving them to zero helps to stabilize the robot's pelvis.
We further introduce smoothing terms that are less important than task completion but carry a larger weight than the motion tracking terms. For example, we encourage the robot to produce less ground impact force $F_{z}$ during its jump via $r(F_{z},0)$, to damp the body's oscillation after it lands via the motor velocity reward $r(\dot{\mathbf{q}}_m,0)$ and joint acceleration reward $r(\ddot{\mathbf{q}},0)$, and to be more energy efficient via $r(\tau,0)$. Moreover, the importance of a stationary standing pose is highlighted by the relatively large weights on the torque consumption and motor velocity rewards after the robot lands $(t > T_J)$; this is because the dynamics randomization introduced in Stage 3 makes the environment noisy and causes oscillation of the body pose during standing. We also introduce an additional component in Stages 2 and 3, the change-of-action reward $r(\mathbf{a}_t,\mathbf{a}_{t + 1})$, to further smooth the aggressive maneuvers the robot may perform when jumping over a long distance.
Remark 1: Although the switching time $(T_J)$ of the reward is fixed and not tuned for jumping to different locations, as we show in our experiments, the robot learns different flight times for different targets (Fig. 7b).
# D. Episode Design
A careful reward design alone may not be enough, since it is challenging to encourage the agent to jump. The robot may keep failing to stabilize itself while learning to jump, and may therefore easily adopt very conservative but stable behaviors, because these quickly improve the return. For example, the robot may just stand, or jump in place without completing the task, and still obtain some suboptimal return. To prevent this, we note that a careful design of the episode can also facilitate the training of dynamic jumping maneuvers.
In the stages for multi-goal training, the maximum episode length is set to 2500 timesteps, which lasts about 76 seconds. During such an episode, the robot is commanded to jump to a random target after a random time interval of standing, with all random values sampled uniformly. The task is sampled from $c_x \sim U(-0.5, 1.5)$ m, $c_y \sim U(-1.0, 1.0)$ m, $c_z \sim U(-0.5, 0.5)$ m, and $c_\phi \sim U(-100^\circ, 100^\circ)$, and the standing-phase duration is drawn from $U(1, 15)$ seconds. This "jump $\leftrightarrow$ stand" switch is repeated, which improves the robustness of the learned policy to different initial states by performing repeated jumps over an episode. Moreover, compared to Stage 1 where the agent is asked to jump at $t = 0$, starting from Stage 2 there is a high probability that the robot will start each episode with a standing skill.
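The goal and standing-interval sampling above can be sketched as follows (the ranges are the flat-ground ones quoted in this subsection; the function name is illustrative):

```python
import random

def sample_episode_command():
    """Sample one jump command c = [c_x, c_y, c_z, c_phi] and the length of
    the standing phase that precedes it."""
    c = [
        random.uniform(-0.5, 1.5),     # c_x, forward/backward target [m]
        random.uniform(-1.0, 1.0),     # c_y, lateral target [m]
        random.uniform(-0.5, 0.5),     # c_z, change of floor height [m]
        random.uniform(-100.0, 100.0), # c_phi, turning direction [deg]
    ]
    stand_time = random.uniform(1.0, 15.0)  # seconds of standing before the jump
    return c, stand_time
```

Within one 2500-timestep episode, this sampler would be invoked repeatedly to realize the "jump ↔ stand" switching.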
The episode is terminated early if the robot falls over (pelvis height $q_{z} < 0.55$ m or the tarsus joints hit the ground) to prevent it from accruing further rewards. We also emphasize the importance of foot height tracking and task completion to the robot by terminating the episode early if: (i) the foot height tracking error $|e_z - e_z^d|$ is larger than the bound $E_e$, which is set at $0.32\mathrm{m}$, or (ii) the robot does not arrive at the given target after it lands ($t > T_J$), i.e., $[||q_{x,y} - c_{x,y}||_2, |q_\phi - c_\phi|] > E_t$ where $E_t = [0.35\mathrm{m}, 35^\circ]$. Please note that we set a relatively small task completion error bound $E_t$ while keeping a large tolerance $E_e$ on the foot height tracking error. With this design, the robot is allowed to deviate from the reference foot trajectory to find a better foot height trajectory for different tasks, and it has more incentive to complete the task by landing close to the target and collecting more rewards over a longer episode.
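Collecting these termination conditions into a single predicate gives a sketch like the following (the error quantities are assumed to be computed elsewhere; the signature is ours):

```python
def should_terminate(q_z, tarsus_contact, foot_err, t, T_J, pos_err, yaw_err):
    """Early-termination check with E_e = 0.32 m and E_t = [0.35 m, 35 deg]."""
    if q_z < 0.55 or tarsus_contact:                      # robot has fallen over
        return True
    if foot_err > 0.32:                                   # foot-height bound E_e
        return True
    if t > T_J and (pos_err > 0.35 or yaw_err > 35.0):    # missed the target
        return True
    return False
```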
Remark 2: The large foot tracking error tolerance $E_{e}$ also allows the robot to perform small hops after it lands. The robot is encouraged to stand by the foot height tracking reward, but can dynamically switch to hopping, including hopping to different places as long as it stays within $E_{t}$, for better robustness. We do not specifically train or encourage the agent to deviate from the assigned task for robustness.
In the first stage of training, imitating a single jump introduces an inductive bias into the policy towards performing jumping behaviors. In the later stages of training, combined with the early termination conditions and the motion imitation reward, this inductive bias leads the robot to favor jumping to different targets, instead of using other skills such as walking.
# E. Dynamics Randomization
In order to succeed during sim-to-real transfer, we introduce extensive randomization of the dynamics parameters of the environment in Stage 3. The randomized dynamics properties are fully listed in Table III in Appendix B. During training at this stage, the value of each dynamics parameter is uniformly sampled at each episode from the range listed in Table III. We consider three sources of the sim-to-real gap: (1) modeling errors, (2) sensor noise, and (3) communication delay between the high-level computer running the RL policy and the robot's low-level computer.
In order to robustify the policy against modeling errors, we randomize the floor friction, the robot's joint damping, the link masses and inertias, and the positions of the links' Centers of Mass (CoM). Specifically, to deal with the mismatch of motor dynamics between the simulation and hardware, we use a larger upper bound on the joint damping (4 times the default value) to approximate motor aging issues on the hardware. We also randomize the PD gains used in the joint-level PD controllers (since our policy outputs target motor positions), within $\pm 30\%$ of the default values. Such a change diversifies the motor responses the robot is trained on and enhances the robustness to changes in motor dynamics during hardware deployment. Furthermore, specific to Cassie, whose legs have leaf springs connecting the passive joints $q_{5,6}$, the parameters of the springs are important because the springs undergo significant displacement during the take-off and landing phases. Therefore, we introduce a $20\%$ uncertainty on the spring stiffness during training. We empirically found that the randomization of the motor dynamics and spring stiffness has a non-trivial effect on succeeding during sim-to-real transfer of the bipedal jumping skills.



Fig. 3: The architecture of the goal-conditioned jumping policy $\pi_{\theta}$. The policy outputs the desired motor positions $\mathbf{q}_m^d$, which are used by joint-level PD controllers to generate the motor torques $\tau$ on the robot. The input to the policy includes the goal $\mathbf{c}$, which specifies the landing targets, the reference motion $\mathbf{q}_t^r$, which provides the robot a short preview of the reference trajectory, and a short 4-timestep history of the robot's input (robot's action $\mathbf{a}_{t-1}$) and output (robot's feedback $\mathbf{q}_t^o$). The policy is also provided with a long-term 2-second I/O history, which is first encoded by a 1D CNN. The policy updates at $33\mathrm{Hz}$ while the rest of the pipeline runs at $2\mathrm{kHz}$.
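A per-episode sampler for a subset of these parameters might look like the following. Only the ranges quoted in the text are used (damping up to 4x default, PD gains ±30%, spring stiffness ±20%); the full set of randomized properties and their exact ranges are in Table III, and the lower damping bound of 1.0x here is an assumption:

```python
import random

def sample_dynamics(defaults):
    """Resample a subset of the randomized dynamics parameters for one episode.
    `defaults` maps parameter names to their nominal values."""
    return {
        "joint_damping":    defaults["joint_damping"] * random.uniform(1.0, 4.0),
        "pd_gains":         defaults["pd_gains"] * random.uniform(0.7, 1.3),
        "spring_stiffness": defaults["spring_stiffness"] * random.uniform(0.8, 1.2),
    }
```

In a MuJoCo training loop, the sampled values would be written back into the model before each episode's reset.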
The sensor noise from the joint encoders and IMU, and the estimation error of the base linear velocity, are simulated as Gaussian noise whose mean is sampled at each episode from the ranges in Table III.
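This biased-Gaussian noise model can be sketched as below; the mean range and standard deviation here are placeholder assumptions, since the actual values are specified per sensor in Table III:

```python
import random

def make_noise_model(mean_range=(-0.01, 0.01), std=0.005):
    """Draw a noise mean once per episode, then corrupt each sensor reading
    with Gaussian noise centered on that biased mean."""
    bias = random.uniform(*mean_range)
    def corrupt(reading):
        return reading + random.gauss(bias, std)
    return corrupt
```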
# V. TRAINING SETUP
We now describe the control policy, which is optimized by reinforcement learning through the multi-stage training pipeline.
# A. Policy Architecture
Our policy $\pi_{\theta}$ is represented by a deep neural network with parameters $\theta$. As shown in Fig. 3, it has two components: a base network represented by a multilayer perceptron (MLP), and a long-term history encoder represented by a 1D convolutional neural network (CNN). The policy operates at $33\mathrm{Hz}$. Each action $\mathbf{a}_t$ specifies the target motor positions $\mathbf{q}_m^d$ for the robot. The action is first passed through a Low Pass Filter (LPF) [15, 37, 49], which smooths the motor targets before they are applied to the joint-level PD controllers and further complements the smoothing rewards. The PD controllers operate at $2\mathrm{kHz}$ to generate the motor torques $\tau \in \mathbb{R}^{10}$ that drive the movements of the joints.
The input to the policy at timestep $t$ contains four components: the goal $\mathbf{c}$ introduced in Sec. III-B, a preview of the reference trajectory $\mathbf{q}_t^r$, a short-term history of previous actions and states (the robot's Input/Output), and a long-term I/O history over the last 2 seconds. The preview of the reference trajectory $\mathbf{q}_t^r = [q_z^r(t), \mathbf{q}_m^r(t + 1), \mathbf{q}_m^r(t + 4), \mathbf{q}_m^r(t + 7)]$ provided in the robot's observation contains the current reference pelvis height $q_z^r(t)$ and the reference motor positions $\mathbf{q}_m^r$ sampled 1, 4, and 7 timesteps into the future. Providing a segment of the future reference trajectory as input gives the policy more information, such as future joint positions, velocities, and other higher-order terms, as has been done in [37, 49, 61]. To close the control loop, we provide the robot direct access to a short-term I/O history $\left(\mathbf{q}_{t-4:t}^{o}, \mathbf{a}_{t-4:t-1}\right)$ over the previous 4 timesteps (about 0.12 seconds). The I/O history enables the policy to infer the dynamics of the system from past observations. The task $\mathbf{c}$, the reference motion $\mathbf{q}_t^r$, and the short-term I/O history at the current timestep $t$ are directly passed as inputs to the base MLP.
For sim-to-real transfer, a short-term history may not provide adequate information to control dynamic maneuvers on a high-dimensional system. For example, during a jump, the landing event is affected by the angular momentum gained before take-off, and the interval between these two events can be much longer than 0.12 seconds. Therefore, we include an additional input in the form of a long-term I/O history over the past 2 seconds, which contains 66 timesteps of past observation and action measurements $\left(\mathbf{q}_{t-65:t}^{o}, \mathbf{a}_{t-66:t-1}\right)$. The timespan of this long I/O history is designed to cover the duration of a jump, to help the policy implicitly infer the robot's dynamics, traveled trajectory, and contacts. To encode this long sequence of observations, we use a 1D CNN to compress it into a latent representation before providing it as an input to the base MLP. As we will see in Fig. 5, both the long-term and short-term histories are needed for better learning performance.
In this work, the CNN encoder consists of 2 hidden layers whose [kernel size, number of filters, stride] are $[6,32,3]$ and $[4,16,2]$, respectively, with ReLU activations and no padding. The output of the CNN is flattened and concatenated with the inputs of the base MLP. The MLP has two hidden layers with 512 tanh units each, followed by an output layer representing the mean of a Gaussian action distribution with a fixed standard deviation of $0.1I$.
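These dimensions can be checked with the standard no-padding convolution arithmetic: the 66-step history shrinks to $(66-6)/3+1 = 21$ steps after the first layer and $(21-4)/2+1 = 9$ steps after the second, so the flattened latent passed to the MLP would have $9 \times 16 = 144$ values:

```python
def conv1d_out_len(n, kernel, stride):
    """Output length of a 1D convolution with no padding."""
    return (n - kernel) // stride + 1

history_len = 66                        # 2 s of I/O history at 33 Hz
l1 = conv1d_out_len(history_len, 6, 3)  # first layer: kernel 6, stride 3 -> 21
l2 = conv1d_out_len(l1, 4, 2)           # second layer: kernel 4, stride 2 -> 9
latent_dim = l2 * 16                    # 16 filters in the last layer -> 144
print(l1, l2, latent_dim)  # -> 21 9 144
```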
# B. Training Details
Empirically, we found that simultaneously performing both turning and jumping to different elevations is very difficult for Cassie, which does not have a torso. Due to this hardware limitation, we choose to train two separate goal-conditioned policies: a flat-ground policy that is specialized for jumping without elevation changes, i.e., $c_{z} = 0$, and a discrete-terrain policy that is trained to jump onto platforms with different elevations without turning ($c_{\phi} = 0$).
Proximal Policy Optimization (PPO) [56] is used to train all policies $\pi_{\theta}$ in simulation, with a value function represented by a 2-layer MLP that has access to the ground-truth observations. Due to differences in the complexity of the training stages, the three stages are trained for 6k, 12k, and 20k iterations, respectively. Each iteration collects a batch of 65536 samples.
# VI. SIMULATION VALIDATION
Having introduced our methodology for training goal-conditioned jumping policies, we next validate the proposed method in simulation (MuJoCo). In this section, we aim to address two questions: (1) what are the advantages of the proposed policy architecture compared to models used in prior work, and (2) whether training with multiple tasks can further improve the robustness of the policy over single-goal training, by allowing the robot to utilize more diverse maneuvers to recover from unstable states or unknown perturbations.



Fig. 4: Illustration of the baseline policy structures used to train the policy for bipedal jumping. (a) Ours: the proposed structure discussed in detail in Fig. 3. (b) Residual policy that has the same input structure as our method but outputs a residual term added to the reference motor positions [34, 70]. (c) Long History Only policy that only has access to a long-term I/O history (we still provide the robot's immediate feedback to the base, as suggested by Peng et al. [48]). (d) Short History Only policy that is only provided with a short-term I/O history [37]. We also compare with the RMA [31]/Teacher-Student [34] training strategy, where an (e) expert policy with access to privileged environment information (Table III) is first trained by RL and is later utilized to train the (f) RMA (student) policy by supervised learning. The RMA can be further finetuned by RL using (g) A-RMA [32]. While the short I/O history is not included in the original RMA [31] or TS [34], it is included in this benchmark for a fair comparison. Blocks are shaded if their parameters are not updated. Dashed lines indicate that parameters are copied.
# A. Baselines
To answer the first question, we benchmark our proposed policy architecture with several baselines illustrated in Fig. 4. All policies are trained with multiple goals, i.e., jumping to different landing locations and turning directions with no change of elevation, using the training schematic shown in Fig. 2, and are trained with 3 different random seeds. The details of baseline models are described in Appendix C.
To address the second question, we obtained two single-goal policies using the proposed policy structure, as detailed below:
- Single Goal: a policy that is trained on a single jumping-in-place task and extensive dynamics randomization as listed in Table III.
- Single Goal w/ Perturbation: a policy similar to the single-goal policy that is also trained with a randomized perturbation wrench (6 DoF) applied to the robot's pelvis. The external forces and torques are sampled uniformly from $[-20\mathrm{N}, -5\mathrm{Nm}]$ to $[20\mathrm{N}, 5\mathrm{Nm}]$ and are applied to the robot's pelvis for a random time interval ranging from 0.1 to 2.0 seconds.
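The perturbation wrench used for this second baseline can be sketched as follows (the function name is illustrative):

```python
import random

def sample_wrench():
    """Sample a random 6-DoF perturbation wrench for the pelvis: forces in
    [-20, 20] N, torques in [-5, 5] Nm, applied for 0.1 to 2.0 s."""
    force = [random.uniform(-20.0, 20.0) for _ in range(3)]   # [N]
    torque = [random.uniform(-5.0, 5.0) for _ in range(3)]    # [Nm]
    duration = random.uniform(0.1, 2.0)                       # [s]
    return force, torque, duration
```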
We compare these baselines using two metrics: (1) learning performance in Sec. VI-B, and (2) the ability to generalize to dynamics parameters that lie outside of the training distributions in Sec. VI-C. Both metrics are important for sim-to-real transfer: the first shows how well the policy can perform during training, and the second evaluates robustness to changes in the environment that are not seen during training, as can be the case during sim-to-real transfer.

Fig. 5: Benchmark of learning curves for different policy structures in Stage 3 (multi-goal training with dynamics randomization). The curves are the average normalized returns over 3 random seeds, while the shaded areas enclose the min and max values among the seeds. The normalized return is the return divided by the max episode length and lies in the range [0, 1]. Our method shows similar performance to the expert policy, which is used to supervise the RMAs and has access to the privileged environment parameters. A-RMA shows the second-best performance but requires significantly more samples than the proposed method, followed by RMA. The policies with short history only or long history only show similar learning performance, but are somewhat worse than RMA in terms of return. The residual policy shows the worst performance because the reference motion added to the policy's action prevents the agent from exploring more diverse maneuvers.
# B. Policy Structure Choice
The learning curves from Stage 3 (multi-goal learning with dynamics randomization) using our policy structure and baselines are presented in Fig. 5. The learning curves at early training stages (learning a single task in Stage 1 and multiple tasks in Stage 2) are available in Fig. 10 in Appendix D. The same hyperparameters and reward functions are used for every training stage.
According to Fig. 5 (and Fig. 10), the residual structure, drawn as the purple curve, shows the worst learning performance over all training stages. The reason is that the provided reference motion is a dynamically-infeasible animation, which may force the robot to spend more effort learning to correct these default motions, and prevents it from exploring more diverse trajectories and discovering motions outside the range of the reference motion.
The baselines using short history only (orange curve) and long history only (blue curve) show similar learning performance. However, if we combine the two by providing the policy with a long-history encoder and direct access to the short history, which results in our method, the learning performance is enhanced to a large extent, as shown by the red curve in Fig. 5. This showcases that providing the policy with a long history alone is not enough, because the robot may need immediate feedback that can be hidden by the long-history encoder. Providing the policy with direct access to the short history addresses this, and the agent learns to utilize both sources of information.
Remark 3: We note that there is other work using RNNs with LSTM [48, 58, 59] or TCN [34] to encode the long-term I/O history. We hypothesize that providing the policy direct access to the short history is beneficial not only for the 1D CNN encoder but also for other neural network structures that capture temporal information, such as TCN, LSTM, GRU, and Transformer. We choose the 1D CNN in this work because it is easier to train.



(a) With Consistent Unknown Lateral Perturbation Force



(b) With Errors in Center of Mass Positions of All Links

Fig. 6: Robustness comparison among three policies: (i) trained on a single task (jumping in place) with dynamics randomization, (ii) trained on a single task with dynamics randomization and random perturbations, and (iii) trained on multiple tasks with dynamics randomization but without random perturbations (proposed). The testing scenarios are outside the training setting for all three policies. The single-goal policies fail to stabilize the robot, even the one trained with extensive perturbations. The goal-conditioned policy, which is trained with diverse jumping tasks but without perturbations, succeeds in stabilizing the robot by exploiting the learned skills. The goal-conditioned policy is able to deviate from the commands (jumping in place) and utilize a lateral jump to stay robust to the lateral external force, and two forward jumps to adapt to the forward CoM offset.
The comparison between our method and the RMA/Teacher-Student (TS) policies (green curves) is also interesting. During the training of the goal-conditioned policy with dynamics randomization, our method shows only a little degradation compared to the expert policy. This actually showcases an advantage of our method: it can be zero-shot transferred to the real world, while the expert policy, which requires privileged information, cannot. After training the expert policy, RMA and A-RMA are trained for 3k and 5k additional iterations, respectively, as shown in Fig. 5. We found that RMA suffers a large degradation compared to the expert policy due to the regression loss, and A-RMA is necessary to finetune the base policy in order to further improve the return. RMA shows a better return than the policies with only a short history or only a long history, which is aligned with the findings of previous work [31, 32]. However, even after A-RMA converges, its return is a bit worse than our method's, while RMA and A-RMA require additional training and significantly more samples.
Remark 4: The original implementations of RMA [31] and TS [34] only provide the robot's very last I/O pair [31] or last state feedback [34] besides the long-term history encoder. In the implementation of RMA/TS in Fig. 4, we added the short-term I/O history, which can improve the learning performance, in order to have a fair comparison. Furthermore, the long-term I/O history encoder used in RMA and A-RMA is the same as the one used in the proposed method, which shows better learning performance than the original encoder proposed in [31, 32], as shown in Fig. 11b.
Summary of the Result: From the ablation study above, we can identify three factors that improve the learning performance in our setting for dynamic locomotion control: (1) using desired motor positions as the action space (in contrast to residuals), (2) providing the policy with direct access to the short-term I/O history in addition to a long-term I/O history, and (3) training the policy end-to-end instead of separating the training process into teacher and student. This combination leads to our proposed method.
# C. Advantages of the Versatile Policy
In order to validate the advantages brought by multi-goal training, we further compare our goal-conditioned policy with the single-goal policies. The two single-goal policies are trained with the same amount of samples and dynamics randomization as the proposed one; their learning curves are recorded in Fig. 11a. During the test in simulation, we command the robot to perform an in-place jump in an environment that the robot has not been trained on. As presented in Fig. 6, we conducted two tests where (1) a consistent lateral perturbation force is applied to the robot's pelvis, and (2) the CoMs of all links are set to be $+8$ cm off from the default positions in all dimensions, while all other dynamics parameters are set to their default values.
In both tests, the single-goal policies fail to control the robot, while the goal-conditioned policy succeeds in stabilizing the robot and performing a jump. Specifically, the policies trained with a single goal fail outright during standing, even the one trained with extensive external perturbations that "force" the robot to explore more maneuvers by perturbing it away from a nominal jump. On the contrary, the policy trained with multiple goals, such as forward and lateral jumps, without perturbations during training, is able to generalize the learned tasks, exploit them to stabilize the robot, and pick the best jumping maneuver even if it is not commanded. For example, while commanded to jump in place, the goal-conditioned policy utilizes a lateral jump that it has learned to stabilize the robot in the presence of the lateral force (Fig. 6a(iii)), and two emergent forward jumps to adapt to the CoM errors in the forward direction (Fig. 6b(iii)). This benchmark highlights the advantage of learning with multiple tasks, which makes the policy more robust.
Having conducted an extensive ablation study in simulation, we show that the proposed policy structure and multi-goal training significantly improve the robustness of the policy over other policy structures or single-goal policies.
# VII. EXPERIMENTS
We now deploy the goal-conditioned policies obtained in simulation, the flat-ground policy that is trained on different goals to jump to various locations and turning directions, and

(a) Different Jumps using the Flat-ground Policy

(b) Different Jumps using the Discrete-terrain Policy
Fig. 7: Snapshots of Cassie performing different jumps using the proposed goal-conditioned policies. The snapshots are aligned with timestamps. The tags in the figures indicate the given landing targets. (a) Using a single policy that is specialized on flat ground, the robot is able to (i) jump in place while turning to $-60^{\circ}$, (ii) jump $0.3\mathrm{m}$ backward, and (iii) jump $1\mathrm{m}$ forward, respectively. During the $1\mathrm{m}$ jump, the robot utilizes a forward hop to reach the goal after landing at $0.5\mathrm{m}$ on the first jump. (b) The robot utilizes a single discrete-terrain policy to jump to different locations and elevations. This single policy can change the contact plan for different tasks. For example, the flight phase of the $1.4\mathrm{m}$ forward jump (ii) is the longest, while it is the shortest when the robot jumps onto the $0.44\mathrm{m}$ high elevation (i). The robot lands at the target (tag) with negligible errors in all of these jumps without global position feedback.
the discrete-terrain policy that is specialized in jumping to variable locations and elevations, on the hardware of Cassie. As shown in Fig. 1, both policies can successfully control the robot in the real world, without finetuning.
Besides the ability to succeed in sim-to-real transfer, in this section we aim to validate two hypotheses: 1) whether the policy trained in simulation can complete the same task in the real world, and 2) whether the goal-conditioned policy is still able to exploit the learned tasks to stabilize the robot after being transferred to the real world. The experiments can be best seen in the accompanying video (https://youtu.be/aAPSZ2QFB-E). Please note that in all of the experiments, the robot does not have global position feedback, i.e., once it starts to move, it knows neither the distance to the landing target nor the distance to the ground.
# A. Task Completion in the Real World
We first test the flat-ground policy on the robot in three distinct tasks: jumping in place while turning to negative 60 degrees, jumping $0.3\mathrm{m}$ backward, and jumping forward to land at a target $1\mathrm{m}$ ahead. As recorded in Fig. 7a, controlled by this goal-conditioned policy, the robot is able to complete all three tasks. For example, during the turning task, the robot rotates to $-55^{\circ}$ while in the air, and lands at the same place where it took off (marked by a tag on the ground in Fig. 7a(i)). During the backward jump, unlike the previous task, the robot leans backward before taking off (0.6-0.7 sec in Fig. 7a(ii)) and lands accurately at the target tag on the ground. To jump $1\mathrm{m}$ forward, the robot adopts a different maneuver: it leans forward before the flight phase and pushes itself off the ground with greater force, which results in a longer flight phase and travel distance than in the previous two tasks. We also observe that the robot first
lands at a $0.5\mathrm{m}$ landmark, but quickly executes a forward hop when its legs touch the ground (1.6 sec in Fig. 7a(iii)), and finally lands at the $1\mathrm{m}$ target tag. We note that such a consecutive jumping maneuver does not occur during the same task in simulation with the robot's nominal dynamics model. This experiment highlights two favorable features of the proposed policy: it can (1) adapt to different system dynamics (from sim to real) and (2) deviate from the reference motion and utilize multiple contacts to complete the given task (jumping to the target).
We then validate the discrete-terrain policy on three tasks: jumping $1\mathrm{m}$ ahead, $1.4\mathrm{m}$ ahead, and to a target that is $0.88\mathrm{m}$ ahead and $0.44\mathrm{m}$ above the ground, as presented in Fig. 7b. The policy shows the capacity to control the robot to jump over a distance/elevation and land accurately on the given target. We notice that the policy is able to adjust the robot's maneuvers/contact plan to jump over different distances/elevations. For example, compared to the $1\mathrm{m}$ jump (Fig. 7b(iii)), the robot takes off earlier while landing later during the $1.4\mathrm{m}$ jump (Fig. 7b(ii)). Such a change is reasonable, as the robot needs a longer flight phase/larger take-off velocity in order to land at a farther location. Furthermore, when the robot is commanded to jump onto a $0.44\mathrm{m}$ table, the policy jumps more vertically (0.8 sec in Fig. 7b(i)) compared to the $1.4\mathrm{m}$ jump (0.6 sec in Fig. 7b(ii)) at the beginning of the flight phase, while lifting the robot's legs much higher in order to jump higher. Note that the robot makes contact with the platform much earlier than in the other jumps on the ground, but this single policy is still able to stabilize the robot under these different landing events. Furthermore, all three experiments show that the robot controlled by the proposed policy can land accurately on the given target (the tags in Fig. 7b), which is challenging: the robot's motion is ballistic during the flight phase, and a small

Fig. 8: The profiles of the robot's joint positions when it is commanded to jump and turn $-60^{\circ}$ in place in simulation and the real world. We observe a large deviation of the joint profiles between sim and real; e.g., the tarsus joints, which are passive and driven by leaf springs, show a significant difference during the sim-to-real transfer. Moreover, the flight phase in the real world is delayed compared with the one in simulation. Such errors highlight a large sim-to-real gap, but our policy is robust to it and succeeds in controlling the robot to the given target.
error during take-off may result in a large deviation from the landing target. In these experiments, the proposed policy is able to adapt to the dynamics of the robot hardware, adjust the robot's pose during take-off, and accelerate to a take-off velocity that lands the robot on the target.
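To make this sensitivity concrete, a back-of-the-envelope ballistic calculation (with illustrative take-off numbers, not measurements from our experiments) shows how a small take-off speed error shifts the landing point:

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def landing_distance(speed, pitch_rad):
    """Ballistic range of the CoM, taking off and landing at the same height."""
    return speed**2 * np.sin(2.0 * pitch_rad) / G

# Hypothetical take-off for a ~1 m jump: numbers are illustrative only.
v_nom, theta = 3.13, np.deg2rad(45.0)
d_nom = landing_distance(v_nom, theta)          # ~1.0 m
d_err = landing_distance(v_nom * 1.05, theta)   # 5% faster take-off

print(f"nominal landing: {d_nom:.2f} m, with 5% speed error: {d_err:.2f} m")
print(f"deviation: {d_err - d_nom:.2f} m")      # ~0.10 m overshoot
```

Because the range scales with the square of the take-off speed, even a 5% speed error moves the landing point by roughly 10 cm, which is why accurate landings without global position feedback are nontrivial.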
Remark 5: During the long jump (e.g., Fig. 7b(ii)), the robot leans its body forward at a large angle while pushing off from the ground, swings its legs forward during descent, and rotates its body forward w.r.t. the contact points after it lands. Such a maneuver is very close to what is observed when a human athlete performs a standing jump [44, Fig. 1A]. Similar to humans, our robot's long jump skill is also learned during training, and it is very different from the jumping-in-place reference motion we provided.
# B. Sim-to-Real Gap
In order to further understand the difficulty of succeeding in the robot jumping experiments in Fig. 7, we take a closer look at the sim-to-real gap. We record the robot's joint position profiles during a jump in simulation with the robot's nominal dynamics parameters and on the robot's hardware. The profiles for a turning task ($-60^{\circ}$, Fig. 7a(i)) using the flat-ground policy are presented in Fig. 8. According to the recorded profiles, the robot's actual joint positions deviate considerably between the simulation (blue curves) and the real world (red curves). For example, the maximum error on the tarsus joint position $q_{6}$ between sim and real is over 0.35 rad, which largely affects the robot's dynamics, considering that this joint is not actuated and is driven by a leaf spring whose nominal stiffness is $1250\mathrm{Nm / rad}$. A similar deviation is observed in other joints, such as the rotation joints $q_{2}$, thigh joints $q_{3}$, and knee joints $q_{4}$, which play a critical role during a jump and turn, and in other experiments using the discrete-terrain policy, as recorded in Fig. 12. Such a discrepancy highlights the large gap between simulation and the real world, but also showcases that, despite this gap, our methodology introduced in Sec. IV stays robust and succeeds in controlling the robot to accomplish the task.
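As a rough illustration of why this joint error matters (using a linear spring model with the nominal stiffness quoted above, not the full leg dynamics), the 0.35 rad tarsus deviation corresponds to a large torque discrepancy acting on the leg:

```python
# Back-of-the-envelope estimate, assuming a linear leaf-spring torque model
# tau = k * dq; this is a simplification, not the paper's dynamics model.
K_TARSUS = 1250.0   # nominal leaf-spring stiffness, Nm/rad (from the paper)
dq_max = 0.35       # max sim-to-real tarsus position error, rad (Fig. 8)

tau_gap = K_TARSUS * dq_max
print(f"spring torque discrepancy at the passive tarsus joint: {tau_gap:.0f} Nm")
```

Under this simplified model, the sim-to-real joint error implies hundreds of Nm of unmodeled passive torque, which gives a sense of the disturbance the policy must absorb.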
# C. Diverse and Robust Maneuvers by the Goal-Conditioned Policy
In order to push the limits of the proposed control policies, we further conduct more dynamic jumping experiments, as presented in Fig. 9 and Fig. 13. As shown in Fig. 9a, using the single flat-ground policy, the robot performs a large repertoire of dynamic jumping maneuvers, such as jumping in place (Fig. 9a(i)), jumping laterally (Fig. 9a(iii)), and multi-axes jumps such as blending lateral and forward jumps (Fig. 9a(iv)) or forward, lateral, and turning motions (Fig. 9a(v)). In these multi-axes jumps, the robot demonstrates more complex maneuvers. For example, the robot leans in the lateral direction while jumping forward and turning in order to land on the target that is $0.5\mathrm{m}$ ahead, $0.2\mathrm{m}$ to the robot's left, and turned $-45^{\circ}$, as shown in Fig. 9a(v). During some challenging tasks, the robot knows to utilize small hops to adjust its body pose after it lands in unstable states, as demonstrated in Fig. 9a(iv)(v).
Moreover, in order to test the robustness of the policy, we applied a backward perturbation force on the robot's pelvis at the apex of its jump, as shown in Fig. 9a(ii). Due to this perturbation, the robot leans backward while descending, and both of its toes pitch up after it lands, which makes the robot underactuated w.r.t. the contact points. However, after it lands, the robot quickly executes a backward hop, a maneuver learned during the multi-goal training. Through this hop, the robot adjusts its body pose during the flight phase and then lands stably afterward. The goal we gave to the robot in this test is to jump in place, and it is interesting to see that the robot deviates from it in order to recover from falling over.
Remark 6: The robot, controlled by the proposed jumping policy, shows that it does not rely on a pre-defined contact plan: it can break contact after it lands and make contact again when it needs to utilize impacts to stabilize itself. Such a capability is similar to contact-implicit trajectory optimization [8, 14, 33, 52, 77]. While such optimization schemes still need to be computed offline for legged robots, our work achieves this online.

(i) $(c_{x},c_{y},c_{\phi}) = (0\mathrm{m},0\mathrm{m},0^{\circ})$

(ii) $(c_{x},c_{y},c_{\phi}) = (0\mathrm{m},0\mathrm{m},0^{\circ})$ , with perturbation force in the air



(iii) $(c_{x}, c_{y}, c_{\phi}) = (0\mathrm{m}, -0.3\mathrm{m}, 0^{\circ})$
(iv) $(c_{x}, c_{y}, c_{\phi}) = (0.3\mathrm{m}, 0.3\mathrm{m}, 0^{\circ})$
(v) $(c_{x}, c_{y}, c_{\phi}) = (0.5\mathrm{m}, 0.2\mathrm{m}, -45^{\circ})$

(a) Different Jumps using the Flat-ground Policy

(ii) $(c_{x}, c_{z}) = (0.88\mathrm{m}, 0.17\mathrm{m})$

(i) $(c_{x}, c_{z}) = (0\mathrm{m}, 0\mathrm{m})$
(iii) $(c_{x}, c_{z}) = (0.88\mathrm{m}, 0.32\mathrm{m})$
(b) Different Jumps using the Discrete-terrain Policy
Fig. 9: Snapshots of various dynamic jumps performed by Cassie using the proposed policies. (a) The robot is able to perform a large repertoire of multi-axes jumps on flat ground. It shows the ability to stabilize itself after a backward external perturbation (ii) by deviating from the commanded in-place jump and exploiting the maneuvers learned from backward jumping tasks. The robot also leverages emergent hops after landing to stabilize itself from a large impact force while being commanded to stand, as in (iv) and (v). (b) Using a single discrete-terrain policy, the robot can not only jump in place (i) but also jump to different locations and elevations (ii)(iii)(iv).

(iv) $(c_{x}, c_{z}) = (0.64\mathrm{m}, 0.32\mathrm{m})$
In the additional tests of the discrete-terrain policy demonstrated in Fig. 9b, the robot shows the ability to land accurately on different given targets. While the changes in the commanded distance and elevation are relatively small, the policy still demonstrates the ability to adjust the robot's take-off maneuvers in order to jump to the given targets.
# VIII. DISCUSSION OF DESIGN CHOICES FOR RL-BASED LEGGED LOCOMOTION CONTROL
In this section, we discuss the lessons learned through the development of jumping controllers for bipedal robots using RL. We hope this can provide useful insights for future endeavors on applying RL for legged locomotion.
Short-term history complements the long-term history: Providing a long-term history of the robot's input (policy's action) and/or output (measurement feedback) has been used
in many prior efforts on RL-based robotic control [31, 34, 48, 58, 59]. While these prior systems show the advantages of using a long-term history over only the current state feedback [48, 59], the advantages over a short history [37, 43, 49] were not investigated. In this work, we demonstrate that incorporating both short-term and long-term history can be beneficial. The ablation study in Fig. 5 shows that providing only the long-term I/O history (Fig. 4c) may not be sufficient, even when combined with observations of the robot's current state alongside the history encoder, which is analogous to [31, 34, 48]. The learning performance of such a method shows no significant difference from the MLP policy with only a short-term history (Fig. 4d), as shown in Fig. 5. Our architecture (Fig. 4a) exhibits better learning performance because it has direct access to a short-term history while also having a long-term history encoder. During real-time control, the robot's recent feedback and policy outputs (I/O) could be more important than the observations that are further back in time. Although it is also part of the long-term history, such recent information can be obfuscated and hard to extract from the compressed latent representation produced by the long-term history encoder. The short-term history provides the model with direct access to the most recent observations. Therefore, one of the reasons the proposed method shows the best learning performance in Fig. 5 is not the usage of long-term history alone, but the combination of short-term and long-term history. Beyond jumping, this design decision may also benefit other locomotion skills.
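The dual-history idea can be sketched as a forward pass: a long I/O history is compressed into a small latent, while the most recent I/O steps are fed to the base network directly. The dimensions, layer sizes, and random weights below are illustrative stand-ins, not the trained network from this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, sizes):
    """Tiny random-weight MLP, a stand-in for trained layers."""
    for n_out in sizes:
        W = rng.standard_normal((x.shape[-1], n_out)) * 0.1
        x = np.tanh(x @ W)
    return x

# Illustrative dimensions (not the paper's exact sizes).
obs_dim, act_dim = 30, 10
long_T, short_T = 100, 4        # e.g., ~2 s vs ~0.08 s of I/O history at 50 Hz

io_hist = rng.standard_normal((long_T, obs_dim + act_dim))  # robot I/O history

# Long-term branch: compress the full history into a small latent.
latent = mlp(io_hist.reshape(-1), [128, 16])

# Short-term branch: recent I/O steps bypass the encoder, so recent feedback
# is not obfuscated by the compressed latent representation.
short = io_hist[-short_T:].reshape(-1)

goal = np.array([1.0, 0.0, 0.0])  # e.g., a commanded 1 m forward jump
action = mlp(np.concatenate([latent, short, goal]), [64, act_dim])
print(action.shape)  # (10,)
```

The key design choice this sketch illustrates is the concatenation step: the base network sees both the compressed latent and the raw recent I/O, so it never has to recover the latest measurements from the encoder's bottleneck.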
Encode environment parameters or robot's I/O history? Although the proposed architecture (Fig. 4a), aside from the introduction of the short I/O history, may resemble the architecture of RMA [31] or the Teacher/Student (TS) framework [34], which also have a long-term history encoder (Fig. 4e,f), the objective of the temporal encoder in this work is different from those methods [31, 34]. The long I/O history encoder in RMA or TS estimates human-selected environment parameters (e.g., floor friction and the robot's model parameters) by matching the predicted extrinsics from the teacher policy. The proposed method, in contrast, jointly trains the long I/O history encoder with the base policy and learns to directly utilize the robot's past I/O trajectories for control. The advantage is that the robot's I/O history implicitly contains more information than the environment parameters alone, such as impact events and contact wrenches. In this way, the robot has more freedom to extract information from the long I/O history without being restricted to estimating the pre-selected environment parameters. This is why the proposed method shows improvement over RMA/TS, which separates training into distinct teacher and student stages and also requires an additional finetuning stage by A-RMA, demanding more training time and data, as shown in Fig. 5. We also note that RMA/TS methods potentially have benefits when combined with external sensors like vision [2, 43], benefits that the proposed method may not share.
Robustness comes from versatility: We have observed that some prior RL-based locomotion controllers for periodic walking skills show highly robust behaviors during real-world deployment, including robustness to external perturbations [37, 41] or changes of terrain [31, 34, 43]. For the aperiodic dynamic jumping skills studied in this work, the RL-based policy also demonstrates significant robustness, as in Fig. 9a(ii). This phenomenon raises an interesting question: where does the robustness come from, and how can we improve robustness when using RL for legged locomotion control? While there has been little prior work studying this source of robustness, in this work we conduct an ablation study in Sec. VI-C. Fig. 6 clearly shows that one source of this robustness stems from multi-goal training: the versatile RL-based policy learned from different jumping tasks is able to generalize the learned tasks to recover from an unexpected perturbation (Fig. 6a(iii)) or a deviation from the nominal trajectory (Fig. 6b(iii)). Such robustness cannot be
easily obtained by extensive dynamics randomization when the policy is limited to a single jumping task (Fig. 6a(i), Fig. 6b(i)), even with additional randomization on the external perturbation (Fig. 6a(ii), Fig. 6b(ii)). Such a result suggests that, besides the commonly-used dynamics randomization, diversifying the tasks, like jumping to different targets or walking with different velocities, can further improve the robustness of the RL-based control policy, which is a desirable property during sim-to-real transfer.
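The practical difference between single-goal and multi-goal training reduces to how the commanded goal is drawn at the start of each episode. A minimal sketch follows; the goal ranges are hypothetical, loosely inspired by the tasks shown in Fig. 9, and may differ from the actual training ranges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical goal ranges: c_x (m), c_y (m), c_phi (rad).
GOAL_LOW  = np.array([-0.5, -0.3, np.deg2rad(-60.0)])
GOAL_HIGH = np.array([ 1.4,  0.3, np.deg2rad( 60.0)])

def sample_goal(multi_goal=True):
    """Single-goal training fixes the command (always an in-place jump here);
    multi-goal training resamples a landing target every episode."""
    if not multi_goal:
        return np.zeros(3)
    return rng.uniform(GOAL_LOW, GOAL_HIGH)

goals = np.array([sample_goal() for _ in range(1000)])
print(goals.mean(axis=0))  # roughly the midpoint of each range
```

Diversifying the commanded goals in this way, on top of the usual dynamics randomization, is what exposes the policy to the family of maneuvers it later repurposes for recovery.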
# IX. CONCLUSION
In this work, we presented an RL-based system for learning a large variety of highly dynamic jumping maneuvers on real-world bipedal robots. We formulated the bipedal jumping problem as a parameterized set of tasks and developed a goal-conditioned policy that is trained in simulation but can then be deployed directly in the real world. In order to tackle the challenging multi-goal learning problem, we utilized a multi-stage training scheme that divides the problem into three sub-problems and addresses each through a different training stage. We showcase that, by training with multiple goals, the robot is able to generalize the learned tasks to produce robust emergent recovery behaviors from large landing impact forces or unknown perturbations. The robustness acquired through multi-goal training also facilitates the sim-to-real transfer process, and cannot be easily acquired through single-goal training alone. Furthermore, we present a policy architecture that improves learning performance. Our framework enables a real Cassie robot to perform a suite of challenging jumping tasks, such as jumping to different locations, jumping onto different elevations, and blending multi-axes movements during a jump. A limitation we observe occasionally during some experiments is that the robot oscillates after a jump. This may be due to the challenge of having a single policy handle both dynamic jumps and stationary standing. In the future, it will be interesting to combine this goal-conditioned jumping policy with a more sophisticated perception system to traverse complex environments with greater mobility.
# ACKNOWLEDGMENTS
This work was supported in part by NSF Grant CMMI-1944722 and Canadian Institute for Advanced Research (CIFAR). The authors would like to thank Dr. Ayush Agrawal, Xuxin Cheng, Jiaming Chen, Xiaoyu Huang, Yiming Ni, Lizhi Yang, and Bike Zhang for their gracious help.
# REFERENCES
[1] Bernardo Aceituno-Cabezas, Carlos Mastalli, Hongkai Dai, Michele Focchi, Andreae Radulescu, Darwin G Caldwell, José Cappelletto, Juan C Grieco, Gerardo Fernández-López, and Claudio Semini. Simultaneous contact, gait, and motion planning for robust multilegged locomotion via mixed-integer convex optimization. IEEE Robotics and Automation Letters, 3(3):2531-2538, 2017.
[2] Ananye Agarwal, Ashish Kumar, Jitendra Malik, and Deepak Pathak. Legged locomotion in challenging terrains using egocentric vision. arXiv preprint arXiv:2211.07638, 2022.
[3] Ryan Batke, Fangzhou Yu, Jeremy Dao, Jonathan Hurst, Ross L Hatton, Alan Fern, and Kevin Green. Optimizing bipedal maneuvers of single rigid-body models for reinforcement learning. In 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), pages 714-721, 2022.
[4] Guillaume Bellegarda and Auke Ijspeert. Cpg-rl: Learning central pattern generators for quadruped locomotion. IEEE Robotics and Automation Letters, 7(4):12547-12554, 2022.
[5] Guillaume Bellegarda, Yiyu Chen, Zhuochen Liu, and Quan Nguyen. Robust high-speed running for quadruped robots via deep reinforcement learning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10364-10370, 2022.
[6] Miroslav Bogdanovic, Majid Khadiv, and Ludovic Righetti. Model-free reinforcement learning for robust locomotion using demonstrations from trajectory optimization. Frontiers in Robotics and AI, 9, 2022.
[7] Guillermo A Castillo, Bowen Weng, Wei Zhang, and Ayonga Hereid. Robust feedback motion policy design using reinforcement learning on a 3d digit bipedal robot. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5136-5143, 2021.
[8] Iordanis Chatzinikolaidis, Yangwei You, and Zhibin Li. Contact-implicit trajectory optimization using an analytically solvable contact model for locomotion on variable ground. IEEE Robotics and Automation Letters, 5(4): 6357-6364, 2020.
[9] Hua Chen, Bingheng Wang, Zejun Hong, Cong Shen, Patrick M Wensing, and Wei Zhang. Underactuated motion planning and control for jumping with wheeled-bipedal robots. IEEE Robotics and Automation Letters, 6(2):747-754, 2020.
[10] Hongkai Dai, Andres Valenzuela, and Russ Tedrake. Whole-body motion planning with centroidal dynamics and full kinematics. In 2014 IEEE-RAS International Conference on Humanoid Robots, pages 295-302, 2014.
[11] Yanran Ding, Chuanzheng Li, and Hae-Won Park. Single leg dynamic motion planning with mixed-integer convex optimization. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1-6, 2018.
[12] Yanran Ding, Chuanzheng Li, and Hae-Won Park. Kinodynamic motion planning for multi-legged robot jumping via mixed-integer convex program. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3998-4005, 2020.
[13] OSU DRL. cassie-mujoco-sim, 2023. URL https://github.com/osudrl/cassie-mujoco-sim.
[14] Luke Drnach and Ye Zhao. Robust trajectory optimization over uncertain terrain with stochastic complementarity. IEEE Robotics and Automation Letters, 6(2):1168-1175, 2021.
[15] Alejandro Escontrela, Xue Bin Peng, Wenhao Yu, Tingnan Zhang, Atil Iscen, Ken Goldberg, and Pieter Abbeel. Adversarial motion priors make good substitutes for complex reward functions. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 25-32, 2022.
[16] Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath, et al. Genloco: Generalized locomotion controllers for quadrupedal robots. arXiv preprint arXiv:2209.05309, 2022.
[17] Zipeng Fu, Ashish Kumar, Jitendra Malik, and Deepak Pathak. Minimizing energy consumption leads to the emergence of gaits in legged robots. arXiv preprint arXiv:2111.01674, 2021.
[18] Zipeng Fu, Xuxin Cheng, and Deepak Pathak. Deep whole-body control: learning a unified policy for manipulation and locomotion. arXiv preprint arXiv:2210.10044, 2022.
[19] Scott Gilroy, Derek Lau, Lizhi Yang, Ed Izaguirre, Kristen Biermayer, Anxing Xiao, Mengti Sun, Ayush Agrawal, Jun Zeng, Zhongyu Li, et al. Autonomous navigation for quadrupedal robots with optimized jumping through constrained obstacles. In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), pages 2132-2139, 2021.
[20] Dip Goswami and Prahlad Vadakkepat. Planar bipedal jumping gaits with stable landing. IEEE Transactions on Robotics, 25(5):1030-1046, 2009.
[21] Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, and Sergey Levine. Learning to walk via deep reinforcement learning. arXiv preprint arXiv:1812.11103, 2018.
[22] Ross Hartley, Maani Ghaffari, Ryan M Eustice, and Jessy W Grizzle. Contact-aided invariant extended kalman filtering for robot state estimation. The International Journal of Robotics Research, 39(4):402-430, 2020.
[23] Masato Hirose and Kenichi Ogawa. Honda humanoid robots development. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1850):11-19, 2007.
[24] Xiaoyu Huang, Zhongyu Li, Yanzhen Xiang, Yiming Ni, Yufeng Chi, Yunhao Li, Lizhi Yang, Xue Bin Peng, and Koushil Sreenath. Creating a dynamic quadrupedal robotic goalkeeper with reinforcement learning. arXiv preprint arXiv:2210.04435, 2022.
[25] Se Hwan Jeon, Sangbae Kim, and Donghyun Kim. Online optimal landing control of the mit mini cheetah. In 2022 International Conference on Robotics and Automation (ICRA), pages 178-184, 2022.
[26] Gwanghyeon Ji, Juhyeok Mun, Hyeongjun Kim, and Jemin Hwangbo. Concurrent training of a control policy and a state estimator for dynamic and robust legged locomotion. IEEE Robotics and Automation Letters, 7(2):4630-4637, 2022.
[27] Yandong Ji, Zhongyu Li, Yinan Sun, Xue Bin Peng, Sergey Levine, Glen Berseth, and Koushil Sreenath. Hierarchical reinforcement learning for precise soccer shooting skills using a quadrupedal robot. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1479-1486, 2022.
[28] Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, and Karol Hausman. Mt-opt: Continuous multi-task robotic reinforcement learning at scale. arXiv preprint arXiv:2104.08212, 2021.
[29] Benjamin Katz, Jared Di Carlo, and Sangbae Kim. Mini cheetah: A platform for pushing the limits of dynamic quadruped control. In 2019 international conference on robotics and automation (ICRA), pages 6295-6301, 2019.
[30] Daniel E Koditschek and Martin Buehler. Analysis of a simplified hopping robot. The International Journal of Robotics Research, 10(6):587-605, 1991.
[31] Ashish Kumar, Zipeng Fu, Deepak Pathak, and Jitendra Malik. Rma: Rapid motor adaptation for legged robots. arXiv preprint arXiv:2107.04034, 2021.
[32] Ashish Kumar, Zhongyu Li, Jun Zeng, Deepak Pathak, Koushil Sreenath, and Jitendra Malik. Adapting rapid motor adaptation for bipedal robots. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1161-1168, 2022.
[33] Benoit Landry, Joseph Lorenzetti, Zachary Manchester, and Marco Pavone. Bilevel optimization for planning through contact: A semidirect method. In Robotics Research: The 19th International Symposium ISRR, pages 789-804, 2022.
[34] Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. Science robotics, 5 (47):eabc5986, 2020.
[35] Quanyi Li, Zhenghao Peng, Haibin Wu, Lan Feng, and Bolei Zhou. Human-ai shared control via policy dissection. arXiv preprint arXiv:2206.00152, 2022.
[36] Zhongyu Li, Christine Cummings, and Koushil Sreenath. Animated cassie: A dynamic relatable robotic character. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3739-3746, 2020.
[37] Zhongyu Li, Xuxin Cheng, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, and Koushil Sreenath. Reinforcement learning for robust parameterized locomotion control of bipedal robots. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 2811-2817, 2021.
[38] Tzu-Yuan Lin, Ray Zhang, Justin Yu, and Maani Ghaffari. Legged robot state estimation using invariant kalman filtering and learned contact events. In 5th Annual Conference on Robot Learning, 2021.
[39] Zachary Manchester and Scott Kuindersma. Variational contact-implicit trajectory optimization. In Robotics Research: The 18th International Symposium ISRR, pages 985-1000, 2020.
[40] Gabriel Margolis, Ge Yang, Kartik Paigwar, Tao Chen, and Pulkit Agrawal. Rapid locomotion via reinforcement learning. In Robotics: Science and Systems, 2022.
[41] Gabriel B Margolis and Pulkit Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. Conference on Robot Learning, 2022.
[42] Gabriel B Margolis, Tao Chen, Kartik Paigwar, Xiang Fu, Donghyun Kim, Sangbae Kim, and Pulkit Agrawal. Learning to jump from pixels. In 5th Annual Conference on Robot Learning, 2021.
[43] Takahiro Miki, Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning robust perceptive locomotion for quadrupedal robots in the wild. Science Robotics, 7(62):eabk2822, 2022.
[44] Mark P Moresi, Elizabeth J Bradshaw, David Greene, and Geraldine Naughton. The assessment of adolescent female athletes using standing and reactive long jumps. Sports Biomechanics, 10(02):73-84, 2011.
[45] Chuong Nguyen, Lingfan Bao, and Quan Nguyen. Continuous jumping for legged robots on stepping stones via trajectory optimization and model predictive control. arXiv preprint arXiv:2204.01147, 2022.
[46] Quan Nguyen, Matthew J Powell, Benjamin Katz, Jared Di Carlo, and Sangbae Kim. Optimized jumping on the mit cheetah 3 robot. In 2019 International Conference on Robotics and Automation (ICRA), pages 7448-7454, 2019.
[47] Hae-Won Park, Patrick M Wensing, Sangbae Kim, et al. Online planning for autonomous running jumps over obstacles in high-speed quadrupeds. Robotics: Science and System, 2015.
[48] Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pages 3803-3810, 2018.
[49] Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, and Sergey Levine. Learning agile robotic locomotion skills by imitating animals. arXiv preprint arXiv:2004.00784, 2020.
[50] Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, and Angjoo Kanazawa. Amp: Adversarial motion priors for stylized physics-based character control. ACM Transactions on Graphics (TOG), 40(4):1-20, 2021.
[51] Samuel Pfrommer, Mathew Halm, and Michael Posa. Contactnets: Learning discontinuous contact dynamics with smooth, implicit representations. In Conference on Robot Learning, pages 2279-2291. PMLR, 2021.
|
| 397 |
+
[52] Michael Posa, Cecilia Cantu, and Russ Tedrake. A direct method for trajectory optimization of rigid bodies through contact. The International Journal of Robotics Research, 33(1):69-81, 2014.
|
| 400 |
+
[53] M. H. Raibert, M. A. Chepponis, and H. Benjamin Brown. Experiments in balance with a 3D one-legged hopping machine. International Journal of Robotics Research, 3(2):75-92, June 1984.
|
| 401 |
+
[54] Diego Rodriguez and Sven Behnke. DeepWalk: Omnidirectional bipedal gait by deep reinforcement learning. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 3033-3039, 2021.
|
| 402 |
+
[55] Martin Rutschmann, Brian Satzinger, Marten Byl, and Katie Byl. Nonlinear model predictive control for rough-terrain robot hopping. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1859–1864, 2012.
|
| 403 |
+
[56] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
|
| 404 |
+
[57] André Seyfarth, Hartmut Geyer, and Hugh Herr. Swing-leg retraction: a simple control model for stable running. Journal of Experimental Biology, 206(15):2547-2555, 2003.
|
| 405 |
+
[58] Yecheng Shao, Yongbin Jin, Xianwei Liu, Weiyan He, Hongtao Wang, and Wei Yang. Learning free gait transition for quadruped robots via phase-guided controller. IEEE Robotics and Automation Letters, 7(2):1230-1237, 2021.
|
| 406 |
+
[59] Jonah Siekmann, Srikar Valluri, Jeremy Dao, Lorenzo Bermillo, Helei Duan, Alan Fern, and Jonathan Hurst. Learning memory-based control for human-scale bipedal locomotion. arXiv preprint arXiv:2006.02402, 2020.
|
| 407 |
+
[60] Jonah Siekmann, Yesh Godse, Alan Fern, and Jonathan Hurst. Sim-to-real learning of all common bipedal gaits via periodic reward composition. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7309-7315, 2021.
|
| 408 |
+
[61] Laura Smith, J Chase Kew, Xue Bin Peng, Sehoon Ha, Jie Tan, and Sergey Levine. Legged robots that keep on learning: Fine-tuning locomotion policies in the real world. In 2022 International Conference on Robotics and Automation (ICRA), pages 1593-1599, 2022.
|
| 409 |
+
[62] Laura Smith, Ilya Kostrikov, and Sergey Levine. A walk in the park: Learning to walk in 20 minutes with model-free reinforcement learning. arXiv preprint arXiv:2208.07860, 2022.
|
| 410 |
+
[63] Zhitao Song, Linzhu Yue, Guangli Sun, Yihu Ling, Hongshuo Wei, Linhai Gui, and Yun-Hui Liu. An optimal motion planning framework for quadruped jumping. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 11366-11373, 2022.
|
| 411 |
+
[64] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033, 2012.
|
| 412 |
+
[65] Barkan Ugurlu, Jody A Saglia, Nikos G Tsagarakis, and Darwin G Caldwell. Hopping at the resonance frequency: A trajectory generation technique for bipedal robots with elastic joints. In 2012 IEEE International Conference on Robotics and Automation, pages 1436-1443, 2012.
|
| 415 |
+
[66] Ivo Vatavuk and Zdenko Kovacic. Precise jump planning using centroidal dynamics based bilevel optimization. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 3026-3032, 2021.
|
| 416 |
+
[67] Eric Vollenweider, Marko Bjelonic, Victor Klemm, Nikita Rudin, Joonho Lee, and Marco Hutter. Advanced skills through multiple adversarial motion priors in reinforcement learning. arXiv preprint arXiv:2203.14912, 2022.
|
| 417 |
+
[68] Philipp Wu, Alejandro Escontrela, Danijar Hafner, Ken Goldberg, and Pieter Abbeel. DayDreamer: World models for physical robot learning. arXiv preprint arXiv:2206.14176, 2022.
|
| 418 |
+
[69] Zhaoming Xie, Glen Berseth, Patrick Clary, Jonathan Hurst, and Michiel van de Panne. Feedback control for Cassie with deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1241-1246, 2018.
|
| 419 |
+
[70] Zhaoming Xie, Patrick Clary, Jeremy Dao, Pedro Morais, Jonathan Hurst, and Michiel van de Panne. Learning locomotion skills for Cassie: Iterative design and sim-to-real. In Conference on Robot Learning, pages 317-329. PMLR, 2020.
|
| 420 |
+
[71] Xiaobin Xiong and Aaron D Ames. Bipedal hopping: Reduced-order model embedding via optimization-based control. In International Conference on Intelligent Robots and Systems (IROS), pages 3821-3828, 2018.
|
| 421 |
+
[72] William Yang and Michael Posa. Impact invariant control with applications to bipedal locomotion. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5151-5158, 2021.
|
| 422 |
+
[73] Justin K Yim and Ronald S Fearing. Precision jumping limits from flight-phase control in Salto-1P. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2229-2236, 2018.
|
| 423 |
+
[74] Fangzhou Yu, Ryan Batke, Jeremy Dao, Jonathan Hurst, Kevin Green, and Alan Fern. Dynamic bipedal maneuvers through sim-to-real reinforcement learning. arXiv preprint arXiv:2207.07835, 2022.
|
| 424 |
+
[75] Wenhao Yu, Visak CV Kumar, Greg Turk, and C Karen Liu. Sim-to-real transfer for biped locomotion. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3503-3510, 2019.
|
| 425 |
+
[76] Chi Zhang, Wei Zou, Liping Ma, and Zhiqing Wang. Biologically inspired jumping robots: A comprehensive review. Robotics and Autonomous Systems, 124:103362, 2020.
|
| 426 |
+
[77] Yifan Zhu, Zherong Pan, and Kris Hauser. Contact-implicit trajectory optimization with learned deformable contacts using bilevel optimization. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 9921–9927, 2021.
|
| 427 |
+
|
| 428 |
+
# A. Training in Stage 1
|
| 429 |
+
|
| 430 |
+
1) Reward: The reward design for Stage 1 is presented in Table II. In this first stage, to initiate training for a single jumping goal, we incentivize the robot to imitate the jumping-in-place animation. Therefore, the tracking rewards for motor position and foot height have overwhelming weights over the others in order to accomplish a jump and stand still afterward. We also include the task completion term; since the task is fixed in this stage, i.e., $\mathbf{c} = \mathbf{0}$, this term mainly encourages the robot to jump in place and stabilize its pelvis orientation. We further include a smoothing term as a small fraction of the reward at this stage, but omit the change-of-action reward to prevent the robot from adopting a stationary behavior early in training.
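As a concrete illustration, the weighted-sum reward described above can be sketched as follows. The weights and the exponential error-to-reward mapping here are illustrative assumptions, not the actual values from Table II:

```python
import numpy as np

def stage1_reward(q_m, q_m_ref, foot_h, foot_h_ref, task_err, action_rate,
                  w_motor=0.4, w_foot=0.3, w_task=0.2, w_smooth=0.1):
    """Sketch of the Stage-1 reward: tracking terms dominate.

    The weights are placeholders chosen so that motor-position and
    foot-height tracking carry most of the reward. Each term maps its
    error to (0, 1] via exp(-error).
    """
    r_motor = np.exp(-np.sum((np.asarray(q_m) - np.asarray(q_m_ref)) ** 2))
    r_foot = np.exp(-np.sum((np.asarray(foot_h) - np.asarray(foot_h_ref)) ** 2))
    r_task = np.exp(-task_err)       # task completion (c = 0 in Stage 1)
    r_smooth = np.exp(-action_rate)  # smoothing term
    return (w_motor * r_motor + w_foot * r_foot
            + w_task * r_task + w_smooth * r_smooth)
```

With perfect tracking, every term evaluates to 1 and the reward equals the sum of the weights.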
|
| 431 |
+
|
| 432 |
+
2) Episode Design: In the initial training stage, each episode has 750 timesteps, corresponding to 23 seconds. The agent is asked to jump at $t = 0$ and to stand until the end. If we allowed the robot to stand at the beginning of the episode, it might focus on learning the easy standing skill and fail to explore the jumping maneuver. Furthermore, note that a jumping phase usually lasts less than 2 seconds, yet we use a 23-second episode. This is because, with a short episode, the robot may learn to jump well but overlook the standing skill, which may result in undesirable maneuvers such as continuing to hop after landing. Such a long episode gives the robot more incentive to learn a robust and stable standing skill in order to obtain a better return over the episode. The early termination conditions in Stage 1 differ from those in the multi-goal training stage, except for the falling-over condition. In this stage, the foot height tracking error bound $E_{e}$ is smaller (0.22 m) while the task completion error bound $E_{t}$ is much larger ([1.0 m, 45°]). This is because we want to push the robot to jump by lifting its feet at the initial stage of training, while completing the task is not a major concern at this stage.
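The early-termination logic above can be sketched as below. The bounds follow the values quoted in the text, while the pelvis fall-height threshold is a hypothetical placeholder for the falling-over condition:

```python
import numpy as np

# Stage-1 bounds quoted in the text (the fall threshold is an assumption).
E_E_STAGE1 = 0.22                      # m, foot height tracking error bound
E_T_STAGE1 = (1.0, np.deg2rad(45.0))   # (m, rad), loose task completion bound

def should_terminate(foot_h_err, pos_err, yaw_err, pelvis_height,
                     fall_height=0.6):
    """Return True if the Stage-1 episode should end early."""
    if pelvis_height < fall_height:        # falling-over condition
        return True
    if foot_h_err > E_E_STAGE1:            # poor jump (foot height) tracking
        return True
    if pos_err > E_T_STAGE1[0] or abs(yaw_err) > E_T_STAGE1[1]:
        return True                        # task completion error too large
    return False
```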
|
| 433 |
+
|
| 434 |
+
# B. Details of Dynamics Randomization
|
| 435 |
+
|
| 436 |
+
The dynamics parameters and randomization ranges used in this paper are listed in Table III. Note that the range of the noise is relatively small (such as $0.1^{\circ}$ in joint position measurement and $0.5^{\circ}/s$ in joint velocity) because we found that the onboard sensors on Cassie are reliable, and we therefore use smaller bounds to reduce the training complexity. For robots with larger sensor noise, a larger noise bound during training is recommended.
|
| 437 |
+
|
| 438 |
+
# C. Details of Baseline Models
|
| 439 |
+
|
| 440 |
+
The details of the model structures we compared are listed as follows:
|
| 441 |
+
|
| 442 |
+
- Ours (Fig. 4a): the long-term I/O history is encoded with a CNN, while the short-term I/O history is provided directly as input to the base MLP. The policy directly outputs the desired motor positions. The CNN encoder and the MLP base are jointly trained.
|
| 443 |
+
- Residual (Fig. 4b): the policy shares the same structure as the proposed one, but the policy output is a residual term added to
|
| 444 |
+
|
| 445 |
+
TABLE III: Dynamics Randomization Range
|
| 446 |
+
|
| 447 |
+
<table><tr><td>Parameters</td><td>Range</td></tr><tr><td>Floor Friction Ratio</td><td>[0.3, 3.0]</td></tr><tr><td>Joint Damping</td><td>[0.3, 4.0] Nms/rad</td></tr><tr><td>Spring Stiffness</td><td>[0.8, 1.2] × default</td></tr><tr><td>Link Mass</td><td>[0.5, 1.5] × default</td></tr><tr><td>Link Inertia</td><td>[0.7, 1.3] × default</td></tr><tr><td>Pelvis (Root) CoM Position</td><td>[-0.1, 0.1] m in x, y, z</td></tr><tr><td>Other Link CoM Position</td><td>[-0.05, 0.05] m + default</td></tr><tr><td>Motor PD Gains</td><td>[0.7, 1.3] × default</td></tr><tr><td>Motor Position Noise Mean</td><td>[-0.002, 0.002] rad</td></tr><tr><td>Motor Velocity Noise Mean</td><td>[-0.01, 0.01] rad/s</td></tr><tr><td>Gyro Rotation Noise Mean</td><td>[-0.002, 0.002] rad</td></tr><tr><td>Linear Velocity Estimation Error</td><td>[-0.04, 0.04] m/s</td></tr><tr><td>Communication Delay</td><td>[0, 0.025] sec</td></tr></table>
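A minimal sketch of how one environment's dynamics could be sampled from a subset of the ranges in Table III. The dictionary keys are illustrative names, and multiplicative ranges are assumed to scale the model's defaults:

```python
import random

# Subset of Table III: (low, high) per parameter. Ranges marked "x default"
# in the table are interpreted as multiplicative scale factors.
RANDOMIZATION = {
    "floor_friction_ratio": (0.3, 3.0),
    "joint_damping":        (0.3, 4.0),    # Nms/rad, absolute
    "spring_stiffness":     (0.8, 1.2),    # x default
    "link_mass":            (0.5, 1.5),    # x default
    "motor_pd_gains":       (0.7, 1.3),    # x default
    "comm_delay":           (0.0, 0.025),  # sec
}

def sample_dynamics(rng=random):
    """Draw one uniform sample per parameter for a new training episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANDOMIZATION.items()}
```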
|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
(a) Stage 1: Learning a Single Goal (b) Stage 2: Learning Multiple Goals
|
| 451 |
+
|
| 452 |
+

|
| 453 |
+
Fig. 10: Benchmark of learning curves for different policy structures, each trained with 3 random seeds, in the early stages. The curves show the mean of the normalized returns across seeds, and the min and max among seeds bound the shaded areas. The proposed method shows the best performance during the early stages of training, including learning a single goal from scratch (Stage 1) and multiple goals in Stage 2.
|
| 454 |
+
|
| 455 |
+
the reference motor position at the current timestep, i.e., $q_{m}^{d} = \mathbf{a}_{t} + q_{m}^{r}(t)$, which is used in [34, 70]. Note that this policy also takes the reference motion as input.
|
| 456 |
+
|
| 457 |
+
- Long History Only (Fig. 4c): the policy only has a long-term I/O history encoded by a CNN, which is a baseline used in [31, 34]. Note that we still provide the robot feedback at the current timestep directly to the MLP base, as suggested by Peng et al. [48].
|
| 458 |
+
- Short History Only (Fig. 4d): the policy has short I/O history without the long-term I/O history CNN encoder, which is used in [37] and serves as a baseline in [32].
|
| 459 |
+
- RMA/Teacher-Student: an expert (teacher) policy (Fig. 4e) with access to privileged environment information (listed in Table III) is first trained using RL. The privileged information is encoded by an MLP into an 8D extrinsics vector. This expert policy is then used to supervise the training of an RMA (student) policy, which uses the base MLP copied from the expert policy, while using a long I/O history encoder to predict the teacher's extrinsic vector. This two-stage training scheme is used in [31, 34] and also adopted in other work such as [18, 26, 40].
|
| 460 |
+
- A-RMA (Fig. 4g): after the standard RMA training, the parameters of the long I/O history encoder are fixed, and the base MLP is further finetuned using RL, as proposed by Kumar et al. [32]. Both RMA and A-RMA are also provided with a short I/O history, which is newly added in this work for a fair comparison.
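The teacher-student (RMA) supervision described above amounts to regressing the teacher's extrinsics vector from the student's long-history encoding. This is a minimal sketch with an assumed mean-squared-error objective, not the authors' exact training code:

```python
import numpy as np

def student_loss(extrinsics_teacher, extrinsics_student):
    """MSE between the teacher's 8D extrinsics vector (computed from
    privileged information) and the student's prediction (computed from
    the long I/O history). The student's encoder is trained to minimize
    this; the MSE choice is an assumption for illustration."""
    z_t = np.asarray(extrinsics_teacher, dtype=float)
    z_s = np.asarray(extrinsics_student, dtype=float)
    return float(np.mean((z_t - z_s) ** 2))
```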
|
| 461 |
+
|
| 462 |
+
# D. Learning Performance in Early Stages
|
| 463 |
+
|
| 464 |
+
According to Fig. 5, in the training stages without dynamics randomization (Stages 1 and 2), our method shows similar, or even slightly better, learning performance compared with the expert
|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
(a) Learning Single Goal with Domain Randomization
|
| 468 |
+
|
| 469 |
+

|
| 470 |
+
(b) Learning with Different Memory Encoders for RMAs
|
| 471 |
+
|
| 472 |
+

|
| 473 |
+
Fig. 11: Additional Learning Curves.
|
| 474 |
+
Fig. 12: The profiles of the robot's joint positions when it is commanded to jump onto a $0.44\mathrm{m}$ -tall elevation while moving $0.88\mathrm{m}$ forward, in simulation and the real world, using the discrete-terrain policy.
|
| 475 |
+
|
| 476 |
+

|
| 477 |
+
(i) $(c_{x}, c_{y}, c_{\phi}) = (0\mathrm{m}, 0.3\mathrm{m}, 0^{\circ})$
|
| 478 |
+
|
| 479 |
+

|
| 480 |
+
(ii) $(c_{x},c_{y},c_{\phi}) = (0\mathrm{m},0\mathrm{m},60^{\circ})$
|
| 481 |
+
|
| 482 |
+

|
| 483 |
+
|
| 484 |
+

|
| 485 |
+
(iii) $(c_{x}, c_{y}, c_{\phi}) = (0.5\mathrm{m}, 0\mathrm{m}, 0^{\circ})$
|
| 486 |
+
|
| 487 |
+

|
| 488 |
+
|
| 489 |
+

|
| 490 |
+
|
| 491 |
+

|
| 492 |
+
|
| 493 |
+

|
| 494 |
+
(iv) $(c_{x}, c_{y}, c_{\phi}) = (0.7\mathrm{m}, 0\mathrm{m}, -45^{\circ})$
|
| 495 |
+
Fig. 13: Additional experiments show Cassie jumping to different targets with the single flat-ground policy.
|
| 496 |
+
|
| 497 |
+
policy, which has access to the privileged environment information. This is because the long-term I/O history can provide more information than the dynamics parameters used in the expert policy, such as the robot's take-off trajectory, which is useful for determining a better landing maneuver. Although the policy with short history only (orange curve) shows a faster learning curve at the initial stage of training
|
| 498 |
+
|
| 499 |
+
(Stage 1, Fig. 10a), the policies using short history only and long history only (blue curve) show similar learning performance in the more complex multi-goal training stage (Stage 2).
|
| 500 |
+
|
| 501 |
+
# E. Additional Learning Curves
|
| 502 |
+
|
| 503 |
+
The learning curves for single-goal policies with dynamics randomization, detailed in Sec. IV-E, are recorded in Fig. 11a. Learning curves for training RMAs with different long-history encoders are recorded in Fig. 11b. The RMA used in [31, 32] (Original) has a different structure for the long-term I/O encoder (1D CNN) than the one used in this work: it has 3 hidden layers, and the [kernel size, filter size, stride size] of each layer is [8, 32, 4], [5, 32, 1], and [5, 32, 1], respectively, with zero padding.
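Under the assumption that "zero padding" here means no padding (padding of zero), the temporal footprint of this 1D CNN encoder can be checked by computing each layer's output length:

```python
def conv1d_out_len(length, kernel, stride, padding=0):
    """Standard 1D convolution output-length formula."""
    return (length + 2 * padding - kernel) // stride + 1

# Layer specs quoted in the text: (kernel size, filter count, stride).
LAYERS = [(8, 32, 4), (5, 32, 1), (5, 32, 1)]

def encoder_out_len(history_len, padding=0):
    """Output sequence length of the 3-layer encoder for a given
    history length (padding=0 is an assumption)."""
    for kernel, _filters, stride in LAYERS:
        history_len = conv1d_out_len(history_len, kernel, stride, padding)
    return history_len
```

For example, a 50-step history shrinks to 11 after the first layer and to 3 after all three.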
|
| 504 |
+
|
| 505 |
+
# F. Additional Hardware Experiments
|
| 506 |
+
|
| 507 |
+
More experimental results are presented in Fig. 12 and Fig. 13. They show the capacity of the flat-ground policy to accomplish more challenging jumping tasks on the real robot Cassie.
|
2302.09xxx/2302.09450/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cecb428f70859c25d1ed73dd2a884dede6d0886c667163363a3be5bea42d6837
|
| 3 |
+
size 1144358
|
2302.09xxx/2302.09450/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09462/03361486-163b-47cf-bac4-14b59aa04594_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:120cbd66f426244cdbaef8f5d5f9781d1ad7db5b8c925be2cdfcb231e36edf2d
|
| 3 |
+
size 1704876
|
2302.09xxx/2302.09462/full.md
ADDED
|
@@ -0,0 +1,476 @@
| 1 |
+
# Omid Nejati Manzari $^{a,\ast}$ , Hamid Ahmadabadi $^{a}$ , Hossein Kashiani $^{b}$ , Shahriar B. Shokouhi $^{a}$ and Ahmad Ayatollahi $^{a}$
|
| 2 |
+
|
| 3 |
+
${}^{a}$ School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
|
| 4 |
+
|
| 5 |
+
$^{b}$ Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, USA
|
| 6 |
+
|
| 7 |
+
# ARTICLE INFO
|
| 8 |
+
|
| 9 |
+
Keywords:
|
| 10 |
+
|
| 11 |
+
Medical image classification
|
| 12 |
+
|
| 13 |
+
Adversarial attack
|
| 14 |
+
|
| 15 |
+
Adversarial robustness
|
| 16 |
+
|
| 17 |
+
Vision Transformer
|
| 18 |
+
|
| 19 |
+
# ABSTRACT
|
| 20 |
+
|
| 21 |
+
Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, there are still concerns about the reliability of deep medical diagnosis systems against the potential threats of adversarial attacks since inaccurate diagnosis could lead to disastrous consequences in the safety realm. In this study, we propose a highly robust yet efficient CNN-Transformer hybrid model which is equipped with the locality of CNNs as well as the global connectivity of vision Transformers. To mitigate the high quadratic complexity of the self-attention mechanism while jointly attending to information in various representation subspaces, we construct our attention mechanism by means of an efficient convolution operation. Moreover, to alleviate the fragility of our Transformer model against adversarial attacks, we attempt to learn smoother decision boundaries. To this end, we augment the shape information of an image in the high-level feature space by permuting the feature mean and variance within mini-batches. With less computational complexity, our proposed hybrid model demonstrates its high robustness and generalization ability compared to the state-of-the-art studies on a large-scale collection of standardized MedMNIST-2D datasets.
|
| 22 |
+
|
| 23 |
+
# 1. Introduction
|
| 24 |
+
|
| 25 |
+
Medical image classification is a critical step in medical image analysis that uses different factors, such as clinical information or imaging modalities, to differentiate across medical images. A dependable medical image classifier may help clinicians evaluate medical images quickly and with less error. The healthcare industry has significantly benefited from recent advancements in Convolutional Neural Networks (CNNs). Such advancements have prompted much research into the use of computer-aided diagnostic systems based on artificial intelligence in clinical settings [1-4]. CNNs are able to learn robust discriminative representations from vast volumes of medical data to generate accurate diagnostic performance in medical fields, achieving prediction capabilities comparable to clinicians.
|
| 26 |
+
|
| 27 |
+
However, the locality bias of CNNs makes it hard for them to learn long-range dependencies in visual data. The texture, shape, and size of many organs vary widely across people, making it difficult to correctly analyze medical data [5, 6]. As such, it is important to extract robust feature representation which can model long-range dependencies in different domains for medical image analysis. Recently, the Transformer architectures have adopted the self-attention mechanisms to model the long-range dependencies between input images and have achieved promising results. Different studies demonstrate their performance superiority compared to CNN architectures [7, 8]. However, a sizable amount of training data is crucial to their success. The construction of a large-scale dataset needs a significant amount of time and resources.
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
Figure 1: Comparison between MedViTs and the baseline ResNets in terms of the average ACC-parameters and average AUC-parameters trade-offs over all 2D datasets.
|
| 31 |
+
|
| 32 |
+

|
| 33 |
+
|
| 34 |
+
Regarding the medical field, radiologist experts must manually annotate and verify medical data, which is costly and time-consuming. Moreover, while the Transformer architecture mitigates the shortcomings of CNNs, its computational complexity grows quadratically with the spatial or embedding dimension [9], making it infeasible for most tasks involving high-resolution images; as a result, it cannot be deployed in realistic clinical settings. In addition, state-of-the-art studies assume that training and test data are identically distributed; consequently, they often suffer significant performance drops on out-of-domain target domains. The domain shift is more pronounced in healthcare areas since medical images can be captured by different devices at various sites. Consequently,
|
| 35 |
+
|
| 36 |
+
due to different scanners and imaging protocols, their data distribution can greatly vary. In addition, variations in epidemiology at different sites could impact the distribution of ground truth labels between various populations [10, 11].
|
| 37 |
+
|
| 38 |
+
In this study, we aim to address the above-mentioned challenges and propose a generalized Transformer architecture for medical image analysis. While recent studies in medical image analysis work specifically on predetermined medical test sets, our proposed model generalizes to a wide range of medical domains such as CT, X-ray, ultrasound, and OCT. To this end, we follow a hierarchical hybrid architecture equipped with a patch embedding layer and a series of convolution and Transformer blocks in each stage, with efficient computational complexity. Inspired by recent advances, we tailor our Transformer architecture such that each stage consists of two efficient phases to model short-term and long-term dependencies in visual data. In the first phase, we leverage a multi-head convolutional attention block to learn the affinity between different tokens in representation subspaces for effective local representation learning. This attention block alleviates the high computational cost of the attention-based token mixer in conventional Transformer architectures, thereby improving inference speed.
|
| 39 |
+
|
| 40 |
+
Unlike conventional Transformer architectures that only incorporate locality bias into the lower layers through tokenization and self-attention components (CvT [12], Co-Scale Conv-Attentional Transformer [13], CMT [14]), we also propose a local feed-forward network (LFFN) that encodes the local dependencies between nearby pixels in the feed-forward components of the Transformer architecture at all stages. To this end, a depth-wise convolution is applied to the reshaped 2D feature map. While recent Transformer-based studies have demonstrated a high capacity to learn long-range dependencies in visual data, they fail to encode high-frequency context. To mitigate this issue, in the second phase of our proposed architecture, we first encode low- and high-frequency feature representations separately with an efficient multi-head self-attention block and a multi-head convolutional attention block, respectively. Then, the computed feature representations are fused and fed to the LFFN to further enhance the global and local modeling capacity. As depicted in Figure 1, our model shows great superiority in terms of the accuracy-complexity trade-off against CNNs.
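The depth-wise convolution at the core of the LFFN can be sketched as below, assuming a 3x3 kernel (the kernel size, shapes, and token-to-map reshape are illustrative assumptions; the operation is a per-channel cross-correlation, as in most deep learning frameworks):

```python
import numpy as np

def depthwise_conv3x3(x, kernels):
    """Each channel convolved with its own 3x3 kernel (zero-padded).
    x: (C, H, W) feature map, kernels: (C, 3, 3)."""
    C, H, W = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(pad[c, i:i + 3, j:j + 3] * kernels[c])
    return out

def lffn_local_mixing(tokens, H, W, kernels):
    """Sketch of the LFFN's local step: reshape tokens (N, C) with
    N = H*W back to a 2D map, apply the depthwise convolution, and
    flatten back to token form."""
    x = tokens.T.reshape(-1, H, W)
    y = depthwise_conv3x3(x, kernels)
    return y.reshape(x.shape[0], -1).T
```

An identity kernel (1 at the center, 0 elsewhere) leaves the tokens unchanged, which is a convenient sanity check.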
|
| 41 |
+
|
| 42 |
+
Adversarial attacks are a serious security risk for deep neural networks because they can trick trained models into making incorrect predictions via small, imperceptible perturbations. When it comes to healthcare, an adversarial attack could pose severe security concerns [15]. The nature of medical data could allow a higher attack success rate with imperceptibility. In this paper, to enhance the adversarial robustness of our proposed Transformer model, we make the model focus more on global structure features (such as shape and style) rather than texture information. Several works [16, 17] have found
|
| 43 |
+
|
| 44 |
+
that neural networks tend to rely on texture information for making predictions, which consequently makes them vulnerable to out-of-distribution samples. Motivated by these studies, we encourage our model to rely more on global structure features than on texture information to boost generalization performance as well as adversarial robustness. With this objective in mind, we extract the mean and variance of training instances across channel dimensions in feature space and interpolate them with each other. It is worth pointing out that the decision boundary is often sharp, and a significant portion of the hidden representation space is associated with high-confidence predictions [18, 19]. With the proposed interpolation, we can explore new useful regions of the feature space, which are mainly relevant to global structure features. This ultimately enables us to learn smoother decision boundaries, which are beneficial for adversarial robustness and generalization performance.
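The described interpolation of feature statistics can be sketched as below, in the spirit of style-mixing augmentations: each instance is normalized by its own channel-wise mean and standard deviation, then re-styled with statistics interpolated with those of another instance in the mini-batch. The interpolation weight, tensor shapes, and normalization details are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def mix_feature_stats(feats, lam=0.5, rng=np.random.default_rng(0), eps=1e-6):
    """Permute and interpolate channel-wise feature statistics within a
    mini-batch. feats: (B, C, H, W) feature maps; lam is the (assumed)
    interpolation weight between an instance's own stats and a randomly
    paired instance's stats."""
    mu = feats.mean(axis=(2, 3), keepdims=True)           # (B, C, 1, 1)
    sig = feats.std(axis=(2, 3), keepdims=True) + eps
    normed = (feats - mu) / sig                           # strip own style
    perm = rng.permutation(feats.shape[0])                # pair within batch
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return normed * sig_mix + mu_mix                      # re-apply mixed style
```

With lam=1.0 each instance keeps its own statistics and the input is recovered, which makes the operation easy to sanity-check.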
|
| 45 |
+
|
| 46 |
+
# 2. Related Works
|
| 47 |
+
|
| 48 |
+
Convolutional networks. Convolutional neural networks (CNNs) have made extraordinary contributions to vast fields of computer vision in recent years due to their ability to extract deep discriminative features. ResNet [20] introduced residual connections to CNNs and mitigated the vanishing gradient problem, allowing models to be built deeper to capture high-level features for image classification. MobileNets [21] use pointwise convolutions and depthwise separable convolutions to enhance CNN efficiency. In DenseNet [22], skip connections are used between every two layers, and summation is replaced with concatenation for the dense connections of feature maps. ConvNeXt [23] re-introduces core designs of Vision Transformers and employs $7 \times 7$ depthwise convolutions to design a robust CNN architecture that achieves results comparable to Transformers. ShuffleNet [24] performs a channel shuffle operation to fuse the separated channel information from group convolutions.
|
| 49 |
+
|
| 50 |
+
Vision Transformers. Since the original Transformer architecture achieved remarkable results in natural language processing, many attempts have been made to apply Transformers to vision tasks like image classification [7], semantic segmentation [25], and object detection [26]. In particular, the Vision Transformer (ViT) of Dosovitskiy et al. [7] shows that a pure Transformer can also achieve promising results on the image classification task. ViT splits the image into patches (a.k.a. tokens) and applies Transformer layers to model the global relations among these patches for classification. T2T-ViT [27] mainly improves the tokenization in ViT by delicately generating tokens in a soft-split manner, recursively aggregating neighboring tokens into one token to enrich local structure modeling. Swin Transformer [28] performs self-attention in a local window with a shifted-window scheme to alternately model in-window and cross-window connections. PiT [29] follows a similar pyramid structure as
|
| 51 |
+
|
| 52 |
+

|
| 53 |
+
Figure 2: Overall architecture of the proposed Medical Vision Transformer (MedViT).
|
| 54 |
+
|
| 55 |
+
CNNs, which produces various feature maps through spatial dimension reduction based on the pooling structure of a convolutional layer. Nowadays, researchers are specifically interested in efficient methods, including pyramidal designs, training strategies, efficient self-attention, etc.
|
| 56 |
+
|
| 57 |
+
Hybrid Models. Recent works show that designing a hybrid architecture of Transformer and convolution layers helps a model combine the advantages of both. BoTNet [30] uses a slightly modified self-attention in the last three blocks of ResNet. The CMT [14] block contains a local perception unit based on depthwise convolution layers and a lightweight Transformer block. CvT [12] inserts pointwise and depthwise convolutions before self-attention.
|
| 58 |
+
|
| 59 |
+
LeViT [31] uses a convolutional stem to replace the patch embedding block and achieves fast-inference image classification. MobileViT [32] introduces a lightweight vision Transformer by combining Transformer blocks with MobileNetV2 [33] blocks in series. Mobile-Former [34] uses a bidirectional bridge between CNN and Transformer to leverage both global and local concepts.
|
| 60 |
+
|
| 61 |
+
Robustness Study. Since convolutional neural networks rely on low-level features, they are generally vulnerable to adversarial examples. Numerous studies aim to improve the adversarial robustness of CNNs through various approaches. These include carefully designed models [35,

Figure 3: Comparison of different core blocks: convolution-based blocks, including ResNet and ConvNeXt, and Transformer-based blocks, including RVT, PoolFormer, ViT, and MedViT (ours).
36], strong data augmentation [37, 38], searched network architecture [39, 40], improved training strategy [41-43], pruning [44] of the weights, and quantization [45], activation functions [46] or better pooling [47, 48]. Although the methods mentioned earlier perform well on CNNs, there is no evidence that they also improve the adversarial robustness of ViTs.
Following the success of Transformers and their variants in various computer vision tasks, several studies have examined the robustness of Transformers. Early research studied the adversarial robustness of Transformers on image classification tasks and compared their vulnerability against MLP and CNN baselines. The experimental results illustrate that Transformers are more adversarially robust than CNNs [49]. Additionally, the adversarial transferability between CNNs and Transformers is unexpectedly low [50]. Furthermore, the robustness study [51] of ViTs has been extended to natural distribution shifts and common image corruptions, demonstrating the superiority of ViTs over CNNs on the robustness benchmark. Whereas several prior studies have examined adversarial robustness without carefully designing the architecture, in this paper we do not simply compare the adversarial robustness of CNNs and ViTs, but take a step further by designing a robust hybrid architecture family of MedViTs. On top of this architecture, we introduce a novel augmentation technique to further reduce the fragility of Transformer models.
# 3. Method
We first give a brief overview of the proposed MedViT in this section. Then, we describe the main body designs within MedViT, which include the Efficient Convolution Block (ECB), the Local Transformer Block (LTB), and the Transformer Augmentation Block (TAB). In addition, we provide different model sizes for the proposed architecture.
# 3.1. Overview
MedViT aims to combine convolution and transformer blocks in a novel way to achieve a robust hybrid architecture for medical image classification. As shown in Figure 2, MedViT is composed of a patch embedding layer and, in each stage, a series of stacked convolution and transformer blocks, following the traditional hierarchical pyramid architecture. The spatial resolution is gradually reduced by $[4\times, 2\times, 2\times, 2\times]$ across the four stages, a total reduction of $32\times$, while the channel dimension is doubled after the convolution blocks in each stage. In this section we first explore the core blocks responsible for embedding multi-scale context, and develop the robust LTB and ECB to effectively capture long-term and short-term dependencies in the input data. The LTB also fuses local and global features, thereby enhancing modeling capability. We then study how to technically integrate convolution and transformer blocks. Lastly, to further improve performance and adversarial robustness, we propose a novel Patch Momentum Changer (PMC) data augmentation technique to train our models.
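As a concrete illustration of this pyramid schedule (our sketch, not the authors' code), the following computes the feature-map resolution after each stage for a $224 \times 224$ input using the $[4\times, 2\times, 2\times, 2\times]$ reduction ratios:

```python
def stage_resolutions(height, width, ratios=(4, 2, 2, 2)):
    """Spatial size after each stage of a hierarchical pyramid backbone."""
    sizes = []
    h, w = height, width
    for r in ratios:
        h, w = h // r, w // r
        sizes.append((h, w))
    return sizes

# Total reduction is 4 * 2 * 2 * 2 = 32x.
print(stage_resolutions(224, 224))  # [(56, 56), (28, 28), (14, 14), (7, 7)]
```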
# 3.2. Efficient Convolution Block
We begin by discussing some traditional core blocks of transformer and convolution networks, as illustrated in Figure 3, to show the effectiveness of the proposed ECB and its superiority over previous methods. ResNet [20] introduced the skip connection and the Residual block, which dominated a wide range of visual recognition tasks for a long time due to its compatible features and inherent inductive biases for realistic deployment scenarios. Unfortunately, the performance of the Residual block is not satisfactory compared to the Transformer block. The ConvNeXt block [23] is constructed from the Residual block by following the designs of the Transformer block without self-attention. Although the ConvNeXt block enhances network performance to some extent, inefficient components make it hard for the model to capture high-level structures, such as $7 \times 7$
depthwise convolution, GELU, and LayerNorm. To overcome this, transformer blocks have been proposed to capture high-level structures. Transformers have achieved excellent results in various computer vision tasks, and their inherent superiority is largely conferred by the attention-based token mixing operation [28]. However, these methods merely focus on model complexity and standard accuracy. The results demonstrate that such models are vulnerable to adversarial attacks [52, 53], which is intolerable in clinically relevant medical use cases.
In addition, long-range dependencies are crucial for medical images because the background in medical images is generally scattered [54]; learning long-range dependencies between the pixels corresponding to the background can therefore help the network prevent misclassification [55]. There is thus still scope for improvement in capturing global context, a shortcoming of prior methods that do not focus on this aspect for medical image classification. To address both adversarial robustness and accurate medical classification, we introduce an Efficient Convolution Block (ECB) that achieves the outstanding performance of a transformer-based block while retaining the deployment advantage of the Residual block. As illustrated in Figure 3 (a), the ECB follows the hybrid architecture, which has been confirmed as necessary for utilizing multi-scale information. Meanwhile, an effective attention-based token mixing module is equally important. We design a Locally Feed-Forward Network (LFFN) as an efficient way of introducing locality into the network with depth-wise convolution, and a Multi-Head Convolutional Attention (MHCA) as an effective token mixer. Inspired by the Robust Vision Transformer [56], which analyzed the effect of each Transformer component on robustness, we build the ECB by combining the LFFN and MHCA blocks in the robust paradigm. The proposed ECB can be formulated as follows:
$$
\tilde{z}^{l} = \mathrm{MHCA}\left(z^{l-1}\right) + z^{l-1}
$$

$$
z^{l} = \mathrm{LFFN}\left(\tilde{z}^{l}\right) + \tilde{z}^{l} \tag{1}
$$
where $z^{l-1}$ denotes the input from the $(l-1)$-th block, and $\tilde{z}^l$ and $z^l$ are the outputs of MHCA and of the $l$-th ECB, respectively. We will introduce the LFFN in detail in the next section.
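The residual composition of Eq. (1) can be sketched as follows, with `mhca` and `lffn` as hypothetical stand-ins for the actual modules:

```python
def ecb_forward(z, mhca, lffn):
    """Efficient Convolution Block: two residual sub-layers, as in Eq. (1)."""
    z_tilde = [m + x for m, x in zip(mhca(z), z)]             # MHCA(z) + z
    return [f + x for f, x in zip(lffn(z_tilde), z_tilde)]    # LFFN(z~) + z~

# With identity stand-ins, each residual sub-layer doubles the features:
identity = lambda z: list(z)
print(ecb_forward([1.0, 2.0], identity, identity))  # [4.0, 8.0]
```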
# 3.2.1. Locally Feed-Forward Network
The feed-forward network, which is applied position-wise to a sequence of tokens $\mathbf{Z}$, can be equivalently represented by rearranging the sequence of tokens into a 2D lattice, as shown in Figure 4 (c). The reshaped features are represented as follows:
$$
\mathbf{Z}^{r} = \operatorname{Seq2Img}(\mathbf{Z}), \quad \mathbf{Z}^{r} \in \mathbb{R}^{h \times w \times d} \tag{2}
$$
where $h = H / p$ and $w = W / p$. Seq2Img takes a sequence and converts it into a feature map that can be visualized. The tokens are placed at pixel locations on the feature map, and each token corresponds to one pixel.
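A minimal sketch of the Seq2Img/Img2Seq rearrangement (function names follow the text; tokens are simplified to scalars for illustration):

```python
def seq2img(tokens, h, w):
    """Rearrange a token sequence of length h*w into an h x w 2D lattice."""
    assert len(tokens) == h * w
    return [tokens[row * w:(row + 1) * w] for row in range(h)]

def img2seq(grid):
    """Flatten the 2D lattice back into a token sequence (row-major order)."""
    return [tok for row in grid for tok in row]

z = list(range(6))
print(seq2img(z, 2, 3))                    # [[0, 1, 2], [3, 4, 5]]
print(img2seq(seq2img(z, 2, 3)) == z)      # round trip recovers the sequence
```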

Figure 4: Comparison between feed-forward networks in vision transformers: (a) convolutional feed-forward network, (b) inverted residual block, (c) our final network, which brings an efficient local mechanism into the transformer.
Through this perspective, it is possible to introduce locality into the network by recovering the proximity between tokens. The fully-connected layers could be replaced by $1 \times 1$ convolution layers, i.e.
$$
\mathbf{Y}^{r} = f\left(\mathbf{Z}^{r} \circledast \mathbf{W}_{1}^{r}\right) \circledast \mathbf{W}_{2}^{r} \tag{3}
$$

$$
\mathbf{Y} = \operatorname{Img2Seq}\left(\mathbf{Y}^{r}\right)
$$
where $\mathbf{W}_1^r\in \mathbb{R}^{d\times \gamma d\times 1\times 1}$ and $\mathbf{W}_2^r\in \mathbb{R}^{\gamma d\times d\times 1\times 1}$ are reshaped from $W_{1}$ and $W_{2}$ and denote the kernels of convolutional layers. With Img2Seq, the image feature map is converted back into a token sequence, which is then used by the next self-attention layer by transforming it into the fused token.
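Viewed on the 2D lattice, applying the same weight matrix at every pixel (a $1 \times 1$ convolution) is exactly the position-wise fully-connected layer. A toy pure-Python illustration of this equivalence (names and values are ours):

```python
def linear(token, weight):
    """Apply a d_in -> d_out weight matrix to one token (a list of channels)."""
    return [sum(w * c for w, c in zip(row, token)) for row in weight]

def conv1x1(feature_map, weight):
    """A 1x1 convolution: the same linear map at every spatial position."""
    return [[linear(tok, weight) for tok in row] for row in feature_map]

W = [[1.0, 2.0]]                       # maps 2 channels -> 1 channel
fmap = [[[1.0, 1.0], [0.0, 3.0]]]      # a 1 x 2 lattice of 2-channel tokens
print(conv1x1(fmap, W))                # [[[3.0], [6.0]]]
```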
# 3.3. Local Transformer Block
While local representations are effectively learned through the ECB, capturing global information still needs to be addressed, which is the role of this block. It is well known that transformer blocks can capture low-frequency signals, which are very useful for capturing global information (e.g., global shapes and structures). However, related studies [57] have demonstrated that transformer blocks tend to deteriorate high-frequency information, such as the local texture of objects, to some extent. Signals in different frequency bands must be fused in order to extract the essential and distinct features in a computer vision system [58].
In response to these observations, we develop the Local Transformer Block (LTB) to capture multi-frequency signals with a lightweight, highly efficient mechanism. The LTB works as an effective multi-frequency signal mixer, thus enhancing the overall modeling capability of the network. As shown in Figure 3 (b), the LTB first captures low-frequency signals by utilizing an Efficient
Table 1 Detailed configurations of MedViT variants. C and S denote the number of channels and the stride of convolution for each stage.
<table><tr><td>Stages</td><td>Output size</td><td>Layers</td><td>MedViT-T</td><td>MedViT-S</td><td>MedViT-L</td></tr><tr><td rowspan="4">Stem</td><td rowspan="4">H/4×W/4</td><td rowspan="4">Convolution Layers</td><td colspan="3">Conv 3 × 3, C = 64, S = 2</td></tr><tr><td colspan="3">Conv 3 × 3, C = 32, S = 1</td></tr><tr><td colspan="3">Conv 3 × 3, C = 64, S = 1</td></tr><tr><td colspan="3">Conv 3 × 3, C = 64, S = 2</td></tr><tr><td rowspan="2">Stage 1</td><td rowspan="2">H/4×W/4</td><td>Patch Embedding</td><td colspan="3">Conv 1 × 1, C = 96</td></tr><tr><td>MedViT Block</td><td colspan="3">[ECB × 3,96] × 1</td></tr><tr><td rowspan="3">Stage 2</td><td rowspan="3">H/8×W/8</td><td rowspan="2">Patch Embedding</td><td colspan="3">Avg_pool, S = 2</td></tr><tr><td colspan="3">Conv 1 × 1, C = 192</td></tr><tr><td>MedViT Block</td><td colspan="3">[ECB × 3,192/LTB × 1,256] × 1</td></tr><tr><td rowspan="3">Stage 3</td><td rowspan="3">H/16×W/16</td><td rowspan="2">Patch Embedding</td><td colspan="3">Avg_pool, S = 2</td></tr><tr><td colspan="3">Conv 1 × 1, C = 384</td></tr><tr><td>MedViT Block</td><td>[ECB × 4,384/LTB × 1,512] × 2</td><td>[ECB × 4,384/LTB × 1,512] × 4</td><td>[ECB × 4,384/LTB × 1,512] × 6</td></tr><tr><td rowspan="3">Stage 4</td><td rowspan="3">H/32×W/32</td><td rowspan="2">Patch Embedding</td><td colspan="3">Avg_pool, S = 2</td></tr><tr><td colspan="3">Conv 1 × 1, C = 768</td></tr><tr><td>MedViT Block</td><td colspan="3">[ECB × 2,768/LTB × 1,1024] × 1</td></tr></table>
Self Attention (ESA), which can be formulated as follows:
$$
\begin{array}{l} \operatorname{ESA}(x) = \operatorname{Concat}\left(\operatorname{SA}_{1}(x_{1}), \operatorname{SA}_{2}(x_{2}), \dots, \operatorname{SA}_{h}(x_{h})\right) W^{O} \\ \operatorname{SA}(X) = \operatorname{Attention}\left(X \cdot W^{Q}, \mathrm{P}_{s}\left(X \cdot W^{K}\right), \mathrm{P}_{s}\left(X \cdot W^{V}\right)\right) \end{array} \tag{4}
$$
where $X = [x_{1}, x_{2}, \dots, x_{h}]$ denotes splitting the input feature $X$ into a multi-head form along the channel dimension, $W^{O}$ is an output projection layer, and $h$ is the number of heads. To reduce the spatial resolution of self-attention, SA is derived from linear SRA [59]. Attention applies the linear projections $W^{Q}, W^{K}, W^{V}$ and computes standard attention as $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T / \sqrt{d})V$, in which $d$ is the transformer hidden dimension. The $\mathrm{P}_s$ operation is an avg-pool with stride $s$ that downsamples the spatial dimensions before the attention operation is applied, reducing the computation cost. Furthermore, we observe that the number of channels in the ESA module is also a major determinant of the module's time consumption. With the help of point-wise convolutions, the LTB further accelerates inference by reducing the channel dimension before it passes to the ESA module; a shrinking ratio $r$ is introduced for this purpose. Additionally, the ESA module utilizes Batch Normalization to make deployment extremely efficient.
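To see why the avg-pool $\mathrm{P}_s$ and the shrinking ratio $r$ help, a back-of-the-envelope cost model for forming the attention matrix (a rough multiply-accumulate count that ignores the projection layers; the numbers below are illustrative, not from the paper):

```python
def attention_matrix_cost(n_tokens, dim, pool_stride=1, shrink_ratio=1):
    """Approximate MACs for Q.K^T when keys/values are avg-pooled with
    stride s and channels are reduced by ratio r; s = r = 1 recovers
    plain self-attention."""
    n_kv = n_tokens // (pool_stride * pool_stride)  # pooled spatial tokens
    d = dim // shrink_ratio                          # reduced channel dim
    return n_tokens * n_kv * d

full = attention_matrix_cost(196, 384)
efficient = attention_matrix_cost(196, 384, pool_stride=2, shrink_ratio=2)
print(full // efficient)   # pooling by 2 and shrinking by 2 -> 8x fewer MACs
```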
It is worth noting that the LTB has a multi-frequency configuration designed to function with the ESA and MHCA modules together. We therefore design a new attention mechanism based on efficient convolutional operations to improve the efficiency of the LTB. Inspired by the effective multi-head design in MHSA [36], we build our convolutional attention (CA) with a multi-head paradigm, which jointly attends to information from different representation subspaces at different positions for effective local representation learning. The
proposed MHCA can be formulated as follows:
$$
\operatorname{MHCA}(x) = \operatorname{Concat}\left(\mathrm{CA}_{1}(x_{1}), \mathrm{CA}_{2}(x_{2}), \dots, \mathrm{CA}_{h}(x_{h})\right) W^{O}
$$

$$
\operatorname{CA}(X) = \left(W \cdot \mathrm{T}_{\{i,j\}}\right), \text{ where } \mathrm{T}_{\{i,j\}} \in X \tag{5}
$$
where MHCA captures information from $h$ parallel representation subspaces and CA is single-head convolutional attention. $W$ is a trainable parameter and $\mathrm{T}_{\{i,j\}}$ are adjacent tokens in the input feature $X$. CA is calculated by the inner product of displaced vectors between adjacent tokens $\mathrm{T}_{\{i,j\}}$ and the trainable parameter $W$. While multi-head self-attention (MHSA) in Transformers captures global context, the CA inside our MHCA learns the affinity between different tokens within a local receptive field. Notably, our implementation of MHCA involves a point-wise convolution and a group convolution (multi-head convolution), as shown in Figure 3 (a).
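The multi-head paradigm shared by ESA and MHCA splits the channels into $h$ groups, applies the per-head operator, and concatenates the results (the Concat of Eq. 5, before $W^O$); sketched below with a hypothetical per-head function:

```python
def multi_head(x, num_heads, head_fn):
    """Split channels into num_heads groups, apply head_fn to each group,
    and concatenate the per-head outputs."""
    d = len(x)
    assert d % num_heads == 0
    size = d // num_heads
    out = []
    for i in range(num_heads):
        out.extend(head_fn(x[i * size:(i + 1) * size]))
    return out

# Per-head stand-in: reverse the channels inside each head.
print(multi_head([1, 2, 3, 4], 2, lambda h: h[::-1]))   # [2, 1, 4, 3]
```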
The output features of the MHCA and the ESA are concatenated to produce a mix of high- and low-frequency features. As a final step, an MLP layer is appended at the end to extract the essential and distinct features. In brief, the implementation of the LTB can be summarized as follows:
$$
\bar{z}^{l} = \operatorname{Proj}\left(z^{l-1}\right)
$$

$$
\ddot{z}^{l} = \operatorname{ESA}\left(\bar{z}^{l}\right) + \bar{z}^{l}
$$

$$
\dot{z}^{l} = \operatorname{Proj}\left(\ddot{z}^{l}\right)
$$

$$
\tilde{z}^{l} = \operatorname{MHCA}\left(\dot{z}^{l}\right) + \dot{z}^{l}
$$

$$
\hat{z}^{l} = \operatorname{Concat}\left(\ddot{z}^{l}, \tilde{z}^{l}\right)
$$

$$
z^{l} = \operatorname{LFFN}(\hat{z}^{l}) + \hat{z}^{l} \tag{6}
$$
where $z^l$ is the output of the $l$-th LTB, and $\tilde{z}^l$ and $\ddot{z}^l$ denote the outputs of MHCA and ESA, respectively. Proj refers to the point-wise convolution layer that projects the channel dimension. To provide efficient norm and activation layers for the LTB, BN and ReLU are uniformly adopted instead of LN and GELU. A major advantage of the LTB over traditional transformer blocks is its ability to capture and mix multi-frequency information with such a lightweight mechanism, which greatly enhances model performance.
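Putting Eq. (6) together, the LTB data flow can be sketched with hypothetical stand-in modules (an illustration of the wiring only, not the actual layers):

```python
def ltb_forward(z, proj, esa, mhca, lffn, concat):
    """Local Transformer Block data flow, following Eq. (6)."""
    z_bar = proj(z)                                          # channel projection
    z_dd = [a + b for a, b in zip(esa(z_bar), z_bar)]        # ESA + residual
    z_dot = proj(z_dd)                                       # second projection
    z_tilde = [a + b for a, b in zip(mhca(z_dot), z_dot)]    # MHCA + residual
    z_hat = concat(z_dd, z_tilde)                            # mix both branches
    return [a + b for a, b in zip(lffn(z_hat), z_hat)]       # LFFN + residual

identity = lambda z: list(z)
out = ltb_forward([1.0], identity, identity, identity, identity,
                  lambda a, b: a + b)
print(out)   # [4.0, 8.0]
```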
# 3.4. Transformer Augmentation Block
Image augmentation techniques apply geometric transformations such as rotating, cropping, and flipping, or color-space transformations such as edge enhancement, grayscale transformation, and color jittering, to an input image. Data augmentation is an important strategy for ViTs because they suffer from data scarcity when trained on relatively small datasets, and strong data augmentation offers a data-space solution to the problem of limited data [60]. Moreover, rich data augmentation also helps with robustness and generalization, as verified in previous works [17, 61, 62].

Figure 5: MedMNIST-2D Classification. MedMNIST is a collection of 12 pre-processed medical image datasets (the panels show samples from BloodMNIST, ChestMNIST, OctMNIST, PneumoniaMNIST, BreastMNIST, DermaMNIST, PathMNIST, TissueMNIST, RetinaMNIST, and the axial, sagittal, and coronal OrganMNIST variants). It is designed to be educational, standardized, diverse and lightweight, and can be used as a general classification benchmark in medical image analysis.

To improve the diversity of the augmented training data, we introduce the Patch Momentum Changer (PMC) augmentation for ViTs, which blends feature normalization with data augmentation at training time for a pair of images at the token level. Our motivation stems from the fact that all layers of ViTs have global receptive fields, yet they are also concerned with the local relationships between tokens. We believe the traditional augmentations, which randomly transform the whole image to enlarge the data, are sufficient for providing global context; however, for ViTs that naturally capture global receptive fields, such conventional augmentations are less beneficial. To increase the interactions between tokens, PMC laterally fuses intermediate feature maps and targets across two training samples: the feature moments of one instance are combined with the normalized features of another at the token level. This asymmetric composition in feature space helps the Transformer improve robustness and generalization when predicting on medical image datasets.
At each stage, after feeding the word-level features $\hat{Z}^l$ into the Locally Feed-Forward Network, PMC takes as input the feature representation $Z^l$, a 2D tensor. Similar to CutMix and Mixup, the features of two random training samples are fused together with their labels, while performing feature normalization. Specifically, PMC combines the normalized feature map of one sample with the feature moments of another. This non-symmetric combination at the word level aims to create robust targets and smooth out the decision boundary of the trained classifier. To normalize features at different stages inside the MedViT model, a function $F$ is defined. This function takes the word-level features $Z_i^l$ of the $i$-th input $x_i$ at stage $l$ and generates three outputs: the first moment $\mu_i$, the second moment $\sigma_i$, and the normalized word-level features $|Z_i^l|$, as follows:
$$
F\left(Z_{i}^{l}\right) = \left(\mu_{i}^{l}, \sigma_{i}^{l}, \left| Z_{i}^{l} \right|\right). \tag{7}
$$
Function $F$ calculates the first and second moments after feeding the word-level feature $Z_{i}^{l}$ through the LFFN module; this operation closely resembles the PONO function [16] in the realm of CNNs. To employ it in the MedViT model, we randomly select two different images $x_{A}$ and $x_{B}$. The operation can be applied at each stage of the model, but it is most effective at the first stage. Consequently, we drop the $l$ superscript for notational simplicity. Augmented features are generated from the normalized word-level features of the first image $(x_{A}: F(Z_{A}) = (\mu_{A}, \sigma_{A}, |Z_{A}|))$ combined with the moments of the second image $(x_{B}: F(Z_{B}) = (\mu_{B}, \sigma_{B}, |Z_{B}|))$ as follows:
$$
Z_{A}^{(B)} = \sigma_{B} \frac{\left| Z_{A} \right| - \mu_{A}}{\sigma_{A}} + \mu_{B}, \tag{8}
$$
where $Z_A^{(B)}$ are the augmented features and $|Z_A|, \mu_A, \sigma_A$ are the normalized word-level features, the first moment, and the second moment of image $A$. In addition, $\mu_B$ and $\sigma_B$ are the first and second moments of image $B$. The model continues the forward pass from stage $l$ to the output using these features $Z_A^{(B)}$. The loss function is modified to force the model to pay attention to the injected features of image $x_B$. The new mixed loss function is:
$$
\lambda \cdot \mathcal{L}\left(Z_{A}^{(B)}, y_{A}\right) + (1 - \lambda) \cdot \mathcal{L}\left(Z_{A}^{(B)}, y_{B}\right), \tag{9}
$$
where $\lambda \in (0,1)$ is a fixed coefficient setting the combination of the features and the moments, and $(y_A, y_B)$ are the labels of the two images, which are combined in the final loss.
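Eqs. (7)-(9) can be sketched end-to-end in a few lines (a pure-Python illustration with scalar token features, following the standard moment-exchange reading; the `1e-8` stabilizer and the toy criterion are our assumptions, not from the paper):

```python
import math

def extract_moments(z):
    """F(Z) of Eq. (7): first moment, second moment, normalized features."""
    mu = sum(z) / len(z)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in z) / len(z)) + 1e-8
    return mu, sigma, [(v - mu) / sigma for v in z]

def pmc_mix(z_a, z_b):
    """Eq. (8): re-scale A's normalized features with B's moments."""
    mu_a, sig_a, _ = extract_moments(z_a)
    mu_b, sig_b, _ = extract_moments(z_b)
    return [sig_b * (v - mu_a) / sig_a + mu_b for v in z_a]

def pmc_loss(criterion, output, y_a, y_b, lam=0.9):
    """Eq. (9): mix the losses of the two labels with a fixed lambda."""
    return lam * criterion(output, y_a) + (1 - lam) * criterion(output, y_b)

mixed = pmc_mix([0.0, 2.0], [10.0, 30.0])
print(extract_moments(mixed)[:2])   # the mixed features carry B's moments
```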
PMC is performed entirely at the feature level inside the transformer vision network and can be readily combined with other augmentation methods that operate on
Table 2
Overview of MedMNIST v2 [63] dataset. MedMNIST2D consists of 12 biomedical datasets of 2D images. Some of the notations used in datasets include OR: Ordinal Regression. MC: Multi-Class. ML: Multi-Label. BC: Binary-Class.
<table><tr><td>Name</td><td>Data Modality</td><td>Task (# Classes / Labels)</td><td># Samples</td><td># Training / Validation / Test</td></tr><tr><td colspan="5">MedMNIST2D</td></tr><tr><td>ChestMNIST</td><td>Chest X-Ray</td><td>ML (14) BC (2)</td><td>112,120</td><td>78,468 / 11,219 / 22,433</td></tr><tr><td>PathMNIST</td><td>Colon Pathology</td><td>MC (9)</td><td>107,180</td><td>89,996 / 10,004 / 7,180</td></tr><tr><td>OCTMNIST</td><td>Retinal OCT</td><td>MC (4)</td><td>109,309</td><td>97,477 / 10,832 / 1,000</td></tr><tr><td>DermaMNIST</td><td>Dermatoscope</td><td>MC (7)</td><td>10,015</td><td>7,007 / 1,003 / 2,005</td></tr><tr><td>RetinaMNIST</td><td>Fundus Camera</td><td>OR (5)</td><td>1,600</td><td>1,080 / 120 / 400</td></tr><tr><td>PneumoniaMNIST</td><td>Chest X-Ray</td><td>BC (2)</td><td>5,856</td><td>4,708 / 524 / 624</td></tr><tr><td>BreastMNIST</td><td>Breast Ultrasound</td><td>BC (2)</td><td>780</td><td>546 / 78 / 156</td></tr><tr><td>TissueMNIST</td><td>Kidney Cortex Microscope</td><td>MC (8)</td><td>236,386</td><td>165,466 / 23,640 / 47,280</td></tr><tr><td>BloodMNIST</td><td>Blood Cell Microscope</td><td>MC (8)</td><td>17,092</td><td>11,959 / 1,712 / 3,421</td></tr><tr><td>OrganAMNIST</td><td>Abdominal CT</td><td>MC (11)</td><td>58,850</td><td>34,581 / 6,491 / 17,778</td></tr><tr><td>OrganCMNIST</td><td>Abdominal CT</td><td>MC (11)</td><td>23,660</td><td>13,000 / 2,392 / 8,268</td></tr><tr><td>OrganSMNIST</td><td>Abdominal CT</td><td>MC (11)</td><td>25,221</td><td>13,940 / 2,452 / 8,829</td></tr></table>
the raw input (pixels or words). We explicitly encourage the transformer to encode better long-range dependency to correctly classify the image with the feature of another image combined inside. We show that our approach can lead to consistent accuracy gain when used in MedViT, and also enhances the adversarial robustness of the transformers. Besides, we evaluate the efficacy of PMC thoroughly across several datasets.
# 4. Experiments
# 4.1. Datasets
The MedMNIST collection includes 12 pre-processed datasets covering CT, X-ray, ultrasound, and OCT images. These datasets are used in various classification tasks, including multi-label, multi-class, binary-class, and ordinal regression. The dataset sizes in this collection vary from around 100 samples to more than 100,000. As shown in Table 2, the diversity of these datasets creates a favorable benchmark for classification tasks. The pre-processing and the splits into training, validation, and test subsets follow [63].
PathMNIST is adapted from a dataset based on Kather et al.'s work [64]. It contains 100,000 image patches manually divided into 9 classes, together with a second dataset of 7,180 non-overlapping image patches covering the classes adipose, background, debris, lymphocytes, mucus, smooth muscle, normal colon mucosa, cancer-associated stroma, and epithelium.
ChestMNIST is adapted from a dataset of chest X-ray images [65] consisting of 112,120 frontal X-ray images from a total of 32,717 patients. It covers 14 disease classes, making it a multi-label dataset in the MedMNIST collection. We used the benchmark standards to split and resize the data.
DermaMNIST is based on HAM10000 [66], a large collection of multi-source dermatoscopic images of common pigmented skin lesions. The dataset consists of 10015 dermatoscopic images of a size of $450 \times 600$ . It consists of 7 diagnostic categories as follows: Melanocytic Nevi
(NV), Melanoma (MEL), Basal Cell Carcinoma (BCC), Actinic Keratoses and Intraepithelial Carcinoma (AKIEC), Benign Keratosis (BKL), Vascular Lesions (VASC), and Dermatofibroma (DF), all formulated as a multi-class classification task.
OCTMNIST is built on a prior dataset [67] of 109,309 valid optical coherence tomography (OCT) images collected for retinal diseases. Four disease types are involved, leading to a multi-class classification task.
PneumoniaMNIST is adapted from a prior dataset [68] of 5,856 pediatric chest X-ray images. The task is binary classification of pneumonia versus normal.
RetinaMNIST is based on DeepDRiD (Deep Diabetic Retinopathy Image Dataset) [69], which provides data from 628 patients, including 1,600 retina fundus images. The task is ordinal regression for 5-level grading of diabetic retinopathy severity.
TissueMNIST is adapted from the Broad Bioimage Benchmark Collection [70]. This dataset is categorized into 8 classes of human kidney cortex cells, which contains 236,386 segmented images from different reference tissue specimens.
BloodMNIST is adapted from a prior blood collection [71]. The dataset is categorized into 8 classes and contains a total of 17,092 normal blood cell images.
BreastMNIST is based on a dataset [72] of 780 breast ultrasound images categorized into 3 classes: benign, malignant, and normal. Because low-resolution images are used, the task is simplified to binary classification by combining normal and benign as positive and classifying them against malignant as negative.
OrganMNIST {Axial, Coronal, Sagittal} is taken from 3D computed tomography (CT) images of the Liver Tumor Segmentation Benchmark (LiTS) [71]. To obtain the organ labels, bounding-box annotations of 11 body organs from another study are used [70]. The Hounsfield Units (HU) of the 3D images are transformed to grayscale with an abdominal window, and 2D images are then produced by selecting slices in the axial,

Figure 6: Visual inspection of MedViT-T and ResNet-18 using Grad-CAM on MedMNIST-2D datasets. The green rectangles are used to show a specific part of the image that contains information relevant to the diagnosis or analysis of a medical condition, where the superiority of our proposed method can be clearly seen.
coronal, and sagittal directions within the bounding boxes of the 3D images. The view is the only difference between the three OrganMNIST variants derived from LiTS. The images are resized to $1 \times 28 \times 28$ for multi-class classification of 11 body organs. The training and validation sets comprise 115 and 16 CT scans from the source training set, respectively, and the 70 CT scans from the source test set serve as the test set.
# 4.2. Implementation Details
Our medical image classification experiments are conducted on the MedMNIST collection, which is composed of 12 standardized datasets from comprehensive medical resources covering a range of primary data modalities representative of medical images. To make a fair and objective comparison, we follow the same training settings as MedMNISTv2 [63] without any changes. Specifically, we train all of the MedViT variants for 100 epochs on NVIDIA 2080Ti GPUs with a batch size of 128. The images are first resized to $224 \times 224$ pixels. We employ the AdamW optimizer [73] with an initial learning rate of 0.001, decayed by a factor of 0.1 at epochs 50 and 75. Moreover, we introduce MedViT models at three network sizes, MedViT-T, MedViT-S, and MedViT-L, as shown in Table 1. All of them adopt the best settings investigated above and are trained separately for each dataset. For MedViT*, we additionally apply the PMC augmentation in the training phase, which blends feature normalization with data augmentation for each input image patch.
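The step schedule described above (decay by 0.1 at epochs 50 and 75) can be written as a small helper; this is our sketch of the stated settings, not the authors' training script:

```python
def learning_rate(epoch, base_lr=1e-3, milestones=(50, 75), gamma=0.1):
    """Step decay: multiply the base LR by gamma at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

print([learning_rate(e) for e in (0, 50, 75)])
```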
# 4.3. Evaluation metric
We report Accuracy (ACC) and Area under the ROC Curve (AUC) as the standard evaluation metrics. In contrast to AUC, which is a threshold-free metric that evaluates continuous prediction scores, ACC is a threshold-based metric that evaluates discrete prediction labels; ACC is therefore more sensitive to class discrepancy than AUC. Because our experiments cover many datasets of different sizes and data variety, ACC and AUC together serve as comprehensive metrics. Although there are many other metrics, for a fair comparison we select ACC and AUC as reported for the benchmark methods in the original publications [63, 74]. We report ACC and AUC for each dataset in Table 3. Similar to [63], we average the results over MedMNIST2D and report the average AUC and ACC scores in Table 5.
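For reference, the threshold-free AUC can be computed from prediction scores by rank comparison, while ACC thresholds them; a minimal sketch for the binary case (the toy scores below are illustrative):

```python
def accuracy(labels, scores, threshold=0.5):
    """Threshold-based ACC on discretized predictions."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(labels, scores):
    """Threshold-free AUC: probability that a positive outranks a negative,
    with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.6, 0.4, 0.8]
print(accuracy(y, s), auc(y, s))   # 0.5 0.75
```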
# 5. Evaluation Results
# 5.1. Results on Each Dataset
Table 3 compares the proposed method with previous state-of-the-art (SOTA) methods in terms of AUC and ACC on each dataset of MedMNIST-2D. MedViT outperforms previous SOTA methods by a large margin. Compared to AutoML methods, our MedViT-S shows superior learning ability on both evaluation metrics, with an increase of $2.3\%$ (AUC) and $3.0\%$ (ACC) on RetinaMNIST over Google AutoML Vision, and an increase of $1.1\%$ (AUC) and $2.8\%$ (ACC) on TissueMNIST over AutoKeras. Concretely,
Table 3
Comparison results of the proposed method on MedMNIST2D in terms of AUC and ACC. A white background denotes CNN-based and AutoML methods, while the proposed MedViT models are colored in blue. Blue also indicates the best result and red the second-best.
<table><tr><td rowspan="2">Methods</td><td colspan="2">PathMNIST</td><td colspan="2">ChestMNIST</td><td colspan="2">DermaMNIST</td><td colspan="2">OCTMNIST</td><td colspan="2">PneumoniaMNIST</td><td colspan="2">RetinaMNIST</td></tr><tr><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td></tr><tr><td>ResNet-18 (28) [20]</td><td>0.983</td><td>0.907</td><td>0.768</td><td>0.947</td><td>0.917</td><td>0.735</td><td>0.943</td><td>0.743</td><td>0.944</td><td>0.854</td><td>0.717</td><td>0.524</td></tr><tr><td>ResNet-18 (224) [20]</td><td>0.989</td><td>0.909</td><td>0.773</td><td>0.947</td><td>0.920</td><td>0.754</td><td>0.958</td><td>0.763</td><td>0.956</td><td>0.864</td><td>0.710</td><td>0.493</td></tr><tr><td>ResNet-50 (28) [20]</td><td>0.990</td><td>0.911</td><td>0.769</td><td>0.947</td><td>0.913</td><td>0.735</td><td>0.952</td><td>0.762</td><td>0.948</td><td>0.854</td><td>0.726</td><td>0.528</td></tr><tr><td>ResNet-50 (224) [20]</td><td>0.989</td><td>0.892</td><td>0.773</td><td>0.948</td><td>0.912</td><td>0.731</td><td>0.958</td><td>0.776</td><td>0.962</td><td>0.884</td><td>0.716</td><td>0.511</td></tr><tr><td>auto-sklearn [75]</td><td>0.934</td><td>0.716</td><td>0.649</td><td>0.779</td><td>0.902</td><td>0.719</td><td>0.887</td><td>0.601</td><td>0.942</td><td>0.855</td><td>0.690</td><td>0.515</td></tr><tr><td>AutoKeras [76]</td><td>0.959</td><td>0.834</td><td>0.742</td><td>0.937</td><td>0.915</td><td>0.749</td><td>0.955</td><td>0.763</td><td>0.947</td><td>0.878</td><td>0.719</td><td>0.503</td></tr><tr><td>Google AutoML [77]</td><td>0.944</td><td>0.728</td><td>0.778</td><td>0.948</td><td>0.914</td><td>0.768</td><td>0.963</td><td>0.771</td><td>0.991</td><td>0.946</td><td>0.750</td><td>0.531</td></tr><tr><td>MedViT-T (224)</td><td>0.994</td><td>0.938</td><td>0.786</td><td>0.956</td><td>0.914</td><td>0.768</td><td>0.961</td><td>0.767</td><td>0.993</td><td>0.949</td><td>0.752</td><td>0.534</td></tr><tr><td>MedViT-S (224)</td><td>0.993</td><td>0.942</td><td>0.791</td><td>0.954</td><td>0.937</td><td>0.780</td><td>0.960</td><td>0.782</td><td>0.995</td><td>0.961</td><td>0.773</td><td>0.561</td></tr><tr><td>MedViT-L (224)</td><td>0.984</td><td>0.933</td><td>0.805</td><td>0.959</td><td>0.920</td><td>0.773</td><td>0.945</td><td>0.761</td><td>0.991</td><td>0.921</td><td>0.754</td><td>0.552</td></tr><tr><td rowspan="2">Methods</td><td colspan="2">BreastMNIST</td><td colspan="2">BloodMNIST</td><td colspan="2">TissueMNIST</td><td colspan="2">OrganAMNIST</td><td colspan="2">OrganCMNIST</td><td colspan="2">OrganSMNIST</td></tr><tr><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td><td>AUC</td><td>ACC</td></tr><tr><td>ResNet-18 (28) [20]</td><td>0.901</td><td>0.863</td><td>0.998</td><td>0.958</td><td>0.930</td><td>0.676</td><td>0.997</td><td>0.935</td><td>0.992</td><td>0.900</td><td>0.972</td><td>0.782</td></tr><tr><td>ResNet-18 (224) [20]</td><td>0.891</td><td>0.833</td><td>0.998</td><td>0.963</td><td>0.933</td><td>0.681</td><td>0.998</td><td>0.951</td><td>0.994</td><td>0.920</td><td>0.974</td><td>0.778</td></tr><tr><td>ResNet-50 (28) [20]</td><td>0.857</td><td>0.812</td><td>0.997</td><td>0.956</td><td>0.931</td><td>0.680</td><td>0.997</td><td>0.935</td><td>0.992</td><td>0.905</td><td>0.972</td><td>0.770</td></tr><tr><td>ResNet-50 (224) [20]</td><td>0.866</td><td>0.842</td><td>0.997</td><td>0.950</td><td>0.932</td><td>0.680</td><td>0.998</td><td>0.947</td><td>0.993</td><td>0.911</td><td>0.975</td><td>0.785</td></tr><tr><td>auto-sklearn [75]</td><td>0.836</td><td>0.803</td><td>0.984</td><td>0.878</td><td>0.828</td><td>0.532</td><td>0.963</td><td>0.762</td><td>0.976</td><td>0.829</td><td>0.945</td><td>0.672</td></tr><tr><td>AutoKeras [76]</td><td>0.871</td><td>0.831</td><td>0.998</td><td>0.961</td><td>0.941</td><td>0.703</td><td>0.994</td><td>0.905</td><td>0.990</td><td>0.879</td><td>0.974</td><td>0.813</td></tr><tr><td>Google AutoML [77]</td><td>0.919</td><td>0.861</td><td>0.998</td><td>0.966</td><td>0.924</td><td>0.673</td><td>0.990</td><td>0.886</td><td>0.988</td><td>0.877</td><td>0.964</td><td>0.749</td></tr><tr><td>MedViT-T (224)</td><td>0.934</td><td>0.896</td><td>0.996</td><td>0.950</td><td>0.943</td><td>0.703</td><td>0.995</td><td>0.931</td><td>0.991</td><td>0.901</td><td>0.972</td><td>0.789</td></tr><tr><td>MedViT-S (224)</td><td>0.938</td><td>0.897</td><td>0.997</td><td>0.951</td><td>0.952</td><td>0.731</td><td>0.996</td><td>0.928</td><td>0.993</td><td>0.916</td><td>0.987</td><td>0.805</td></tr><tr><td>MedViT-L (224)</td><td>0.929</td><td>0.883</td><td>0.996</td><td>0.954</td><td>0.935</td><td>0.699</td><td>0.997</td><td>0.943</td><td>0.994</td><td>0.922</td><td>0.973</td><td>0.806</td></tr></table>
MedViT steadily improves performance on the visual classification tasks in the MedMNIST-2D benchmark, particularly for PathMNIST, ChestMNIST, DermaMNIST, PneumoniaMNIST and BreastMNIST.
Although an architecture designed for one specific image format can be more accurate in that area, we have designed MedViT by combining efficient blocks that extract both local and global features for generalized medical image classification. Moreover, MedMNIST-2D contains different types of images, including CT, ultrasound, X-ray, and OCT, in colour or grayscale and with different medical content. The results in Table 3 show that our MedViT classifies medical images well across the MedMNIST datasets. Besides, the efficiency in terms of the number of parameters is reported in Table 5 and discussed in the following sections. These results demonstrate that the proposed MedViT design is effective and generalizes well.
# 5.2. Comparison with State-of-the-art Models
We compare our MedViT with the latest state-of-the-art methods (e.g., ViTs, CNNs and hybrid networks) of similar model size in Table 4, achieving a favorable trade-off between complexity and accuracy. Specifically, among CNN models, our MedViT-T achieves $70.3\%$ Top-1 accuracy, better than EfficientNet-B3 and ResNet-18, which have more parameters. Similarly, MedViT-S achieves $73.1\%$ Top-1 accuracy, $5.1\%$ higher than ResNet-50, $2.6\%$ higher than EfficientNet-B4, and $0.5\%$ higher than
ConvNeXt-T, all well-known CNNs. Moreover, MedViT-L outperforms ConvNeXt-B, which has roughly twice as many parameters as ours, by $0.8\%$. Furthermore, compared to pure ViTs, MedViT-T outperforms PVT-T by a large margin of $6.9\%$ at much lower model complexity, and MedViT-S surpasses Twins-SVT-S by $1\%$ with a similar number of parameters. Finally, compared with recent hybrid methods, MedViT-T beats RVT-Ti by $0.7\%$; MedViT-S improves over CvT-13 by $1.5\%$ at similar complexity; and MedViT-L obtains a $0.6\%$ gain over RVT-B while enjoying lower computational complexity. These experimental results illustrate that the proposed MedViT effectively handles the classification task.
# 5.3. Average performance
We compare our method in terms of average AUC and average ACC over all datasets in Table 5. Our models achieve average AUCs of $93.6\%$, $94.2\%$, and $93.5\%$ and average ACCs of $84\%$, $85.1\%$, and $84.2\%$ across the twelve datasets for MedViT-T, MedViT-S and MedViT-L, respectively. MedViT-S outperforms all the baseline ResNets and AutoML methods in both average AUC and average ACC by a large margin, demonstrating the advantage of vision transformers for classifying medical images.
In Table 5, we also compare the number of parameters of our proposed method with the baseline ResNets. Our MedViT shows great superiority in terms of performance
Table 4 Classification performance of MedViTs compared with recent state-of-the-art methods on TissueMNIST.
<table><tr><td>Network</td><td>Image Size</td><td>Param (M)</td><td>FLOPs (G)</td><td>Top-1 (%)</td></tr><tr><td>ResNet-18 [20]</td><td>224</td><td>11.7</td><td>1.8</td><td>68.1</td></tr><tr><td>EfficientNet-B3 [78]</td><td>300</td><td>12.0</td><td>1.8</td><td>69.0</td></tr><tr><td>DeiT-Ti [79]</td><td>224</td><td>5.7</td><td>1.3</td><td>59.5</td></tr><tr><td>PiT-Ti [29]</td><td>224</td><td>4.9</td><td>0.7</td><td>62.1</td></tr><tr><td>PVT-T [8]</td><td>224</td><td>13.2</td><td>1.9</td><td>63.4</td></tr><tr><td>RVT-Ti [56]</td><td>224</td><td>8.6</td><td>1.3</td><td>69.6</td></tr><tr><td>MedViT-T</td><td>224</td><td>10.8</td><td>1.3</td><td>70.3</td></tr><tr><td>ResNet-50 [20]</td><td>224</td><td>25.6</td><td>4.1</td><td>68.0</td></tr><tr><td>EfficientNet-B4 [78]</td><td>380</td><td>19.3</td><td>4.2</td><td>70.5</td></tr><tr><td>ConvNeXt-T [23]</td><td>224</td><td>29.0</td><td>4.5</td><td>72.6</td></tr><tr><td>DeiT-S [79]</td><td>224</td><td>22.0</td><td>4.6</td><td>67.0</td></tr><tr><td>Swin-T [28]</td><td>224</td><td>29.0</td><td>4.5</td><td>71.7</td></tr><tr><td>PiT-S [29]</td><td>224</td><td>23.5</td><td>2.9</td><td>66.9</td></tr><tr><td>PVT-S [8]</td><td>224</td><td>25.4</td><td>4.0</td><td>66.7</td></tr><tr><td>Twins-SVT-S [80]</td><td>224</td><td>24.0</td><td>2.9</td><td>72.1</td></tr><tr><td>PoolFormer-S36 [81]</td><td>224</td><td>31.2</td><td>5.0</td><td>71.8</td></tr><tr><td>CoaT Tiny [13]</td><td>224</td><td>5.5</td><td>4.4</td><td>69.3</td></tr><tr><td>CvT-13 [12]</td><td>224</td><td>20.1</td><td>4.5</td><td>71.6</td></tr><tr><td>RVT-S [56]</td><td>224</td><td>22.1</td><td>4.7</td><td>71.2</td></tr><tr><td>MedViT-S</td><td>224</td><td>23.6</td><td>4.9</td><td>73.1</td></tr><tr><td>ResNet-152 [20]</td><td>224</td><td>60.2</td><td>11.3</td><td>67.5</td></tr><tr><td>ConvNeXt-B [23]</td><td>224</td><td>88.0</td><td>15.4</td><td>69.1</td></tr><tr><td>DeiT-B [79]</td><td>224</td><td>87.0</td><td>17.5</td><td>66.9</td></tr><tr><td>Swin-B [28]</td><td>224</td><td>87.8</td><td>15.4</td><td>68.5</td></tr><tr><td>PiT-B [29]</td><td>224</td><td>73.8</td><td>12.5</td><td>68.1</td></tr><tr><td>PVT-L [8]</td><td>224</td><td>61.4</td><td>9.8</td><td>66.8</td></tr><tr><td>Twins-SVT-B [80]</td><td>224</td><td>56.0</td><td>8.6</td><td>68.7</td></tr><tr><td>PoolFormer-M36 [81]</td><td>224</td><td>56.1</td><td>8.8</td><td>67.6</td></tr><tr><td>CoaT Small [13]</td><td>224</td><td>22.0</td><td>12.6</td><td>66.5</td></tr><tr><td>CvT-21 [12]</td><td>224</td><td>32.0</td><td>7.1</td><td>67.8</td></tr><tr><td>RVT-B [56]</td><td>224</td><td>86.2</td><td>17.7</td><td>69.3</td></tr><tr><td>MedViT-L</td><td>224</td><td>45.8</td><td>13.4</td><td>69.9</td></tr></table>
Table 5 Average performance comparison in standard metrics of average ACC and average AUC over all MedMNIST-2D.
<table><tr><td rowspan="2">Methods</td><td rowspan="2">Params (M)</td><td colspan="2">Avg.</td></tr><tr><td>AUC</td><td>ACC</td></tr><tr><td>ResNet-18 (28) [20]</td><td>11.2</td><td>0.922</td><td>0.819</td></tr><tr><td>ResNet-18 (224) [20]</td><td>11.2</td><td>0.925</td><td>0.821</td></tr><tr><td>ResNet-50 (28) [20]</td><td>23.5</td><td>0.920</td><td>0.816</td></tr><tr><td>ResNet-50 (224) [20]</td><td>23.5</td><td>0.923</td><td>0.821</td></tr><tr><td>auto-sklearn [75]</td><td>-</td><td>0.878</td><td>0.722</td></tr><tr><td>AutoKeras [76]</td><td>-</td><td>0.917</td><td>0.813</td></tr><tr><td>Google AutoML [77]</td><td>-</td><td>0.927</td><td>0.809</td></tr><tr><td>MedViT-T (224)</td><td>10.2</td><td>0.936</td><td>0.840</td></tr><tr><td>MedViT-S (224)</td><td>23</td><td>0.942</td><td>0.851</td></tr><tr><td>MedViT-L (224)</td><td>45</td><td>0.935</td><td>0.842</td></tr></table>
while the model complexity of ours is on par with baseline ResNets.
# 5.4. Visual inspection of MedViT
To further verify the properties of our MedViT, we apply Grad-CAM [82] to the ESA's output in the last ECB to qualitatively inspect MedViT. We visualize the heat maps of the output features from ResNet-18 and MedViT in Figure 6. Compared with the baseline ResNet-18, our MedViT covers the relevant locations in the images more precisely and attends less to the background. Moreover, MedViT better handles the scale-variance issue, as shown for Derma, Oct and Path: it covers target areas accurately
whether they are small, medium, or large. In the Retina dataset, the heat map of the retinal fundus image shows that our model recognizes the direction and area of the specific lesion well. Our model localizes a focal area of bacterial infection in the heat map of the Chest dataset, and also delineates the multi-focal lesions in the periphery of both upper lungs in the heat map of the Pneumonia dataset, which are typical findings for pneumonia.
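The Grad-CAM computation applied here can be sketched in NumPy; `activations` and `gradients` are stand-ins for the ESA output feature maps and the class-score gradients at that layer, obtained from a forward/backward pass in practice:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM sketch: weight each channel's activation map by its
    global-average-pooled gradient, sum over channels, then apply ReLU.
    activations, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam
```

The normalized map is then upsampled to the input resolution and overlaid on the image to produce heat maps such as those in Figure 6.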
Such observations demonstrate that introducing the intrinsic inductive biases (IBs) of locality and scale-invariance from convolutions into transformers helps MedViT learn representations that simultaneously capture high-quality, multi-frequency signals. Compared to the conventional approach, our method significantly mitigates background bias.
# 5.5. Augmentation and Robustness Evaluation
To evaluate our model against adversarial attack benchmarks, we adopt a common gradient-based attack, FGSM [83], and a powerful multi-step attack, PGD [41], with a step size of $4/255 \approx 0.016$ and $n_{iter} = 5$ steps. For both attackers, the magnitude of the adversarial noise is $\varepsilon = 8/255 \approx 0.031$. The results in Table 6 demonstrate that the different blocks of the MedViT architecture have a strong correlation with adversarial robustness. The proposed MedViT-T and MedViT-T* exhibit high adversarial robustness under both attack benchmarks. This is ascribed to the Efficient Convolution Block and the Patch Moment Changer, which aim to improve the robustness of medical diagnosis. In the ECB, adding depth-wise convolution into the feed-forward network helps the model better capture local dependencies within tokens. Moreover, the PMC module applies implicit data augmentation at the token level, which forces ViTs to pay special attention to local features at different stages. We show empirically that, using these blocks, MedViT consistently improves robustness and classification accuracy across medical datasets.
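Under these settings, the two attacks can be sketched with NumPy; `grad_fn`, which returns the gradient of the loss with respect to the input, is a hypothetical stand-in for a backward pass through the network:

```python
import numpy as np

def fgsm(x, grad, eps=8/255):
    """Single-step FGSM: move each pixel eps along the sign of the
    loss gradient, then clip to the valid image range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def pgd(x, grad_fn, eps=8/255, alpha=4/255, n_iter=5):
    """Multi-step PGD with the paper's settings (alpha = 4/255, n_iter = 5),
    projecting back into the L-infinity eps-ball after each step."""
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel values
    return x_adv
```

FGSM is the special case of a single unprojected step, which is why PGD with five smaller steps is the stronger attack in Table 6.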
The proposed MedViT-T* model achieves superior performance under both the widely used PGD and FGSM attacks compared with ResNet and the baseline MedViT. In detail, MedViT-T* considerably outperforms its counterparts on TissueMNIST, with gains of $38.4\%$ and $6.1\%$ under the FGSM attack and gains of $30.2\%$ and $7.5\%$ under the PGD attack over ResNet-18 and MedViT-T, respectively. MedViT-T* also achieves outstanding standard performance (ACC) consistently on the four MedMNIST datasets. Specifically, the Transformer Augmentation Block module brings significant improvements ($1.5\%$, $3.1\%$, $1.1\%$ and $0.7\%$) on the Oct, Tissue, Retina and Path MNIST datasets, respectively. This advance is further expanded by our Locally-FeedForward and PMC augmentation. Overall, our MedViT-T* model yields the best accuracy/robustness trade-off.
# 5.6. Ablation Study
We conduct various ablation experiments to investigate the effectiveness of the critical blocks of our architecture. First, we study the impact of the Efficient Convolution Block on robust and clean accuracy in comparison with the most
Table 6
The performance of MedViT-T, MedViT-T* and ResNet-18 on four MedMNIST-2D datasets and two robustness benchmarks. Except for the MedViT-T* architecture, we do not use any specialized modules or additional fine-tuning procedures.
<table><tr><td rowspan="2">Methods</td><td colspan="3">OctMNIST</td><td colspan="3">TissueMNIST</td><td colspan="3">RetinaMNIST</td><td colspan="3">PathMNIST</td></tr><tr><td>ACC</td><td>FGSM</td><td>PGD</td><td>ACC</td><td>FGSM</td><td>PGD</td><td>ACC</td><td>FGSM</td><td>PGD</td><td>ACC</td><td>FGSM</td><td>PGD</td></tr><tr><td>ResNet-18 (224)</td><td>0.763</td><td>0.238</td><td>0.201</td><td>0.681</td><td>0.096</td><td>0.090</td><td>0.493</td><td>0.167</td><td>0.117</td><td>0.909</td><td>0.406</td><td>0.108</td></tr><tr><td>MedViT-T (224)</td><td>0.767</td><td>0.272</td><td>0.249</td><td>0.703</td><td>0.419</td><td>0.317</td><td>0.534</td><td>0.168</td><td>0.145</td><td>0.938</td><td>0.562</td><td>0.224</td></tr><tr><td>MedViT-T* (224)</td><td>0.782</td><td>0.304</td><td>0.297</td><td>0.734</td><td>0.480</td><td>0.392</td><td>0.545</td><td>0.197</td><td>0.180</td><td>0.945</td><td>0.585</td><td>0.245</td></tr></table>
Table 7 Impact of Efficient Convolution Block. Performance of clean accuracy and adversarial robustness under FGSM attack on TissueMNIST.
<table><tr><td rowspan="2">Block type</td><td colspan="2">Model Complexity</td><td rowspan="2">Clean Acc(%)</td><td rowspan="2">Robust Acc(%)</td></tr><tr><td>Params (M)</td><td>Flops (G)</td></tr><tr><td>Residual Block [20]</td><td>10.9</td><td>1.3</td><td>68.1</td><td>22.3</td></tr><tr><td>ConvNeXt Block [23]</td><td>11.2</td><td>1.4</td><td>69.7</td><td>37.5</td></tr><tr><td>PoolFormer Block [81]</td><td>10.7</td><td>1.1</td><td>68.9</td><td>29.1</td></tr><tr><td>LSA Block [80]</td><td>12.7</td><td>2.1</td><td>69.2</td><td>31.8</td></tr><tr><td>ECB (ours)</td><td>10.8</td><td>1.3</td><td>70.3</td><td>41.9</td></tr></table>
Table 8 Effect of PMC in Different Stages. Performance $(\%)$ of clean accuracy and adversarial robustness under FGSM attack on TissueMNIST. Place of PMC is indicated by $\checkmark$ in different stages.
<table><tr><td colspan="3">Augmentations</td><td rowspan="2">Acc</td><td rowspan="2">Rob. Acc</td></tr><tr><td>Stage 1</td><td>Stage 2</td><td>Stage 3</td></tr><tr><td>X</td><td>X</td><td>X</td><td>70.3</td><td>41.9</td></tr><tr><td>✓</td><td>X</td><td>X</td><td>73.4</td><td>48.0</td></tr><tr><td>X</td><td>✓</td><td>X</td><td>72.9</td><td>45.4</td></tr><tr><td>X</td><td>X</td><td>✓</td><td>71.5</td><td>44.1</td></tr></table>
well-known components. Afterwards, we individually evaluate the Patch Moment Changer block at different stages of our architecture. Note that all of our ablation experiments are based on the MedViT-T model on TissueMNIST.
Impact of Efficient Convolution Block. To analyze the effectiveness of our ECB in improving the robustness/accuracy of Transformers, we substitute the ECB in MedViT with well-known blocks, including the ConvNeXt block [23], the Residual block in ResNet [20], the PoolFormer block [81], and the LSA block in Twins [80]. We keep the other components of our architecture unchanged to build different models of similar complexity. As shown in Table 7, our architecture with the ECB achieves the best robustness/accuracy compared with the prior blocks. In particular, the ECB outperforms the ConvNeXt block (runner-up) by $0.6\%$ in clean accuracy and $4.4\%$ in robust accuracy with lower model complexity.
Effect of PMC in Different Stages. To find the best place for the Patch Moment Changer in our model, we combine the efficient blocks with PMC at different stages, applying PMC to three different stages of MedViT-T on TissueMNIST. As shown in Table 8, PMC works best when applied after the first stage of the 4-stage MedViT-T. We posit that PMC helps transformers capture local information and offers the greatest advantage at the early stages; in contrast, the later stages already contain rich information and are less affected by PMC. In this paper, we therefore adopt PMC augmentation in the first stage.
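As a rough illustration of the moment-based, token-level augmentation that PMC performs, here is a MoEx-style [16] sketch that replaces one sample's per-token feature moments with another's; the exact PMC formulation is given earlier in the paper, and this helper (`moment_exchange`) is our assumption, not the released implementation:

```python
import numpy as np

def moment_exchange(feats_a, feats_b, eps=1e-5):
    """Swap per-token feature moments: normalize sample A's token features,
    then re-scale and re-shift them with sample B's mean and std.
    feats_a, feats_b: arrays of shape (tokens, dim)."""
    mu_a = feats_a.mean(1, keepdims=True)
    sd_a = feats_a.std(1, keepdims=True)
    mu_b = feats_b.mean(1, keepdims=True)
    sd_b = feats_b.std(1, keepdims=True)
    normalized = (feats_a - mu_a) / (sd_a + eps)
    return normalized * sd_b + mu_b  # A's spatial pattern, B's moments
```

Applying such an exchange after the first stage perturbs low-level token statistics while leaving the spatial pattern intact, which matches the intuition above that early stages benefit most from this implicit augmentation.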
# 6. Conclusion
In this paper, we introduce MedViT, a family of novel hybrid CNN-Transformer architectures for medical image classification. Specifically, we combine local
representations and global features using robust components. Furthermore, we devise a novel patch moment changer augmentation that adds rich diversity and affinity to the training data. Experiments show that our MedViT achieves state-of-the-art accuracy and robustness on the standard large-scale collection of 2D biomedical datasets. We hope that our model can encourage more researchers and provide inspiration for future work on realistic medical deployment.
# CRediT authorship contribution statement
Omid Nejati Manzari: Conceptualization, Software, Writing - original draft, Validation, Resources. Hamid Ahmadabadi: Methodology, Writing - review & editing. Hossein Kashiani: Writing - review & editing, Modification for the final layout. Shahriar B. Shokouhi: Supervision, Writing - review & editing. Ahmad Ayatollahi: Supervision, Writing - review & editing.
# References
[1] C.-M. Lo and P.-H. Hung, "Computer-aided diagnosis of ischemic stroke using multi-dimensional image features in carotid color doppler," Computers in Biology and Medicine, vol. 147, p. 105779, 2022.
[2] W. Hu, C. Li, X. Li, M. M. Rahaman, J. Ma, Y. Zhang, H. Chen, W. Liu, C. Sun, Y. Yao et al., "Gashissdb: A new gastric histopathology image dataset for computer aided diagnosis of gastric cancer," Computers in biology and medicine, vol. 142, p. 105207, 2022.
[3] Q. Hu, C. Chen, S. Kang, Z. Sun, Y. Wang, M. Xiang, H. Guan, L. Xia, and S. Wang, "Application of computer-aided detection (cad) software to automatically detect nodules under sdct and ldct scans with different parameters," Computers in Biology and Medicine, vol. 146, p. 105538, 2022.
[4] X. Yang and M. Stamp, "Computer-aided diagnosis of low grade endometrial stromal sarcoma (lgess)," Computers in Biology and Medicine, vol. 138, p. 104874, 2021.
[5] S. Igarashi, Y. Sasaki, T. Mikami, H. Sakuraba, and S. Fukuda, "Anatomical classification of upper gastrointestinal organs under various image capture conditions using alexnet," Computers in Biology and Medicine, vol. 124, p. 103950, 2020.
[6] R. Togo, H. Watanabe, T. Ogawa, and M. Haseyama, "Deep convolutional neural network-based anomaly detection for organ classification in gastric x-ray examination," Computers in biology and medicine, vol. 123, p. 103903, 2020.
[7] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[8] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pyramid vision transformer: A versatile backbone for dense prediction without convolutions,” arXiv preprint arXiv:2102.12122, 2021.
[9] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu et al., "A survey on vision transformer," IEEE transactions on pattern analysis and machine intelligence, 2022.
[10] Q. Dou, D. Coelho de Castro, K. Kamnitsas, and B. Glocker, "Domain generalization via model-agnostic learning of semantic features," Advances in Neural Information Processing Systems, vol. 32, 2019.
[11] Q. Liu, Q. Dou, L. Yu, and P. A. Heng, "Ms-net: multi-site network for improving prostate segmentation with heterogeneous mri data," IEEE transactions on medical imaging, vol. 39, no. 9, pp. 2713-2724, 2020.
[12] H. Wu, B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang, "Cvt: Introducing convolutions to vision transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 22-31.
[13] W. Xu, Y. Xu, T. Chang, and Z. Tu, "Co-scale conv-attentional image transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9981-9990.
[14] J. Guo, K. Han, H. Wu, C. Xu, Y. Tang, C. Xu, and Y. Wang, "Cmt: Convolutional neural networks meet vision transformers," arXiv preprint arXiv:2107.06263, 2021.
[15] L. Ma and L. Liang, “A regularization method to improve adversarial robustness of neural networks for ecg signal classification,” Computers in Biology and Medicine, vol. 144, p. 105345, 2022.
[16] B. Li, F. Wu, S.-N. Lim, S. Belongie, and K. Q. Weinberger, “On feature normalization and data augmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12383-12392.
[17] J.-H. Kim, W. Choo, and H. O. Song, “Puzzle mix: Exploiting saliency and local statistics for optimal mixup,” in International Conference on Machine Learning. PMLR, 2020, pp. 5275-5285.
[18] C. Cao, F. Zhou, Y. Dai, and J. Wang, "A survey of mix-based data augmentation: Taxonomy, methods, applications, and explainability," arXiv preprint arXiv:2212.10888, 2022.
[19] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y. Bengio, "Manifold mixup: Better representations by interpolating hidden states," in International Conference on Machine Learning. PMLR, 2019, pp. 6438-6447.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[21] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "Mobilenets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[22] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
[23] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A convnet for the 2020s," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11976-11986.
[24] X. Zhang, X. Zhou, M. Lin, and J. Sun, "Shufflenet: An extremely efficient convolutional neural network for mobile devices," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848-6856.
[25] S. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. Xiang, P. H. Torr et al., "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 6881-6890.
[26] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European Conference on Computer Vision, 2020, pp. 213–229.
[27] L. Yuan, Y. Chen, T. Wang, W. Yu, Y. Shi, Z.-H. Jiang, F. E. Tay, J. Feng, and S. Yan, "Tokens-to-token vit: Training vision transformers from scratch on ImageNet," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 558-567.
[28] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," arXiv preprint arXiv:2103.14030, 2021.
[29] B. Heo, S. Yun, D. Han, S. Chun, J. Choe, and S. J. Oh, "Rethinking spatial dimensions of vision transformers," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 11936-11945.
[30] A. Srinivas, T.-Y. Lin, N. Parmar, J. Shlens, P. Abbeel, and A. Vaswani, "Bottleneck transformers for visual recognition," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 16519-16529.
[31] B. Graham, A. El-Nouby, H. Touvron, P. Stock, A. Joulin, H. Jégou, and M. Douze, "Levit: a vision transformer in convnet's clothing for faster inference," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 12259-12269.
[32] S. Mehta and M. Rastegari, "Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer," arXiv preprint arXiv:2110.02178, 2021.
[33] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "Mobilenetv2: Inverted residuals and linear bottlenecks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510-4520.
[34] Y. Chen, X. Dai, D. Chen, M. Liu, X. Dong, L. Yuan, and Z. Liu, "Mobile-former: Bridging mobilenet and transformer," arXiv preprint arXiv:2108.05895, 2021.
[35] O. N. Manzari, H. Kashiani, H. A. Dehkordi, and S. B. Shokouhi, "Robust transformer with locality inductive bias and feature normalization," Engineering Science and Technology, an International Journal, vol. 38, p. 101320, 2023.
[36] B. Wu, J. Chen, D. Cai, X. He, and Q. Gu, “Do wider neural networks really help adversarial robustness?” Advances in Neural Information Processing Systems, vol. 34, pp. 7054–7067, 2021.
[37] E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel, “A simple way to make neural networks robust against diverse image corruptions,” in European Conference on Computer Vision. Springer, 2020, pp. 53–69.
[38] D. Hendrycks, N. Mu, E. D. Cubuk, B. Zoph, J. Gilmer, and B. Lakshminarayanan, "Augmix: A simple data processing method to improve robustness and uncertainty," arXiv preprint arXiv:1912.02781, 2019.
[39] M. Guo, Y. Yang, R. Xu, Z. Liu, and D. Lin, "When nas meets robustness: In search of robust architectures against adversarial attacks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 631-640.
[40] M. Dong, Y. Li, Y. Wang, and C. Xu, "Adversarially robust neural architectures," arXiv preprint arXiv:2009.00902, 2020.
[41] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
[42] Y. Li, Q. Yu, M. Tan, J. Mei, P. Tang, W. Shen, A. Yuille, and C. Xie, "Shape-texture debiased neural network training," arXiv preprint
arXiv:2010.05981, 2020.
[43] Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, "Self-training with noisy student improves imagenet classification," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 10687-10698.
|
| 434 |
+
[44] S. Ye, K. Xu, S. Liu, H. Cheng, J.-H. Lambrechts, H. Zhang, A. Zhou, K. Ma, Y. Wang, and X. Lin, "Adversarial robustness vs. model compression, or both?" in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 111-120.
|
| 435 |
+
[45] J. Lin, C. Gan, and S. Han, “Defensive quantization: When efficiency meets robustness,” arXiv preprint arXiv:1904.08444, 2019.
|
| 436 |
+
[46] C. Xie, M. Tan, B. Gong, A. Yuille, and Q. V. Le, "Smooth adversarial training," arXiv preprint arXiv:2006.14536, 2020.
|
| 437 |
+
[47] R. Zhang, “Making convolutional networks shift-invariant again,” in International conference on machine learning. PMLR, 2019, pp. 7324–7334.
|
| 438 |
+
[48] C. Vasconcelos, H. Larochelle, V. Dumoulin, N. L. Roux, and R. Goroshin, "An effective anti-aliasing approach for residual networks," arXiv preprint arXiv:2011.10675, 2020.
|
| 439 |
+
[49] R. Shao, Z. Shi, J. Yi, P.-Y. Chen, and C.-J. Hsieh, “On the adversarial robustness of vision transformers,” arXiv preprint arXiv:2103.15670, 2021.
|
| 440 |
+
[50] K. Mahmood, R. Mahmood, and M. Van Dijk, "On the robustness of vision transformers to adversarial examples," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7838-7847.
|
| 441 |
+
[51] S. Bhojanapalli, A. Chakrabarti, D. Glasner, D. Li, T. Unterthiner, and A. Veit, “Understanding robustness of transformers for image classification,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10231–10241.
|
| 442 |
+
[52] G. Bortsova, C. González-Gonzalo, S. C. Wetstein, F. Dubost, I. Katramados, L. Hogeweg, B. Liefers, B. van Ginneken, J. P. Plum, M. Veta et al., "Adversarial attack vulnerability of medical image analysis systems: Unexplored factors," Medical Image Analysis, vol. 73, p. 102141, 2021.
|
| 443 |
+
[53] M. Xu, T. Zhang, Z. Li, M. Liu, and D. Zhang, "Towards evaluating the robustness of deep diagnostic models by adversarial attack," Medical Image Analysis, vol. 69, p. 101977, 2021.
|
| 444 |
+
[54] F. Shamshad, S. Khan, S. W. Zamir, M. H. Khan, M. Hayat, F. S. Khan, and H. Fu, "Transformers in medical imaging: A survey," arXiv preprint arXiv:2201.09873, 2022.
|
| 445 |
+
[55] J. M. J. Valanarasu, P. Oza, I. Hacihaliloglu, and V. M. Patel, "Medical transformer: Gated axial-attention for medical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021, pp. 36-46.
|
| 446 |
+
[56] X. Mao, G. Qi, Y. Chen, X. Li, R. Duan, S. Ye, Y. He, and H. Xue, "Towards robust vision transformer," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12042-12051.
|
| 447 |
+
[57] N. Park and S. Kim, “How do vision transformers work?” arXiv preprint arXiv:2202.06709, 2022.
|
| 448 |
+
[58] L. Kauffmann, S. Ramanoel, and C. Peyrin, “The neural bases of spatial frequency processing during scene perception,” Frontiers in integrative neuroscience, vol. 8, p. 37, 2014.
|
| 449 |
+
[59] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, "Pvtv2: Improved baselines with pyramid vision transformer," arXiv preprint arXiv:2106.13797, 2021.
|
| 450 |
+
[60] G. Chen, P. Peng, L. Ma, J. Li, L. Du, and Y. Tian, "Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 458-467.
|
| 451 |
+
[61] L. Carratino, M. Cissé, R. Jenatton, and J.-P. Vert, "On mixup regularization," arXiv preprint arXiv:2006.06049, 2020.
|
| 452 |
+
[62] S. Chen, E. Dobriban, and J. H. Lee, “A group-theoretic framework for data augmentation,” The Journal of Machine Learning Research, vol. 21, no. 1, pp. 9885–9955, 2020.
|
| 453 |
+
[63] J. Yang, R. Shi, D. Wei, Z. Liu, L. Zhao, B. Ke, H. Pfister, and B. Ni, "Medmnist v2: A large-scale lightweight benchmark for 2d and 3d
|
| 454 |
+
|
| 455 |
+
biomedical image classification," arXiv preprint arXiv:2110.14795, 2021.
|
| 456 |
+
[64] J. N. Kather, J. Krisam, P. Charoentong, T. Luedde, E. Herpel, C.-A. Weis, T. Gaiser, A. Marx, N. A. Valous, D. Ferber et al., “Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study,” *PLoS medicine*, vol. 16, no. 1, p. e1002730, 2019.
|
| 457 |
+
[65] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2097-2106.
|
| 458 |
+
[66] P. Tschandl, C. Rosendahl, and H. Kittler, "The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions," Scientific data, vol. 5, no. 1, pp. 1-9, 2018.
|
| 459 |
+
[67] D. Dataset, "The 2nd diabetic retinopathy-grading and image quality estimation challenge," 2020.
|
| 460 |
+
[68] K. Qi and H. Yang, "Elastic net nonparallel hyperplane support vector machine and its geometrical rationality," IEEE Transactions on Neural Networks and Learning Systems, 2021.
|
| 461 |
+
[69] K. Chen, Y. Mao, H. Lu, C. Zeng, R. Wang, and W.-S. Zheng, "Alleviating data imbalance issue with perturbed input during inference," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021, pp. 407-417.
|
| 462 |
+
[70] V. Ljosa, K. L. Sokolnicki, and A. E. Carpenter, "Annotated high-throughput microscopy image sets for validation." Nature methods, vol. 9, no. 7, pp. 637-637, 2012.
|
| 463 |
+
[71] A. Acevedo, A. Merino, S. Alférez, Á. Molina, L. Boldú, and J. Rodellar, “A dataset of microscopic peripheral blood cell images for development of automatic recognition systems,” Data in brief, vol. 30, 2020.
|
| 464 |
+
[72] W. Al-Dhabyani, M. Gomaa, H. Khaled, and A. Fahmy, “Dataset of breast ultrasound images,” Data in brief, vol. 28, p. 104863, 2020.
|
| 465 |
+
[73] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.
|
| 466 |
+
[74] J. Yang, R. Shi, and B. Ni, “Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis,” in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021, pp. 191–195.
|
| 467 |
+
[75] M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter, "Efficient and robust automated machine learning," Advances in neural information processing systems, vol. 28, 2015.
|
| 468 |
+
[76] H. Jin, Q. Song, and X. Hu, "Auto-keras: An efficient neural architecture search system," in Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019, pp. 1946-1956.
|
| 469 |
+
[77] E. Bisong, "Google automl: cloud vision," in Building Machine Learning and Deep Learning Models on Google Cloud Platform. Springer, 2019, pp. 581-598.
|
| 470 |
+
[78] M. Tan and Q. Le, "Efficientnet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning, 2019, pp. 6105-6114.
|
| 471 |
+
[79] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou, “Training data-efficient image transformers & distillation through attention,” in International Conference on Machine Learning, 2021, pp. 10347-10357.
|
| 472 |
+
[80] X. Chu, Z. Tian, Y. Wang, B. Zhang, H. Ren, X. Wei, H. Xia, and C. Shen, "Twins: Revisiting the design of spatial attention in vision transformers," arXiv preprint arXiv:2104.13840, 2021.
|
| 473 |
+
[81] W. Yu, M. Luo, P. Zhou, C. Si, Y. Zhou, X. Wang, J. Feng, and S. Yan, “Metaformer is actually what you need for vision,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10819–10829.
|
| 474 |
+
[82] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-cam: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618-626.
|
| 475 |
+
|
| 476 |
+
[83] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
|
2302.09xxx/2302.09462/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7aa6f1b632d0b948de1c027e193f3ed8163f5b1238cd7533ff10fb4b6671b4d
size 1177075
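The `version`/`oid`/`size` lines above are not the zip archive itself but a Git LFS pointer file; a client resolves the pointer to the real object and checks the download against the recorded sha256 digest and byte size. A minimal sketch of that check (the helper names are illustrative, not part of any LFS tooling):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_object(pointer: dict, data: bytes) -> bool:
    """Check a downloaded blob against the pointer's oid (sha256) and size."""
    algo, _, digest = pointer["oid"].partition(":")
    if algo != "sha256":  # Git LFS pointers currently use sha256 oids
        raise ValueError(f"unsupported oid algorithm: {algo}")
    return hashlib.sha256(data).hexdigest() == digest and len(data) == int(pointer["size"])

# Pointer contents taken verbatim from the images.zip entry above.
POINTER_TEXT = """version https://git-lfs.github.com/spec/v1
oid sha256:f7aa6f1b632d0b948de1c027e193f3ed8163f5b1238cd7533ff10fb4b6671b4d
size 1177075"""

pointer = parse_lfs_pointer(POINTER_TEXT)
```

The same check applies to every `_origin.pdf` and `images.zip` entry in this batch, since each is committed as an LFS pointer rather than the binary payload.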
2302.09xxx/2302.09462/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_content_list.json
ADDED
@@ -0,0 +1,1915 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Stochastic Generative Flow Networks",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
302,
|
| 8 |
+
103,
|
| 9 |
+
694,
|
| 10 |
+
123
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Ling Pan\\*1,2",
|
| 17 |
+
"bbox": [
|
| 18 |
+
115,
|
| 19 |
+
176,
|
| 20 |
+
208,
|
| 21 |
+
194
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Dinghuai Zhang*1,2",
|
| 28 |
+
"bbox": [
|
| 29 |
+
250,
|
| 30 |
+
176,
|
| 31 |
+
394,
|
| 32 |
+
194
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Moksh Jain<sup>1,2</sup>",
|
| 39 |
+
"bbox": [
|
| 40 |
+
438,
|
| 41 |
+
176,
|
| 42 |
+
542,
|
| 43 |
+
191
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Longbo Huang",
|
| 50 |
+
"bbox": [
|
| 51 |
+
584,
|
| 52 |
+
176,
|
| 53 |
+
700,
|
| 54 |
+
194
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Yoshua Bengio $^{1,2,4}$",
|
| 61 |
+
"bbox": [
|
| 62 |
+
744,
|
| 63 |
+
176,
|
| 64 |
+
878,
|
| 65 |
+
194
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "$^{1}$ Mila - Québec AI Institute",
|
| 72 |
+
"bbox": [
|
| 73 |
+
403,
|
| 74 |
+
207,
|
| 75 |
+
593,
|
| 76 |
+
223
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "$^{2}$ Université de Montréal",
|
| 83 |
+
"bbox": [
|
| 84 |
+
416,
|
| 85 |
+
224,
|
| 86 |
+
581,
|
| 87 |
+
238
|
| 88 |
+
],
|
| 89 |
+
"page_idx": 0
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"text": "3Tsinghua University",
|
| 94 |
+
"bbox": [
|
| 95 |
+
423,
|
| 96 |
+
239,
|
| 97 |
+
571,
|
| 98 |
+
253
|
| 99 |
+
],
|
| 100 |
+
"page_idx": 0
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"text": "4 CIFAR AI Chair",
|
| 105 |
+
"bbox": [
|
| 106 |
+
436,
|
| 107 |
+
253,
|
| 108 |
+
559,
|
| 109 |
+
267
|
| 110 |
+
],
|
| 111 |
+
"page_idx": 0
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"text": "Abstract",
|
| 116 |
+
"text_level": 1,
|
| 117 |
+
"bbox": [
|
| 118 |
+
247,
|
| 119 |
+
311,
|
| 120 |
+
326,
|
| 121 |
+
328
|
| 122 |
+
],
|
| 123 |
+
"page_idx": 0
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"type": "text",
|
| 127 |
+
"text": "Generative Flow Networks (or GFlowNets for short) are a family of probabilistic agents that learn to sample complex combinatorial structures through the lens of \"inference as control\". They have shown great potential in generating high-quality and diverse candidates from a given energy landscape. However, existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics, which can limit their applicability. To overcome this challenge, this paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments. By decomposing state transitions into two steps, Stochastic GFlowNets isolate environmental stochasticity and learn a dynamics model to capture it. Extensive experimental results demonstrate that Stochastic GFlowNets offer significant advantages over standard GFlowNets as well as MCMC- and RL-based approaches, on a variety of standard benchmarks with stochastic dynamics.",
|
| 128 |
+
"bbox": [
|
| 129 |
+
115,
|
| 130 |
+
358,
|
| 131 |
+
455,
|
| 132 |
+
676
|
| 133 |
+
],
|
| 134 |
+
"page_idx": 0
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"type": "text",
|
| 138 |
+
"text": "1 INTRODUCTION",
|
| 139 |
+
"text_level": 1,
|
| 140 |
+
"bbox": [
|
| 141 |
+
87,
|
| 142 |
+
702,
|
| 143 |
+
282,
|
| 144 |
+
718
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "Recently, Generative Flow Networks [GFlowNets; Bengio et al., 2021a,b] have been successfully applied to a wide variety of tasks, including molecule discovery [Bengio et al., 2021a, Jain et al., 2022b], biological sequence design [Jain et al., 2022a], and robust scheduling [Zhang et al., 2023a]. GFlowNets learn policies to generate objects $x \\in \\mathcal{X}$ sequentially, and are related to Monte-Carlo Markov chain (MCMC) methods [Metropolis et al., 1953, Hastings, 1970, Andrieu et al., 2003], generative models [Goodfellow et al., 2014, Ho et al., 2020], and amortized variational inference [Kingma and Welling, 2013]. The sequential process",
|
| 151 |
+
"bbox": [
|
| 152 |
+
84,
|
| 153 |
+
734,
|
| 154 |
+
487,
|
| 155 |
+
901
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "of generating an object following a policy bears a close resemblance to reinforcement learning [RL; Sutton and Barto, 2018]. Contrary to the typical reward-maximizing policy in RL [Mnih et al., 2015, Lillicrap et al., 2015, Haarnoja et al., 2017, Fujimoto et al., 2018, Haarnoja et al., 2018], GFlowNets aim to learn a stochastic policy for sampling composite objects $x$ with probability proportional to the reward function $R(x)$ . This is desirable in many real-world tasks where the diversity of solutions is important, and we aim to sample a diverse set of high-reward candidates, including recommender systems [Kunaver and Požrl, 2017], drug discovery [Bengio et al., 2021a, Jain et al., 2022a], and sampling causal models from a Bayesian posterior [Deleu et al., 2022], among others.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
507,
|
| 164 |
+
315,
|
| 165 |
+
910,
|
| 166 |
+
527
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "image",
|
| 172 |
+
"img_path": "images/6d2ef76e33aede06aea006d146086db05db8997eaf009361d97f0cb25a8998d7.jpg",
|
| 173 |
+
"image_caption": [
|
| 174 |
+
"Figure 1: An example illustrating the failure of existing GFlowNet approaches. (Left) Squares and circles denote states and actions, while solid and dotted arrows correspond to policy decisions and stochastic environment dynamics. The numbers above the dotted lines represent state transition probabilities, and the numbers below the blue squares (terminal states) denote the terminal reward. (Right) Results from existing GFlowNet approaches and the ideal solution."
|
| 175 |
+
],
|
| 176 |
+
"image_footnote": [],
|
| 177 |
+
"bbox": [
|
| 178 |
+
512,
|
| 179 |
+
542,
|
| 180 |
+
712,
|
| 181 |
+
636
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 0
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "equation",
|
| 187 |
+
"text": "\n$$\nP \\left(s _ {1}\\right) = \\frac {5}{1 2} \\neq \\boxed {\\frac {1}{3} = \\frac {R _ {1}}{R _ {1} + R _ {2}}}\n$$\n",
|
| 188 |
+
"text_format": "latex",
|
| 189 |
+
"bbox": [
|
| 190 |
+
726,
|
| 191 |
+
547,
|
| 192 |
+
902,
|
| 193 |
+
580
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 0
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "equation",
|
| 199 |
+
"text": "\n$$\nP (s _ {2}) = \\frac {7}{1 2} \\neq \\boxed {\\frac {2}{3} = \\frac {R _ {2}}{R _ {1} + R _ {2}}}\n$$\n",
|
| 200 |
+
"text_format": "latex",
|
| 201 |
+
"bbox": [
|
| 202 |
+
726,
|
| 203 |
+
582,
|
| 204 |
+
902,
|
| 205 |
+
616
|
| 206 |
+
],
|
| 207 |
+
"page_idx": 0
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"type": "text",
|
| 211 |
+
"text": "Results",
|
| 212 |
+
"bbox": [
|
| 213 |
+
744,
|
| 214 |
+
618,
|
| 215 |
+
793,
|
| 216 |
+
630
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 0
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "text",
|
| 222 |
+
"text": "Ideal solution",
|
| 223 |
+
"bbox": [
|
| 224 |
+
815,
|
| 225 |
+
618,
|
| 226 |
+
897,
|
| 227 |
+
630
|
| 228 |
+
],
|
| 229 |
+
"page_idx": 0
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"type": "text",
|
| 233 |
+
"text": "Existing work on GFlowNets [Bengio et al., 2021a, Malkin et al., 2022a, Madan et al., 2022], however, is limited to deterministic environments, where state transitions are deterministic, which may limit their applicability in the more general stochastic cases in practice. Figure 1 illustrates an example with stochastic transition dynamics where existing GFlowNet approaches can fail. Standard GFlowNet approaches will result in $P(s_{1}) = \\frac{5}{12}$ and $P(s_{2}) = \\frac{7}{12}$ when trained to completion (with $P(s)$ denoting the probability",
|
| 234 |
+
"bbox": [
|
| 235 |
+
507,
|
| 236 |
+
786,
|
| 237 |
+
910,
|
| 238 |
+
924
|
| 239 |
+
],
|
| 240 |
+
"page_idx": 0
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"type": "aside_text",
|
| 244 |
+
"text": "arXiv:2302.09465v3 [cs.LG] 25 Jun 2023",
|
| 245 |
+
"bbox": [
|
| 246 |
+
21,
|
| 247 |
+
267,
|
| 248 |
+
60,
|
| 249 |
+
707
|
| 250 |
+
],
|
| 251 |
+
"page_idx": 0
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"type": "page_footnote",
|
| 255 |
+
"text": "*Equal contribution.",
|
| 256 |
+
"bbox": [
|
| 257 |
+
112,
|
| 258 |
+
909,
|
| 259 |
+
236,
|
| 260 |
+
922
|
| 261 |
+
],
|
| 262 |
+
"page_idx": 0
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"type": "footer",
|
| 266 |
+
"text": "Accepted for the $39^{\\text{th}}$ Conference on Uncertainty in Artificial Intelligence (UAI 2023).",
|
| 267 |
+
"bbox": [
|
| 268 |
+
231,
|
| 269 |
+
946,
|
| 270 |
+
759,
|
| 271 |
+
960
|
| 272 |
+
],
|
| 273 |
+
"page_idx": 0
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"type": "text",
|
| 277 |
+
"text": "of sampling state $s$ ), which does not match the ideal case where $P(s_{1}) = \\frac{1}{3}$ and $P(s_{2}) = \\frac{2}{3}$ . Therefore, the learned policy does not sample proportionally to the reward function in the presence of stochastic transition dynamics. In practice, many tasks involve stochasticity in state transitions, which are more challenging to solve but are applicable to a wide variety of problems [Antonoglou et al., 2021, Yang et al., 2022, Paster et al., 2022].",
|
| 278 |
+
"bbox": [
|
| 279 |
+
84,
|
| 280 |
+
79,
|
| 281 |
+
487,
|
| 282 |
+
200
|
| 283 |
+
],
|
| 284 |
+
"page_idx": 1
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"type": "text",
|
| 288 |
+
"text": "To address this limitation, in this paper, we introduce a novel methodology, Stochastic GFlowNet, which is the first empirically effective approach for tackling environments with stochastic transition dynamics with GFlowNets. Stochastic GFlowNet decomposes the state transitions based on the concept of afterstates [Sutton and Barto, 2018, Bengio et al., 2021b]. Specifically, each stochastic state transition is decomposed into a deterministic step that transitions from the environment state $s$ to an intermediate state $(s,a)$ and a stochastic step that branches from the intermediate state $(s,a)$ to the next state $s'$ . We propose a practical way for training the dynamics model to capture the stochastic environment dynamics. The methodology is general and can be applied to different GFlowNet learning objectives. The code is publicly available at https://github.com/ling-pan/Stochastic-GFN.",
|
| 289 |
+
"bbox": [
|
| 290 |
+
84,
|
| 291 |
+
207,
|
| 292 |
+
489,
|
| 293 |
+
450
|
| 294 |
+
],
|
| 295 |
+
"page_idx": 1
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"type": "text",
|
| 299 |
+
"text": "In summary, the contribution of this work is as follows:",
|
| 300 |
+
"bbox": [
|
| 301 |
+
85,
|
| 302 |
+
457,
|
| 303 |
+
463,
|
| 304 |
+
470
|
| 305 |
+
],
|
| 306 |
+
"page_idx": 1
|
| 307 |
+
},
|
| 308 |
+
{
|
| 309 |
+
"type": "list",
|
| 310 |
+
"sub_type": "text",
|
| 311 |
+
"list_items": [
|
| 312 |
+
"- We propose a novel method, Stochastic GFlowNet, which is the first empirically effective approach extending GFlowNets to the more general stochastic environments based on Bengio et al. [2021b].",
"- We conduct extensive experiments on GFlowNet benchmark tasks augmented with stochastic transition dynamics, and validate the effectiveness of our approach in tackling stochastic environments. Results show that our method significantly outperforms existing baselines and scales well to the more complex and challenging biological sequence generation tasks."
],
"bbox": [105, 484, 487, 656],
"page_idx": 1
},
{
"type": "text",
"text": "2 BACKGROUND",
"text_level": 1,
"bbox": [85, 678, 268, 694],
"page_idx": 1
},
{
"type": "text",
"text": "2.1 GFLOWNET PRELIMINARIES",
"text_level": 1,
"bbox": [85, 710, 369, 726],
"page_idx": 1
},
{
"type": "text",
"text": "We denote a directed acyclic graph (DAG) by $\\mathcal{G} = (\\mathcal{S},\\mathcal{A})$ with $\\mathcal{S}$ the set of vertices corresponding to the states and $\\mathcal{A}\\subseteq \\mathcal{S}\\times \\mathcal{S}$ the set of edges, which corresponds to the set of actions. There is a unique initial state $s_0\\in \\mathcal{S}$ which has no parent state; on the other hand, we define all states without children to be terminal states, whose set is denoted by $\\mathcal{X}\\subseteq \\mathcal{S}$ . A GFlowNet learns a stochastic policy that samples terminal states via complete trajectories $\\tau = (s_0\\to s_1\\to \\dots \\to s_n)$ where $s_n\\in \\mathcal{X}$ and $(s_i\\rightarrow s_{i + 1})\\in \\mathcal{A},\\forall i$ . Each trajectory is assigned a non-negative flow $F(\\tau)$ . A trajectory can be generated sequentially by sampling iteratively from the forward policy $P_F(s_{t + 1}|s_t)$ , which is a",
"bbox": [84, 741, 489, 924],
"page_idx": 1
},
{
"type": "text",
"text": "collection of distributions over the children at each state. Existing work on GFlowNets assumes a one-to-one correspondence between action and next state, making the definition of the forward policy consistent with the notion of a policy in general RL. Nonetheless, in this work, we relax this assumption and generalize GFlowNets to more flexible stochastic environments. The objective of GFlowNet learning is to sample terminal states with probability proportional to a given non-negative reward function $R(x)$ for all $x \\in \\mathcal{X}$ . This indicates that all the flows that end up with $x$ should sum up to $R(x)$ , namely $\\sum_{\\tau \\to x} F(\\tau) = R(x), \\forall x \\in \\mathcal{X}$ , where $\\tau \\to x$ is a trajectory $\\tau$ that ends in $x$ and the sum is thus over all complete trajectories that lead to terminal state $x \\in \\mathcal{X}$ . To formalize this, we first define the terminating probability $P_T(x)$ to be the marginal likelihood of sampling trajectories terminating at a terminal state $x$ :",
"bbox": [507, 79, 910, 321],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\nP _ {T} (x) = \\sum_ {\\tau \\rightarrow x} P _ {F} (\\tau) = \\sum_ {\\tau \\rightarrow x} \\prod_ {i = 1} ^ {n} P _ {F} \\left(s _ {i} \\mid s _ {i - 1}\\right). \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [552, 334, 910, 375],
"page_idx": 1
},
{
"type": "text",
"text": "Therefore, the goal of GFlowNet learning is to obtain a policy such that $P_T(x) \\propto R(x), \\forall x \\in \\mathcal{X}$ .",
"bbox": [507, 383, 910, 416],
"page_idx": 1
},
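As a concrete illustration of Eq. (1), the sketch below enumerates every complete trajectory of a tiny hand-made DAG and sums the forward probabilities to obtain the terminating probability. The DAG, policy values, and function names here are all illustrative, not from the paper.

```python
# Hypothetical sketch of Eq. (1): P_T(x) is the sum of P_F(tau) over all
# complete trajectories tau that terminate at x.

def terminating_probability(children, p_forward, s0):
    """Return {terminal state: P_T(x)} by enumerating all complete
    trajectories tau = (s0 -> ... -> x) of a small DAG."""
    p_t = {}

    def walk(state, prob):
        if not children[state]:  # no children: terminal state in X
            p_t[state] = p_t.get(state, 0.0) + prob
            return
        for nxt in children[state]:
            walk(nxt, prob * p_forward[(state, nxt)])

    walk(s0, 1.0)
    return p_t

# tiny DAG: s0 -> {a, b}, a -> {x}, b -> {x, y}; terminals are x and y
children = {"s0": ["a", "b"], "a": ["x"], "b": ["x", "y"], "x": [], "y": []}
p_forward = {("s0", "a"): 0.5, ("s0", "b"): 0.5,
             ("a", "x"): 1.0, ("b", "x"): 0.4, ("b", "y"): 0.6}

p_t = terminating_probability(children, p_forward, "s0")
# x is reached via s0->a->x (0.5) and s0->b->x (0.5 * 0.4), so P_T(x) = 0.7
```

Note that several trajectories can end in the same terminal state, which is why the sum over trajectories is needed.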
{
"type": "text",
"text": "2.2 LEARNING OBJECTIVES FOR GFLOWNETS",
"text_level": 1,
"bbox": [509, 439, 905, 455],
"page_idx": 1
},
{
"type": "text",
"text": "In practical tasks, practitioners need to parameterize the GFlowNet modules (e.g., policies, flows) with neural networks, and further choose a training criterion to train these networks. In this subsection, we briefly summarize some learning criteria of GFlowNets.",
"bbox": [507, 470, 910, 546],
"page_idx": 1
},
{
"type": "text",
"text": "Detailed balance (DB). By summing the flows $F(\\tau)$ of all the trajectories $\\tau$ going through a state $s$ , we can define a state flow $F(s) := \\sum_{\\tau \\ni s} F(\\tau)$ . Such a function can be learned, together with the forward and backward policies $P_F(s'|s)$ and $P_B(s|s')$ , where $s'$ is a child state of $s$ . The backward policy $P_B$ , a collection of distributions over the parents of each state, is not part of the generative process, but serves as a tool for learning the forward policy $P_F$ . The GFlowNet detailed balance (DB) constraint is defined as",
"bbox": [507, 561, 910, 699],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\nF (s) P _ {F} (s ^ {\\prime} | s) = F (s ^ {\\prime}) P _ {B} (s | s ^ {\\prime}), \\forall (s \\rightarrow s ^ {\\prime}) \\in \\mathcal {A}. \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [529, 710, 910, 729],
"page_idx": 1
},
{
"type": "text",
"text": "It is also worth noting that at terminal states $x$ , the DB objective pushes the flow at $x$ to match the terminal rewards $R(x)$ . In practice, we transform the DB constraint into a training objective by setting the loss function to be a squared difference between the logarithms of the left- and right-hand sides of Eq. (2) [Bengio et al., 2021a]. If the optimization objective is perfectly minimized, the above flow consistency constraint is satisfied, thus making the forward policy $P_F$ sample proportionally to given reward values, as desired. Note that after training, the constraint is only approximately achieved (and in general it would be intractable to obtain an exact solution).",
"bbox": [507, 741, 910, 922],
"page_idx": 1
},
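The squared log-difference loss described above can be sketched as follows. Function and argument names are our own; a real implementation would compute these log-values with neural networks and minimize the loss by gradient descent.

```python
import math

def db_loss(log_flow_s, log_pf, log_flow_sp, log_pb):
    """Squared difference of the logs of the two sides of the DB
    constraint (Eq. 2) for a single edge s -> s'."""
    return (log_flow_s + log_pf - log_flow_sp - log_pb) ** 2

# a flow-consistent edge: F(s)=2, P_F(s'|s)=0.5, F(s')=1, P_B(s|s')=1
loss = db_loss(math.log(2.0), math.log(0.5), math.log(1.0), math.log(1.0))
# F(s) * P_F(s'|s) = 1 = F(s') * P_B(s|s'), so the loss vanishes
```

Working in log-space keeps the loss well-scaled when flows and rewards span many orders of magnitude.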
{
"type": "text",
"text": "Trajectory balance (TB). In analogy to the forward decomposition of a complete trajectory in Eq. (1), we could use $\\prod_{i=1}^{n} P_B(s_{i-1}|s_i)$ to represent the trajectory backward probability. As an alternative to DB, Malkin et al. [2022a] propose the trajectory balance (TB) criterion which operates on complete trajectories, instead of state transitions, defined as follows",
"bbox": [85, 79, 485, 183],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nZ \\prod_ {i = 1} ^ {n} P _ {F} \\left(s _ {i} \\mid s _ {i - 1}\\right) = R (x) \\prod_ {i = 1} ^ {n} P _ {B} \\left(s _ {i - 1} \\mid s _ {i}\\right), \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [134, 189, 485, 229],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\tau = (s_0 \\to s_1 \\to \\ldots \\to s_n = x)$ is any complete trajectory and $Z$ is a learned scalar parameter, denoting the partition function of the reward distribution. Note that TB does not explicitly learn a flow function.",
"bbox": [85, 236, 485, 296],
"page_idx": 2
},
{
"type": "text",
"text": "For on-policy training, we can simply use trajectories sampled from the forward policy $P_F$ to evaluate the training loss and its gradient with respect to the parameters of the neural networks. The GFlowNet training objectives can be further improved with off-policy training, i.e., with trajectories sampled from a broader and more exploratory distribution than $P_F$ [Malkin et al., 2022b]. A popular choice is using a tempered version of the current forward policy [Zhang et al., 2022b] or a mixture of the forward policy and a uniform random policy [Bengio et al., 2021a] that mimics $\\epsilon$ -greedy exploration in RL.",
"bbox": [85, 303, 485, 470],
"page_idx": 2
},
{
"type": "text",
"text": "3 STOCHASTIC GFLOWNETS",
"text_level": 1,
"bbox": [85, 492, 386, 508],
"page_idx": 2
},
{
"type": "text",
"text": "We now describe the Stochastic GFlowNet, a novel method that learns a model of the environment to capture the stochasticity of state transitions. We first describe a key idea introduced by Bengio et al. [2021b] to decompose the GFlowNet transitions as per Figure 2, and then introduce a new approach to learn the GFlowNet policy and the dynamics model. We also discuss the applicability to different GFlowNet learning objectives and the resulting effects.",
"bbox": [85, 523, 485, 645],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 PROPOSED METHOD",
"text_level": 1,
"bbox": [85, 666, 302, 681],
"page_idx": 2
},
{
"type": "text",
"text": "Existing work on GFlowNets [Bengio et al., 2021a, Malkin et al., 2022a] typically makes the assumption that all transitions from a state $s_t$ to the next state $s_{t+1}$ within a trajectory are defined deterministically based on the selected action $a_t$ (and also there is only one action $a_t$ that can transition from $s_t$ to $s_{t+1} = T(s_t, a_t)$ , with $T$ denoting the deterministic transition function). This applies to problems where the generative process for the objects is deterministic, which is appropriate when the actions are internal, e.g., choosing what to attend, what to imagine (such as solutions to problems), how to make inferences, etc. Yet, a number of real-world tasks are stochastic, either inherently or due to the environment complexity [Antonoglou et al., 2021, Paster et al., 2022, Yang et al., 2022]. In the more general stochastic environments, the action $a_t$ at $s_t$ can land in several",
"bbox": [85, 696, 485, 922],
"page_idx": 2
},
{
"type": "text",
"text": "possible next states. For instance, synthesizing proteins with oligo pools can result in the generation of variants of the specified protein [Song et al., 2021].",
"bbox": [507, 79, 907, 125],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/934e254ffc8c04dee9d56c52c665db943d61599092e779a9c5304251eb4780c4.jpg",
"image_caption": [
"Figure 2: We decompose traditional GFlowNet transitions (top) into two steps to facilitate the GFlowNet formalization: (a) stochastically choosing the action, (b) stochastically transitioning to a new state. We call the intermediate state after (a) an odd state (and the starting and final points even states), as illustrated above."
],
"image_footnote": [],
"bbox": [559, 138, 860, 290],
"page_idx": 2
},
{
"type": "text",
"text": "To cope with stochasticity in the transition dynamics, we decompose state transitions based on the concept of after-states [Sutton and Barto, 2018, Bengio et al., 2021b]. Specifically, for a transition from a state $s_t$ to the next state $s_{t+1}$ , we decompose the transition into two steps. First, as illustrated in part (a) in Figure 2, we sample an action $a_t$ based on a policy $\\pi$ in the current state $s_t$ (called an even state), and transition deterministically to an intermediate state $(s_t, a_t)$ , called an odd state. The flow consistency constraint for detailed balance (DB) for even-to-odd transitions is shown in Eq. (4), since the backward policy probability is 1 here (we can only get to $(s_t, a_t)$ from $s_t$ ):",
"bbox": [507, 417, 910, 598],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nF \\left(s _ {t}\\right) \\pi \\left(a _ {t} \\mid s _ {t}\\right) = F \\left(\\left(s _ {t}, a _ {t}\\right)\\right). \\tag {4}\n$$\n",
"text_format": "latex",
"bbox": [606, 604, 909, 622],
"page_idx": 2
},
{
"type": "text",
"text": "The odd state can be considered as a hypothetical state after we apply an action before the environment gets involved [Antonoglou et al., 2021]. The environment dynamics then transform $(s_t, a_t)$ into the next even state $s_{t+1}$ stochastically according to a distribution $P(\\cdot | s_t, a_t)$ , which is the state transition function. This second step corresponds to part (b) in Figure 2, and the corresponding flow consistency constraint is",
"bbox": [507, 628, 910, 750],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nF \\left(\\left(s _ {t}, a _ {t}\\right)\\right) P \\left(s _ {t + 1} \\mid \\left(s _ {t}, a _ {t}\\right)\\right) = F \\left(s _ {t + 1}\\right) \\pi_ {B} \\left(\\left(s _ {t}, a _ {t}\\right) \\mid s _ {t + 1}\\right). \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [512, 756, 907, 787],
"page_idx": 2
},
{
"type": "text",
"text": "Note that an odd state can lead to many possible next even states due to stochasticity in the environment. With the introduction of odd states, we isolate the effect of choosing the action to apply in the environment and of the stochastic state transition given an action, with a deterministic and a stochastic step.",
"bbox": [507, 787, 910, 878],
"page_idx": 2
},
{
"type": "text",
"text": "Training a Stochastic GFlowNet. Combining the two steps, we obtain a novel flow consistency constraint which",
"bbox": [507, 893, 910, 922],
"page_idx": 2
},
{
"type": "text",
"text": "we call stochastic GFlowNets based on detailed balance (DB), where $P$ denotes the state transition function:",
"bbox": [85, 79, 485, 109],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} F \\left(s _ {t}\\right) \\pi \\left(a _ {t} \\mid s _ {t}\\right) P \\left(s _ {t + 1} \\mid \\left(s _ {t}, a _ {t}\\right)\\right) \\\\ = F \\left(s _ {t + 1}\\right) \\pi_ {B} \\left(\\left(s _ {t}, a _ {t}\\right) \\mid s _ {t + 1}\\right). \\tag {6} \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [174, 118, 485, 154],
"page_idx": 3
},
{
"type": "text",
"text": "In practice, for training stochastic GFlowNets, we minimize the loss $\\mathcal{L}_{\\mathrm{StochGFN - DB}}(s,a,s^{\\prime})$ in Eq. (7), which is based on the flow consistency constraint from Eq. (6) and is trained in log-scale.",
"bbox": [85, 169, 485, 229],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} \\left(\\log F \\left(s _ {t}\\right) + \\log \\pi \\left(a _ {t} \\mid s _ {t}\\right) + \\log P \\left(s _ {t + 1} \\mid \\left(s _ {t}, a _ {t}\\right)\\right) \\right. \\tag {7} \\\\ \\left. - \\log F (s _ {t + 1}) - \\log \\pi_ {B} ((s _ {t}, a _ {t}) | s _ {t + 1}) \\right) ^ {2}. \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [110, 239, 485, 277],
"page_idx": 3
},
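The per-transition loss of Eq. (7) can be sketched as below. This is a toy illustration with made-up names; in practice the flows, policies, and the learned dynamics model are neural networks whose log-outputs are plugged into this expression and trained by gradient descent.

```python
import math

def stoch_db_loss(log_f_s, log_pi, log_p, log_f_sp, log_pi_b):
    """Squared log-difference of the two sides of the stochastic DB
    constraint (Eq. 6): F(s_t) pi(a_t|s_t) P(s_{t+1}|(s_t,a_t))
    versus F(s_{t+1}) pi_B((s_t,a_t)|s_{t+1})."""
    return (log_f_s + log_pi + log_p - log_f_sp - log_pi_b) ** 2

# a consistent transition: F(s_t)=1, pi=0.5, P=0.5, F(s_{t+1})=0.25, pi_B=1
loss = stoch_db_loss(math.log(1.0), math.log(0.5), math.log(0.5),
                     math.log(0.25), math.log(1.0))
```

Compared with the deterministic DB loss, the only new term is the log-probability of the environment's stochastic branch.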
{
"type": "text",
"text": "Note that our proposed methodology is general and can be applied to other GFlowNet learning objectives such as trajectory balance (TB), as we discuss in Section 3.2.",
"bbox": [85, 285, 485, 330],
"page_idx": 3
},
{
"type": "text",
"text": "Learning the dynamics model. Since the transition dynamics $P(\\cdot | s, a)$ are unknown in general, we need to learn them. In practice, we learn a model $\\hat{P}$ with parameters $\\phi$ to approximate $P$ through maximum likelihood estimation (other techniques from generative and dynamics modeling [Venkatraman et al., 2015] could also be applied). We optimize its parameters with the interaction data using the Adam optimizer [Kingma and Ba, 2015] based on the loss function in Eq. (8), where the output of $\\hat{P}$ is a softmax distribution across all possible next states. The data is sampled from an experience replay buffer, which stores interaction data $\\{s, a, s'\\}$ from the GFlowNet policy and the environment.",
"bbox": [85, 345, 485, 527],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal {L} _ {\\text {model}} (s, a, s ^ {\\prime}) = - \\log \\hat {P} (s ^ {\\prime} | s, a) \\tag {8}\n$$\n",
"text_format": "latex",
"bbox": [166, 535, 485, 551],
"page_idx": 3
},
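Because the model output is a categorical distribution over next states, the maximum-likelihood objective of Eq. (8) has a simple count-based closed form on tabular data, sketched below with illustrative names. The paper instead trains a softmax neural network with Adam; this toy version only conveys what the loss estimates.

```python
import math
from collections import Counter, defaultdict

# Hypothetical count-based sketch of the dynamics model behind Eq. (8):
# the MLE of a categorical P_hat(s' | s, a) from replay tuples (s, a, s')
# is the empirical frequency; the per-sample loss is -log P_hat(s'|s,a).

def fit_dynamics(transitions):
    counts = defaultdict(Counter)
    for s, a, sp in transitions:
        counts[(s, a)][sp] += 1
    return {key: {sp: n / sum(c.values()) for sp, n in c.items()}
            for key, c in counts.items()}

def model_loss(model, s, a, sp):
    return -math.log(model[(s, a)][sp])  # Eq. (8)

data = [("s0", "right", "s1")] * 3 + [("s0", "right", "s2")]
model = fit_dynamics(data)
# MLE: P_hat(s1 | s0, right) = 0.75 and P_hat(s2 | s0, right) = 0.25
```

A neural model generalizes this estimate across states instead of keeping one table entry per (state, action) pair.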
{
"type": "text",
"text": "Practical implementation. Figure 3 illustrates the major components of Stochastic GFlowNets as described above and how they interact with each other. The procedure for training Stochastic GFlowNet based on DB is summarized in Algorithm 1.",
"bbox": [85, 564, 485, 638],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/60037c21f47a2621624c3fa7ad9600a953fd134166adb19e9d0a414f303ac858.jpg",
"image_caption": [
"Figure 3: Illustration of Stochastic GFlowNets."
],
"image_footnote": [],
"bbox": [169, 654, 401, 773],
"page_idx": 3
},
{
"type": "text",
"text": "3.2 DISCUSSION ON THE APPLICABILITY TO TRAJECTORY BALANCE (TB)",
"text_level": 1,
"bbox": [85, 832, 465, 862],
"page_idx": 3
},
{
"type": "text",
"text": "As discussed in Section 3.1, our proposed method is versatile and can be applied to other GFlowNet learning objectives beyond DB. We state the flow consistency constraint",
"bbox": [85, 878, 485, 922],
"page_idx": 3
},
{
"type": "text",
"text": "Algorithm 1 Stochastic Generative Flow Networks",
"text_level": 1,
"bbox": [510, 79, 858, 94],
"page_idx": 3
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"1: Initialize the forward and backward policies $\\pi, \\pi_B$ , and the state flow function $F$ with parameters $\\theta$",
"2: Initialize the transition dynamics model $\\hat{P}$ with parameters $\\phi$",
"3: Initialize experience replay buffer $\\mathcal{B}$",
"4: for each training step $t = 1$ to $T$ do",
"5: Collect a batch of $M$ trajectories $\\tau = \\{s_0\\to \\dots \\to s_n\\}$ from the policy $\\pi$ , and store them in $\\mathcal{B}$",
"6: Update the stochastic GFN model according to the loss $\\mathcal{L}_{\\mathrm{StochGFN - DB}}$ in Eq. (7) based on $\\{\\tau \\}_{i = 1}^{M}$",
"7: Sample a batch of $K$ trajectories from $\\mathcal{B}$",
"8: Update the transition dynamics model according to the loss $\\mathcal{L}_{\\mathrm{model}}$ in Eq. (8) using data sampled from the replay buffer",
"9: end for"
],
"bbox": [519, 99, 907, 308],
"page_idx": 3
},
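The loop of Algorithm 1 above can be sketched as the following training skeleton. The update functions are stand-in callables rather than the paper's neural networks, and all names are illustrative: the point is only the alternation between GFlowNet updates on fresh trajectories and dynamics-model updates on replay data.

```python
import random

# Hypothetical skeleton of Algorithm 1 with stand-in update callables.

def train(sample_trajectories, gfn_update, model_update, steps, batch_k):
    replay = []                            # experience replay buffer B
    for _ in range(steps):
        batch = sample_trajectories()      # M trajectories from policy pi
        replay.extend(batch)               # store them in B
        gfn_update(batch)                  # L_StochGFN-DB update (Eq. 7)
        k = min(batch_k, len(replay))
        model_update(random.sample(replay, k))  # L_model update (Eq. 8)
    return replay

# toy run: one "trajectory" per step; the updates just count their calls
calls = {"gfn": 0, "model": 0}
replay = train(
    sample_trajectories=lambda: [("s0", "a1", "s1")],
    gfn_update=lambda batch: calls.__setitem__("gfn", calls["gfn"] + 1),
    model_update=lambda batch: calls.__setitem__("model", calls["model"] + 1),
    steps=5, batch_k=2,
)
```

Replay for the dynamics model lets it reuse transitions long after the policy that produced them has changed.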
{
"type": "text",
"text": "for Stochastic TB in Eq. (9), which is obtained via a telescoping calculation based on Eq. (6).",
"bbox": [507, 349, 910, 380],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\nZ \\prod_ {t = 0} ^ {n - 1} \\pi \\left(a _ {t} \\mid s _ {t}\\right) P \\left(s _ {t + 1} \\mid \\left(s _ {t}, a _ {t}\\right)\\right) = R (x) \\prod_ {t = 0} ^ {n - 1} \\pi_ {B} \\left(\\left(s _ {t}, a _ {t}\\right) \\mid s _ {t + 1}\\right) \\tag {9}\n$$\n",
"text_format": "latex",
"bbox": [509, 402, 931, 455],
"page_idx": 3
},
{
"type": "text",
"text": "In practice, we can train with Stochastic TB by minimizing the loss $\\mathcal{L}_{\\mathrm{StochGFN - TB}}(s,a,s^{\\prime})$ obtained from Eq. (9), i.e.,",
"bbox": [507, 455, 909, 487],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} \\left[ \\log Z + \\sum_ {t = 0} ^ {n - 1} \\log \\pi \\left(a _ {t} \\mid s _ {t}\\right) + \\sum_ {t = 0} ^ {n - 1} \\log \\hat {P} \\left(s _ {t + 1} \\mid \\left(s _ {t}, a _ {t}\\right)\\right) \\right. \\\\ \\left. - \\log R (x) - \\sum_ {t = 0} ^ {n - 1} \\log \\pi_ {B} \\left(\\left(s _ {t}, a _ {t}\\right) \\mid s _ {t + 1}\\right) \\right] ^ {2}. \\tag {10} \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [529, 510, 909, 609],
"page_idx": 3
},
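The trajectory-level loss of Eq. (10) can be sketched as follows, with the per-step log-terms passed as lists over the trajectory. Names are illustrative; in practice the log-terms come from neural networks and Z is a learned scalar.

```python
import math

def stoch_tb_loss(log_z, log_pi, log_p_hat, log_reward, log_pi_b):
    """Squared log-difference of the two sides of Eq. (9): log_pi,
    log_p_hat, and log_pi_b are per-step lists over one trajectory."""
    lhs = log_z + sum(log_pi) + sum(log_p_hat)
    rhs = log_reward + sum(log_pi_b)
    return (lhs - rhs) ** 2

# a consistent one-step trajectory: Z=2, pi=0.5, P_hat=0.5, R(x)=0.5, pi_B=1
loss = stoch_tb_loss(math.log(2.0), [math.log(0.5)], [math.log(0.5)],
                     math.log(0.5), [math.log(1.0)])
```

Because every step of the trajectory contributes to one scalar residual, a single noisy step perturbs the whole loss, which is one way to see the variance issue discussed below.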
{
"type": "text",
"text": "However, as TB is optimized based on a sampled trajectory instead of each transition, it can incur a larger gradient variance, as studied in Madan et al. [2022], even in deterministic environments. This problem can be further exacerbated in stochastic environments. In Section 4.1.3, we find that Stochastic TB underperforms relative to Stochastic DB, presumably due to this larger variance.",
"bbox": [507, 632, 910, 738],
"page_idx": 3
},
{
"type": "text",
"text": "4 EXPERIMENTS",
"text_level": 1,
"bbox": [509, 782, 692, 797],
"page_idx": 3
},
{
"type": "text",
"text": "In this section, we conduct extensive experiments to investigate the following key questions: i) How much can Stochastic GFNs improve over GFNs in the presence of stochastic transition dynamics? ii) Can Stochastic GFNs be built upon different GFlowNets learning objectives? iii) Can Stochastic GFNs scale to the more complex and challenging tasks of generating biological sequences?",
"bbox": [507, 816, 910, 922],
"page_idx": 3
},
{
"type": "text",
"text": "4.1 GRIDWORLD",
"text_level": 1,
"bbox": [87, 79, 236, 93],
"page_idx": 4
},
{
"type": "text",
"text": "4.1.1 Experimental Setup",
"text_level": 1,
"bbox": [85, 109, 285, 125],
"page_idx": 4
},
{
"type": "text",
"text": "We first conduct a series of experiments in the GridWorld task introduced in Bengio et al. [2021a] to understand the effectiveness of Stochastic GFlowNets. An illustration of the task with size $H \\times H$ is shown in Figure 4. At each time step, the agent takes an action to navigate in the grid, where possible actions include operations to increase one coordinate and also a stop operation to terminate the episode, ensuring the underlying Markov decision process (MDP) is a directed acyclic graph. The agent obtains a reward $R(x)$ as defined in Bengio et al. [2021a] when the trajectory ends at a terminal state $x$ . The reward function $R(x)$ has 4 modes located at the corners of the map as illustrated in Figure 4. The goal for the agent is to model the target reward distribution, and capture all the modes of the reward function. The shade of color in Figure 4 indicates the magnitude of rewards, where a darker color corresponds to a larger reward. We consider a variant with stochastic transition dynamics, where the randomness in the environment is injected following Machado et al. [2018], Yang et al. [2022] in GridWorld and all other benchmark tasks in Sections 4.2.1-4.2.3. Specifically, the environment transitions according to the selected action with probability $1 - \\alpha$ , while with probability $\\alpha$ the environment executes a uniformly chosen action (like slipping to its neighboring regions randomly in Figure 4).",
"bbox": [84, 140, 487, 503],
"page_idx": 4
},
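The noise injection described above (execute the chosen action with probability 1 - alpha, a uniformly random action otherwise) can be sketched as follows; all names here are illustrative.

```python
import random

# Hypothetical sketch of the stochastic transition wrapper: with
# probability 1 - alpha the environment executes `action`, and with
# probability alpha a uniformly sampled action instead.

def noisy_step(step, state, action, actions, alpha, rng=random):
    executed = action if rng.random() >= alpha else rng.choice(actions)
    return step(state, executed)

# toy deterministic environment that simply reports the executed action
step = lambda state, a: a
```

At alpha = 0 the wrapper reduces to the deterministic environment, and at alpha = 1 the agent's choice is ignored entirely.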
{
"type": "image",
"img_path": "images/555d42b483a3946eb17172d8fa7badf3bfbdddc4500a4d2157ed50aa6b7ba31a.jpg",
"image_caption": [
"Figure 4: The GridWorld environment. The agent starts at the top-left corner and reward is largest at the four dark blue positions near the four corners (with the keys), lower in the $2 \\times 2$ squares near the corner, and yet lower in other (light blue) positions. This can be extended to different sizes $H$ , as well as different degrees of noise $\\alpha$ in the (state, action)-to-state transitions."
],
"image_footnote": [],
"bbox": [216, 515, 356, 621],
"page_idx": 4
},
{
"type": "text",
"text": "We compare Stochastic GFlowNet against vanilla GFlowNets trained with detailed balance (DB) [Bengio et al., 2021b] and trajectory balance (TB) [Malkin et al., 2022a] learning objectives, Metropolis-Hastings-MCMC [Xie et al., 2021], and PPO [Schulman et al., 2017] methods. We evaluate each method in terms of the empirical $L_{1}$ error defined as $\\mathbb{E}[|p(x) - \\pi(x)|]$ , with $p(x) = \\frac{R(x)}{Z}$ denoting the true reward distribution, and we estimate $\\pi$ by repeated sampling and computing visit frequencies for every possible state $x$ . We also compare them in terms of the number of modes discovered by each method",
"bbox": [85, 755, 487, 924],
"page_idx": 4
},
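The evaluation metric above can be sketched as follows; names are illustrative, `rewards` maps each terminal state to R(x), and `samples` is a list of sampled terminal states.

```python
# Hypothetical sketch of the empirical L1 error E[|p(x) - pi(x)|], where
# p(x) = R(x)/Z and pi(x) is estimated from visit frequencies of samples.

def empirical_l1(rewards, samples):
    z = sum(rewards.values())
    n = len(samples)
    freq = {x: samples.count(x) / n for x in rewards}
    return sum(abs(rewards[x] / z - freq[x]) for x in rewards) / len(rewards)

err = empirical_l1({"a": 3.0, "b": 1.0}, ["a", "a", "a", "b"])
# here the sample frequencies (0.75, 0.25) exactly match p, so err = 0
```

With more samples the frequency estimate of pi concentrates, so the metric reflects how close the learned policy is to sampling proportionally to the reward.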
{
"type": "text",
"text": "during the course of training. Each algorithm is run for 5 different seeds, and the performance is reported as the mean and standard deviation. We implement all baselines based on the open-source code<sup>1</sup>, and a detailed description of the hyperparameters and setup can be found in Appendix A.1.",
"bbox": [507, 79, 910, 157],
"page_idx": 4
},
{
"type": "text",
"text": "4.1.2 Performance Comparison",
"text_level": 1,
"bbox": [509, 176, 752, 193],
"page_idx": 4
},
{
"type": "text",
"text": "We now study the effectiveness of Stochastic GFNs on small, medium, and large GridWorlds with increasing sizes $H$ , and different levels of stochasticity.",
"bbox": [507, 207, 910, 253],
"page_idx": 4
},
{
"type": "text",
"text": "Varying sizes of the map. Figure 5 demonstrates the empirical $L_{1}$ error for each method in GridWorld (with a stochasticity level of $\\alpha = 0.25$ ) with increasing sizes. As shown, MCMC does not perform well and PPO fails to converge. We also observe that the performance of TB gets much worse as the size of the problem increases, which may be attributed to a larger gradient variance [Madan et al., 2022]. Stochastic GFlowNets significantly outperform the baselines, and converge fastest and to the lowest empirical $L_{1}$ error. Figure 6 illustrates the number of modes discovered by each method during the course of training. As demonstrated, in stochastic environments (where the original convergence guarantees of GFlowNets do not hold), existing GFlowNet methods including DB and TB fail to discover all of the modes in maps with larger sizes. It is also worth noting that TB performs much worse than DB in terms of the number of modes discovered with increasing sizes of the maps, as it is optimized on the trajectory level with a sampled trajectory instead of the transition level as in DB, and can induce large variance. The proposed Stochastic GFlowNet method outperforms previous GFlowNet methods as well as MCMC and PPO by a large margin, while being able to efficiently discover different modes in maps with different sizes.",
|
| 975 |
+
"bbox": [
|
| 976 |
+
507,
|
| 977 |
+
268,
|
| 978 |
+
910,
|
| 979 |
+
630
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 4
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "text",
|
| 985 |
+
"text": "Varying stochasticity levels. In Figure 7, we compare different methods in a small GridWorld with an increasing level of stochasticity $\\alpha$ . We observe that TB also fails to learn well with an increasing $\\alpha$ , and performs worse than DB besides the decreased performance with increasing sizes. On the other hand, Stochastic GFlowNets outperform the baselines by a significant margin, and are robust to higher levels of stochasticity, successfully handling stochastic transition dynamics.",
|
| 986 |
+
"bbox": [
|
| 987 |
+
507,
|
| 988 |
+
645,
|
| 989 |
+
912,
|
| 990 |
+
782
|
| 991 |
+
],
|
| 992 |
+
"page_idx": 4
|
| 993 |
+
},
|
| 994 |
+
{
|
| 995 |
+
"type": "text",
|
| 996 |
+
"text": "4.1.3 Compatibility with Different GFlowNet Learning Objectives",
|
| 997 |
+
"text_level": 1,
|
| 998 |
+
"bbox": [
|
| 999 |
+
509,
|
| 1000 |
+
803,
|
| 1001 |
+
848,
|
| 1002 |
+
834
|
| 1003 |
+
],
|
| 1004 |
+
"page_idx": 4
|
| 1005 |
+
},
|
| 1006 |
+
{
|
| 1007 |
+
"type": "text",
|
| 1008 |
+
"text": "In this section, we study Stochastic GFlowNets with the trajectory balance (TB) objective as described in Section 3.2. We evaluate Stochastic TB in GridWorlds with different",
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
507,
|
| 1011 |
+
848,
|
| 1012 |
+
910,
|
| 1013 |
+
893
|
| 1014 |
+
],
|
| 1015 |
+
"page_idx": 4
|
| 1016 |
+
},
|
| 1017 |
+
{
|
| 1018 |
+
"type": "page_footnote",
|
| 1019 |
+
"text": "<https://github.com/GFNOrg/gflownet>",
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
534,
|
| 1022 |
+
907,
|
| 1023 |
+
850,
|
| 1024 |
+
922
|
| 1025 |
+
],
|
| 1026 |
+
"page_idx": 4
|
| 1027 |
+
},
|
+  {
+    "type": "image",
+    "img_path": "images/6f769a9b5715c7f1ca0b2279c8d72fdf8a3d0b3caa829fce2eedc7c4b6dd458b.jpg",
+    "image_caption": [
+      "(a) Small."
+    ],
+    "image_footnote": [],
+    "bbox": [157, 92, 381, 215],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/006e52cc6a1b7514e0e84469796bff8ad80d5c980c80a12f9aa7a401d5b1ac30.jpg",
+    "image_caption": [
+      "(b) Medium."
+    ],
+    "image_footnote": [],
+    "bbox": [386, 92, 610, 215],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/571eedc7af86b59d9fb2b4823a7f30d29a2470abded1cc8485ac92b537950efd.jpg",
+    "image_caption": [
+      "(c) Large."
+    ],
+    "image_footnote": [],
+    "bbox": [616, 93, 836, 215],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/60d0c6b0e47f61c2335a5c3d1cebca0ee1ef8bfc477517c44ae0a262d3ce6e5a.jpg",
+    "image_caption": [
+      "Figure 5: Comparison results of $L_{1}$ error in GridWorld with increasing sizes of the map.",
+      "(a) Small."
+    ],
+    "image_footnote": [],
+    "bbox": [157, 295, 381, 420],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/848bfb4748fa54c5643bcc329dd58bd8ee9a427e3b489e0c1d762162c4bd5caf.jpg",
+    "image_caption": [
+      "(b) Medium."
+    ],
+    "image_footnote": [],
+    "bbox": [386, 295, 610, 419],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/8d3a25cf7cba951e08a0a84a1fca9a9632c3170a7e7307484979dad0bf243d6d.jpg",
+    "image_caption": [
+      "(c) Large."
+    ],
+    "image_footnote": [],
+    "bbox": [616, 296, 838, 420],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/55cbe27bbe76912ff5fffb8a99517cc36f8ba5502f7ada10f62d30bd91c7adbb.jpg",
+    "image_caption": [
+      "(a) $\\alpha = 0.5$",
+      "Figure 7: Results in small GridWorld with increasing stochasticity levels $\\alpha$."
+    ],
+    "image_footnote": [],
+    "bbox": [92, 523, 285, 631],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/7212a8685eebf76df0991740b02d843c85a9d3cf8e779628f7f6be1d07f5da02.jpg",
+    "image_caption": [
+      "(b) $\\alpha = 0.9$",
+      "Figure 8: Results of Stochastic GFlowNet when built upon the trajectory balance (TB) objective in GridWorld with increasing sizes $H$ and stochasticity levels $\\alpha$. (a) Small, low stochasticity level. (b) Large, low stochasticity level. (c) Small, high stochasticity level."
+    ],
+    "image_footnote": [],
+    "bbox": [289, 523, 480, 630],
+    "page_idx": 5
+  },
+  {
+    "type": "text",
+    "text": "sizes (including small with $H = 8$ and large with $H = 128$) and stochasticity levels (including low with $\\alpha = 0.25$ and high with $\\alpha = 0.9$). Specifically, Figure 8(a) corresponds to the result in a small map with a low stochasticity level, Figure 8(b) illustrates the results in a large map with a low stochasticity level, while Figure 8(c) shows the results in a small map with a high stochasticity level.",
+    "bbox": [85, 733, 485, 840],
+    "page_idx": 5
+  },
+  {
+    "type": "text",
+    "text": "As shown in Figure 8, Stochastic TB (abbreviated as Stoch-GFN (TB) in the figure) greatly improves the performance of TB, validating the effectiveness of our proposed methodology. However, we observe that it underperforms relative to Stochastic DB when the scale of the problem increases",
+    "bbox": [85, 847, 485, 922],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/713cf8f3885b5308b8224d5326ba8df1f42c797d3ea8d7381aa5a974ea6a094b.jpg",
+    "image_caption": [
+      "(a)"
+    ],
+    "image_footnote": [],
+    "bbox": [517, 523, 642, 616],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/67b0dbe84da86468d8bb40f3ceeb89e6ccab2074b0670f71db6b4232ffecf957.jpg",
+    "image_caption": [
+      "(b)"
+    ],
+    "image_footnote": [],
+    "bbox": [645, 523, 771, 616],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/bc75d73e03efe8cd31f6de7d23c1a37bdaa0e1da08f0a5ee4a1f07a62697311b.jpg",
+    "image_caption": [
+      "Figure 6: Comparison results of the number of modes captured during the training process in GridWorld with increasing sizes of the map.",
+      "(c)"
+    ],
+    "image_footnote": [],
+    "bbox": [776, 523, 902, 617],
+    "page_idx": 5
+  },
+  {
+    "type": "text",
+    "text": "or with a higher level of stochasticity (Figure 8(c)), which can be attributed to the larger variance of TB [Madan et al., 2022] in stochastic environments.",
+    "bbox": [507, 756, 910, 801],
+    "page_idx": 5
+  },
+  {
+    "type": "text",
+    "text": "4.2 AUTOREGRESSIVE SEQUENCE GENERATION",
+    "text_level": 1,
+    "bbox": [509, 830, 805, 859],
+    "page_idx": 5
+  },
+  {
+    "type": "text",
+    "text": "In this section, we study Stochastic GFN on autoregressive sequence generation tasks [Malkin et al., 2022a]. We first consider a bit sequence generation task to investigate the",
+    "bbox": [507, 877, 910, 922],
+    "page_idx": 5
+  },
+  {
+    "type": "image",
+    "img_path": "images/4143ffb8ee11283cbc98952161ed0de9aa462a4a22b93d803a87999e53c40f8e.jpg",
+    "image_caption": [
+      "(a)"
+    ],
+    "image_footnote": [],
+    "bbox": [157, 90, 381, 218],
+    "page_idx": 6
+  },
+  {
+    "type": "image",
+    "img_path": "images/743f4195916c1ded7d24233ef4750b6cf5c72bfe40e7483c59af013e5abe4623.jpg",
+    "image_caption": [
+      "(b)"
+    ],
+    "image_footnote": [],
+    "bbox": [386, 90, 610, 218],
+    "page_idx": 6
+  },
+  {
+    "type": "image",
+    "img_path": "images/85dfbcec41551a92edad843c6629571db789494d2d431ae5a1869fbeccc85917.jpg",
+    "image_caption": [
+      "(c)"
+    ],
+    "image_footnote": [],
+    "bbox": [613, 90, 838, 217],
+    "page_idx": 6
+  },
+  {
+    "type": "image",
+    "img_path": "images/fdcbba0bb401c186ab0420f8b6b99238d287c071e6898ab36f3a43618c08af75.jpg",
+    "image_caption": [
+      "(d)"
+    ],
+    "image_footnote": [],
+    "bbox": [157, 253, 381, 378],
+    "page_idx": 6
+  },
+  {
+    "type": "image",
+    "img_path": "images/f6bfd1ae72e7be5163a275864aaf9fa10bbdcb61ebf9aefbdbcd0dc2a1009a79.jpg",
+    "image_caption": [
+      "(e)"
+    ],
+    "image_footnote": [],
+    "bbox": [386, 253, 610, 378],
+    "page_idx": 6
+  },
+  {
+    "type": "image",
+    "img_path": "images/d061f0eb7105178de1f9e9c312e94bbc3c77f89effc21d9c947af8d08f72e1f0.jpg",
+    "image_caption": [
+      "(f)",
+      "Figure 9: Results in the bit sequence generation task. The first and second rows correspond to the results of the number of bits $k = 4$ and $k = 2$. The first, second, and third columns correspond to the results of different stochasticity levels of 0.1, 0.3, and 0.5, respectively."
+    ],
+    "image_footnote": [],
+    "bbox": [613, 253, 836, 378],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "effect of the size of the action space and length of the trajectory with varying levels of environment stochasticity. We then study the more realistic and complex tasks of generating biological sequences.",
+    "bbox": [85, 484, 487, 546],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "4.2.1 Bit Sequences",
+    "text_level": 1,
+    "bbox": [85, 568, 240, 584],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "Task. In the bit sequence generation task [Malkin et al., 2022a], the agent aims to generate bit sequences of length $n = 120$. At each step, the agent appends a $k$-bit \"word\" from a vocabulary $V$ to the current state from left to right, which is a partial sequence. Note that we consider a stochastic variant of the task, with noise level $\\alpha$ as described in Section 4.1.1. The resulting action space has a size of $|V| = 2^k$, and the length of the complete trajectories is $\\frac{n}{k}$. Following Malkin et al. [2022a], we define the reward function $R(x)$ to have modes at a fixed set of bit sequences $M$ with $R(x) = \\exp(-\\min_{y \\in M} d(x, y))$, where $d$ is the edit distance. We evaluate each method in terms of the number of modes discovered during the course of training.",
+    "bbox": [85, 598, 487, 795],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "We study the performance of Stochastic DB with different levels of stochasticity, and compare it against vanilla DB and strong baselines including Advantage Actor-Critic (A2C) [Mnih et al., 2016], Soft Actor-Critic (SAC) [Haarnoja et al., 2018], and MCMC [Xie et al., 2021]. Each method is run for 3 different seeds and we report the mean and standard deviation. More details about the experimental setup in the stochastic bit sequence generation task",
+    "bbox": [84, 801, 487, 924],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "can be found in Appendix A.2. We use the same hyperparameters and architectures as in Malkin et al. [2022a].",
+    "bbox": [507, 484, 910, 515],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "Results. Figure 9 demonstrates the number of modes captured by each method throughout the training process with different levels of stochasticity ranging from 0.1 to 0.5, where the first and second rows correspond to the results for $k = 4$ and $k = 2$, respectively. We observe that regular GFlowNets (GFN in the figure) fail to learn well, particularly when the trajectories are longer (with a smaller value of $k$). On the other hand, the Stochastic GFlowNet (Stoch-GFN in the figures) is robust to increasing trajectory lengths, and also performs well when the stochasticity level increases. In addition, Stoch-GFN significantly outperforms strong baselines including MCMC, A2C, and SAC, discovering more modes faster.",
+    "bbox": [507, 532, 910, 728],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "4.2.2 TF Bind 8 Generation",
+    "text_level": 1,
+    "bbox": [509, 756, 722, 770],
+    "page_idx": 6
+  },
+  {
+    "type": "text",
+    "text": "Task. We now consider the practical task of generating DNA sequences with high binding activity with particular transcription factors, following Jain et al. [2022a]. At each time step, the agent appends a symbol from the vocabulary to the right of the current state. As with the bit generation task, we consider a stochastic variant of the task following Yang et al. [2022] with random actions taken with probability $\\alpha$ (as described in Section 4.1.1). We adopt a pre-trained neural network as the reward function follow",
+    "bbox": [507, 787, 910, 924],
+    "page_idx": 6
+  },
| 1434 |
+
{
|
| 1435 |
+
"type": "text",
|
| 1436 |
+
"text": "ing Jain et al. [2022a] that estimates the binding activity. We investigate how well Stochastic DB performs by comparing it with vanilla DB, MCMC, and RL-based methods including A2C and SAC. For evaluation, we evaluate each method in terms of the number of modes with rewards above a threshold discovered in the batch of generated sequences. We also use the mean reward and 50th percentile score for the top 100 sequences ranked by their rewards from a batch of 2048 generated sequences for each method as in [Jain et al., 2022a, Trabucco et al., 2022]. We run each algorithm for 3 different seeds, and report their mean and standard deviation. We follow the same hyperparameters, architectures, and setup as in Jain et al. [2022a], and a detailed description of the setup can be found in Appendix A.3.",
|
| 1437 |
+
"bbox": [
|
| 1438 |
+
84,
|
| 1439 |
+
79,
|
| 1440 |
+
489,
|
| 1441 |
+
292
|
| 1442 |
+
],
|
| 1443 |
+
"page_idx": 7
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "text",
|
| 1447 |
+
"text": "Results. Comparison results of Stoch-GFN and baselines with varying stochasticity levels (ranging from 0.1 to 0.5) in terms of the number of modes discovered with rewards above a threshold during the training process and top-100 mean rewards are summarized in Figure 10. As shown in Figure 10(a), Stoch-GFN discovers many more modes than GFN, MCMC, and RL-based methods in different stochasticity levels. Stoch-GFN also achieves higher top-100 rewards (in mean and median) than baselines as demonstrated in Figures 10(b)-(c), where the top-100 reward of GFN decrease with an increasing stochastic level. These results validate the effectiveness of Stoch-GFN in the more realistic task for biological sequence design with stochasticity in the environment.",
|
| 1448 |
+
"bbox": [
|
| 1449 |
+
84,
|
| 1450 |
+
306,
|
| 1451 |
+
489,
|
| 1452 |
+
518
|
| 1453 |
+
],
|
| 1454 |
+
"page_idx": 7
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "image",
|
| 1458 |
+
"img_path": "images/4f41d3d11f8c0bf6ee56983780599c0bddc3cf180ab707a6485f531100e5134c.jpg",
|
| 1459 |
+
"image_caption": [
|
| 1460 |
+
"(a) The number of modes."
|
| 1461 |
+
],
|
| 1462 |
+
"image_footnote": [],
|
| 1463 |
+
"bbox": [
|
| 1464 |
+
131,
|
| 1465 |
+
532,
|
| 1466 |
+
442,
|
| 1467 |
+
609
|
| 1468 |
+
],
|
| 1469 |
+
"page_idx": 7
|
| 1470 |
+
},
|
| 1471 |
+
{
|
| 1472 |
+
"type": "image",
|
| 1473 |
+
"img_path": "images/9589a75ca53131ac9bc2cf798a9e16bd9bd3fee09ed299e93f5b5e08a67038c1.jpg",
|
| 1474 |
+
"image_caption": [
|
| 1475 |
+
"(b) Top-100 reward (mean)."
|
| 1476 |
+
],
|
| 1477 |
+
"image_footnote": [],
|
| 1478 |
+
"bbox": [
|
| 1479 |
+
129,
|
| 1480 |
+
643,
|
| 1481 |
+
442,
|
| 1482 |
+
720
|
| 1483 |
+
],
|
| 1484 |
+
"page_idx": 7
|
| 1485 |
+
},
|
| 1486 |
+
{
|
| 1487 |
+
"type": "image",
|
| 1488 |
+
"img_path": "images/5697877df23b3df7693c498c6469789a1be5253c23c7ed3c3d95a6b1806bd710.jpg",
|
| 1489 |
+
"image_caption": [
|
| 1490 |
+
"(c) Top-100 reward (median).",
|
| 1491 |
+
"Figure 10: Results on the TF Bind 8 generation task, with better results for Stoch-GFN against MCMC, A2C, SAC and GFN baselines."
|
| 1492 |
+
],
|
| 1493 |
+
"image_footnote": [],
|
| 1494 |
+
"bbox": [
|
| 1495 |
+
131,
|
| 1496 |
+
755,
|
| 1497 |
+
442,
|
| 1498 |
+
832
|
| 1499 |
+
],
|
| 1500 |
+
"page_idx": 7
|
| 1501 |
+
},
|
| 1502 |
+
{
|
| 1503 |
+
"type": "text",
|
| 1504 |
+
"text": "4.2.3 Antimicrobial Peptide Generation",
|
| 1505 |
+
"text_level": 1,
|
| 1506 |
+
"bbox": [
|
| 1507 |
+
509,
|
| 1508 |
+
79,
|
| 1509 |
+
808,
|
| 1510 |
+
95
|
| 1511 |
+
],
|
| 1512 |
+
"page_idx": 7
|
| 1513 |
+
},
|
| 1514 |
+
{
|
| 1515 |
+
"type": "text",
|
| 1516 |
+
"text": "Task. In this section, we study the realistic task of generating peptide sequences with anti-microbial properties [Malkin et al., 2022a, Jain et al., 2022a]. The agent chooses a symbol from the vocabulary that consists of 20 amino acids and a special end-of-sequence action to the current state in a left-to-right manner at each time step. The maximum length of the sequence is 60, and the size of the resulting state space is $21^{60}$ . We consider a stochastic variant of the task (as in Section 4.1.1) with a stochasticity level of $\\alpha = 0.1$ . The reward function is a pre-trained neural network that estimates the anti-microbial activity following [Malkin et al., 2022a] from the DBAASP database [Pirtskhalava et al., 2021]. As in Section 4.2.2, we generate 2048 sequences from each method and evaluate them in terms of the top-100 rewards and the number of modes discovered above a threshold. We study the performance of Stochastic DB by comparing it with DB, MCMC, and RL-based methods. We report the mean and standard deviation over 3 runs for each method. A detailed description of the setup is in Appendix A.4 following Malkin et al. [2022a].",
|
| 1517 |
+
"bbox": [
|
| 1518 |
+
507,
|
| 1519 |
+
109,
|
| 1520 |
+
910,
|
| 1521 |
+
412
|
| 1522 |
+
],
|
| 1523 |
+
"page_idx": 7
|
| 1524 |
+
},
|
| 1525 |
+
{
|
| 1526 |
+
"type": "text",
|
| 1527 |
+
"text": "Results. As shown in Table 1, we observe that Stoch-GFN significantly outperforms GFN and other baselines in terms of the top-100 reward. In addition, it also discovers more modes with rewards above a threshold than baseline methods, which further validates its effectiveness on the more complex and challenging task.",
|
| 1528 |
+
"bbox": [
|
| 1529 |
+
507,
|
| 1530 |
+
426,
|
| 1531 |
+
910,
|
| 1532 |
+
517
|
| 1533 |
+
],
|
| 1534 |
+
"page_idx": 7
|
| 1535 |
+
},
|
| 1536 |
+
{
|
| 1537 |
+
"type": "table",
|
| 1538 |
+
"img_path": "images/857b71a4b4163801b2e88c7b42e9fcd64120278f7b235410523b5c22145ce522.jpg",
|
| 1539 |
+
"table_caption": [
|
| 1540 |
+
"Table 1: Better results with Stoch-GFN on the AMP generation task. Larger is better."
|
| 1541 |
+
],
|
| 1542 |
+
"table_footnote": [],
|
| 1543 |
+
"table_body": "<table><tr><td></td><td>Top-100 reward</td><td>Number of modes</td></tr><tr><td>MCMC</td><td>0.632 ± 0.035</td><td>3.67 ± 0.58</td></tr><tr><td>A2C</td><td>0.682 ± 0.032</td><td>2.66 ± 0.58</td></tr><tr><td>SAC</td><td>0.754 ± 0.047</td><td>4.33 ± 1.33</td></tr><tr><td>GFN</td><td>0.748 ± 0.048</td><td>3.0 ± 3.0</td></tr><tr><td>Stoch-GFN</td><td>0.834 ± 0.023</td><td>19.5 ± 2.5</td></tr></table>",
|
| 1544 |
+
"bbox": [
|
| 1545 |
+
524,
|
| 1546 |
+
571,
|
| 1547 |
+
894,
|
| 1548 |
+
680
|
| 1549 |
+
],
|
| 1550 |
+
"page_idx": 7
|
| 1551 |
+
},
|
| 1552 |
+
{
|
| 1553 |
+
"type": "text",
|
| 1554 |
+
"text": "5 RELATED WORK",
|
| 1555 |
+
"text_level": 1,
|
| 1556 |
+
"bbox": [
|
| 1557 |
+
509,
|
| 1558 |
+
710,
|
| 1559 |
+
712,
|
| 1560 |
+
726
|
| 1561 |
+
],
|
| 1562 |
+
"page_idx": 7
|
| 1563 |
+
},
|
| 1564 |
+
{
|
| 1565 |
+
"type": "text",
|
| 1566 |
+
"text": "GFlowNets. The universality and effectiveness of GFlowNets have been demonstrated in various kinds of applications, including biological sequence design [Jain et al., 2022a], causal discovery and structure learning [Deleu et al., 2022, Nishikawa-Toomey et al., 2022], substructure learning of deep neural network weights via Dropout [Liu et al., 2022], multi-objective optimization [Jain et al., 2022b], and robust job scheduling problems [Zhang et al., 2023a]. Malkin et al. [2022a] proposed the trajectory balance (TB) objective to optimize GFlowNet at a trajectory level instead of at the transition level as in detailed balance Bengio et al. [2021b], but can induce large variance, where the problem is",
|
| 1567 |
+
"bbox": [
|
| 1568 |
+
507,
|
| 1569 |
+
741,
|
| 1570 |
+
910,
|
| 1571 |
+
924
|
| 1572 |
+
],
|
| 1573 |
+
"page_idx": 7
|
| 1574 |
+
},
|
| 1575 |
+
{
|
| 1576 |
+
"type": "text",
|
| 1577 |
+
"text": "exacerbated in stochastic environments. Madan et al. [2022] propose the sub-trajectory balance method considers subtrajectories. The early GFlowNet proposals from Bengio et al. [2021a,b] first formulated GFlowNets and pointed out possible future development directions. Originating from reinforcement learning, GFlowNets face the same long-term credit assignment challenges to propagate downstream reward signals to earlier states. Pan et al. [2023] proposed a forward-looking GFlowNet formulation to exploit intermediate energies or rewards for more efficient credit assignment, making it possible to learn from incomplete trajectories. Pan et al. [2022] incorporates intrinsic intermediate rewards into GFlowNets by augmenting the flow values for better exploration. EB-GFN [Zhang et al., 2022b] jointly learns from data an energy/reward function along with the corresponding GFlowNet. Zhang et al. [2022a] recently points out that the relationship between generative models and GFlowNets. It is worth mentioning that Zhang et al. [2023b] shares a similar goal to our work; it extends the GFlowNet framework for stochastic reward settings with distributional modeling, while this work focuses on stochasticity in the environment transition dynamics.",
|
| 1578 |
+
"bbox": [
|
| 1579 |
+
85,
|
| 1580 |
+
79,
|
| 1581 |
+
490,
|
| 1582 |
+
412
|
| 1583 |
+
],
|
| 1584 |
+
"page_idx": 8
|
| 1585 |
+
},
|
| 1586 |
+
{
|
| 1587 |
+
"type": "text",
|
| 1588 |
+
"text": "Model-based Reinforcement Learning. Model-based reinforcement learning (RL) is a promising approach for improved sample efficiency compared with model-free (RL) methods [Lillicrap et al., 2015, Fujimoto et al., 2018], and has been successfully applied to many tasks such as robotics leveraging different dynamics models. The stochastic value gradient method [Heess et al., 2015] learns a hybrid of model-based and model-free RL which can learn stochastic policies in stochastic continuous control tasks. Dreamer [Hafner et al., 2019] learns latent dynamics to solve long-horizon tasks from high-dimensional images. MuZero [Antonoglou et al., 2021] combines model-based methods with Monte-Carlo tree search for planning, and it has achieved great success in game playing. Stochastic MuZero [Schrittwieser et al., 2020] learns a stochastic model for extending MuZero to stochastic environments.",
|
| 1589 |
+
"bbox": [
|
| 1590 |
+
85,
|
| 1591 |
+
428,
|
| 1592 |
+
489,
|
| 1593 |
+
672
|
| 1594 |
+
],
|
| 1595 |
+
"page_idx": 8
|
| 1596 |
+
},
|
| 1597 |
+
{
|
| 1598 |
+
"type": "text",
|
| 1599 |
+
"text": "6 CONCLUSION",
|
| 1600 |
+
"text_level": 1,
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
85,
|
| 1603 |
+
695,
|
| 1604 |
+
258,
|
| 1605 |
+
710
|
| 1606 |
+
],
|
| 1607 |
+
"page_idx": 8
|
| 1608 |
+
},
|
| 1609 |
+
{
|
| 1610 |
+
"type": "text",
|
| 1611 |
+
"text": "In this paper, we introduce a new methodology, Stochastic GFlowNets, which is the first empirically effective approach to extend GFlowNets to the more general and realistic stochastic environments, where existing GFlowNet methods can fail. Our method learns the GFlowNet policy and also the environment model to capture the stochasticity in the environment. We conduct extensive experiments in standard tasks for benchmarking GFlowNets with stochastic transition dynamics. Results show that Stochastic GFlowNet learns significantly better than previous methods in the presence of stochastic transitions. It is interesting for future work to study advanced model-based approaches for approximating the transition dynamics, and also apply our method to",
|
| 1612 |
+
"bbox": [
|
| 1613 |
+
85,
|
| 1614 |
+
726,
|
| 1615 |
+
489,
|
| 1616 |
+
924
|
| 1617 |
+
],
|
| 1618 |
+
"page_idx": 8
|
| 1619 |
+
},
|
| 1620 |
+
{
|
| 1621 |
+
"type": "text",
|
| 1622 |
+
"text": "other challenging real-world tasks.",
|
| 1623 |
+
"bbox": [
|
| 1624 |
+
510,
|
| 1625 |
+
79,
|
| 1626 |
+
746,
|
| 1627 |
+
95
|
| 1628 |
+
],
|
| 1629 |
+
"page_idx": 8
|
| 1630 |
+
},
|
| 1631 |
+
{
|
| 1632 |
+
"type": "text",
|
| 1633 |
+
"text": "ACKNOWLEDGEMENTS",
|
| 1634 |
+
"text_level": 1,
|
| 1635 |
+
"bbox": [
|
| 1636 |
+
510,
|
| 1637 |
+
119,
|
| 1638 |
+
752,
|
| 1639 |
+
135
|
| 1640 |
+
],
|
| 1641 |
+
"page_idx": 8
|
| 1642 |
+
},
|
| 1643 |
+
{
|
| 1644 |
+
"type": "text",
|
| 1645 |
+
"text": "The authors would like to thank Almer Van der Sloot, Kanika Madan, and Qingpeng Cai for insightful discussions about the paper and the baselines in the AMP generation task. Longbo Huang is supported in part by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2020AAA0108400 and 2020AAA0108403, and Tsinghua Precision Medicine Foundation 10001020109. Yoshua Bengio acknowledges the funding from CIFAR, Genentech, Samsung, and IBM.",
|
| 1646 |
+
"bbox": [
|
| 1647 |
+
507,
|
| 1648 |
+
151,
|
| 1649 |
+
910,
|
| 1650 |
+
289
|
| 1651 |
+
],
|
| 1652 |
+
"page_idx": 8
|
| 1653 |
+
},
|
| 1654 |
+
{
|
| 1655 |
+
"type": "text",
|
| 1656 |
+
"text": "References",
|
| 1657 |
+
"text_level": 1,
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
510,
|
| 1660 |
+
311,
|
| 1661 |
+
594,
|
| 1662 |
+
325
|
| 1663 |
+
],
|
| 1664 |
+
"page_idx": 8
|
| 1665 |
+
},
|
| 1666 |
+
{
|
| 1667 |
+
"type": "list",
|
| 1668 |
+
"sub_type": "ref_text",
|
| 1669 |
+
"list_items": [
|
| 1670 |
+
"Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An introduction to mcmc for machine learning. Machine learning, 50(1):5-43, 2003.",
|
| 1671 |
+
"Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K Hubert, and David Silver. Planning in stochastic environments with a learned model. In International Conference on Learning Representations, 2021.",
|
| 1672 |
+
"Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. Advances in Neural Information Processing Systems, 34: 27381-27394, 2021a.",
|
| 1673 |
+
"Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward Hu, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. arXiv preprint 2111.09266, 2021b.",
|
| 1674 |
+
"Tristan Deleu, Antonio Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. Uncertainty in Artificial Intelligence (UAI), 2022.",
|
| 1675 |
+
"Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pages 1587-1596. PMLR, 2018.",
|
| 1676 |
+
"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Neural Information Processing Systems (NIPS), pages 2672-2680, 2014.",
|
| 1677 |
+
"Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. International Conference on Machine Learning (ICML), 2017."
|
| 1678 |
+
],
|
| 1679 |
+
"bbox": [
|
| 1680 |
+
510,
|
| 1681 |
+
342,
|
| 1682 |
+
912,
|
| 1683 |
+
922
|
| 1684 |
+
],
|
| 1685 |
+
"page_idx": 8
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "list",
|
| 1689 |
+
"sub_type": "ref_text",
|
| 1690 |
+
"list_items": [
|
| 1691 |
+
"Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. International Conference on Machine Learning (ICML), 2018.",
"Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.",
"W Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97-109, 1970.",
"Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. Advances in Neural Information Processing Systems, 28, 2015.",
"Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.",
"Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure F.P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Michael Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, and Yoshua Bengio. Biological sequence design with GFlowNets. International Conference on Machine Learning (ICML), 2022a.",
"Moksh Jain, Sharath Chandra Raparthy, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Yoshua Bengio, Santiago Miret, and Emmanuel Bengio. Multi-objective GFlowNets. arXiv preprint arXiv:2210.12765, 2022b.",
"Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.",
"Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.",
"Matevž Kunaver and Tomaž Požrl. Diversity in recommender systems - a survey. Knowledge-Based Systems, 123:154-162, 2017.",
"Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.",
"Dianbo Liu, Moksh Jain, Bonaventure F. P. Dossou, Qianli Shen, Salem Lahlou, Anirudh Goyal, Nikolay Malkin, Chris C. Emezue, Dinghuai Zhang, Nadhir Hassen, Xu Ji, Kenji Kawaguchi, and Yoshua Bengio. GFlowOut: Dropout with generative flow networks. arXiv preprint arXiv:2210.12928, 2022."
],
"bbox": [87, 79, 485, 922],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523-562, 2018.",
"Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, Emmanuel Bengio, Moksh Jain, Andrei Nica, Tom Bosc, Yoshua Bengio, and Nikolay Malkin. Learning GFlowNets from partial episodes for improved convergence and stability. International Conference on Learning Representations (ICLR), 2023; arXiv preprint arXiv:2209.12782, 2022.",
"Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in GFlowNets. Neural Information Processing Systems (NeurIPS), 2022a.",
"Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio. GFlowNets and variational inference. arXiv preprint arXiv:2210.00580, 2022b.",
"Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087-1092, 1953.",
"Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.",
"Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. Neural Information Processing Systems (NIPS), 2016.",
"Mizu Nishikawa-Toomey, Tristan Deleu, Jithendaraa Subramanian, Yoshua Bengio, and Laurent Charlin. Bayesian learning of causal structure and mechanisms with GFlowNets and variational Bayes. arXiv preprint arXiv:2211.02763, 2022.",
"Ling Pan, Dinghuai Zhang, Aaron Courville, Longbo Huang, and Yoshua Bengio. Generative augmented flow networks. arXiv preprint arXiv:2210.03308, 2022.",
"Ling Pan, Nikolay Malkin, Dinghuai Zhang, and Yoshua Bengio. Better training of GFlowNets with local credit and incomplete trajectories. arXiv preprint arXiv:2302.01687, 2023.",
"Keiran Paster, Sheila McIlraith, and Jimmy Ba. You can't count on luck: Why decision transformers fail in stochastic environments. arXiv preprint arXiv:2205.15967, 2022.",
"Malak Pirtskhalava, Anthony A Amstrong, Maia Grigolava, Mindia Chubinidze, Evgenia Alimbarashvili, Boris Vishnepolsky, Andrei Gabrielian, Alex Rosenthal, Darrell E"
],
"bbox": [512, 79, 910, 922],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Hurt, and Michael Tartakovsky. DBAASP v3: Database of antimicrobial/cytotoxic activity and structure of peptides as a resource for development of new therapeutics. Nucleic Acids Research, 49(D1):D288-D297, 2021.",
"Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.",
"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.",
"Li-Fu Song, Zheng-Hua Deng, Zi-Yi Gong, Lu-Lu Li, and Bing-Zhi Li. Large-scale de novo oligonucleotide synthesis for whole-genome synthesis and data storage: Challenges and opportunities. Frontiers in Bioengineering and Biotechnology, 9:689797, 2021.",
"Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 2018.",
"Brandon Trabucco, Xinyang Geng, Aviral Kumar, and Sergey Levine. Design-bench: Benchmarks for data-driven offline model-based optimization. In International Conference on Machine Learning, pages 21658-21676. PMLR, 2022.",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Neural Information Processing Systems (NIPS), 2017.",
"Arun Venkatraman, Martial Hebert, and J Bagnell. Improving multi-step prediction of learned time series models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.",
"Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, and Lei Li. MARS: Markov molecular sampling for multi-objective drug discovery. arXiv preprint arXiv:2103.10432, 2021.",
"Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. Dichotomy of control: Separating what you can control from what you cannot. arXiv preprint arXiv:2210.13435, 2022.",
"David Zhang, Corrado Rainone, Markus Peschl, and Roberto Bondesan. Robust scheduling with GFlowNets. International Conference on Learning Representations (ICLR), 2023a.",
"Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, and Yoshua Bengio. Unifying generative models with GFlowNets. arXiv preprint arXiv:2209.02606, 2022a."
],
"bbox": [87, 79, 489, 922],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Dinghuai Zhang, Nikolay Malkin, Zhen Liu, Alexandra Volokhova, Aaron Courville, and Yoshua Bengio. Generative flow networks for discrete probabilistic modeling. International Conference on Machine Learning (ICML), 2022b.",
"Dinghuai Zhang, Ling Pan, Ricky TQ Chen, Aaron Courville, and Yoshua Bengio. Distributional GFlowNets with quantile flows. arXiv preprint arXiv:2302.05793, 2023b."
],
"bbox": [512, 79, 912, 224],
"page_idx": 10
},
{
"type": "text",
"text": "A EXPERIMENTAL DETAILS",
"text_level": 1,
"bbox": [87, 78, 381, 94],
"page_idx": 11
},
{
"type": "text",
"text": "A.1 GRIDWORLD",
"text_level": 1,
"bbox": [87, 111, 243, 125],
"page_idx": 11
},
{
"type": "text",
"text": "The reward function for GridWorld is defined as in Eq. (11) following Bengio et al. [2021a], where $R_0 = 2.0$, $R_1 = 0.5$, and $R_2 = 0.001$.",
"bbox": [85, 141, 489, 186],
"page_idx": 11
},
{
"type": "equation",
"text": "\n$$\nR(x) = R_0 + R_1 \\prod_i \\mathbb{I}\\left(0.25 < |x_i / H - 0.5|\\right) + R_2 \\prod_i \\mathbb{I}\\left(0.3 < |x_i / H - 0.5| < 0.4\\right) \\tag{11}\n$$\n",
"text_format": "latex",
"bbox": [115, 191, 485, 255],
"page_idx": 11
},
{
"type": "text",
"text": "We use a feedforward network that consists of two hidden layers with 256 hidden units and LeakyReLU activation. States are represented using one-hot embeddings. The environment model in Stochastic GFlowNet is also a feedforward network consisting of two hidden layers with 256 hidden units and LeakyReLU activation. All models are trained for 20000 iterations, and we run 16 parallel rollouts in the environment at each iteration (which are then stored in the experience replay buffer). The GFlowNet model is updated based on the rollouts, and we train it with the Adam [Kingma and Ba, 2015] optimizer using a learning rate of 0.001 (the learning rate for $Z$ in TB is 0.1). We train the environment model using data sampled from the experience replay buffer with a batch size of 16, using the Adam optimizer with a learning rate of 0.0001. MCMC and PPO use the same configuration as in Bengio et al. [2021a].",
"bbox": [84, 261, 487, 518],
"page_idx": 11
},
{
"type": "text",
"text": "A.2 BIT SEQUENCES",
"text_level": 1,
"bbox": [85, 539, 268, 554],
"page_idx": 11
},
{
"type": "text",
"text": "We follow the same setup for the bit sequence generation task as in Malkin et al. [2022a]. The GFlowNet model is a Transformer [Vaswani et al., 2017] that consists of 3 hidden layers with 64 hidden units and uses 8 attention heads. The exploration strategy is $\\epsilon$-greedy with $\\epsilon = 0.0005$, while the sampling temperature is set to 1. It uses a reward exponent of 3. The learning rate for training the GFlowNet model is $5 \\times 10^{-3}$, with a batch size of 16. As for the environment model in Stochastic GFlowNet, we use a feedforward network consisting of two hidden layers with 2048 hidden units and ReLU activation, which is trained using the Adam optimizer with a learning rate of $5 \\times 10^{-4}$. It is trained using data sampled from the experience replay buffer with a batch size of 128. We train all models for 50000 iterations, running 16 parallel rollouts in the environment. MCMC, A2C, and SAC adopt the same configuration as in Malkin et al. [2022a].",
"bbox": [84, 569, 489, 828],
"page_idx": 11
},
{
"type": "text",
"text": "A.3 TFBIND-8",
"text_level": 1,
"bbox": [85, 847, 213, 861],
"page_idx": 11
},
{
"type": "text",
"text": "For the TFBind-8 generation task, we follow the same setup as in Jain et al. [2022a]. The vocabulary consists of 4 nucleobases, and the trajectory length is 8. The GFlowNet model",
"bbox": [85, 878, 489, 925],
"page_idx": 11
},
{
"type": "text",
"text": "is a feedforward network that consists of 2 hidden layers with 2048 hidden units and ReLU activation. The exploration strategy is $\\epsilon$-greedy with $\\epsilon = 0.001$, while the reward exponent is 3. The learning rate for training the GFlowNet model is $10^{-4}$, with a batch size of 32. As for the environment model, we use a feedforward network consisting of two hidden layers with 2048 hidden units and ReLU activation, which is trained using the Adam optimizer with a learning rate of $10^{-5}$. It is trained using data sampled from the experience replay buffer with a batch size of 16. We train all models for 5000 iterations. MCMC, A2C, and SAC baselines follow the same configuration as in Jain et al. [2022a].",
"bbox": [507, 79, 912, 276],
"page_idx": 11
},
{
"type": "text",
"text": "A.4 ANTIMICROBIAL PEPTIDE GENERATION",
"text_level": 1,
"bbox": [509, 297, 892, 313],
"page_idx": 11
},
{
"type": "text",
"text": "We follow the same setup for the antimicrobial peptide generation task as in Malkin et al. [2022a]. The GFlowNet model is a Transformer [Vaswani et al., 2017] that consists of 3 hidden layers with 64 hidden units and uses 8 attention heads. The exploration strategy is $\\epsilon$-greedy with $\\epsilon = 0.01$, while the sampling temperature is set to 1. It uses a reward exponent of 3. The learning rate for training the GFlowNet model is 0.001, with a batch size of 16. As for the environment model, we use a feedforward network consisting of two hidden layers with 128 hidden units and ReLU activation, which is trained using the Adam optimizer with a learning rate of 0.0005. It is trained using data sampled from the experience replay buffer with a batch size of 128. We train all models for 20000 iterations, running 16 parallel rollouts in the environment.",
"bbox": [507, 328, 912, 554],
"page_idx": 11
}
]
2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_model.json
ADDED
The diff for this file is too large to render. See raw diff
2302.09xxx/2302.09465/49451c7e-5015-40a9-8475-bc4e421b3bab_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:12c7e99d6112a68677ede95a71b3c79bc58771cdd4e8713fdc42ffda3f64e7de
size 2036301
2302.09xxx/2302.09465/full.md
ADDED
@@ -0,0 +1,407 @@
# Stochastic Generative Flow Networks

Ling Pan*$^{1,2}$

Dinghuai Zhang*$^{1,2}$

Moksh Jain$^{1,2}$

Longbo Huang$^{3}$

Yoshua Bengio$^{1,2,4}$

$^{1}$Mila - Québec AI Institute

$^{2}$Université de Montréal

$^{3}$Tsinghua University

$^{4}$CIFAR AI Chair
# Abstract
Generative Flow Networks (or GFlowNets for short) are a family of probabilistic agents that learn to sample complex combinatorial structures through the lens of "inference as control". They have shown great potential in generating high-quality and diverse candidates from a given energy landscape. However, existing GFlowNets can be applied only to deterministic environments, and fail in more general tasks with stochastic dynamics, which can limit their applicability. To overcome this challenge, this paper introduces Stochastic GFlowNets, a new algorithm that extends GFlowNets to stochastic environments. By decomposing state transitions into two steps, Stochastic GFlowNets isolate environmental stochasticity and learn a dynamics model to capture it. Extensive experimental results demonstrate that Stochastic GFlowNets offer significant advantages over standard GFlowNets as well as MCMC- and RL-based approaches, on a variety of standard benchmarks with stochastic dynamics.

# 1 INTRODUCTION
Recently, Generative Flow Networks [GFlowNets; Bengio et al., 2021a,b] have been successfully applied to a wide variety of tasks, including molecule discovery [Bengio et al., 2021a, Jain et al., 2022b], biological sequence design [Jain et al., 2022a], and robust scheduling [Zhang et al., 2023a]. GFlowNets learn policies to generate objects $x \in \mathcal{X}$ sequentially, and are related to Monte-Carlo Markov chain (MCMC) methods [Metropolis et al., 1953, Hastings, 1970, Andrieu et al., 2003], generative models [Goodfellow et al., 2014, Ho et al., 2020], and amortized variational inference [Kingma and Welling, 2013]. The sequential process
of generating an object following a policy bears a close resemblance to reinforcement learning [RL; Sutton and Barto, 2018]. Contrary to the typical reward-maximizing policy in RL [Mnih et al., 2015, Lillicrap et al., 2015, Haarnoja et al., 2017, Fujimoto et al., 2018, Haarnoja et al., 2018], GFlowNets aim to learn a stochastic policy for sampling composite objects $x$ with probability proportional to the reward function $R(x)$ . This is desirable in many real-world tasks where the diversity of solutions is important, and we aim to sample a diverse set of high-reward candidates, including recommender systems [Kunaver and Požrl, 2017], drug discovery [Bengio et al., 2021a, Jain et al., 2022a], and sampling causal models from a Bayesian posterior [Deleu et al., 2022], among others.

![](images/8a83b87d0b5e31e827df77e66cadf698c4a487b13b5823bfbab6ddc69427f148.jpg)
Figure 1: An example illustrating the failure of existing GFlowNet approaches. (Left) Squares and circles denote states and actions, while solid and dotted arrows correspond to policy decisions and stochastic environment dynamics. The numbers above the dotted lines represent state transition probabilities, and the numbers below the blue squares (terminal states) denote the terminal reward. (Right) Results from existing GFlowNet approaches and the ideal solution.

$$
P(s_1) = \frac{5}{12} \neq \frac{1}{3} = \frac{R_1}{R_1 + R_2}
$$

$$
P(s_2) = \frac{7}{12} \neq \frac{2}{3} = \frac{R_2}{R_1 + R_2}
$$
Existing work on GFlowNets [Bengio et al., 2021a, Malkin et al., 2022a, Madan et al., 2022], however, is limited to deterministic environments, where state transitions are deterministic, which may limit their applicability in the more general stochastic cases in practice. Figure 1 illustrates an example with stochastic transition dynamics where existing GFlowNet approaches can fail. Standard GFlowNet approaches will result in $P(s_{1}) = \frac{5}{12}$ and $P(s_{2}) = \frac{7}{12}$ when trained to completion (with $P(s)$ denoting the probability
of sampling state $s$ ), which does not match the ideal case where $P(s_{1}) = \frac{1}{3}$ and $P(s_{2}) = \frac{2}{3}$ . Therefore, the learned policy does not sample proportionally to the reward function in the presence of stochastic transition dynamics. In practice, many tasks involve stochasticity in state transitions, which are more challenging to solve but are applicable to a wide variety of problems [Antonoglou et al., 2021, Yang et al., 2022, Paster et al., 2022].
To address this limitation, in this paper, we introduce a novel methodology, Stochastic GFlowNet, which is the first empirically effective approach for tackling environments with stochastic transition dynamics with GFlowNets. Stochastic GFlowNet decomposes the state transitions based on the concept of afterstates [Sutton and Barto, 2018, Bengio et al., 2021b]. Specifically, each stochastic state transition is decomposed into a deterministic step that transitions from the environment state $s$ to an intermediate state $(s,a)$ and a stochastic step that branches from the intermediate state $(s,a)$ to the next state $s'$ . We propose a practical way for training the dynamics model to capture the stochastic environment dynamics. The methodology is general and can be applied to different GFlowNet learning objectives. The code is publicly available at https://github.com/ling-pan/Stochastic-GFN.
In summary, the contribution of this work is as follows:
- We propose a novel method, Stochastic GFlowNet, which is the first empirically effective approach extending GFlowNets to the more general stochastic environments based on Bengio et al. [2021b].
- We conduct extensive experiments on GFlowNet benchmark tasks augmented with stochastic transition dynamics, and validate the effectiveness of our approach in tackling stochastic environments. Results show that our method significantly outperforms existing baselines and scales well to the more complex and challenging biological sequence generation tasks.

# 2 BACKGROUND

# 2.1 GFLOWNET PRELIMINARIES
We denote a directed acyclic graph (DAG) by $\mathcal{G} = (\mathcal{S},\mathcal{A})$, with $\mathcal{S}$ the set of vertices corresponding to the states and $\mathcal{A}\subseteq \mathcal{S}\times \mathcal{S}$ the set of edges, which corresponds to the set of actions. There is a unique initial state $s_0\in \mathcal{S}$ which has no parent state; on the other hand, we define all states without children to be terminal states, whose set is denoted by $\mathcal{X}\subseteq \mathcal{S}$. A GFlowNet learns a stochastic policy that samples terminal states by generating complete trajectories $\tau = (s_0\to s_1\to \dots \to s_n)$, where $s_n\in \mathcal{X}$ and $(s_i\rightarrow s_{i + 1})\in \mathcal{A}$ for all $i$. Each trajectory is assigned a non-negative flow $F(\tau)$. A trajectory can be generated sequentially by sampling iteratively from the forward policy $P_F(s_{t + 1}|s_t)$, which is a
collection of distributions over the children at each state. Existing work on GFlowNets assumes a one-to-one correspondence between an action and the next state, making the definition of the forward policy consistent with the notion of a policy in general RL. Nonetheless, in this work, we relax this assumption and generalize GFlowNets to more flexible stochastic environments. The objective of GFlowNet learning is to sample terminal states with probability proportional to a given non-negative reward function $R(x)$ for all $x \in \mathcal{X}$. This indicates that all the flows that end up in $x$ should sum to $R(x)$, namely $\sum_{\tau \to x} F(\tau) = R(x), \forall x \in \mathcal{X}$, where $\tau \to x$ denotes a trajectory $\tau$ that ends in $x$ and the sum is thus over all complete trajectories that lead to terminal state $x \in \mathcal{X}$. To formalize this, we first define the terminating probability $P_T(x)$ to be the marginal likelihood of sampling trajectories terminating at a terminal state $x$:

$$
P_T(x) = \sum_{\tau \rightarrow x} P_F(\tau) = \sum_{\tau \rightarrow x} \prod_{i=1}^{n} P_F(s_i \mid s_{i-1}). \tag{1}
$$
Therefore, the goal of GFlowNet learning is to obtain a policy such that $P_T(x) \propto R(x), \forall x \in \mathcal{X}$ .
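
On a small enough DAG, $P_T(x)$ in Eq. (1) can be checked by brute-force enumeration of trajectories. The sketch below does this for a hypothetical two-step environment; the graph and policy probabilities are illustrative assumptions, not taken from the paper:

```python
# Toy two-step DAG (illustrative): s0 -> {a, b} -> {x1, x2}.
# P_F maps each non-terminal state to a distribution over its children.
P_F = {
    "s0": {"a": 0.4, "b": 0.6},
    "a": {"x1": 0.5, "x2": 0.5},
    "b": {"x1": 0.25, "x2": 0.75},
}
terminals = ["x1", "x2"]

def terminating_probability(x):
    # P_T(x) = sum over complete trajectories s0 -> s1 -> x
    # of the product of forward-policy probabilities along the way.
    return sum(P_F["s0"][s1] * P_F[s1].get(x, 0.0) for s1 in P_F["s0"])

probs = {x: terminating_probability(x) for x in terminals}
# Terminating probabilities form a distribution over terminal states.
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

A trained GFlowNet would make `probs[x]` proportional to $R(x)$; here the enumeration only illustrates the marginalization in Eq. (1).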

# 2.2 LEARNING OBJECTIVES FOR GFLOWNETS
In applicative tasks, practitioners need to design the GFlowNet modules (e.g., policies, flows) with parameterized neural networks, and further choose a training criterion to train these networks. In this subsection, we briefly summarize some learning criteria of GFlowNets.
Detailed balance (DB). By summing the flows $F(\tau)$ of all the trajectories $\tau$ going through a state $s$, we can define a state flow $F(s) := \sum_{\tau \ni s} F(\tau)$. Such a function can be learned together with the forward and backward policies $P_F(s'|s)$ and $P_B(s|s')$, where $s'$ is a child state of $s$. The backward policy $P_B$, a collection of distributions over the parents of each state, is not part of the generative process, but serves as a tool for learning the forward policy $P_F$. The GFlowNet detailed balance (DB) constraint is defined as

$$
F(s) P_F(s' \mid s) = F(s') P_B(s \mid s'), \quad \forall (s \rightarrow s') \in \mathcal{A}. \tag{2}
$$
It is also worth noting that at terminal states $x$, it pushes the flow at $x$ to match the terminal reward $R(x)$. In practice, we transform the DB constraint into a training objective by setting the loss function to be the squared difference between the logarithms of the left- and right-hand sides of Eq. (2) [Bengio et al., 2021a]. If this optimization objective is perfectly minimized, the above flow consistency constraint is satisfied, thus making the forward policy $P_F$ sample proportionally to the given reward values, as desired. In practice, after training, the constraint is only approximately achieved (and in general it would be intractable to obtain an exact solution).
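
As a minimal sketch (with scalar log-values standing in for network outputs), the DB training loss for a single transition $s \to s'$ is the squared difference of the logarithms of the two sides of Eq. (2):

```python
import math

def db_loss(log_F_s, log_PF, log_F_sp, log_PB):
    """Squared log-difference of the two sides of the detailed balance
    constraint F(s) P_F(s'|s) = F(s') P_B(s|s') for one transition."""
    return (log_F_s + log_PF - (log_F_sp + log_PB)) ** 2

# A transition that satisfies DB exactly, e.g. F(s)=2, P_F=0.5,
# F(s')=4, P_B=0.25 (2 * 0.5 == 4 * 0.25), yields near-zero loss.
loss = db_loss(math.log(2.0), math.log(0.5), math.log(4.0), math.log(0.25))
```

In an actual implementation the four arguments would be differentiable network outputs accumulated over a batch of transitions.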
Trajectory balance (TB). In analogy to the forward decomposition of a complete trajectory in Eq. (1), we could use $\prod_{i=1}^{n} P_B(s_{i-1}|s_i)$ to represent the trajectory backward probability. As an alternative to DB, Malkin et al. [2022a] propose the trajectory balance (TB) criterion which operates on complete trajectories, instead of state transitions, defined as follows

$$
Z \prod_{i=1}^{n} P_F(s_i \mid s_{i-1}) = R(x) \prod_{i=1}^{n} P_B(s_{i-1} \mid s_i), \tag{3}
$$
where $\tau = (s_0 \to s_1 \to \ldots \to s_n = x)$ is any complete trajectory and $Z$ is a learned scalar parameter, denoting the partition function of the reward distribution. Note that TB does not explicitly learn a flow function.
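
The TB criterion in Eq. (3) likewise becomes a loss by squaring the log-difference of its two sides over a complete trajectory; a sketch with plain floats standing in for the learned quantities:

```python
import math

def tb_loss(log_Z, log_PF_terms, log_R, log_PB_terms):
    """Trajectory balance loss for one complete trajectory:
    (log Z + sum_i log P_F(s_i|s_{i-1})
     - log R(x) - sum_i log P_B(s_{i-1}|s_i))^2."""
    return (log_Z + sum(log_PF_terms) - log_R - sum(log_PB_terms)) ** 2

# On a tree-structured DAG every state has a single parent, so each
# log P_B term is 0. With Z = 3, a trajectory of forward probability
# 1/3 to a state with R(x) = 1 satisfies Eq. (3), giving near-zero loss.
loss = tb_loss(math.log(3.0), [math.log(1.0 / 3.0)], math.log(1.0), [0.0])
```

Unlike DB, no per-state flow appears; only the scalar $\log Z$ and the two policies are learned.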
For on-policy training, we can simply use trajectories sampled from the forward policy $P_F$ to evaluate the training loss and its gradient with respect to the parameters of the neural networks. The GFlowNet training objectives can be further improved with off-policy training, i.e., with trajectories sampled from a broader and more exploratory distribution than $P_F$ [Malkin et al., 2022b]. A popular choice is using a tempered version of the current forward policy [Zhang et al., 2022b] or a mixture of the forward policy and a uniform random policy [Bengio et al., 2021a] that mimics $\epsilon$ -greedy exploration in RL.
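
A common exploratory behavior policy of the kind described above can be sketched as an $\epsilon$-mixture of the current forward policy and a uniform policy over valid children (the function name and arguments here are illustrative, not from the paper):

```python
import random

def sample_child(forward_probs, epsilon=0.1, rng=random):
    """Sample the next state from a mixture of the forward policy and a
    uniform random policy over children, mimicking epsilon-greedy
    exploration in RL. `forward_probs` maps each child of s to P_F(s'|s)."""
    children = list(forward_probs)
    if rng.random() < epsilon:
        return rng.choice(children)  # uniform exploratory step
    weights = [forward_probs[c] for c in children]
    return rng.choices(children, weights=weights, k=1)[0]
```

Training still evaluates the DB or TB objective on trajectories sampled this way; only the behavior distribution is broadened.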

# 3 STOCHASTIC GFLOWNETS

We now describe the Stochastic GFlowNet, a novel method that learns a model of the environment to capture the stochasticity of state transitions. We first describe a key idea introduced by Bengio et al. [2021b] to decompose the GFlowNet transitions as per Figure 2, and then introduce a new approach to learn the GFlowNet policy and the dynamics model. We also discuss the applicability to different GFlowNet learning objectives and the resulting effects.

# 3.1 PROPOSED METHOD
Existing work on GFlowNets [Bengio et al., 2021a, Malkin et al., 2022a] typically assumes that all transitions from a state $s_t$ to the next state $s_{t+1}$ within a trajectory are defined deterministically by the selected action $a_t$ (and that there is only one action $a_t$ that can transition from $s_t$ to $s_{t+1} = T(s_t, a_t)$, with $T$ denoting the deterministic transition function). This applies to problems where the generative process for the objects is deterministic, which is appropriate when the actions are internal, e.g., choosing what to attend to, what to imagine (such as solutions to problems), how to make inferences, etc. Yet, a number of real-world tasks are stochastic, either inherently or due to the environment's complexity [Antonoglou et al., 2021, Paster et al., 2022, Yang et al., 2022]. In these more general stochastic environments, the action $a_t$ taken at $s_t$ can land in several possible next states. For instance, synthesizing proteins with oligo pools can result in the generation of variants of the specified protein [Song et al., 2021].
Figure 2: We decompose traditional GFlowNet transitions (top) into two steps to facilitate the GFlowNet formalization: (a) stochastically choosing the action, (b) stochastically transitioning to a new state. We call the intermediate state after (a) an odd state (and the starting and final states even states), as illustrated above.
To cope with stochasticity in the transition dynamics, we decompose state transitions based on the concept of after-states [Sutton and Barto, 2018, Bengio et al., 2021b]. Specifically, we decompose the transition from a state $s_t$ to the next state $s_{t+1}$ into two steps. First, as illustrated in part (a) of Figure 2, we sample an action $a_t$ based on a policy $\pi$ in the current state $s_t$ (called an even state), and transition deterministically to an intermediate state $(s_t, a_t)$, called an odd state. The flow consistency constraint for detailed balance (DB) for even-to-odd transitions is shown in Eq. (4); the backward policy probability is 1 here, since we can only reach $(s_t, a_t)$ from $s_t$:
$$
F(s_t)\, \pi(a_t \mid s_t) = F((s_t, a_t)). \tag{4}
$$
The odd state can be considered as a hypothetical state after we apply an action before the environment gets involved [Antonoglou et al., 2021]. The environment dynamics then transform $(s_t, a_t)$ into the next even state $s_{t+1}$ stochastically according to a distribution $P(\cdot | s_t, a_t)$ , which is the state transition function. This second step corresponds to part (b) in Figure 2, and the corresponding flow consistency constraint is
$$
F((s_t, a_t))\, P(s_{t+1} \mid (s_t, a_t)) = F(s_{t+1})\, \pi_B((s_t, a_t) \mid s_{t+1}). \tag{5}
$$
Note that an odd state can lead to many possible next even states due to stochasticity in the environment. With the introduction of odd states, we separate the choice of the action to apply in the environment from the stochastic state transition given that action, yielding one deterministic and one stochastic step.
Training a Stochastic GFlowNet. Combining the two steps, we obtain a novel flow consistency constraint, which we call the Stochastic GFlowNet constraint based on detailed balance (DB), where $P$ denotes the state transition function:
$$
F(s_t)\, \pi(a_t \mid s_t)\, P(s_{t+1} \mid (s_t, a_t)) = F(s_{t+1})\, \pi_B((s_t, a_t) \mid s_{t+1}). \tag{6}
$$
In practice, to train Stochastic GFlowNets we minimize the loss $\mathcal{L}_{\mathrm{StochGFN\text{-}DB}}(s,a,s^{\prime})$ in Eq. (7), which is derived from the flow consistency constraint in Eq. (6) and defined on a log scale:
$$
\big( \log F(s_t) + \log \pi(a_t \mid s_t) + \log P(s_{t+1} \mid (s_t, a_t)) - \log F(s_{t+1}) - \log \pi_B((s_t, a_t) \mid s_{t+1}) \big)^2. \tag{7}
$$
Note that our proposed methodology is general and can be applied to other GFlowNet learning objectives such as trajectory balance (TB), as we discuss in Section 3.2.
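As an illustrative sketch (plain Python with hypothetical scalar log-quantities standing in for the outputs of the flow, policy, and dynamics networks; names are ours, not from the paper), the per-transition loss of Eq. (7) is a squared residual in log space:

```python
def stoch_db_loss(log_F_s, log_pi, log_P, log_F_next, log_pi_B):
    """Squared log-space residual of the Stochastic DB constraint (Eq. 7):
    (log F(s_t) + log pi(a_t|s_t) + log P(s_{t+1}|(s_t, a_t))
     - log F(s_{t+1}) - log pi_B((s_t, a_t)|s_{t+1}))^2."""
    residual = log_F_s + log_pi + log_P - log_F_next - log_pi_B
    return residual ** 2
```

In a full implementation these arguments would be differentiable network outputs, and the loss would be averaged over a batch of transitions before backpropagation.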
Learning the dynamics model. Since the transition dynamics $P(\cdot \mid s, a)$ are unknown in general, they must be learned. In practice, we learn a model $\hat{P}$ with parameters $\phi$ to approximate $P$ through maximum likelihood estimation (other techniques from generative and dynamics modeling [Venkatraman et al., 2015] could also be applied). We optimize its parameters on the interaction data using the Adam optimizer [Kingma and Ba, 2015] based on the loss function in Eq. (8), where the output of $\hat{P}$ is a softmax distribution over all possible next states. The data is sampled from an experience replay buffer, which stores interaction tuples $\{s, a, s'\}$ collected by the GFlowNet policy in the environment.
$$
\mathcal{L}_{\mathrm{model}}(s, a, s^{\prime}) = -\log \hat{P}(s^{\prime} \mid s, a) \tag{8}
$$
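For a discrete state space, the objective of Eq. (8) is a cross-entropy against observed next states. A minimal sketch (NumPy; the tabular count-based estimator stands in for the paper's neural softmax model, and all names are our own):

```python
import numpy as np

def model_nll(logits, next_state):
    """L_model of Eq. (8): -log P_hat(s'|s,a), with P_hat a softmax
    over all candidate next states (logits: shape [num_states])."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[next_state]

def fit_tabular_model(transitions, num_states):
    """Closed-form maximum-likelihood estimate of P(s'|s,a) from replay
    tuples (s, a, s'): normalized transition counts per (s, a) pair."""
    counts = {}
    for s, a, s_next in transitions:
        counts.setdefault((s, a), np.zeros(num_states))[s_next] += 1
    return {k: v / v.sum() for k, v in counts.items()}
```

With a neural $\hat{P}$, `model_nll` would be minimized by gradient descent on $\phi$; the tabular variant makes the MLE target explicit.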
Practical implementation. Figure 3 illustrates the major components of Stochastic GFlowNets as described above and how they interact with each other. The procedure for training Stochastic GFlowNet based on DB is summarized in Algorithm 1.
Figure 3: Illustration of Stochastic GFlowNets.
# 3.2 DISCUSSION ON THE APPLICABILITY TO TRAJECTORY BALANCE (TB)
As discussed in Section 3.1, our proposed method is versatile and can be applied to other GFlowNet learning objectives beyond DB. We state the flow consistency constraint
# Algorithm 1 Stochastic Generative Flow Networks
1: Initialize the forward and backward policies $\pi, \pi_B$, and the state flow function $F$ with parameters $\theta$

2: Initialize the transition dynamics model $\hat{P}$ with parameters $\phi$

3: Initialize the experience replay buffer $\mathcal{B}$

4: for each training step $t = 1$ to $T$ do

5: Collect a batch of $M$ trajectories $\tau = \{s_0\to \dots \to s_n\}$ from the policy $\pi$, and store them in $\mathcal{B}$

6: Update the Stochastic GFN model according to the loss $\mathcal{L}_{\mathrm{StochGFN\text{-}DB}}$ in Eq. (7) based on $\{\tau_i\}_{i = 1}^{M}$

7: Sample a batch of $K$ trajectories from $\mathcal{B}$

8: Update the transition dynamics model according to the loss $\mathcal{L}_{\mathrm{model}}$ in Eq. (8) using the sampled data

9: end for
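The replay buffer in Algorithm 1 (lines 3, 5, and 7) only needs to store transition tuples and serve random minibatches for the dynamics-model update. A minimal sketch (class name and default capacity are our own choices, not from the paper):

```python
import random

class ReplayBuffer:
    """Stores (s, a, s') transitions from collected trajectories and
    serves random minibatches for the dynamics-model update."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.data = []

    def add_trajectory(self, transitions):
        # transitions: list of (s, a, s') tuples from one trajectory
        self.data.extend(transitions)
        self.data = self.data[-self.capacity:]  # drop oldest on overflow

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```

A FIFO eviction policy is used here for simplicity; any standard replay scheme would fit the algorithm equally well.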
for Stochastic TB in Eq. (9), which is obtained via a telescoping calculation based on Eq. (6).
$$
Z \prod_{t=0}^{n-1} \pi(a_t \mid s_t)\, P(s_{t+1} \mid (s_t, a_t)) = R(x) \prod_{t=0}^{n-1} \pi_B((s_t, a_t) \mid s_{t+1}) \tag{9}
$$
In practice, we can train with Stochastic TB by minimizing the loss $\mathcal{L}_{\mathrm{StochGFN\text{-}TB}}(s,a,s^{\prime})$ obtained from Eq. (9), i.e.,
$$
\Big[ \log Z + \sum_{t=0}^{n-1} \log \pi(a_t \mid s_t) + \sum_{t=0}^{n-1} \log \hat{P}(s_{t+1} \mid (s_t, a_t)) - \log R(x) - \sum_{t=0}^{n-1} \log \pi_B((s_t, a_t) \mid s_{t+1}) \Big]^2. \tag{10}
$$
However, since TB is optimized over a whole sampled trajectory rather than over individual transitions, it can suffer from larger gradient variance [Madan et al., 2022], even in deterministic environments; this problem can be further exacerbated in stochastic environments. In Section 4.1.3, we find that Stochastic TB indeed underperforms relative to Stochastic DB, presumably due to this larger variance.
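The trajectory-level loss of Eq. (10) can be sketched the same way as the DB loss (NumPy, with hypothetical per-step log-probability arrays standing in for network outputs):

```python
import numpy as np

def stoch_tb_loss(log_Z, log_pi_steps, log_P_steps, log_pi_B_steps, log_reward):
    """Squared log-space residual of Stochastic TB (Eq. 10); each *_steps
    argument holds one entry per transition of the trajectory."""
    residual = (log_Z + np.sum(log_pi_steps) + np.sum(log_P_steps)
                - log_reward - np.sum(log_pi_B_steps))
    return residual ** 2
```

The residual aggregates every step of a trajectory into a single scalar, which is exactly why its gradient can be noisier than the per-transition DB residual.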
# 4 EXPERIMENTS
In this section, we conduct extensive experiments to investigate the following key questions: (i) How much can Stochastic GFNs improve over GFNs in the presence of stochastic transition dynamics? (ii) Can Stochastic GFNs be built upon different GFlowNet learning objectives? (iii) Can Stochastic GFNs scale to the more complex and challenging tasks of generating biological sequences?
# 4.1 GRIDWORLD
# 4.1.1 Experimental Setup
We first conduct a series of experiments in the GridWorld task introduced in Bengio et al. [2021a] to understand the effectiveness of Stochastic GFlowNets. An illustration of the task with size $H \times H$ is shown in Figure 4. At each time step, the agent takes an action to navigate the grid, where possible actions include operations to increase one coordinate, as well as a stop operation to terminate the episode, ensuring the underlying Markov decision process (MDP) is a directed acyclic graph. The agent obtains a reward $R(x)$, as defined in Bengio et al. [2021a], when the trajectory ends at a terminal state $x$. The reward function $R(x)$ has 4 modes located at the corners of the map, as illustrated in Figure 4. The goal for the agent is to model the target reward distribution and capture all the modes of the reward function. The shade of color in Figure 4 indicates the magnitude of the reward, where a darker color corresponds to a larger reward. We consider a variant with stochastic transition dynamics, where randomness is injected into GridWorld and all other benchmark tasks in Sections 4.2.1-4.2.3 following Machado et al. [2018], Yang et al. [2022]. Specifically, the environment transitions according to the selected action with probability $1 - \alpha$, while with probability $\alpha$ it executes a uniformly chosen action (akin to randomly slipping to a neighboring cell in Figure 4).
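The stochastic transition rule described above (execute the chosen action with probability $1 - \alpha$, otherwise a uniformly random one) can be sketched as follows; this toy step function omits the stop action and uses our own coordinate convention, so it is illustrative only:

```python
import random

def noisy_step(state, action, alpha, H):
    """GridWorld transition with stochasticity level alpha: with
    probability 1 - alpha the chosen action is executed, otherwise a
    uniformly random action is. Action 0 increments x, action 1
    increments y (both clipped to the H x H grid)."""
    if random.random() < alpha:
        action = random.randrange(2)
    x, y = state
    if action == 0:
        x = min(x + 1, H - 1)
    else:
        y = min(y + 1, H - 1)
    return (x, y)
```

With `alpha=0` this reduces to the deterministic GridWorld; increasing `alpha` interpolates toward a uniformly random walk.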
Figure 4: The GridWorld environment. The agent starts at the top-left corner and reward is largest at the four dark blue positions near the four corners (with the keys), lower in the $2 \times 2$ squares near the corner, and yet lower in other (light blue) positions. This can be extended to different sizes $H$ , as well as different degrees of noise $\alpha$ in the (state, action)-to-state transitions.
We compare the Stochastic GFlowNet against vanilla GFlowNets trained with the detailed balance (DB) [Bengio et al., 2021b] and trajectory balance (TB) [Malkin et al., 2022a] learning objectives, Metropolis-Hastings MCMC [Xie et al., 2021], and PPO [Schulman et al., 2017]. We evaluate each method in terms of the empirical $L_{1}$ error defined as $\mathbb{E}[|p(x) - \pi(x)|]$, with $p(x) = \frac{R(x)}{Z}$ denoting the true reward distribution; $\pi$ is estimated by repeated sampling and computing the visit frequency of every possible terminal state $x$. We also compare the methods in terms of the number of modes discovered during the course of training. Each algorithm is run for 5 different seeds, and performance is reported as mean and standard deviation. We implement all baselines based on the open-source code<sup>1</sup>, and a detailed description of the hyperparameters and setup can be found in Appendix A.1.
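The empirical $L_1$ metric is straightforward to compute once visit frequencies have been tallied; a sketch (NumPy, hypothetical inputs):

```python
import numpy as np

def empirical_l1_error(rewards, visit_counts):
    """E[|p(x) - pi(x)|], where p(x) = R(x)/Z is the target distribution
    and pi is estimated from terminal-state visit frequencies."""
    p = np.asarray(rewards, dtype=float)
    p = p / p.sum()
    pi = np.asarray(visit_counts, dtype=float)
    pi = pi / pi.sum()
    return float(np.mean(np.abs(p - pi)))
```

A perfectly trained sampler drives this quantity to zero as the number of samples grows.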
# 4.1.2 Performance Comparison
We now study the effectiveness of Stochastic GFNs on small, medium, and large GridWorlds with increasing sizes $H$ , and different levels of stochasticity.
Varying sizes of the map. Figure 5 shows the empirical $L_{1}$ error for each method in GridWorld (with a stochasticity level of $\alpha = 0.25$) with increasing map sizes. As shown, MCMC does not perform well and PPO fails to converge. We also observe that the performance of TB gets much worse as the size of the problem increases, which may be attributed to larger gradient variance [Madan et al., 2022]. Stochastic GFlowNets significantly outperform the baselines, converging fastest and to the lowest empirical $L_{1}$ error. Figure 6 illustrates the number of modes discovered by each method during the course of training. As demonstrated, in stochastic environments (where the original convergence guarantees of GFlowNets do not hold), existing GFlowNet methods, including DB and TB, fail to discover all of the modes in larger maps. It is also worth noting that TB performs much worse than DB in terms of the number of modes discovered as the map size increases, since it is optimized at the trajectory level from sampled trajectories instead of at the transition level as in DB, which can induce large variance. The proposed Stochastic GFlowNet method outperforms previous GFlowNet methods as well as MCMC and PPO by a large margin, while efficiently discovering the different modes in maps of all sizes.
Varying stochasticity levels. In Figure 7, we compare the methods in a small GridWorld with an increasing level of stochasticity $\alpha$. We observe that TB again fails to learn well as $\alpha$ increases, performing worse than DB, in addition to its degraded performance with increasing map sizes. On the other hand, Stochastic GFlowNets outperform the baselines by a significant margin and are robust to higher levels of stochasticity, successfully handling stochastic transition dynamics.
# 4.1.3 Compatibility with Different GFlowNet Learning Objectives
In this section, we study Stochastic GFlowNets with the trajectory balance (TB) objective as described in Section 3.2. We evaluate Stochastic TB in GridWorlds with different

Figure 5: Comparison of $L_{1}$ error in GridWorld with maps of increasing size: (a) Small, (b) Medium, (c) Large.

Figure 7: Results in the small GridWorld with increasing stochasticity levels $\alpha$: (a) $\alpha = 0.5$, (b) $\alpha = 0.9$.

Figure 8: Results of the Stochastic GFlowNet built upon the trajectory balance (TB) objective in GridWorld with increasing sizes $H$ and stochasticity levels $\alpha$: (a) small map, low stochasticity level; (b) large map, low stochasticity level; (c) small map, high stochasticity level.
sizes (including small with $H = 8$ and large with $H = 128$) and stochasticity levels (including low with $\alpha = 0.25$ and high with $\alpha = 0.9$). Specifically, Figure 8(a) corresponds to the results in a small map with a low stochasticity level, Figure 8(b) illustrates the results in a large map with a low stochasticity level, while Figure 8(c) shows the results in a small map with a high stochasticity level.
As shown in Figure 8, Stochastic TB (abbreviated as Stoch-GFN (TB) in the figure) greatly improves the performance of TB, validating the effectiveness of our proposed methodology. However, we observe that it underperforms relative to Stochastic DB when the scale of the problem increases

Figure 6: Comparison of the number of modes captured during the training process in GridWorld with maps of increasing size (panels (a)-(c)).
or with a higher level of stochasticity (Figure 8(c)), which can be attributed to the larger variance of TB [Madan et al., 2022] in stochastic environments.
# 4.2 AUTOREGRESSIVE SEQUENCE GENERATION
In this section, we study Stochastic GFN on autoregressive sequence generation tasks [Malkin et al., 2022a]. We first consider a bit sequence generation task to investigate the

Figure 9: Results in the bit sequence generation task (panels (a)-(f)). The first and second rows correspond to word sizes $k = 4$ and $k = 2$, respectively; the first, second, and third columns correspond to stochasticity levels of 0.1, 0.3, and 0.5.
effect of the size of the action space and length of the trajectory with varying levels of environment stochasticity. We then study the more realistic and complex tasks of generating biological sequences.
# 4.2.1 Bit Sequences
Task. In the bit sequence generation task [Malkin et al., 2022a], the agent aims to generate bit sequences of length $n = 120$. At each step, the agent appends a $k$-bit "word" from a vocabulary $V$ to the current state (a partial sequence), building the sequence from left to right. Note that we consider a stochastic variant of the task, with noise level $\alpha$ as described in Section 4.1.1. The resulting action space has size $|V| = 2^k$, and the length of a complete trajectory is $\frac{n}{k}$. Following Malkin et al. [2022a], we define the reward function $R(x)$ to have modes at a fixed set of bit sequences $M$, with $R(x) = \exp(-\min_{y \in M} d(x, y))$, where $d$ is the edit distance. We evaluate each method in terms of the number of modes discovered during the course of training.
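The reward $R(x) = \exp(-\min_{y \in M} d(x, y))$ only requires a standard edit-distance routine; a sketch (function names are ours):

```python
import math

def edit_distance(x, y):
    """Standard Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        prev, dp[0] = dp[0], i
        for j, cy in enumerate(y, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j]
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (cx != cy))
    return dp[-1]

def bit_reward(x, modes):
    """R(x) = exp(-min_{y in M} d(x, y)) for the mode set M."""
    return math.exp(-min(edit_distance(x, y) for y in modes))
```

The reward peaks at 1 exactly on the mode sequences and decays exponentially with distance from the nearest mode.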
We study the performance of Stochastic DB with different levels of stochasticity, and compare it against vanilla DB and strong baselines including Advantage Actor-Critic (A2C) [Mnih et al., 2016], Soft Actor-Critic (SAC) [Haarnoja et al., 2018], and MCMC [Xie et al., 2021]. Each method is run for 3 different seeds, and we report the mean and standard deviation. More details about the experimental setup for the stochastic bit sequence generation task can be found in Appendix A.2. We use the same hyperparameters and architectures as in Malkin et al. [2022a].
Results. Figure 9 demonstrates the number of modes captured by each method throughout the training process with different levels of stochasticity ranging from 0.1 to 0.5, where the first and second rows correspond to the results for $k = 4$ and $k = 2$ , respectively. We observe that regular GFlowNets (GFN in the figure) fail to learn well, particularly when the trajectories are longer (with a smaller value of $k$ ). On the other hand, the Stochastic GFlowNet (Stoch-GFN in the figures) is robust to increasing trajectory lengths, and also performs well when the stochasticity level increases. In addition, Stoch-GFN significantly outperforms strong baselines including MCMC, A2C, and SAC, discovering more modes faster.
# 4.2.2 TF Bind 8 Generation
Task. We now consider the practical task of generating DNA sequences with high binding activity with particular transcription factors, following Jain et al. [2022a]. At each time step, the agent appends a symbol from the vocabulary to the right of the current state. As with the bit sequence generation task, we consider a stochastic variant of the task following Yang et al. [2022], with random actions taken with probability $\alpha$ (as described in Section 4.1.1). Following Jain et al. [2022a], we adopt a pre-trained neural network that estimates the binding activity as the reward function. We investigate how well Stochastic DB performs by comparing it with vanilla DB, MCMC, and RL-based methods including A2C and SAC. We evaluate each method in terms of the number of modes discovered with rewards above a threshold in the batch of generated sequences. We also report the mean reward and 50th-percentile score of the top 100 sequences, ranked by reward, from a batch of 2048 generated sequences for each method, as in [Jain et al., 2022a, Trabucco et al., 2022]. We run each algorithm for 3 different seeds and report the mean and standard deviation. We follow the same hyperparameters, architectures, and setup as Jain et al. [2022a]; a detailed description of the setup can be found in Appendix A.3.
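The top-100 evaluation statistics are simple order statistics over the generated batch; a sketch (function name is ours):

```python
import numpy as np

def top_k_metrics(rewards, k=100):
    """Mean and median reward of the k highest-reward sequences in a
    generated batch (k=100 over 2048 candidates in this setup)."""
    top = np.sort(np.asarray(rewards, dtype=float))[-k:]
    return float(np.mean(top)), float(np.median(top))
```

The median of the top-k set is exactly the 50th-percentile score reported above.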
Results. Figure 10 summarizes the comparison of Stoch-GFN and the baselines at varying stochasticity levels (ranging from 0.1 to 0.5), in terms of the number of modes discovered with rewards above a threshold during training and the top-100 rewards. As shown in Figure 10(a), Stoch-GFN discovers many more modes than GFN, MCMC, and the RL-based methods at all stochasticity levels. Stoch-GFN also achieves higher top-100 rewards (in mean and median) than the baselines, as demonstrated in Figures 10(b)-(c), where the top-100 reward of GFN decreases with an increasing stochasticity level. These results validate the effectiveness of Stoch-GFN in this more realistic biological sequence design task with stochasticity in the environment.

Figure 10: Results on the TF Bind 8 generation task: (a) the number of modes, (b) top-100 reward (mean), (c) top-100 reward (median). Stoch-GFN obtains better results than the MCMC, A2C, SAC, and GFN baselines.
# 4.2.3 Antimicrobial Peptide Generation
Task. In this section, we study the realistic task of generating peptide sequences with anti-microbial properties [Malkin et al., 2022a, Jain et al., 2022a]. At each time step, the agent appends to the current state, in a left-to-right manner, a symbol from a vocabulary that consists of 20 amino acids and a special end-of-sequence action. The maximum length of the sequence is 60, and the size of the resulting state space is $21^{60}$. We consider a stochastic variant of the task (as in Section 4.1.1) with a stochasticity level of $\alpha = 0.1$. The reward function is a pre-trained neural network that estimates anti-microbial activity, following [Malkin et al., 2022a], trained on the DBAASP database [Pirtskhalava et al., 2021]. As in Section 4.2.2, we generate 2048 sequences from each method and evaluate them in terms of the top-100 rewards and the number of modes discovered with rewards above a threshold. We study the performance of Stochastic DB by comparing it with DB, MCMC, and RL-based methods. We report the mean and standard deviation over 3 runs for each method. A detailed description of the setup, following Malkin et al. [2022a], is in Appendix A.4.
Results. As shown in Table 1, we observe that Stoch-GFN significantly outperforms GFN and other baselines in terms of the top-100 reward. In addition, it also discovers more modes with rewards above a threshold than baseline methods, which further validates its effectiveness on the more complex and challenging task.
Table 1: Better results with Stoch-GFN on the AMP generation task. Larger is better.
| | Top-100 reward | Number of modes |
|---|---|---|
| MCMC | 0.632 ± 0.035 | 3.67 ± 0.58 |
| A2C | 0.682 ± 0.032 | 2.66 ± 0.58 |
| SAC | 0.754 ± 0.047 | 4.33 ± 1.33 |
| GFN | 0.748 ± 0.048 | 3.0 ± 3.0 |
| Stoch-GFN | 0.834 ± 0.023 | 19.5 ± 2.5 |
# 5 RELATED WORK
GFlowNets. The universality and effectiveness of GFlowNets have been demonstrated in various applications, including biological sequence design [Jain et al., 2022a], causal discovery and structure learning [Deleu et al., 2022, Nishikawa-Toomey et al., 2022], substructure learning of deep neural network weights via Dropout [Liu et al., 2022], multi-objective optimization [Jain et al., 2022b], and robust job scheduling problems [Zhang et al., 2023a]. Malkin et al. [2022a] proposed the trajectory balance (TB) objective, which optimizes GFlowNets at the trajectory level instead of the transition level as in detailed balance [Bengio et al., 2021b], but can induce large variance; this problem is exacerbated in stochastic environments. Madan et al. [2022] proposed the sub-trajectory balance objective, which considers sub-trajectories. The early GFlowNet proposals from Bengio et al. [2021a,b] first formulated GFlowNets and pointed out possible future development directions. Originating from reinforcement learning, GFlowNets face the same long-term credit assignment challenge of propagating downstream reward signals to earlier states. Pan et al. [2023] proposed a forward-looking GFlowNet formulation to exploit intermediate energies or rewards for more efficient credit assignment, making it possible to learn from incomplete trajectories. Pan et al. [2022] incorporates intrinsic intermediate rewards into GFlowNets by augmenting the flow values for better exploration. EB-GFN [Zhang et al., 2022b] jointly learns an energy/reward function from data along with the corresponding GFlowNet. Zhang et al. [2022a] recently pointed out the relationship between generative models and GFlowNets. It is worth mentioning that Zhang et al. [2023b] shares a similar goal with our work; it extends the GFlowNet framework to stochastic reward settings with distributional modeling, while this work focuses on stochasticity in the environment transition dynamics.
Model-based Reinforcement Learning. Model-based reinforcement learning (RL) is a promising approach for improving sample efficiency compared with model-free RL methods [Lillicrap et al., 2015, Fujimoto et al., 2018], and has been successfully applied to many tasks, such as robotics, leveraging different dynamics models. The stochastic value gradient method [Heess et al., 2015] learns a hybrid of model-based and model-free RL and can learn stochastic policies in stochastic continuous control tasks. Dreamer [Hafner et al., 2019] learns latent dynamics to solve long-horizon tasks from high-dimensional images. MuZero [Schrittwieser et al., 2020] combines model-based methods with Monte-Carlo tree search for planning, and has achieved great success in game playing. Stochastic MuZero [Antonoglou et al., 2021] learns a stochastic model to extend MuZero to stochastic environments.
# 6 CONCLUSION
In this paper, we introduce a new methodology, Stochastic GFlowNets, the first empirically effective approach for extending GFlowNets to the more general and realistic setting of stochastic environments, where existing GFlowNet methods can fail. Our method learns the GFlowNet policy together with an environment model that captures the stochasticity of the transition dynamics. We conduct extensive experiments on standard GFlowNet benchmark tasks with stochastic transition dynamics. Results show that Stochastic GFlowNets learn significantly better than previous methods in the presence of stochastic transitions. Interesting directions for future work include studying advanced model-based approaches for approximating the transition dynamics and applying our method to other challenging real-world tasks.
# ACKNOWLEDGEMENTS
The authors would like to thank Almer Van der Sloot, Kanika Madan, and Qingpeng Cai for insightful discussions about the paper and the baselines in the AMP generation task. Longbo Huang is supported in part by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2020AAA0108400 and 2020AAA0108403, and Tsinghua Precision Medicine Foundation 10001020109. Yoshua Bengio acknowledges the funding from CIFAR, Genentech, Samsung, and IBM.
# References
Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An introduction to mcmc for machine learning. Machine learning, 50(1):5-43, 2003.
|
| 333 |
+
Ioannis Antonoglou, Julian Schrittwieser, Sherjil Ozair, Thomas K Hubert, and David Silver. Planning in stochastic environments with a learned model. In International Conference on Learning Representations, 2021.
|
| 334 |
+
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. Advances in Neural Information Processing Systems, 34: 27381-27394, 2021a.
|
| 335 |
+
Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward Hu, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. arXiv preprint 2111.09266, 2021b.
|
| 336 |
+
Tristan Deleu, Antonio Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. Uncertainty in Artificial Intelligence (UAI), 2022.
|
| 337 |
+
Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pages 1587-1596. PMLR, 2018.
|
| 338 |
+
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Neural Information Processing Systems (NIPS), pages 2672-2680, 2014.
|
| 339 |
+
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. International Conference on Machine Learning (ICML), 2017.
|
| 340 |
+
|
| 341 |
+
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. International Conference on Machine Learning (ICML), 2018.
|
| 342 |
+
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
|
| 343 |
+
W Keith Hastings. Monte carlo sampling methods using markov chains and their applications. 1970.
|
| 344 |
+
Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. Advances in neural information processing systems, 28, 2015.
|
| 345 |
+
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
|
| 346 |
+
Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure F.P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Micheal Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, and Yoshua Bengio. Biological sequence design with GFlowNets. International Conference on Machine Learning (ICML), 2022a.
|
| 347 |
+
Moksh Jain, Sharath Chandra Rararthy, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Yoshua Bengio, Santiago Miret, and Emmanuel Bengio. Multi-objective gflownets. arXiv preprint arXiv:2210.12765, 2022b.
|
| 348 |
+
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
|
| 349 |
+
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
|
| 350 |
+
Matevž Kunaver and Tomaž Požrl. Diversity in recommender systems-a survey. Knowledge-based systems, 123:154-162, 2017.
|
| 351 |
+
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
|
| 352 |
+
Dianbo Liu, Moksh Jain, Bonaventure F. P. Dossou, Qianli Shen, Salem Lahlou, Anirudh Goyal, Nikolay Malkin, Chris C. Emezue, Dinghuai Zhang, Nadhir Hassen, Xu Ji, Kenji Kawaguchi, and Yoshua Bengio. GFlowOut: Dropout with generative flow networks. arXiv preprint arXiv:2210.12928, 2022.
|
| 353 |
+
|
| 354 |
+
Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523-562, 2018.
|
| 355 |
+
Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, Emmanuel Bengio, Moksh Jain, Andrei Nica, Tom Bosc, Yoshua Bengio, and Nikolay Malkin. Learning GFlowNets from partial episodes for improved convergence and stability. International Conference on Learning Representations (ICLR), 2023.
|
| 356 |
+
Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in GFlowNets. Neural Information Processing Systems (NeurIPS), 2022a.
|
| 357 |
+
Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio. GFlowNets and variational inference. arXiv preprint arXiv:2210.00580, 2022b.
|
| 358 |
+
Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087-1092, 1953.
|
| 359 |
+
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
|
| 360 |
+
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. Neural Information Processing Systems (NIPS), 2016.
|
| 361 |
+
Mizu Nishikawa-Toomey, Tristan Deleu, Jithendarraa Subramanian, Yoshua Bengio, and Laurent Charlin. Bayesian learning of causal structure and mechanisms with GFlowNets and variational Bayes. arXiv preprint arXiv:2211.02763, 2022.
|
| 362 |
+
Ling Pan, Dinghuai Zhang, Aaron Courville, Longbo Huang, and Yoshua Bengio. Generative augmented flow networks. arXiv preprint arXiv:2210.03308, 2022.
|
| 363 |
+
Ling Pan, Nikolay Malkin, Dinghuai Zhang, and Yoshua Bengio. Better training of GFlowNets with local credit and incomplete trajectories. arXiv preprint arXiv:2302.01687, 2023.
|
| 364 |
+
Keiran Paster, Sheila McIlraith, and Jimmy Ba. You can't count on luck: Why decision transformers fail in stochastic environments. arXiv preprint arXiv:2205.15967, 2022.
|
| 365 |
+
Malak Pirtskhalava, Anthony A Amstrong, Maia Grigolava, Mindia Chubinidze, Evgenia Alimbarashvili, Boris Vishnepolsky, Andrei Gabrielian, Alex Rosenthal, Darrell E Hurt, and Michael Tartakovsky. DBAASP v3: database of antimicrobial/cytotoxic activity and structure of peptides as a resource for development of new therapeutics. Nucleic Acids Research, 49(D1):D288-D297, 2021.
|
| 368 |
+
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.
|
| 369 |
+
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
|
| 370 |
+
Li-Fu Song, Zheng-Hua Deng, Zi-Yi Gong, Lu-Lu Li, and Bing-Zhi Li. Large-scale de novo oligonucleotide synthesis for whole-genome synthesis and data storage: Challenges and opportunities. Frontiers in bioengineering and biotechnology, 9:689797, 2021.
|
| 371 |
+
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
|
| 372 |
+
Brandon Trabucco, Xinyang Geng, Aviral Kumar, and Sergey Levine. Design-bench: Benchmarks for data-driven offline model-based optimization. In International Conference on Machine Learning, pages 21658-21676. PMLR, 2022.
|
| 373 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Neural Information Processing Systems (NIPS), 2017.
|
| 374 |
+
Arun Venkatraman, Martial Hebert, and J Bagnell. Improving multi-step prediction of learned time series models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
|
| 375 |
+
Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, and Lei Li. Mars: Markov molecular sampling for multi-objective drug discovery. arXiv preprint arXiv:2103.10432, 2021.
|
| 376 |
+
Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, and Ofir Nachum. Dichotomy of control: Separating what you can control from what you cannot. arXiv preprint arXiv:2210.13435, 2022.
|
| 377 |
+
David Zhang, Corrado Rainone, Markus Peschl, and Roberto Bondesan. Robust scheduling with GFlowNets. International Conference on Learning Representations (ICLR), 2023a.
|
| 378 |
+
Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, and Yoshua Bengio. Unifying generative models with GFlowNets. arXiv preprint arXiv:2209.02606, 2022a.
|
| 379 |
+
|
| 380 |
+
Dinghuai Zhang, Nikolay Malkin, Zhen Liu, Alexandra Volokhova, Aaron Courville, and Yoshua Bengio. Generative flow networks for discrete probabilistic modeling. International Conference on Machine Learning (ICML), 2022b.
|
| 381 |
+
Dinghuai Zhang, Ling Pan, Ricky TQ Chen, Aaron Courville, and Yoshua Bengio. Distributional GFlowNets with quantile flows. arXiv preprint arXiv:2302.05793, 2023b.
|
| 382 |
+
|
| 383 |
+
# A EXPERIMENTAL DETAILS
|
| 384 |
+
|
| 385 |
+
# A.1 GRIDWORLD
|
| 386 |
+
|
| 387 |
+
The reward function for GridWorld is defined as in Eq. (11) following Bengio et al. [2021a], where $R_0 = 2.0$ , $R_1 = 0.5$ , and $R_2 = 0.001$ .
|
| 388 |
+
|
| 389 |
+
$$
|
| 390 |
+
R(x) = R_0 + R_1 \prod_i \mathbb{I}\left(0.25 < |x_i/H - 0.5|\right) + R_2 \prod_i \mathbb{I}\left(0.3 < |x_i/H - 0.5| < 0.4\right) \tag{11}
|
| 391 |
+
$$
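As a quick concrete check of Eq. (11), here is a minimal sketch of the reward (the function name and signature are ours; the constants follow the text above):

```python
import numpy as np

def gridworld_reward(x, H, r0=2.0, r1=0.5, r2=0.001):
    """Eq. (11): a base reward r0, plus r1 if every coordinate lies in the
    corner region (|x_i/H - 0.5| > 0.25), plus r2 if every coordinate lies
    in the narrower mode band (0.3 < |x_i/H - 0.5| < 0.4)."""
    z = np.abs(np.asarray(x, dtype=float) / H - 0.5)
    corner = r1 * float(np.all(z > 0.25))
    mode = r2 * float(np.all((z > 0.3) & (z < 0.4)))
    return r0 + corner + mode
```

For example, on an `H = 8` grid the corner state `(0, 0)` receives `2.5`, while the center `(4, 4)` receives only the base reward `2.0`.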
|
| 392 |
+
|
| 393 |
+
We use a feedforward network with two hidden layers of 256 hidden units each and LeakyReLU activations. States are represented with one-hot embeddings. The environment model in Stochastic GFlowNet is likewise a feedforward network with two hidden layers of 256 hidden units and LeakyReLU activations. All models are trained for 20000 iterations, with 16 parallel rollouts in the environment at each iteration (which are then stored in the experience replay buffer). The GFlowNet model is updated on these rollouts with the Adam [Kingma and Ba, 2015] optimizer at a learning rate of 0.001 (the learning rate for $Z$ in TB is 0.1). The environment model is trained on data sampled from the experience replay buffer with a batch size of 16, using the Adam optimizer at a learning rate of 0.0001. MCMC and PPO use the same configuration as in Bengio et al. [2021a].
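The rollout-and-replay scheme described above can be sketched as follows (a minimal illustration with placeholder transitions, not the authors' code):

```python
import random
from collections import deque

class ReplayBuffer:
    """FIFO buffer of environment transitions; the environment model is
    updated on mini-batches sampled from it."""
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, transitions):
        self.data.extend(transitions)

    def sample(self, batch_size):
        return random.sample(list(self.data), batch_size)

buf = ReplayBuffer()
for it in range(3):  # 20000 iterations in the text; 3 here for illustration
    # 16 parallel rollouts per iteration; placeholder (state, action, next_state) tuples
    buf.add([(it, a, it + 1) for a in range(16)])
batch = buf.sample(16)  # environment-model batch size for GridWorld
```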
|
| 394 |
+
|
| 395 |
+
# A.2 BIT SEQUENCES
|
| 396 |
+
|
| 397 |
+
We follow the same setup for the bit sequence generation task as in Malkin et al. [2022a]. The GFlowNet model is a Transformer [Vaswani et al., 2017] with 3 hidden layers of 64 hidden units and 8 attention heads. The exploration strategy is $\epsilon$ -greedy with $\epsilon = 0.0005$ , the sampling temperature is set to 1, and the reward exponent is 3. The GFlowNet model is trained with a learning rate of $5 \times 10^{-3}$ and a batch size of 16. The environment model in Stochastic GFlowNet is a feedforward network with two hidden layers of 2048 hidden units and ReLU activations, trained on data sampled from the experience replay buffer with a batch size of 128, using the Adam optimizer at a learning rate of $5 \times 10^{-4}$ . We train all models for 50000 iterations, with 16 parallel rollouts in the environment. MCMC, A2C, and SAC adopt the same configuration as in Malkin et al. [2022a].
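The $\epsilon$-greedy exploration mentioned above can be sketched as follows (the function name and example logits are our own illustration):

```python
import numpy as np

def eps_greedy_sample(logits, eps, rng):
    """With probability eps pick a uniformly random action; otherwise
    sample from the softmax of the policy logits."""
    if rng.random() < eps:
        return int(rng.integers(len(logits)))
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

rng = np.random.default_rng(0)
# With strongly peaked logits, the non-exploratory branch picks action 0.
action = eps_greedy_sample(np.array([100.0, 0.0, 0.0]), eps=0.0005, rng=rng)
```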
|
| 398 |
+
|
| 399 |
+
# A.3 TFBIND-8
|
| 400 |
+
|
| 401 |
+
For the TFBind-8 generation task, we follow the same setup as in Jain et al. [2022a]. The vocabulary consists of 4 nucleobases, and the trajectory length is 8. The GFlowNet model
|
| 402 |
+
|
| 403 |
+
is a feedforward network with 2 hidden layers of 2048 hidden units and ReLU activations. The exploration strategy is $\epsilon$ -greedy with $\epsilon = 0.001$ , and the reward exponent is 3. The GFlowNet model is trained with a learning rate of $10^{-4}$ and a batch size of 32. The environment model is a feedforward network with two hidden layers of 2048 hidden units and ReLU activations, trained on data sampled from the experience replay buffer with a batch size of 16, using the Adam optimizer at a learning rate of $10^{-5}$ . We train all models for 5000 iterations. The MCMC, A2C, and SAC baselines follow the same configuration as in Jain et al. [2022a].
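The reward exponent of 3 used in these tasks sharpens the sampling target: a GFlowNet trained with exponent $\beta$ samples terminal states proportionally to $R(x)^\beta$. A quick numerical illustration (the reward values are hypothetical):

```python
# Raising rewards to beta = 3 concentrates the target distribution on
# high-reward objects relative to sampling proportionally to R(x) itself.
rewards = [0.1, 0.5, 1.0]
beta = 3
sharpened = [r ** beta for r in rewards]     # [0.001, 0.125, 1.0]
total = sum(sharpened)
probs = [s / total for s in sharpened]       # ~[0.001, 0.111, 0.888]
```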
|
| 404 |
+
|
| 405 |
+
# A.4 ANTIMICROBIAL PEPTIDE GENERATION
|
| 406 |
+
|
| 407 |
+
We follow the same setup for the antimicrobial peptide generation task as in Malkin et al. [2022a]. The GFlowNet model is a Transformer [Vaswani et al., 2017] with 3 hidden layers of 64 hidden units and 8 attention heads. The exploration strategy is $\epsilon$ -greedy with $\epsilon = 0.01$ , the sampling temperature is set to 1, and the reward exponent is 3. The GFlowNet model is trained with a learning rate of 0.001 and a batch size of 16. The environment model is a feedforward network with two hidden layers of 128 hidden units and ReLU activations, trained on data sampled from the experience replay buffer with a batch size of 128, using the Adam optimizer at a learning rate of 0.0005. We train all models for 20000 iterations, with 16 parallel rollouts in the environment.
|
2302.09xxx/2302.09465/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2d60242a449e2d9908da59c1207d84451cf65d1a8569184fd0fce3001d53e7f5
|
| 3 |
+
size 410343
|
2302.09xxx/2302.09465/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09466/7e5e86fd-e7fd-4bb8-a09d-316647e461b9_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:28ee2b00a8b0b4268b7443c4ce201bbfe3eab297f0417ac37a587e925c4de300
|
| 3 |
+
size 7119211
|
2302.09xxx/2302.09466/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09466/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9683b64c66e6ad99c3c75a3c807bf4bb1b251ba301b147e3482eb076fa688445
|
| 3 |
+
size 1203494
|
2302.09xxx/2302.09466/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_content_list.json
ADDED
|
@@ -0,0 +1,1287 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Delving into the Adversarial Robustness of Federated Learning",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
179,
|
| 8 |
+
119,
|
| 9 |
+
816,
|
| 10 |
+
142
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Jie Zhang $^{1*}$ Bo Li $^{2*‡}$ Chen Chen $^{3}$ Lingjuan Lyu $^{3‡}$ Shuang Wu $^{2}$ Shouhong Ding $^{2}$ Chao Wu $^{1‡}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
266,
|
| 19 |
+
154,
|
| 20 |
+
733,
|
| 21 |
+
191
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Zhejiang University $^{2}$ Youtu Lab, Tencent $^{3}$ Sony AI",
|
| 28 |
+
"bbox": [
|
| 29 |
+
292,
|
| 30 |
+
194,
|
| 31 |
+
702,
|
| 32 |
+
210
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "{zj_zhangjie, chao.wu} @zju.edu.cn",
|
| 39 |
+
"bbox": [
|
| 40 |
+
380,
|
| 41 |
+
210,
|
| 42 |
+
616,
|
| 43 |
+
224
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "{libraboli, calvinwu, ericshding} @tencent.com, {chen.chen, LingjuanLv} @sony.com",
|
| 50 |
+
"bbox": [
|
| 51 |
+
215,
|
| 52 |
+
224,
|
| 53 |
+
782,
|
| 54 |
+
239
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
248,
|
| 64 |
+
273,
|
| 65 |
+
313,
|
| 66 |
+
286
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "In Federated Learning (FL), models are as fragile as centrally trained models against adversarial examples. However, the adversarial robustness of federated learning remains largely unexplored. This paper casts light on the challenge of adversarial robustness of federated learning. To facilitate a better understanding of the adversarial vulnerability of the existing FL methods, we conduct comprehensive robustness evaluations on various attacks and adversarial training methods. Moreover, we reveal the negative impacts induced by directly adopting adversarial training in FL, which seriously hurts the test accuracy, especially in non-IID settings. In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both accuracy and robustness of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
99,
|
| 75 |
+
292,
|
| 76 |
+
464,
|
| 77 |
+
521
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
225,
|
| 87 |
+
537,
|
| 88 |
+
336,
|
| 89 |
+
551
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Nowadays, end devices are generating massive amounts of potentially sensitive user data, raising practical concerns over security and privacy. Federated Learning (FL) (McMahan et al. 2017) emerges as a privacy-aware learning paradigm that allows multiple clients to collaboratively train neural networks without revealing their raw data. Recently, FL has attracted increasing attention from different areas, including medical image analysis (Liu et al. 2021a; Chen et al. 2021b), recommender systems (Liang, Pan, and Ming 2021; Liu et al. 2021b), natural language processing (Zhu et al. 2020; Wang et al. 2021), etc.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
81,
|
| 98 |
+
555,
|
| 99 |
+
478,
|
| 100 |
+
708
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Prior studies have demonstrated that neural networks are vulnerable to evasion attacks by adversarial examples (Goodfellow, Shlens, and Szegedy 2014) during inference time. The goal of inference-time adversarial attack (Li et al. 2021a; Chen et al. 2022c; Zhang et al. 2022b; Chen et al. 2022b) is to damage the global model by adding a carefully generated imperceptible perturbation on the test examples. As shown in Table 1, federated models are as fragile to",
|
| 107 |
+
"bbox": [
|
| 108 |
+
81,
|
| 109 |
+
708,
|
| 110 |
+
478,
|
| 111 |
+
820
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.",
|
| 118 |
+
"bbox": [
|
| 119 |
+
83,
|
| 120 |
+
825,
|
| 121 |
+
478,
|
| 122 |
+
849
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "*Equal contribution. Work done during Jie Zhang's internship at Tencent Youtu Lab and partly done at Sony AI.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
83,
|
| 131 |
+
849,
|
| 132 |
+
478,
|
| 133 |
+
875
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "$\\ddagger$ Corresponding author.",
|
| 140 |
+
"bbox": [
|
| 141 |
+
104,
|
| 142 |
+
875,
|
| 143 |
+
245,
|
| 144 |
+
888
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "adversarial examples as centrally trained models (i.e. zero accuracy under PGD-40 attack (Madry et al. 2017)). Hence, it is also important to consider how to defend against adversarial attacks in federated learning.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
514,
|
| 153 |
+
273,
|
| 154 |
+
911,
|
| 155 |
+
329
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "There are several works that aim to deal with adversarial attacks in FL (Zhang et al. 2022c,a), i.e., federated adversarial training (FAT) (Zizzo et al. 2020; Hong et al. 2021; Shah et al. 2021; Chen, Zhang, and Lyu 2022; Chen et al. 2022a). (Zizzo et al. 2020) and (Hong et al. 2021) proposed to conduct adversarial training (AT) on a proportion of clients but conduct plain training on other clients. (Shah et al. 2021) investigated the impact of local training rounds in FAT. Nevertheless, these methods all ignore the issue that the clean accuracy of federated adversarial training is very low.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
514,
|
| 164 |
+
330,
|
| 165 |
+
911,
|
| 166 |
+
469
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"text": "To further show the problems of federated adversarial training, we first begin with the comparison between the plainly-trained models and AT-trained (Madry et al. 2017) models in both the IID (Independent and Identically Distributed) and non-IID FL settings, measured by clean accuracy $A_{cln}$ and robust accuracy $A_{rob}$ , respectively. We show the test accuracy of plain training and adversarial training (AT) on CIFAR10 dataset under both IID and non-IID FL settings in Fig. 1 (left sub-figure). We summarize some valuable observations as follows: 1) Compared with the plainly-trained models, AT-trained models achieve a lower accuracy, which indicates that directly adopting adversarial training in FL can hurt $A_{cln}$ ; 2) $A_{cln}$ drops heavily for both the plainly-trained models and AT-trained models under non-IID distribution, which is exactly the challenge that typical federated learning with heterogeneous data encountered (Zhao et al. 2018); 3) The performance of AT-trained models with non-IID data distribution decrease significantly compared with IID data distribution. Motivated by these observations, we focus on improving both adversarial robustness and clean accuracy of adversarial training in FL, i.e., we aim to increase $A_{cln}$ while keeping $A_{rob}$ as high as possible.",
|
| 173 |
+
"bbox": [
|
| 174 |
+
514,
|
| 175 |
+
470,
|
| 176 |
+
913,
|
| 177 |
+
776
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 0
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "To achieve this goal, in this paper, we investigate the impact of decision boundary, which can greatly influence the performance of the model in FAT. Specifically, 1) we apply adversarial training with a re-weighting strategy in local update to get a better $A_{rob}$ . Our method takes the limited data of each client into account, those samples that are close to/ far from the decision boundary are assigned larger/smaller weight. 2) Moreover, since the global model in FL has a",
|
| 184 |
+
"bbox": [
|
| 185 |
+
514,
|
| 186 |
+
777,
|
| 187 |
+
913,
|
| 188 |
+
888
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 0
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "aside_text",
|
| 194 |
+
"text": "arXiv:2302.09479v1 [cs.LG] 19 Feb 2023",
|
| 195 |
+
"bbox": [
|
| 196 |
+
22,
|
| 197 |
+
265,
|
| 198 |
+
57,
|
| 199 |
+
705
|
| 200 |
+
],
|
| 201 |
+
"page_idx": 0
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "table",
|
| 205 |
+
"img_path": "images/8022f12ebede576864973daabb48e7ad0aee374292a9ff1db5c7a2d49e18d8a5.jpg",
|
| 206 |
+
"table_caption": [
|
| 207 |
+
"Table 1: The accuracy (%) is tested under PGD-40 attack (Madry et al. 2017). For MNIST, FMNIST, CIFAR10, ImageNet-12, CIFAR100, and Tiny-ImageNet, the perturbation bound is $\\{0.3, 32/255, 0.031, 0.031, 0.031, 0.031\\}$ , respectively. $A_{cln}$ and $A_{rob}$ refer to clean accuracy and robust accuracy."
|
| 208 |
+
],
|
| 209 |
+
"table_footnote": [],
|
| 210 |
+
"table_body": "<table><tr><td>Type</td><td>Dataset</td><td>MNIST</td><td>FMNIST</td><td>ImageNet-12</td><td>CIFAR10</td><td>CIFAR100</td><td>Tiny-ImageNet</td></tr><tr><td rowspan=\"2\">Centralized</td><td>\\(A_{cln}\\)</td><td>99.42</td><td>92.47</td><td>78.96</td><td>94.26</td><td>86.93</td><td>57.93</td></tr><tr><td>\\(A_{rob}\\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td rowspan=\"2\">Federated</td><td>\\(A_{cln}\\)</td><td>99.01</td><td>88.51</td><td>71.65</td><td>85.81</td><td>81.28</td><td>49.79</td></tr><tr><td>\\(A_{rob}\\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>",
|
| 211 |
+
"bbox": [
|
| 212 |
+
214,
|
| 213 |
+
119,
|
| 214 |
+
787,
|
| 215 |
+
195
|
| 216 |
+
],
|
| 217 |
+
"page_idx": 1
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"type": "image",
|
| 221 |
+
"img_path": "images/f646fe90a9a302aa5f4685e7dd92e8b1033e398dd94790c0c39ef5c900bdd2cf.jpg",
|
| 222 |
+
"image_caption": [
|
| 223 |
+
"Figure 1: Left: Test accuracy reduces for plainly trained model and adversarially trained model under non-IID data. Meanwhile, adversarial training hurts the performance. Right: Evaluations on CIFAR10 for both accuracy and robustness, including several state-of-the-art defense methods combined with FL. Our method outperforms existing baselines on both metric dimensions."
|
| 224 |
+
],
|
| 225 |
+
"image_footnote": [],
|
| 226 |
+
"bbox": [
|
| 227 |
+
151,
|
| 228 |
+
208,
|
| 229 |
+
844,
|
| 230 |
+
398
|
| 231 |
+
],
|
| 232 |
+
"page_idx": 1
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"type": "text",
|
| 236 |
+
"text": "more accurate decision boundary through model aggregation, we take advantage of the logits from the global model and introduce a new regularization term to increase $A_{cln}$ . This regularization term aims to alleviate the accuracy reduction across distributed clients.",
|
| 237 |
+
"bbox": [
|
| 238 |
+
81,
|
| 239 |
+
463,
|
| 240 |
+
478,
|
| 241 |
+
532
|
| 242 |
+
],
|
| 243 |
+
"page_idx": 1
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "text",
|
| 247 |
+
"text": "We conclude our major contributions as follows:",
|
| 248 |
+
"bbox": [
|
| 249 |
+
99,
|
| 250 |
+
534,
|
| 251 |
+
421,
|
| 252 |
+
547
|
| 253 |
+
],
|
| 254 |
+
"page_idx": 1
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"type": "list",
|
| 258 |
+
"sub_type": "text",
|
| 259 |
+
"list_items": [
|
| 260 |
+
"- We conduct systematic studies on the adversarial robustness of FL, and provide valuable observations from extensive experiments.",
"- We reveal the negative impacts of adopting adversarial training in FL, and then propose an effective algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.",
"- Extensive experiments on multiple datasets demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. We present the performance of our method in Fig. 1 (right sub-figure), which indicates the improvement in both robustness and accuracy of adversarial training in FL."
],
"bbox": [
89,
554,
480,
773
],
"page_idx": 1
},
{
"type": "text",
"text": "Related Works",
"text_level": 1,
"bbox": [
217,
795,
346,
811
],
"page_idx": 1
},
{
"type": "text",
"text": "Federated Learning. Following the success of DNNs in various tasks (Li et al. 2019; Li, Sun, and Guo 2019; ?; Huang et al. 2022b,a; Dong et al. 2021), FL has attracted increasing attention. A recent survey has pointed out that existing FL systems are vulnerable to various attacks that",
|
| 287 |
+
"bbox": [
|
| 288 |
+
81,
|
| 289 |
+
818,
|
| 290 |
+
480,
|
| 291 |
+
888
|
| 292 |
+
],
|
| 293 |
+
"page_idx": 1
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"type": "text",
|
| 297 |
+
"text": "aim to either compromise data privacy or system robustness (Lyu et al. 2022). In particular, robustness attacks can be broadly classified into training-time attacks (data poisoning and model poisoning) and inference-time attacks (evision attacks, i.e., using adversarial examples to attack the global model during inference phase). In FL, the architectural design, distributed nature, and data constraints can bring new threats and failures (Kairouz 2021).",
|
| 298 |
+
"bbox": [
|
| 299 |
+
514,
|
| 300 |
+
463,
|
| 301 |
+
913,
|
| 302 |
+
575
|
| 303 |
+
],
|
| 304 |
+
"page_idx": 1
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"type": "text",
|
| 308 |
+
"text": "Adversarial Attacks. The white-box attacks have access to the whole details of threat models, including parameters and architectures. Goodfellow et al. (Goodfellow, Shlens, and Szegedy 2014) introduced the Fast Gradient Sign Method (FGSM) to generate adversarial examples, which uses a single-step first-order approximation to perform gradient ascent. Kurakin et al. (Kurakin, Goodfellow, and Bengio 2017) iteratively applied FGSM with a small step-size to develop a significantly stronger multi-step variant, called Iterative FGSM (I-FGSM). Based on these findings, more powerful attacks have been proposed in recent years including MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017), and AA (Croce and Hein 2020).",
|
| 309 |
+
"bbox": [
|
| 310 |
+
514,
|
| 311 |
+
585,
|
| 312 |
+
913,
|
| 313 |
+
779
|
| 314 |
+
],
|
| 315 |
+
"page_idx": 1
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"type": "text",
|
| 319 |
+
"text": "Adversarial Training. Adversarial training has been one of the most effective defense strategies against adversarial attacks. Madry et al. (Madry et al. 2017) regarded adversarial training as a min-max formulation using empirical risk minimization under PGD attack. Kannan et al. (Kannan, Kurakin, and Goodfellow 2018) presented adversarial logit pairing (ALP), a method that encourages logits for pairs of",
"bbox": [
514,
790,
913,
888
],
"page_idx": 1
},
{
"type": "table",
"img_path": "images/57aab263a317648a80b034ba1b30472f3e52463b68f7eac3632a8f9bcfa97bd5.jpg",
"table_caption": [
"Table 2: An empirical study on the adversarial robustness of FL, measured by various combinations of defense methods and FL algorithms. We report the clean accuracy and robust accuracy, respectively. Best results are in bold."
],
"table_footnote": [],
"table_body": "<table><tr><td>Type</td><td colspan=\"8\">IID</td><td colspan=\"8\">Non-IID</td></tr><tr><td>Methods</td><td colspan=\"2\">FedAvg</td><td colspan=\"2\">FedProx</td><td colspan=\"2\">FedNova</td><td colspan=\"2\">Scaffold</td><td colspan=\"2\">FedAvg</td><td colspan=\"2\">FedProx</td><td colspan=\"2\">FedNova</td><td colspan=\"2\">Scaffold</td></tr><tr><td>Performance</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td></tr><tr><td>PGD-AT</td><td>57.99</td><td>31.95</td><td>58.17</td><td>32.06</td><td>58.45</td><td>31.74</td><td>56.84</td><td>29.26</td><td>46.84</td><td>26.79</td><td>48.03</td><td>27.46</td><td>46.95</td><td>26.54</td><td>42.44</td><td>27.19</td></tr><tr><td>ALP</td><td>62.81</td><td>31.84</td><td>62.88</td><td>31.20</td><td>62.91</td><td>31.79</td><td>60.30</td><td>29.58</td><td>56.16</td><td>28.78</td><td>55.79</td><td>29.06</td><td>55.80</td><td>29.18</td><td>48.29</td><td>26.56</td></tr><tr><td>TRADES</td><td>64.94</td><td>32.93</td><td>64.29</td><td>32.97</td><td>64.46</td><td>33.29</td><td>63.14</td><td>33.58</td><td>60.94</td><td>27.06</td><td>61.05</td><td>27.94</td><td>60.34</td><td>28.78</td><td>59.53</td><td>27.78</td></tr><tr><td>MMA</td><td>65.14</td><td>30.29</td><td>63.65</td><td>31.29</td><td>65.27</td><td>29.31</td><td>64.28</td><td>32.98</td><td>59.69</td><td>28.64</td><td>60.17</td><td>28.09</td><td>61.03</td><td>28.47</td><td>61.53</td><td>28.13</td></tr><tr><td>AVMixup</td><td>66.14</td><td>32.27</td><td>65.12</td><td>33.19</td><td>65.14</td><td>33.75</td><td>65.11</td><td>33.24</td><td>61.17</td><td>28.56</td><td>61.47</td><td>28.34</td><td>62.04</td><td>28.12</td><td>61.91</td><td>28.81</td></tr></table>",
"bbox": [
120,
102,
877,
209
],
"page_idx": 2
},
{
"type": "text",
"text": "examples to be similar, to improve robust accuracy. To quantify the trade-off between accuracy and robustness, Zhang et al. (Zhang et al. 2019) introduced a TRADES loss to achieve a tight upper bound on the gap between clean and robust error. Based on the margin theory and soft-labeled data augmentation, Ding et al. (Ding et al. 2020) proposed Max-Margin Adversarial (MMA) training and Lee et al. (Lee, Lee, and Yoon 2020) introduced Adversarial Vertex mixup (AVmixup).",
"bbox": [
81,
233,
478,
359
],
"page_idx": 2
},
{
"type": "text",
"text": "Federated Adversarial Training. In terms of adversarial robustness, Zizzo et al. (Zizzo et al. 2020) investigated the effectiveness of the federated adversarial training protocol for idealized federated settings, and showed the performance of their models in a traditional centralized setting and a distributed FL scenario. Zhou et al. (Zhou et al. 2022) decomposed the aggregation error of the central server into bias and variance. However, all these methods sacrificed clean accuracy (compared to plainly trained models) to gain robustness. In addition, certified defense (Chen et al. 2021a) against adversarial examples in FL is another interesting direction, which we leave for future work.",
"bbox": [
81,
364,
480,
531
],
"page_idx": 2
},
{
"type": "text",
"text": "Adversarial Robustness of FL",
"text_level": 1,
"bbox": [
153,
542,
408,
558
],
"page_idx": 2
},
{
"type": "text",
"text": "In this section, we briefly define the goal of federated adversarial training. Then we conduct a systematic study on some popular federated learning algorithms with the combination of various adversarial training methods and evaluate their robustness under several attacks. Besides, we further reveal the challenges of adversarial training in non-IID FL.",
"bbox": [
81,
561,
480,
645
],
"page_idx": 2
},
{
"type": "text",
"text": "Problem Definition",
"text_level": 1,
"bbox": [
83,
654,
233,
669
],
"page_idx": 2
},
{
"type": "text",
"text": "In typical federated learning, training data are distributed across all the $K$ clients, and there is a central server managing model aggregation and communication with clients. In general, federated learning attempts to minimize the following objective:",
"bbox": [
81,
672,
480,
743
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\min _ {w} f (w) = \\sum_ {k = 1} ^ {K} \\frac {n _ {k}}{n} F _ {k} (w). \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
186,
744,
478,
787
],
"page_idx": 2
},
{
"type": "text",
"text": "Here, the global objective is a sum of local objectives weighted by the local data size $n_k$, where $n$ is the total data size of all clients that participate in a communication round. Moreover, each local objective measures the empirical risk over possibly different data distributions $D_k$, which can be expressed as:",
"bbox": [
81,
787,
480,
872
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nF _ {k} (w) := \\mathbb {E} _ {x _ {k} \\sim \\mathcal {D} _ {k}} \\left[ f _ {k} (w; x _ {k}) \\right]. \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
173,
873,
478,
891
],
"page_idx": 2
},
{
"type": "text",
"text": "Let $x$ denote the original image, $x^{adv}$ denote the corresponding adversarial example, and $\\delta$ denote the perturbation added on the original image, then $x^{adv} = x + \\delta$ . To generate powerful adversarial examples, we attempt to maximize the loss $L(x + \\delta; w)$ , where $L$ is the loss function for local update.",
"bbox": [
514,
233,
911,
316
],
"page_idx": 2
},
{
"type": "text",
"text": "To improve the robustness of the neural networks, many adversarial defense methods have been proposed. Among them, adversarial training (Carlini and Wagner 2017) is one of the most prevailing and effective algorithms. Combined with adversarial training, the local objective becomes solving the following min-max optimization problem:",
"bbox": [
514,
316,
913,
402
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nF _ {k} (w) = \\min \\mathbb {E} _ {x _ {k} \\sim \\mathcal {D} _ {k}} \\left[ \\max _ {\\| x ^ {a d v} - x \\| _ {\\infty} \\leq \\delta} L (w, x ^ {a d v}, y) \\right]. \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
524,
409,
911,
444
],
"page_idx": 2
},
{
"type": "text",
"text": "The inner maximization problem aims to find effective adversarial examples that achieve a high loss, while the outer optimization updates local models to minimize training loss.",
"bbox": [
514,
452,
913,
494
],
"page_idx": 2
},
{
"type": "text",
"text": "In this work, we conduct a systematic study on several state-of-the-art FL algorithms including FedAvg (McMahon et al. 2017), FedProx (Li et al. 2018), FedNova (Wang et al. 2020) and Scaffold (Karimireddy et al. 2020), and explore their combinations with AT methods to defend against adversarial attacks. We report detailed results in Table 2, here robustness is averaged over four popular attacks (FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), and CW (Carlini and Wagner 2017)). Besides, we implement some prevailing adversarial training methods including PGD_AT (Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020) and AVMixup (Lee, Lee, and Yoon 2020). We observe that there is no federated adversarial learning algorithm that can outperform all the others in all cases. Moreover, the clean accuracy drops heavily under non-IID distribution. As such, we are motivated to develop a more effective method. Due to the similar performance of these FL methods observed from Table 2, we design our method based on FedAvg - a representative algorithm in FL.",
|
| 495 |
+
"bbox": [
|
| 496 |
+
514,
|
| 497 |
+
494,
|
| 498 |
+
913,
|
| 499 |
+
785
|
| 500 |
+
],
|
| 501 |
+
"page_idx": 2
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"type": "text",
|
| 505 |
+
"text": "Adversarial Traning with non-HID Data",
|
| 506 |
+
"text_level": 1,
|
| 507 |
+
"bbox": [
|
| 508 |
+
516,
|
| 509 |
+
797,
|
| 510 |
+
823,
|
| 511 |
+
814
|
| 512 |
+
],
|
| 513 |
+
"page_idx": 2
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "text",
|
| 517 |
+
"text": "Federated learning faces the statistical challenge in real-world scenarios. The IID data makes the stochastic gradient as an unbiased estimate of the full gradient (McMahan et al. 2017). However, the clients are typically highly heterogeneous with various kinds of non-IID settings, such as",
|
| 518 |
+
"bbox": [
|
| 519 |
+
514,
|
| 520 |
+
818,
|
| 521 |
+
913,
|
| 522 |
+
888
|
| 523 |
+
],
|
| 524 |
+
"page_idx": 2
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "image",
|
| 528 |
+
"img_path": "images/a0b69425a3b626fa29aa100e0aaa9ed1e4303fe91bda7b0d48d1f9d6d6b1a1f2.jpg",
"image_caption": [
"Figure 2: Test accuracy on a randomly selected client."
],
"image_footnote": [],
"bbox": [
122,
70,
439,
239
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/82ecb378db965fdc033b6f90896988a18e874370782ca075a84447b6a7956f59.jpg",
"image_caption": [
"Figure 3: Plain training and adversarial training under the non-IID setting. Compared with the plainly trained case, the aggregation of adversarially trained models can lead to a more biased model, which enlarges the accuracy gap. Consequently, it results in poor consistency between different clients."
],
"image_footnote": [],
"bbox": [
96,
287,
277,
416
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/841506fd380fdf841c6043124ffdcbedd962e29d98520b2887d23acac5e1d0fc.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
279,
287,
467,
414
],
"page_idx": 3
},
{
"type": "text",
"text": "label skewness and feature skewness (Li et al. 2021b). According to previous studies (Wang et al. 2020; Karimireddy et al. 2020), the non-IID data settings can degrade the effectiveness of the deployed model.",
"bbox": [
81,
512,
478,
568
],
"page_idx": 3
},
{
"type": "text",
"text": "Similarly, due to the non-IID data, the performance of AT may vary widely across clients. To better understand the challenge of adversarial training with non-IID data, we examine the performance of both clean accuracy and robustness on a randomly selected client and report the results in Fig. 2. Observed from Fig. 2, we can find that: 1) $A_{cln}$ on the plainly trained model drops from majority classes to minority classes, which is exactly what traditional imbalanced learning attempts to solve; 2) A similar decreasing tendency reasonably occurs in $A_{rob}$ . It is obvious that adopting adversarial training in federated learning with non-IID data is more challenging.",
|
| 583 |
+
"bbox": [
|
| 584 |
+
81,
|
| 585 |
+
569,
|
| 586 |
+
478,
|
| 587 |
+
734
|
| 588 |
+
],
|
| 589 |
+
"page_idx": 3
|
| 590 |
+
},
|
| 591 |
+
{
|
| 592 |
+
"type": "text",
|
| 593 |
+
"text": "According to above observations, we conjecture that AT-trained local models with imbalanced data lead to a more biased decision boundary than plainly trained ones. Since adversarial examples need a larger number of epochs to achieve near-zero error (Zhang et al. 2021), it becomes harder to fit adversarial examples than clean data. However, for the local client itself, imbalanced clean data generates imbalanced adversarial examples, making it more difficult for training and enlarging the accuracy gap, which can reduce the performance both in accuracy and robustness. In Fig. 3, we also show the differences between plain train",
|
| 594 |
+
"bbox": [
|
| 595 |
+
81,
|
| 596 |
+
736,
|
| 597 |
+
480,
|
| 598 |
+
888
|
| 599 |
+
],
|
| 600 |
+
"page_idx": 3
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"type": "image",
|
| 604 |
+
"img_path": "images/e12d4f58ae8f86135928a75bc512f54f8f22a9589866332a8c4ec4d1e5619d8d.jpg",
"image_caption": [
"Figure 4: Left panel: Decision boundary of plainly trained model. Middle panel: Decision boundary of AT-trained model. Right panel: Decision boundary of DBFAT-trained model. We use the dotted line to represent the boundary of the clean model, and solid line to represent the boundary of the robust model. The size of the shape represents the value of the weight. Those samples that are close to far from boundary are assigned larger/smaller weight. The decision boundary of DBFAT-trained model (see the right sub-figure) can achieve a higher $A_{rob}$ and meanwhile maintain $A_{cln}$ ."
|
| 607 |
+
],
|
| 608 |
+
"image_footnote": [],
|
| 609 |
+
"bbox": [
|
| 610 |
+
535,
|
| 611 |
+
66,
|
| 612 |
+
893,
|
| 613 |
+
162
|
| 614 |
+
],
|
| 615 |
+
"page_idx": 3
|
| 616 |
+
},
|
| 617 |
+
{
|
| 618 |
+
"type": "text",
|
| 619 |
+
"text": "ing and adversarial training in federated settings. Compared with the plainly trained models, the aggregation of adversarially trained models can enlarge the accuracy gap, which results in poor consistency between different clients. To overcome this problem, we propose a novel method to utilize local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.",
"bbox": [
514,
333,
911,
431
],
"page_idx": 3
},
{
"type": "text",
"text": "Methodology",
"text_level": 1,
"bbox": [
656,
445,
769,
463
],
"page_idx": 3
},
{
"type": "text",
"text": "The generalization performance of a neural network is closely related to its decision boundary. However, models trained in the federated setting are biased compared with the centrally trained models. This is mainly caused by heterogeneous data and objective inconsistency between clients (Kairouz 2021). Moreover, a highly skewed data distribution can lead to an extremely biased boundary (Wang et al. 2020). We tackle this problem in two ways: 1) locally, we take full advantage of the limited data on the distributed client; 2) globally, we utilize the information obtained from the global model to alleviate the biases between clients.",
"bbox": [
514,
465,
911,
619
],
"page_idx": 3
},
{
"type": "text",
"text": "Subsequently, we propose a simple yet effective approach called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components. For local training, we re-weight adversarial examples to improve robustness; while for global aggregation, we utilize the global model to regularize the accuracy for a lower boundary error $A_{bdy}$ . We show the training process of DBFAT in the supplementary and illustrate an example of the decision boundary of our approach in Fig. 4.",
"bbox": [
514,
619,
913,
744
],
"page_idx": 3
},
{
"type": "text",
"text": "Re-weighting with Limited Data",
"text_level": 1,
"bbox": [
516,
757,
767,
773
],
"page_idx": 3
},
{
"type": "text",
"text": "Adversarial examples have the ability to approximately measure the distances from original inputs to a classifier's decision boundary (Heo et al. 2018), which can be calculated by the least number of steps that iterative attack (e.g. PGD attack (Madry et al. 2017)) needs in order to find its misclassified adversarial variant. To better utilize limited adversarial examples, we attempt to re-weight the adversarial examples to guide adversarial training. For clean examples that",
"bbox": [
514,
777,
911,
888
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/ea48e67a22816416acc90be0642b930710659707b18664788b82028d013df761.jpg",
"table_caption": [
"Table 3: Loss functions of different adversarial training methods."
],
"table_footnote": [],
"table_body": "<table><tr><td>Defense</td><td>Loss Function</td></tr><tr><td>PGD_AT</td><td>CE (f (xadv), y)</td></tr><tr><td>ALP</td><td>CE (f (xadv), y) + β · ||f (xadv) - f (x)||2</td></tr><tr><td>TRADES</td><td>CE (f (x), y) + β · KL (f (xadv) ||f (x))</td></tr><tr><td>MMA</td><td>CE (f (xadv), y) · R(hθ(x) = y) + CE (f (x), y) · R(hθ(x) ≠ y)</td></tr><tr><td>AVMixup</td><td>CE (f (xadv), yadv)</td></tr><tr><td>DBFAT(ours)</td><td>ρ · CE(f(xadv), y) + β · KL (f (xadv) ||f glo (x))</td></tr></table>",
"bbox": [
86,
104,
480,
191
],
"page_idx": 4
},
{
"type": "text",
"text": "are close to the decision boundary, we assign larger weights; while those examples that are far from the boundary are assigned with smaller weights.",
|
| 704 |
+
"bbox": [
|
| 705 |
+
81,
|
| 706 |
+
210,
|
| 707 |
+
478,
|
| 708 |
+
253
|
| 709 |
+
],
|
| 710 |
+
"page_idx": 4
|
| 711 |
+
},
|
| 712 |
+
{
|
| 713 |
+
"type": "text",
|
| 714 |
+
"text": "In this paper, we use PGD- $S$ to approximately measure the geometric distance to the decision boundary, $S$ denotes the number of maximum iteration. We generate adversarial examples as follows (Madry et al. 2017):",
|
| 715 |
+
"bbox": [
|
| 716 |
+
81,
|
| 717 |
+
253,
|
| 718 |
+
478,
|
| 719 |
+
309
|
| 720 |
+
],
|
| 721 |
+
"page_idx": 4
|
| 722 |
+
},
|
| 723 |
+
{
|
| 724 |
+
"type": "equation",
|
| 725 |
+
"text": "\n$$\nx ^ {a d v} \\leftarrow \\Pi_ {\\mathcal {B} [ x, \\epsilon ]} \\left(x ^ {a d v} + \\alpha \\cdot \\operatorname {s i g n} \\left(\\nabla_ {x ^ {a d v}} \\ell \\left(x ^ {a d v}, y\\right)\\right)\\right). \\tag {4}\n$$\n",
"text_format": "latex",
"bbox": [
94,
310,
478,
330
],
"page_idx": 4
},
{
"type": "text",
"text": "Here $\\Pi_{\\mathcal{B}[x,\\epsilon]}$ is the projection function that projects the adversarial data back into the $\\epsilon$ -ball centered at natural data, $\\alpha$ is the steps size, $\\epsilon$ is perturbation bound.",
|
| 738 |
+
"bbox": [
|
| 739 |
+
81,
|
| 740 |
+
330,
|
| 741 |
+
478,
|
| 742 |
+
372
|
| 743 |
+
],
|
| 744 |
+
"page_idx": 4
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"text": "We find the minimum step $d$ , such that after $d$ step of PGD, the adversarial variant can be misclassified by the network, i.e., $\\arg \\max_{c} f^{(c)}(x^{adv}) \\neq y$ , where $f^{(c)}(x^{adv})$ is the logits of the $c$ -th label.",
|
| 749 |
+
"bbox": [
|
| 750 |
+
81,
|
| 751 |
+
372,
|
| 752 |
+
480,
|
| 753 |
+
428
|
| 754 |
+
],
|
| 755 |
+
"page_idx": 4
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "text",
|
| 759 |
+
"text": "In this way, given a mini-batch samples $\\{(x_i,y_i)\\}_{i = 1}^m$ , then the weight list $\\rho$ can be formulated as:",
|
| 760 |
+
"bbox": [
|
| 761 |
+
83,
|
| 762 |
+
428,
|
| 763 |
+
478,
|
| 764 |
+
458
|
| 765 |
+
],
|
| 766 |
+
"page_idx": 4
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "equation",
|
| 770 |
+
"text": "\n$$\n\\rho \\leftarrow 1 - \\left\\{\\frac {d _ {i}}{\\sum_ {i = 1} ^ {m} d _ {i}} \\right\\}. \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [
205,
460,
478,
493
],
"page_idx": 4
},
{
"type": "text",
"text": "Regularization with Global Model",
"text_level": 1,
"bbox": [
83,
500,
349,
516
],
"page_idx": 4
},
{
"type": "text",
"text": "Early work (Zhang et al. 2019; Cui et al. 2021) claims that there exists a trade-off between accuracy and robustness, standard adversarial training can hurt accuracy. To achieve a lower boundary error $A_{bdy}$ , we take advantage of logits from the global model $f^{glo}$ , which is trained after aggregation. Particularly, in federated learning, the model owns the information obtained from the averaged parameters on distributed clients.",
|
| 795 |
+
"bbox": [
|
| 796 |
+
81,
|
| 797 |
+
517,
|
| 798 |
+
478,
|
| 799 |
+
630
|
| 800 |
+
],
|
| 801 |
+
"page_idx": 4
|
| 802 |
+
},
|
| 803 |
+
{
|
| 804 |
+
"type": "text",
|
| 805 |
+
"text": "Let $f^{loc}$ denote the adversarially trained model at each local client, $f^{glo}$ has the most desirable classifier boundary for natural data. Then we can modify the local objective mentioned in Equation 3 as below:",
|
| 806 |
+
"bbox": [
|
| 807 |
+
81,
|
| 808 |
+
630,
|
| 809 |
+
478,
|
| 810 |
+
686
|
| 811 |
+
],
|
| 812 |
+
"page_idx": 4
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "equation",
|
| 816 |
+
"text": "\n$$\n\\min _ {\\text {f o r r o b u s t n e s s}} \\underbrace {\\ell_ {c e} (\\rho \\cdot f ^ {l o c} (x ^ {a d v}) , y)} _ {f o r a c c u r a c y r e g u l a r i z a t i o n} + \\beta \\cdot \\underbrace {\\ell_ {k l} \\left(f ^ {l o c} (x ^ {a d v}) , f ^ {g l o} (x)\\right)} _ {\\text {f o r a c c u r a c y r e g u l a r i z a t i o n}}. \\tag {6}\n$$\n",
"text_format": "latex",
"bbox": [
91,
699,
477,
750
],
"page_idx": 4
},
{
"type": "text",
"text": "Where $\\ell_{ce}$ denotes the cross-entropy loss to improve the robustness, and $\\ell_{kl}$ is the KL divergence loss to constrain the logits of global model and local model. Here, $\\ell_{kl}$ appears as an additional regularization term, which is designed to reduce the boundary error $A_{bdy} = A_{cln} - A_{rob}$ . Additionally, $\\rho$ is the weight calculated by Equation 5, $\\beta$ is the parameter to be tuned.",
|
| 829 |
+
"bbox": [
|
| 830 |
+
81,
|
| 831 |
+
750,
|
| 832 |
+
478,
|
| 833 |
+
845
|
| 834 |
+
],
|
| 835 |
+
"page_idx": 4
|
| 836 |
+
},
|
| 837 |
+
{
|
| 838 |
+
"type": "text",
|
| 839 |
+
"text": "To show the difference between our DBFAT and existing defense methods, we list the loss functions of different adversarial training methods in Table 3.",
"bbox": [
81,
845,
478,
888
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/41bfde87e755c184fb37ef6398374efebd8e8aefda1cb986273cb15d7f304377.jpg",
"image_caption": [
"Figure 5: Visualizations of IID and non-IID distribution (Dirichlet sampled and Sharding) across 5 clients on CIFAR10 dataset. Shards_5 is a type of non-IID setting, in which each client has five categories of data (McMahan et al. 2017). From left to right: client ID number #1-5."
],
"image_footnote": [],
"bbox": [
521,
65,
915,
200
],
"page_idx": 4
},
{
"type": "text",
"text": "Experimental Results",
"text_level": 1,
"bbox": [
620,
296,
807,
313
],
"page_idx": 4
},
{
"type": "text",
"text": "Experimental Setup",
"text_level": 1,
"bbox": [
516,
318,
674,
335
],
"page_idx": 4
},
{
"type": "text",
"text": "Following the previous work of FL (McMahan et al. 2017), we distribute training data among 100 clients in both IID and non-IID fashion. For each communication round, we randomly select 10 clients to average the model parameters. All experiments are conducted with 8 Tesla V100 GPUs. More details can be referred to the supplemental material.",
"bbox": [
514,
338,
911,
422
],
"page_idx": 4
},
{
"type": "text",
"text": "Datasets In this section, we show that DBFAT improves the robust generalization and meanwhile maintains a high accuracy with extensive experiments on benchmark CV datasets, including MNIST (Lecun et al. 1998), FashionMNIST (Xiao, Rasul, and Vollgraf 2017) (FMNIST), CIFAR10 (Krizhevsky and Hinton 2009), CIFAR100 (Krizhevsky and Hinton 2009), Tiny-ImageNet (Le and Yang 2015), and ImageNet-12 (Deng et al. 2009). The ImageNet-12 is generated via (Li et al. 2021c), which consists of 12 classes. We resize the original image with size $224*224*3$ to $64*64*3$ for fast training.",
|
| 901 |
+
"bbox": [
|
| 902 |
+
514,
|
| 903 |
+
431,
|
| 904 |
+
913,
|
| 905 |
+
585
|
| 906 |
+
],
|
| 907 |
+
"page_idx": 4
|
| 908 |
+
},
|
| 909 |
+
{
|
| 910 |
+
"type": "text",
|
| 911 |
+
"text": "Data partitioning In the federated learning setup, we evaluate all algorithms on two types of non-IID data partitioning: Dirichlet sampled data and Sharding. For Dirichlet sampled data, each local client is allocated with a proportion of the samples of each label according to Dirichlet distribution (Li et al. 2020). Specifically, we follow the setting in (Yurochkin et al. 2019), for each label $c$ , we sample $p_c \\sim \\mathrm{Dir}_J(0.5)$ and allocate $p_{c,j}$ proportion of the whole dataset of label $c$ to client $j$ . In this setting, some clients may entirely have no examples of a subset of classes. For Sharding (McMahan et al. 2017), each client owns data samples of a fixed number of labels. Let $K$ be the number of total clients, and $q$ is the number of labels we assign to each client. We divide the dataset by label into $K * q$ shards, and the amount of samples in each shard is $\\frac{n}{K \\cdot q}$ . We denote this distribution as shards $_q$ , where $q$ controls the level of difficulty. If $q$ is set to a smaller value, then the partition is more unbalanced. An example of these partitioning strategies is shown in Fig. 5, in which we visualize IID and non-IID distribution (Dirichlet sampled with $p_c \\sim \\mathrm{Dir}_J(0.5)$ and Sharding with shards_5) on five randomly selected clients.",
"bbox": [514, 595, 913, 888],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/39b202c966205ba9262fad2cee96ec061cbb7610e37e666d5a160e229e96e75d.jpg",
"table_caption": [
"Table 4: Accuracy and adversarial robustness on MNIST, FMNIST and CIFAR10 under both IID and non-IID distribution. An empirical study of FedAvg combined with several defense methods, more detailed comparisons are reported in the supplementary (Section B). Our method significantly outperforms other baselines."
],
"table_footnote": [],
"table_body": "<table><tr><td>Type</td><td></td><td colspan=\"6\">IID</td><td colspan=\"6\">Non-IID</td></tr><tr><td>Dataset</td><td>Method</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td></tr><tr><td rowspan=\"6\">MNIST</td><td>Plain</td><td>99.01</td><td>28.35</td><td>8.65</td><td>5.29</td><td>3.84</td><td>3.02</td><td>98.45</td><td>11.78</td><td>14.06</td><td>8.44</td><td>9.51</td><td>7.45</td></tr><tr><td>PGD_AT</td><td>98.52</td><td>76.01</td><td>60.18</td><td>54.50</td><td>55.23</td><td>50.43</td><td>97.82</td><td>67.58</td><td>52.89</td><td>48.03</td><td>47.43</td><td>43.75</td></tr><tr><td>ALP</td><td>98.46</td><td>57.37</td><td>55.61</td><td>48.74</td><td>51.17</td><td>44.25</td><td>97.92</td><td>46.49</td><td>51.01</td><td>46.41</td><td>46.24</td><td>41.95</td></tr><tr><td>TRADES</td><td>97.89</td><td>76.79</td><td>63.29</td><td>58.25</td><td>57.24</td><td>53.72</td><td>92.03</td><td>48.45</td><td>51.56</td><td>47.21</td><td>45.81</td><td>42.36</td></tr><tr><td>AVMixup</td><td>98.63</td><td>61.41</td><td>53.34</td><td>42.33</td><td>46.95</td><td>37.78</td><td>97.47</td><td>56.50</td><td>51.86</td><td>46.28</td><td>44.46</td><td>41.84</td></tr><tr><td>Ours</td><td>98.86</td><td>78.06</td><td>70.97</td><td>68.39</td><td>63.09</td><td>59.39</td><td>97.95</td><td>68.54</td><td>54.18</td><td>50.33</td><td>49.12</td><td>44.32</td></tr><tr><td 
rowspan=\"6\">FMNIST</td><td>Plain</td><td>88.50</td><td>17.89</td><td>3.55</td><td>2.57</td><td>0.40</td><td>0.17</td><td>84.60</td><td>17.86</td><td>3.25</td><td>2.93</td><td>3.05</td><td>-1.40</td></tr><tr><td>PGD_AT</td><td>76.05</td><td>68.53</td><td>65.24</td><td>65.40</td><td>64.26</td><td>60.89</td><td>72.93</td><td>60.11</td><td>54.42</td><td>54.33</td><td>52.19</td><td>49.88</td></tr><tr><td>ALP</td><td>75.99</td><td>67.31</td><td>63.66</td><td>63.79</td><td>61.55</td><td>59.19</td><td>75.34</td><td>57.67</td><td>53.37</td><td>55.11</td><td>51.12</td><td>51.04</td></tr><tr><td>TRADES</td><td>78.13</td><td>59.33</td><td>52.65</td><td>52.78</td><td>51.44</td><td>48.78</td><td>74.93</td><td>56.53</td><td>44.01</td><td>44.01</td><td>31.80</td><td>39.61</td></tr><tr><td>AVMixup</td><td>79.34</td><td>61.22</td><td>54.93</td><td>54.67</td><td>49.48</td><td>50.07</td><td>72.06</td><td>56.26</td><td>49.21</td><td>49.72</td><td>47.99</td><td>45.15</td></tr><tr><td>Ours</td><td>81.49</td><td>69.23</td><td>66.22</td><td>66.24</td><td>65.71</td><td>61.49</td><td>76.19</td><td>63.11</td><td>56.45</td><td>58.31</td><td>56.96</td><td>53.91</td></tr><tr><td 
rowspan=\"6\">CIFAR10</td><td>Plain</td><td>78.80</td><td>6.87</td><td>1.15</td><td>1.06</td><td>1.30</td><td>1.23</td><td>61.10</td><td>7.58</td><td>2.94</td><td>2.67</td><td>2.87</td><td>1.28</td></tr><tr><td>PGD_AT</td><td>58.75</td><td>30.62</td><td>27.23</td><td>26.11</td><td>28.47</td><td>22.09</td><td>15.27</td><td>13.27</td><td>13.00</td><td>13.00</td><td>12.99</td><td>8.63</td></tr><tr><td>ALP</td><td>63.23</td><td>29.42</td><td>26.75</td><td>28.49</td><td>28.13</td><td>23.97</td><td>32.91</td><td>21.41</td><td>20.26</td><td>20.19</td><td>17.74</td><td>15.83</td></tr><tr><td>TRADES</td><td>68.58</td><td>31.53</td><td>25.92</td><td>25.49</td><td>23.07</td><td>20.89</td><td>46.30</td><td>24.81</td><td>22.20</td><td>22.05</td><td>19.59</td><td>17.85</td></tr><tr><td>AVMixup</td><td>70.28</td><td>29.51</td><td>26.22</td><td>26.34</td><td>24.07</td><td>22.25</td><td>48.23</td><td>25.29</td><td>21.42</td><td>24.25</td><td>20.25</td><td>19.43</td></tr><tr><td>Ours</td><td>72.21</td><td>31.47</td><td>28.57</td><td>29.03</td><td>29.31</td><td>24.25</td><td>52.24</td><td>27.03</td><td>24.12</td><td>27.02</td><td>22.13</td><td>21.20</td></tr></table>",
"bbox": [138, 119, 861, 371],
"page_idx": 5
},
{
"type": "text",
"text": "MNIST and FMNIST setup We use a simple CNN with two convolutional layers, followed by two fully connected layers. Following the setting used in (Goodfellow, Shlens, and Szegedy 2014), for MNIST, we set perturbation bound $\\epsilon = 0.3$ , and step size $\\alpha = 0.01$ , and apply adversarial attacks for 20 iterations. For FMNIST, we set perturbation bound $\\epsilon = 32/255$ , and step size $\\alpha = 0.031$ , we adversarially train the network for 10 steps and apply adversarial attacks for 20 iterations. Due to the simplicity of MNIST and FMNIST, we mainly use non-IID data (Sharding), which is hard to train.",
"bbox": [81, 395, 478, 547],
"page_idx": 5
},
{
"type": "text",
"text": "CIFAR10, CIFAR100, Tiny-ImageNet and ImageNet-12 setup We apply a larger CNN architecture, and follow the setting used in (Madry et al. 2017), i.e., we set the perturbation bound $\\epsilon = 0.031$ , step size $\\alpha = 0.007$ . To evaluate the robustness, we conduct extensive experiments with various data partitioning.",
"bbox": [81, 565, 480, 650],
"page_idx": 5
},
{
"type": "text",
"text": "Baselines For attack methods, we perform five popular attacks including FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017) and AA (Croce and Hein 2020). We further use Square (Andriushchenko et al. 2020) for black-box attack. To investigate the effectiveness of existing FL algorithms, we implement FedAvg(McMahan et al. 2017), FedProx(Li et al. 2018), FedNova(Wang et al. 2020) and Scaffold(Karimireddy et al. 2020). To defend against adversarial attacks, we implement four most prevailing methods including PGD_AT(Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020) and AVMixup (Lee, Lee, and Yoon 2020). We compare the performance of our DBFAT with various kinds of defense methods combined with FL methods.",
"bbox": [81, 666, 480, 888],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/4f48e8fb5f575245e90978b9cd99db166d15a71a816b25aeaa595f2482457dc7.jpg",
"image_caption": [
"Convergence For Local Training",
"Figure 6: Left: Convergence rate for different local epochs. Right: Training curves of FedAvg combined with different AT methods."
],
"image_footnote": [],
"bbox": [
|
| 978 |
+
524,
|
| 979 |
+
433,
|
| 980 |
+
903,
|
| 981 |
+
532
|
| 982 |
+
],
|
| 983 |
+
"page_idx": 5
|
| 984 |
+
},
|
| 985 |
+
{
|
| 986 |
+
"type": "text",
|
| 987 |
+
"text": "To show the convergence rate of DBFAT, we use the Dirichlet sampled CIFAR10 dataset, where each client owns 500 samples from 5 classes. Fig. 6 (left sub-figure) shows the impact of local epoch $E$ during adversarial training. Indeed, for a very small epoch (e.g., $E = 2$ ), it has an extremely slow convergence rate, which may incur more communications. Besides, a large epoch (e.g., $E = 20$ ) also leads to a slow convergence, as model may overfit to the local data. Considering both the communication cost and convergence issues, we set $E = 5$ in our experiments, which can maintain a proper communication efficiency and fast convergence.",
"bbox": [514, 604, 913, 758],
"page_idx": 5
},
{
"type": "text",
"text": "Effectiveness of Our Method",
"text_level": 1,
"bbox": [516, 770, 740, 784],
"page_idx": 5
},
{
"type": "text",
"text": "We verify the effectiveness of our method compared with several adversarial training techniques on Dirichlet sampled CIFAR10. Evaluation of model robustness is averaged under four attacks using the same setting for a fair comparison and all defense methods are combined with FedAvg.",
"bbox": [514, 790, 911, 859],
"page_idx": 5
},
{
"type": "text",
"text": "To show the differences between DBFAT and above mentioned defense methods, we report the training curves on",
"bbox": [516, 861, 911, 888],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/960f022fb913d9e005832751afee74d896edb9efee17b434748780e0b1d4be09.jpg",
"table_caption": [
"Table 5: Accuracy and adversarial robustness on CIFAR100, Tiny-ImageNet, and ImageNet-12."
],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td colspan=\"4\">CIFAR100</td><td colspan=\"4\">Tiny-ImageNet</td><td colspan=\"4\">ImageNet-12</td></tr><tr><td>Method</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td></tr><tr><td>PGD_AT</td><td>39.32</td><td>16.07</td><td>14.36</td><td>23.44</td><td>26.33</td><td>12.26</td><td>10.26</td><td>13.54</td><td>37.42</td><td>22.61</td><td>18.30</td><td>25.57</td></tr><tr><td>ALP</td><td>41.12</td><td>18.46</td><td>14.78</td><td>24.54</td><td>32.78</td><td>14.62</td><td>12.19</td><td>16.48</td><td>54.96</td><td>24.78</td><td>19.57</td><td>27.73</td></tr><tr><td>TRADES</td><td>43.39</td><td>20.05</td><td>16.85</td><td>26.43</td><td>37.81</td><td>15.49</td><td>13.26</td><td>19.38</td><td>58.82</td><td>25.49</td><td>21.81</td><td>28.96</td></tr><tr><td>AVMixup</td><td>46.64</td><td>23.56</td><td>19.46</td><td>29.16</td><td>36.19</td><td>15.28</td><td>13.18</td><td>19.25</td><td>59.63</td><td>25.81</td><td>21.92</td><td>29.28</td></tr><tr><td>Ours</td><td>48.31</td><td>24.47</td><td>22.46</td><td>31.57</td><td>38.24</td><td>16.17</td><td>13.96</td><td>20.26</td><td>61.38</td><td>26.47</td><td>22.08</td><td>30.91</td></tr></table>",
"bbox": [138, 90, 857, 193],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/f44cf1db8465a4c9cb60b019d83eb992cac4905bbf980b067fc7de24aebe170a.jpg",
"table_caption": [
"Table 6: Ablation Study by cutting off different modules."
],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td colspan=\"2\">CIFAR10</td><td colspan=\"2\">FMNIST</td></tr><tr><td>Methods</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>Ours</td><td>52.16</td><td>27.80</td><td>75.89</td><td>59.63</td></tr><tr><td>Ours (w/o re-weighting)</td><td>48.44</td><td>25.89</td><td>72.35</td><td>56.34</td></tr><tr><td>Ours (w/o regularization)</td><td>51.04</td><td>26.84</td><td>73.96</td><td>58.23</td></tr></table>",
"bbox": [88, 229, 485, 319],
"page_idx": 6
},
{
"type": "text",
"text": "non-IID CIFAR10 dataset in the right sub-figure of Fig. 6. Fig. 6 confirms that our DBFAT achieves the highest clean accuracy. We speculate that this benefit is due to the regularization term and re-weighting strategy introduced in Equation 6. It is worth mentioning that in the training curves, the model trained with PGD_AT performs very poorly. It indicates that standard AT may not be a suitable choice for adversarial robustness in FL, as it only uses cross-entropy loss with adversarial examples, but ignores the negative impact on clean accuracy. We further report the results on various datasets under both IID and non-IID settings in Table 4, which indicates that DBFAT significantly outperforms other methods in terms of both accuracy and robustness.",
"bbox": [81, 334, 478, 515],
"page_idx": 6
},
{
"type": "text",
"text": "Performance on large datasets In Table 5, we show the accuracy and robustness of each method on large datasets (e.g., CIFAR100, Tiny-ImageNet, and ImageNet-12). All results are tested under PGD-20 attack (Madry et al. 2017), AutoAttack (Croce and Hein 2020), and Square attack (Andriushchenko et al. 2020) in non-IID settings. From the results reported in Table 5, we can find that our method still outperforms other baselines in terms of both clean accuracy and robustness. Note that our method can achieve the highest accuracy and robustness of $61.38\\%$ and $22.08\\%$ under AutoAttack, respectively. It thus proves that our method can also be used to improve the accuracy and robustness of the model on large datasets. We think that the higher clean accuracy is a result of the regularization term introduced in Equation 6, while maintaining a high robustness.",
"bbox": [81, 523, 478, 733],
"page_idx": 6
},
{
"type": "text",
"text": "Ablation Study",
"text_level": 1,
"bbox": [83, 744, 202, 758],
"page_idx": 6
},
{
"type": "text",
"text": "Cutting off different modules As part of our ablation study, we first investigate the contributions of different modules introduced in DBFAT. As shown in Table 6, turning off both the re-weighting strategy and regularization term will lead to poor performance, which demonstrates the importance of both modules. Moreover, cut-offing the reweighting strategy can lead to a more severe degradation. We conjecture this is a reasonable phenomenon. As mentioned in Fig. 1, non-IID data can cause a serious accuracy",
"bbox": [81, 762, 478, 888],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/018dfaf3ed29a8a9e80f6a836b6afeb5ca7d47f2b34592514bd00b53d086ee22.jpg",
"table_caption": [
"Table 7: Effect of hyper-parameter $\\beta$ . \"Avg ${A}_{rob}$ \" refers to the average robustness under four attacks."
],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td colspan=\"2\">MNIST</td><td colspan=\"2\">FMNIST</td></tr><tr><td>β</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>4</td><td>98.30</td><td>26.64</td><td>81.73</td><td>37.36</td></tr><tr><td>2</td><td>98.14</td><td>34.24</td><td>75.59</td><td>47.83</td></tr><tr><td>1.5</td><td>98.46</td><td>53.22</td><td>74.93</td><td>44.08</td></tr><tr><td>1</td><td>97.32</td><td>47.35</td><td>65.43</td><td>42.33</td></tr><tr><td>0.5</td><td>96.57</td><td>44.09</td><td>61.02</td><td>45.28</td></tr></table>",
"bbox": [544, 242, 887, 359],
"page_idx": 6
},
{
"type": "text",
"text": "reduction. Our re-weighting strategy can alleviate the bias by taking the limited data on each client into account.",
"bbox": [514, 375, 911, 402],
"page_idx": 6
},
{
"type": "text",
"text": "Effects of Regularization The regularization parameter $\\beta$ is an important hyperparameter in our proposed method. We show how the regularization parameter affects the performance of our robust classifiers by numerical experiments on two datasets, MNIST and FMNIST. In Equation 6, $\\beta$ controls the accuracy obtained from the global model, which contains information from distributed clients. Since directly training on adversarial examples could hurt the clean accuracy, here we explore the effects of $\\beta$ on both accuracy and robustness. As shown in Table 7, we report the clean accuracy and robustness by varying the value of $\\beta$ . We empirically choose the best $\\beta$ for different datasets. For example, for MNIST, $\\beta = 1.5$ can achieve better accuracy and robustness. For FMNIST, we let $\\beta = 2$ for a proper trade-off in accuracy and robustness.",
"bbox": [514, 419, 911, 627],
"page_idx": 6
},
{
"type": "text",
"text": "Conclusion",
"text_level": 1,
"bbox": [665, 651, 764, 667],
"page_idx": 6
},
{
"type": "text",
"text": "In this paper, we investigate an interesting yet not well explored problem in FL: the robustness against adversarial attacks. We first find that directly adopting adversarial training in federated learning can hurt accuracy significantly especially in non-IID setting. We then propose a novel and effective adversarial training method called DBFAT, which is based on the decision boundary of federated learning, and utilizes local re-weighting and global regularization to improve both accuracy and robustness of FL systems. Comprehensive experiments on various datasets and detailed comparisons with the state-of-the-art adversarial training methods demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. This work would potentially benefit researchers who are interested in adversarial robustness of FL.",
"bbox": [514, 680, 911, 888],
"page_idx": 6
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [233, 66, 330, 82],
"page_idx": 7
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Andriushchenko, M.; Croce, F.; Flamarion, N.; and Hein, M. 2020. Square Attack: a query-efficient black-box adversarial attack via random search. arXiv:1912.00049.",
"Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (sp), 39-57. IEEE.",
"Chen, C.; Kailkhura, B.; Goldhahn, R.; and Zhou, Y. 2021a.",
|
| 1187 |
+
"Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. arXiv:2103.16031.",
"Chen, C.; Liu, Y.; Ma, X.; and Lyu, L. 2022a. CalFAT: Calibrated Federated Adversarial Training with Label Skewness. In Advances in Neural Information Processing Systems.",
"Chen, C.; Zhang, J.; and Lyu, L. 2022. Gear: a margin-based federated adversarial training approach. In International Workshop on Trustable, Verifiable, and Auditable Federated Learning in Conjunction with AAAI, volume 2022.",
"Chen, Z.; Li, B.; Wu, S.; Xu, J.; Ding, S.; and Zhang, W. 2022b. Shape Matters: Deformable Patch Attack. In Avidan, S.; Brostow, G. J.; Cisse, M.; Farinella, G. M.; and Hassner, T., eds., Computer Vision - ECCV 2022. Springer.",
"Chen, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Zhang, W. 2022c. Towards Practical Certifiable Patch Defense With Vision Transformer. In Proceedings of the IEEE/CVF Conference on CVPR, 15148-15158.",
"Chen, Z.; Zhu, M.; Yang, C.; and Yuan, Y. 2021b. Personalized Retrogress-Resilient Framework for Real-World Medical Federated Learning. In de Bruijne, M.; Cattin, P. C.; Cotin, S.; Padoy, N.; Speidel, S.; Zheng, Y.; and Essert, C., eds., Medical Image Computing and Computer Assisted Intervention - MICCAI 2021 - 24th International Conference, Strasbourg, France, September 27 - October 1, 2021, Proceedings, Part III, volume 12903 of Lecture Notes in Computer Science, 347-356. Springer.",
"Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv:2003.01690.",
"Cui, J.; Liu, S.; Wang, L.; and Jia, J. 2021. Learnable boundary guided adversarial training. In Proceedings of the IEEE/CVF international conference on computer vision, 15721-15730.",
"Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248-255.",
"Ding, G. W.; Sharma, Y.; Lui, K. Y. C.; and Huang, R. 2020. MMA Training: Direct Input Space Margin Maximization through Adversarial Training. arXiv:1812.02637.",
"Dong, J.; Cong, Y.; Sun, G.; Fang, Z.; and Ding, Z. 2021. Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1): 1-17.",
"Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9185-9193."
],
"bbox": [83, 85, 478, 888],
"page_idx": 7
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.",
"Heo, B.; Lee, M.; Yun, S.; and Choi, J. Y. 2018. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary. arXiv:1805.05532.",
"Hong, J.; Wang, H.; Wang, Z.; and Zhou, J. 2021. Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. arXiv preprint arXiv:2106.10196.",
"Huang, R.; Cui, C.; Chen, F.; Ren, Y.; Liu, J.; Zhao, Z.; Huai, B.; and Wang, Z. 2022a. Singgan: Generative adversarial network for high-fidelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, 2525-2535.",
"Huang, R.; Lam, M. W.; Wang, J.; Su, D.; Yu, D.; Ren, Y.; and Zhao, Z. 2022b. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. arXiv preprint arXiv:2204.09934.",
"Kairouz, P. 2021. Advances and Open Problems in Federated Learning. arXiv:1912.04977.",
"Kannan, H.; Kurakin, A.; and Goodfellow, I. 2018. Adversarial logit pairing. arXiv preprint arXiv:1803.06373.",
"Karimireddy, S. P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; and Suresh, A. T. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, 5132-5143. PMLR.",
"Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario.",
"Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533.",
"Le, Y.; and Yang, X. 2015. Tiny imagenet visual recognition challenge. CS 231N, 7(7): 3.",
"Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278-2324.",
"Lee, S.; Lee, H.; and Yoon, S. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 272-281.",
"Li, B.; Sun, Z.; and Guo, Y. 2019. SuperVAE: Superpixelwise Variational Autoencoder for Salient Object Detection. In The Thirty-Third AAAI Conference.",
"Li, B.; Sun, Z.; Tang, L.; Sun, Y.; and Shi, J. 2019. Detecting Robust Co-Saliency with Recurrent Co-Attention Neural Network. In Kraus, S., ed., IJCAI.",
"Li, B.; Xu, J.; Wu, S.; Ding, S.; Li, J.; and Huang, F. 2021a. Detecting Adversarial Patch Attacks through Global-local Consistency. In ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, 35-41. ACM.",
"Li, Q.; Diao, Y.; Chen, Q.; and He, B. 2021b. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv preprint arXiv:2102.02079."
],
"bbox": [517, 66, 911, 888],
"page_idx": 7
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Li, T.; Sahu, A. K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; and Smith, V. 2018. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127.",
"Li, X.; Huang, K.; Yang, W.; Wang, S.; and Zhang, Z. 2020. On the Convergence of FedAvg on Non-IID Data. arXiv:1907.02189.",
"Li, Y.; Lyu, X.; Koren, N.; Lyu, L.; Li, B.; and Ma, X. 2021c. Anti-backdoor learning: Training clean models on poisoned data. NeurIPS, 34.",
"Liang, F.; Pan, W.; and Ming, Z. 2021. FedRec++: Lossless Federated Recommendation with Explicit Feedback. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 4224-4231. AAAI Press.",
"Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P. 2021a. FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, 1013-1023. Computer Vision Foundation / IEEE.",
"Liu, S.; Xu, S.; Yu, W.; Fu, Z.; Zhang, Y.; and Marian, A. 2021b. FedCT: Federated Collaborative Transfer for Recommendation. In Diaz, F.; Shah, C.; Suel, T.; Castells, P.; Jones, R.; and Sakai, T., eds., SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, 716-725. ACM.",
"Lyu, L.; Yu, H.; Ma, X.; Chen, C.; Sun, L.; Zhao, J.; Yang, Q.; and Philip, S. Y. 2022. Privacy and robustness in federated learning: Attacks and defenses. IEEE transactions on neural networks and learning systems.",
"Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.",
"McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, 1273-1282. PMLR.",
"Shah, D.; Dube, P.; Chakraborty, S.; and Verma, A. 2021. Adversarial training in communication constrained federated learning. arXiv preprint arXiv:2103.01319.",
"Wang, C.; Deng, J.; Meng, X.; Wang, Y.; Li, J.; Miao, F.; Rajasekaran, S.; and Ding, C. 2021. A Secure and Efficient Federated Learning Framework for NLP. In EMNLP 2021, 7676-7682. Association for Computational Linguistics.",
"Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; and Poor, H. V. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481.",
"Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747.",
"Yurochkin, M.; Agarwal, M.; Ghosh, S.; Greenewald, K.; Hoang, T. N.; and Khazaeni, Y. 2019. Bayesian Nonparametric Federated Learning of Neural Networks. arXiv:1905.12022."
],
"bbox": [83, 68, 478, 888],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; and Jordan, M. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 7472-7482. PMLR.",
"Zhang, J.; Chen, C.; Li, B.; Lyu, L.; Wu, S.; Ding, S.; Shen, C.; and Wu, C. 2022a. DENSE: Data-Free One-Shot Federated Learning. In Advances in NeurIPS.",
"Zhang, J.; Li, B.; Xu, J.; Wu, S.; Ding, S.; Zhang, L.; and Wu, C. 2022b. Towards Efficient Data Free Black-Box Adversarial Attack. In Proceedings of the IEEE/CVF Conference on CVPR, 15115–15125.",
"Zhang, J.; Li, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Wu, C. 2022c. Federated Learning with Label Distribution Skew via Logits Calibration. In Proceedings of the ICML. PMLR.",
"Zhang, J.; Zhu, J.; Niu, G.; Han, B.; Sugiyama, M.; and Kankanhalli, M. 2021. Geometry-aware Instance-reweighted Adversarial Training. In International Conference on Learning Representations.",
"Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; and Chandra, V. 2018. Federated Learning with Non-IID Data. arXiv:1806.00582.",
"Zhou, Y.; Wu, J.; Wang, H.; and He, J. 2022. Adversarial robustness through bias variance decomposition: A new perspective for federated learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2753-2762.",
"Zhu, X.; Wang, J.; Hong, Z.; and Xiao, J. 2020. Empirical Studies of Institutional Federated Learning For Natural Language Processing. In Cohn, T.; He, Y.; and Liu, Y., eds., Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, 625-634. Association for Computational Linguistics.",
"Zizzo, G.; Rawat, A.; Sinn, M.; and Buesser, B. 2020. FAT: Federated Adversarial Training. arXiv:2012.01791."
],
"bbox": [517, 68, 913, 579],
"page_idx": 8
}
]
|
2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_model.json
ADDED
@@ -0,0 +1,1945 @@
[
[
{"type": "title", "bbox": [0.18, 0.121, 0.817, 0.143], "angle": 0, "content": "Delving into the Adversarial Robustness of Federated Learning"},
{"type": "text", "bbox": [0.267, 0.155, 0.734, 0.193], "angle": 0, "content": "Jie Zhang\\(^{1*}\\) Bo Li\\(^{2*‡}\\) Chen Chen\\(^{3}\\) Lingjuan Lyu\\(^{3‡}\\) Shuang Wu\\(^{2}\\) Shouhong Ding\\(^{2}\\) Chao Wu\\(^{1‡}\\)"},
{"type": "text", "bbox": [0.294, 0.195, 0.704, 0.211], "angle": 0, "content": "\\(^{1}\\)Zhejiang University \\(^{2}\\)Youtu Lab, Tencent \\(^{3}\\)Sony AI"},
{"type": "text", "bbox": [0.382, 0.211, 0.617, 0.226], "angle": 0, "content": "{zj_zhangjie, chao.wu}@zju.edu.cn"},
{"type": "text", "bbox": [0.216, 0.226, 0.783, 0.24], "angle": 0, "content": "{libraboli, calvinwu, ericshding}@tencent.com, {chen.chen, LingjuanLv}@sony.com"},
{"type": "title", "bbox": [0.249, 0.274, 0.314, 0.287], "angle": 0, "content": "Abstract"},
{"type": "text", "bbox": [0.1, 0.294, 0.465, 0.522], "angle": 0, "content": "In Federated Learning (FL), models are as fragile as centrally trained models against adversarial examples. However, the adversarial robustness of federated learning remains largely unexplored. This paper casts light on the challenge of adversarial robustness of federated learning. To facilitate a better understanding of the adversarial vulnerability of the existing FL methods, we conduct comprehensive robustness evaluations on various attacks and adversarial training methods. Moreover, we reveal the negative impacts induced by directly adopting adversarial training in FL, which seriously hurts the test accuracy, especially in non-IID settings. In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both accuracy and robustness of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings."},
{"type": "title", "bbox": [0.227, 0.538, 0.337, 0.552], "angle": 0, "content": "Introduction"},
{"type": "text", "bbox": [0.082, 0.556, 0.48, 0.709], "angle": 0, "content": "Nowadays, end devices are generating massive amounts of potentially sensitive user data, raising practical concerns over security and privacy. Federated Learning (FL) (McMahan et al. 2017) emerges as a privacy-aware learning paradigm that allows multiple clients to collaboratively train neural networks without revealing their raw data. Recently, FL has attracted increasing attention from different areas, including medical image analysis (Liu et al. 2021a; Chen et al. 2021b), recommender systems (Liang, Pan, and Ming 2021; Liu et al. 2021b), natural language processing (Zhu et al. 2020; Wang et al. 2021), etc."},
{"type": "text", "bbox": [0.082, 0.709, 0.48, 0.821], "angle": 0, "content": "Prior studies have demonstrated that neural networks are vulnerable to evasion attacks by adversarial examples (Goodfellow, Shlens, and Szegedy 2014) during inference time. The goal of inference-time adversarial attack (Li et al. 2021a; Chen et al. 2022c; Zhang et al. 2022b; Chen et al. 2022b) is to damage the global model by adding a carefully generated imperceptible perturbation on the test examples. As shown in Table 1, federated models are as fragile to"},
{"type": "text", "bbox": [0.084, 0.826, 0.48, 0.851], "angle": 0, "content": "Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved."},
{"type": "text", "bbox": [0.084, 0.851, 0.48, 0.876], "angle": 0, "content": "*Equal contribution. Work done during Jie Zhang's internship at Tencent Youtu Lab and partly done at Sony AI."},
{"type": "text", "bbox": [0.106, 0.876, 0.246, 0.889], "angle": 0, "content": "\\(\\ddagger\\) Corresponding author."},
{"type": "text", "bbox": [0.516, 0.274, 0.913, 0.33], "angle": 0, "content": "adversarial examples as centrally trained models (i.e., zero accuracy under PGD-40 attack (Madry et al. 2017)). Hence, it is also important to consider how to defend against adversarial attacks in federated learning."},
{"type": "text", "bbox": [0.516, 0.331, 0.913, 0.47], "angle": 0, "content": "There are several works that aim to deal with adversarial attacks in FL (Zhang et al. 2022c,a), i.e., federated adversarial training (FAT) (Zizzo et al. 2020; Hong et al. 2021; Shah et al. 2021; Chen, Zhang, and Lyu 2022; Chen et al. 2022a). (Zizzo et al. 2020) and (Hong et al. 2021) proposed to conduct adversarial training (AT) on a proportion of clients but conduct plain training on other clients. (Shah et al. 2021) investigated the impact of local training rounds in FAT. Nevertheless, these methods all ignore the issue that the clean accuracy of federated adversarial training is very low."},
{"type": "text", "bbox": [0.516, 0.471, 0.914, 0.777], "angle": 0, "content": "To further show the problems of federated adversarial training, we first begin with the comparison between the plainly-trained models and AT-trained (Madry et al. 2017) models in both the IID (Independent and Identically Distributed) and non-IID FL settings, measured by clean accuracy \\( A_{cln} \\) and robust accuracy \\( A_{rob} \\), respectively. We show the test accuracy of plain training and adversarial training (AT) on CIFAR10 dataset under both IID and non-IID FL settings in Fig. 1 (left sub-figure). We summarize some valuable observations as follows: 1) Compared with the plainly-trained models, AT-trained models achieve a lower accuracy, which indicates that directly adopting adversarial training in FL can hurt \\( A_{cln} \\); 2) \\( A_{cln} \\) drops heavily for both the plainly-trained models and AT-trained models under non-IID distribution, which is exactly the challenge that typical federated learning with heterogeneous data encounters (Zhao et al. 2018); 3) The performance of AT-trained models with non-IID data distribution decreases significantly compared with IID data distribution. Motivated by these observations, we focus on improving both adversarial robustness and clean accuracy of adversarial training in FL, i.e., we aim to increase \\( A_{cln} \\) while keeping \\( A_{rob} \\) as high as possible."},
{"type": "text", "bbox": [0.516, 0.778, 0.915, 0.89], "angle": 0, "content": "To achieve this goal, in this paper, we investigate the impact of the decision boundary, which can greatly influence the performance of the model in FAT. Specifically, 1) we apply adversarial training with a re-weighting strategy in the local update to get a better \\( A_{rob} \\). Our method takes the limited data of each client into account; samples that are close to/far from the decision boundary are assigned larger/smaller weights. 2) Moreover, since the global model in FL has a"},
{"type": "aside_text", "bbox": [0.023, 0.266, 0.058, 0.707], "angle": 270, "content": "arXiv:2302.09479v1 [cs.LG] 19 Feb 2023"}
],
[
{"type": "table_caption", "bbox": [0.083, 0.065, 0.916, 0.11], "angle": 0, "content": "Table 1: The accuracy (%) is tested under PGD-40 attack (Madry et al. 2017). For MNIST, FMNIST, CIFAR10, ImageNet-12, CIFAR100, and Tiny-ImageNet, the perturbation bound is \\(\\{0.3, 32/255, 0.031, 0.031, 0.031, 0.031\\}\\), respectively. \\(A_{cln}\\) and \\(A_{rob}\\) refer to clean accuracy and robust accuracy."},
{"type": "table", "bbox": [0.215, 0.12, 0.788, 0.196], "angle": 0, "content": "<table><tr><td>Type</td><td>Dataset</td><td>MNIST</td><td>FMNIST</td><td>ImageNet-12</td><td>CIFAR10</td><td>CIFAR100</td><td>Tiny-ImageNet</td></tr><tr><td rowspan=\"2\">Centralized</td><td>\\(A_{cln}\\)</td><td>99.42</td><td>92.47</td><td>78.96</td><td>94.26</td><td>86.93</td><td>57.93</td></tr><tr><td>\\(A_{rob}\\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td rowspan=\"2\">Federated</td><td>\\(A_{cln}\\)</td><td>99.01</td><td>88.51</td><td>71.65</td><td>85.81</td><td>81.28</td><td>49.79</td></tr><tr><td>\\(A_{rob}\\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>"},
{"type": "image", "bbox": [0.152, 0.209, 0.845, 0.4], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.082, 0.412, 0.913, 0.456], "angle": 0, "content": "Figure 1: Left: Test accuracy drops for the plainly trained model and the adversarially trained model under non-IID data. Meanwhile, adversarial training hurts the performance. Right: Evaluations on CIFAR10 for both accuracy and robustness, including several state-of-the-art defense methods combined with FL. Our method outperforms existing baselines on both metric dimensions."},
{"type": "text", "bbox": [0.082, 0.464, 0.48, 0.533], "angle": 0, "content": "more accurate decision boundary through model aggregation, we take advantage of the logits from the global model and introduce a new regularization term to increase \\( A_{cln} \\). This regularization term aims to alleviate the accuracy reduction across distributed clients."},
{"type": "text", "bbox": [0.1, 0.535, 0.423, 0.549], "angle": 0, "content": "We conclude our major contributions as follows:"},
{"type": "text", "bbox": [0.091, 0.555, 0.48, 0.599], "angle": 0, "content": "- We conduct systematic studies on the adversarial robustness of FL, and provide valuable observations from extensive experiments."},
{"type": "text", "bbox": [0.091, 0.602, 0.48, 0.687], "angle": 0, "content": "- We reveal the negative impacts of adopting adversarial training in FL, and then propose an effective algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems."},
{"type": "text", "bbox": [0.091, 0.69, 0.481, 0.775], "angle": 0, "content": "- Extensive experiments on multiple datasets demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. We present the performance of our method in Fig. 1 (right sub-figure), which indicates the improvement in both robustness and accuracy of adversarial training in FL."},
{"type": "list", "bbox": [0.091, 0.555, 0.481, 0.775], "angle": 0, "content": null},
{"type": "title", "bbox": [0.218, 0.796, 0.347, 0.812], "angle": 0, "content": "Related Works"},
{"type": "text", "bbox": [0.082, 0.819, 0.481, 0.89], "angle": 0, "content": "Federated Learning. Following the success of DNNs in various tasks (Li et al. 2019; Li, Sun, and Guo 2019; ?; Huang et al. 2022b,a; Dong et al. 2021), FL has attracted increasing attention. A recent survey has pointed out that existing FL systems are vulnerable to various attacks that"},
{"type": "text", "bbox": [0.516, 0.464, 0.914, 0.577], "angle": 0, "content": "aim to either compromise data privacy or system robustness (Lyu et al. 2022). In particular, robustness attacks can be broadly classified into training-time attacks (data poisoning and model poisoning) and inference-time attacks (evasion attacks, i.e., using adversarial examples to attack the global model during the inference phase). In FL, the architectural design, distributed nature, and data constraints can bring new threats and failures (Kairouz 2021)."},
{"type": "text", "bbox": [0.516, 0.587, 0.915, 0.78], "angle": 0, "content": "Adversarial Attacks. White-box attacks have access to the full details of the threat model, including parameters and architectures. Goodfellow et al. (Goodfellow, Shlens, and Szegedy 2014) introduced the Fast Gradient Sign Method (FGSM) to generate adversarial examples, which uses a single-step first-order approximation to perform gradient ascent. Kurakin et al. (Kurakin, Goodfellow, and Bengio 2017) iteratively applied FGSM with a small step size to develop a significantly stronger multi-step variant, called Iterative FGSM (I-FGSM). Based on these findings, more powerful attacks have been proposed in recent years, including MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017), and AA (Croce and Hein 2020)."},
{"type": "text", "bbox": [0.516, 0.791, 0.915, 0.89], "angle": 0, "content": "Adversarial Training. Adversarial training has been one of the most effective defense strategies against adversarial attacks. Madry et al. (Madry et al. 2017) regarded adversarial training as a min-max formulation using empirical risk minimization under PGD attack. Kannan et al. (Kannan, Kurakin, and Goodfellow 2018) presented adversarial logit pairing (ALP), a method that encourages logits for pairs of"}
],
|
| 369 |
+
[
|
| 370 |
+
{
|
| 371 |
+
"type": "table_caption",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.083,
|
| 374 |
+
0.062,
|
| 375 |
+
0.916,
|
| 376 |
+
0.093
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Table 2: An empirical study on the adversarial robustness of FL, measured by various combination of defense methods and FL algorithms. We report the clean accuracy and robust accuracy, respectively. Best results are in bold."
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "table",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.122,
|
| 385 |
+
0.103,
|
| 386 |
+
0.878,
|
| 387 |
+
0.21
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "<table><tr><td>Type</td><td colspan=\"8\">IID</td><td colspan=\"8\">Non-IID</td></tr><tr><td>Methods</td><td colspan=\"2\">FedAvg</td><td colspan=\"2\">FedProx</td><td colspan=\"2\">FedNova</td><td colspan=\"2\">Scaffold</td><td colspan=\"2\">FedAvg</td><td colspan=\"2\">FedProx</td><td colspan=\"2\">FedNova</td><td colspan=\"2\">Scaffold</td></tr><tr><td>Performance</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td></tr><tr><td>PGD-AT</td><td>57.99</td><td>31.95</td><td>58.17</td><td>32.06</td><td>58.45</td><td>31.74</td><td>56.84</td><td>29.26</td><td>46.84</td><td>26.79</td><td>48.03</td><td>27.46</td><td>46.95</td><td>26.54</td><td>42.44</td><td>27.19</td></tr><tr><td>ALP</td><td>62.81</td><td>31.84</td><td>62.88</td><td>31.20</td><td>62.91</td><td>31.79</td><td>60.30</td><td>29.58</td><td>56.16</td><td>28.78</td><td>55.79</td><td>29.06</td><td>55.80</td><td>29.18</td><td>48.29</td><td>26.56</td></tr><tr><td>TRADES</td><td>64.94</td><td>32.93</td><td>64.29</td><td>32.97</td><td>64.46</td><td>33.29</td><td>63.14</td><td>33.58</td><td>60.94</td><td>27.06</td><td>61.05</td><td>27.94</td><td>60.34</td><td>28.78</td><td>59.53</td><td>27.78</td></tr><tr><td>MMA</td><td>65.14</td><td>30.29</td><td>63.65</td><td>31.29</td><td>65.27</td><td>29.31</td><td>64.28</td><td>32.98</td><td>59.69</td><td>28.64</td><td>60.17</td><td>28.09</td><td>61.03</td><td>28.47</td><td>61.53</td><td>28.13</td></tr><tr><td>AVMixup</td><td>66.14</td><td>32.27</td><td>65.12</td><td>33.19</td><td>65.14</td><td>33.75</td><td>65.11</td><td>33.24</td><td>61.17</td><td>28.56</td><td>61.47</td><td>28.34</td><td>62.04</td><td>28.12</td><td>61.91</td><td>28.81</td></tr></table>"
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.082,
|
| 396 |
+
0.234,
|
| 397 |
+
0.48,
|
| 398 |
+
0.36
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "examples to be similar, to improve robust accuracy. To quantify the trade-off between accuracy and robustness, Zhang et al. (Zhang et al. 2019) introduced a TRADES loss to achieve a tight upper bound on the gap between clean and robust error. Based on the margin theory and soft-labeled data augmentation, Ding et al. (Ding et al. 2020) proposed Max-Margin Adversarial (MMA) training and Lee et al. (Lee, Lee, and Yoon 2020) introduced Adversarial Vertex mixup (AVmixup)."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "text",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.082,
|
| 407 |
+
0.365,
|
| 408 |
+
0.481,
|
| 409 |
+
0.532
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "Federated Adversarial Training. In terms of the adversarial robustness, Zizzo et al. (Zizzo et al. 2020) investigated the effectiveness of the federated adversarial training protocol for idealized federated settings, and showed the performance of their models in a traditional centralized setting and a distributed FL scenario. Zhou et al. (Zhou et al. 2022) decomposed the aggregation error of the central server into bias and variance. However, all these methods sacrificed clean accuracy (compared to plainly trained models) to gain robustness. In addition, certified defense (Chen et al. 2021a) against adversarial examples in FL is another interesting direction, which will be discussed in the future."
},
{
"type": "title",
"bbox": [
0.154,
0.544,
0.409,
0.559
],
"angle": 0,
"content": "Adversarial Robustness of FL"
},
{
"type": "text",
"bbox": [
0.082,
0.562,
0.481,
0.646
],
"angle": 0,
"content": "In this section, we briefly define the goal of federated adversarial training. Then we conduct a systematic study on some popular federated learning algorithms with the combination of various adversarial training methods and evaluate their robustness under several attacks. Besides, we further reveal the challenges of adversarial training in non-IID FL."
},
{
"type": "title",
"bbox": [
0.084,
0.655,
0.235,
0.67
],
"angle": 0,
"content": "Problem Definition"
},
{
"type": "text",
"bbox": [
0.082,
0.673,
0.481,
0.744
],
"angle": 0,
"content": "In typical federated learning, training data are distributed across all the \\( K \\) clients, and there is a central server managing model aggregations and communications with clients. In general, federated learning attempts to minimize the following optimization:"
},
{
"type": "equation",
"bbox": [
0.187,
0.746,
0.48,
0.788
],
"angle": 0,
"content": "\\[\n\\min _ {w} f (w) = \\sum_ {k = 1} ^ {K} \\frac {n _ {k}}{n} F _ {k} (w). \\tag {1}\n\\]"
},
{
"type": "text",
"bbox": [
0.082,
0.789,
0.481,
0.873
],
"angle": 0,
"content": "Here, we denote that the global approximate optimal is a sum of local objectives weighted by the local data size \\( n_k \\), and \\( n \\) is the total data size of all clients that participate in a communication round. Moreover, each local objective measures the empirical risk over possibly different data distributions \\( D_k \\), which can be expressed as:"
},
{
"type": "equation",
"bbox": [
0.174,
0.874,
0.48,
0.892
],
"angle": 0,
"content": "\\[\nF _ {k} (w) := \\mathbb {E} _ {x _ {k} \\sim \\mathcal {D} _ {k}} \\left[ f _ {k} (w; x _ {k}) \\right]. \\tag {2}\n\\]"
},
{
"type": "text",
"bbox": [
0.516,
0.234,
0.913,
0.317
],
"angle": 0,
"content": "Let \\( x \\) denote the original image, \\( x^{adv} \\) denote the corresponding adversarial example, and \\( \\delta \\) denote the perturbation added on the original image, then \\( x^{adv} = x + \\delta \\). To generate powerful adversarial examples, we attempt to maximize the loss \\( L(x + \\delta; w) \\), where \\( L \\) is the loss function for local update."
},
{
"type": "text",
"bbox": [
0.516,
0.318,
0.914,
0.403
],
"angle": 0,
"content": "To improve the robustness of the neural networks, many adversarial defense methods have been proposed. Among them, adversarial training (Carlini and Wagner 2017) is one of the most prevailing and effective algorithms. Combined with adversarial training, the local objective becomes solving the following min-max optimization problem:"
},
{
"type": "equation",
"bbox": [
0.526,
0.41,
0.913,
0.445
],
"angle": 0,
"content": "\\[\nF _ {k} (w) = \\min \\mathbb {E} _ {x _ {k} \\sim \\mathcal {D} _ {k}} \\left[ \\max _ {\\| x ^ {a d v} - x \\| _ {\\infty} \\leq \\delta} L (w, x ^ {a d v}, y) \\right]. \\tag {3}\n\\]"
},
{
"type": "text",
"bbox": [
0.516,
0.453,
0.914,
0.495
],
"angle": 0,
"content": "The inner maximization problem aims to find effective adversarial examples that achieve a high loss, while the outer optimization updates local models to minimize training loss."
},
{
"type": "text",
"bbox": [
0.516,
0.495,
0.915,
0.786
],
"angle": 0,
"content": "In this work, we conduct a systematic study on several state-of-the-art FL algorithms including FedAvg (McMahon et al. 2017), FedProx (Li et al. 2018), FedNova (Wang et al. 2020) and Scaffold (Karimireddy et al. 2020), and explore their combinations with AT methods to defend against adversarial attacks. We report detailed results in Table 2, here robustness is averaged over four popular attacks (FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), and CW (Carlini and Wagner 2017)). Besides, we implement some prevailing adversarial training methods including PGD_AT (Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020) and AVMixup (Lee, Lee, and Yoon 2020). We observe that there is no federated adversarial learning algorithm that can outperform all the others in all cases. Moreover, the clean accuracy drops heavily under non-IID distribution. As such, we are motivated to develop a more effective method. Due to the similar performance of these FL methods observed from Table 2, we design our method based on FedAvg - a representative algorithm in FL."
},
{
"type": "title",
"bbox": [
0.517,
0.799,
0.825,
0.815
],
"angle": 0,
"content": "Adversarial Traning with non-HID Data"
},
{
"type": "text",
"bbox": [
0.516,
0.819,
0.914,
0.89
],
"angle": 0,
"content": "Federated learning faces the statistical challenge in real-world scenarios. The IID data makes the stochastic gradient as an unbiased estimate of the full gradient (McMahan et al. 2017). However, the clients are typically highly heterogeneous with various kinds of non-IID settings, such as"
}
],
[
{
"type": "image",
"bbox": [
0.123,
0.071,
0.441,
0.24
],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [
0.103,
0.257,
0.459,
0.273
],
"angle": 0,
"content": "Figure 2: Test accuracy on a randomly selected client."
},
{
"type": "image",
"bbox": [
0.097,
0.289,
0.279,
0.417
],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [
0.28,
0.289,
0.468,
0.415
],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [
0.082,
0.429,
0.48,
0.5
],
"angle": 0,
"content": "Figure 3: Plain training and adversarial training under non-IID setting. Compared with plainly trained situation, the aggregation of adversarially trained models can lead to a more biased model which enlarges accuracy gap. Consequently, it results in poor consistency between different clients."
},
{
"type": "text",
"bbox": [
0.082,
0.513,
0.48,
0.569
],
"angle": 0,
"content": "label skewness and feature skewness (Li et al. 2021b). According to previous studies (Wang et al. 2020; Karimireddy et al. 2020), the non-IID data settings can degrade the effectiveness of the deployed model."
},
{
"type": "text",
"bbox": [
0.082,
0.57,
0.48,
0.736
],
"angle": 0,
"content": "Similarly, due to the non-IID data, the performance of AT may vary widely across clients. To better understand the challenge of adversarial training with non-IID data, we examine the performance of both clean accuracy and robustness on a randomly selected client and report the results in Fig. 2. Observed from Fig. 2, we can find that: 1) \\( A_{cln} \\) on the plainly trained model drops from majority classes to minority classes, which is exactly what traditional imbalanced learning attempts to solve; 2) A similar decreasing tendency reasonably occurs in \\( A_{rob} \\). It is obvious that adopting adversarial training in federated learning with non-IID data is more challenging."
},
{
"type": "text",
"bbox": [
0.082,
0.737,
0.481,
0.89
],
"angle": 0,
"content": "According to above observations, we conjecture that AT-trained local models with imbalanced data lead to a more biased decision boundary than plainly trained ones. Since adversarial examples need a larger number of epochs to achieve near-zero error (Zhang et al. 2021), it becomes harder to fit adversarial examples than clean data. However, for the local client itself, imbalanced clean data generates imbalanced adversarial examples, making it more difficult for training and enlarging the accuracy gap, which can reduce the performance both in accuracy and robustness. In Fig. 3, we also show the differences between plain train"
},
{
"type": "image",
"bbox": [
0.537,
0.067,
0.895,
0.164
],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [
0.515,
0.174,
0.915,
0.314
],
"angle": 0,
"content": "Figure 4: Left panel: Decision boundary of plainly trained model. Middle panel: Decision boundary of AT-trained model. Right panel: Decision boundary of DBFAT-trained model. We use the dotted line to represent the boundary of the clean model, and solid line to represent the boundary of the robust model. The size of the shape represents the value of the weight. Those samples that are close to far from boundary are assigned larger/smaller weight. The decision boundary of DBFAT-trained model (see the right sub-figure) can achieve a higher \\( A_{rob} \\) and meanwhile maintain \\( A_{cln} \\)."
},
{
"type": "text",
"bbox": [
0.515,
0.334,
0.913,
0.432
],
"angle": 0,
"content": "ing and adversarial training in federated settings. Compared with the plainly trained models, the aggregation of adversarially trained models can enlarge the accuracy gap, which results in poor consistency between different clients. To overcome this problem, we propose a novel method to utilize local re-weighting and global regularization to improve both the accuracy and robustness of FL systems."
},
{
"type": "title",
"bbox": [
0.658,
0.446,
0.771,
0.464
],
"angle": 0,
"content": "Methodology"
},
{
"type": "text",
"bbox": [
0.515,
0.467,
0.913,
0.62
],
"angle": 0,
"content": "The generalization performance of a neural network is closely related to its decision boundary. However, models trained in the federated setting are biased compared with the centrally trained models. This is mainly caused by heterogeneous data and objective inconsistency between clients (Kairouz 2021). Moreover, a highly skewed data distribution can lead to an extremely biased boundary (Wang et al. 2020). We tackle this problem in two ways: 1) locally, we take full advantage of the limited data on the distributed client; 2) globally, we utilize the information obtained from the global model to alleviate the biases between clients."
},
{
"type": "text",
"bbox": [
0.515,
0.621,
0.914,
0.746
],
"angle": 0,
"content": "Subsequently, we propose a simple yet effective approach called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components. For local training, we re-weight adversarial examples to improve robustness; while for global aggregation, we utilize the global model to regularize the accuracy for a lower boundary error \\( A_{bdy} \\). We show the training process of DBFAT in the supplementary and illustrate an example of the decision boundary of our approach in Fig. 4."
},
{
"type": "title",
"bbox": [
0.517,
0.758,
0.768,
0.774
],
"angle": 0,
"content": "Re-weighting with Limited Data"
},
{
"type": "text",
"bbox": [
0.515,
0.778,
0.913,
0.89
],
"angle": 0,
"content": "Adversarial examples have the ability to approximately measure the distances from original inputs to a classifier's decision boundary (Heo et al. 2018), which can be calculated by the least number of steps that iterative attack (e.g. PGD attack (Madry et al. 2017)) needs in order to find its misclassified adversarial variant. To better utilize limited adversarial examples, we attempt to re-weight the adversarial examples to guide adversarial training. For clean examples that"
}
],
[
{
"type": "table_caption",
"bbox": [
0.084,
0.066,
0.48,
0.094
],
"angle": 0,
"content": "Table 3: Loss functions of different adversarial training methods."
},
{
"type": "table",
"bbox": [
0.088,
0.106,
0.481,
0.192
],
"angle": 0,
"content": "<table><tr><td>Defense</td><td>Loss Function</td></tr><tr><td>PGD_AT</td><td>CE (f (xadv), y)</td></tr><tr><td>ALP</td><td>CE (f (xadv), y) + β · ||f (xadv) - f (x)||2</td></tr><tr><td>TRADES</td><td>CE (f (x), y) + β · KL (f (xadv) ||f (x))</td></tr><tr><td>MMA</td><td>CE (f (xadv), y) · R(hθ(x) = y) + CE (f (x), y) · R(hθ(x) ≠ y)</td></tr><tr><td>AVMixup</td><td>CE (f (xadv), yadv)</td></tr><tr><td>DBFAT(ours)</td><td>ρ · CE(f(xadv), y) + β · KL (f (xadv) ||f glo (x))</td></tr></table>"
},
{
"type": "text",
"bbox": [
0.082,
0.212,
0.48,
0.255
],
"angle": 0,
"content": "are close to the decision boundary, we assign larger weights; while those examples that are far from the boundary are assigned with smaller weights."
},
{
"type": "text",
"bbox": [
0.082,
0.255,
0.48,
0.31
],
"angle": 0,
"content": "In this paper, we use PGD- \\(S\\) to approximately measure the geometric distance to the decision boundary, \\(S\\) denotes the number of maximum iteration. We generate adversarial examples as follows (Madry et al. 2017):"
},
{
"type": "equation",
"bbox": [
0.095,
0.311,
0.479,
0.331
],
"angle": 0,
"content": "\\[\nx ^ {a d v} \\leftarrow \\Pi_ {\\mathcal {B} [ x, \\epsilon ]} \\left(x ^ {a d v} + \\alpha \\cdot \\operatorname {s i g n} \\left(\\nabla_ {x ^ {a d v}} \\ell \\left(x ^ {a d v}, y\\right)\\right)\\right). \\tag {4}\n\\]"
},
{
"type": "text",
"bbox": [
0.082,
0.332,
0.48,
0.373
],
"angle": 0,
"content": "Here \\(\\Pi_{\\mathcal{B}[x,\\epsilon]}\\) is the projection function that projects the adversarial data back into the \\(\\epsilon\\)-ball centered at natural data, \\(\\alpha\\) is the steps size, \\(\\epsilon\\) is perturbation bound."
},
{
"type": "text",
"bbox": [
0.082,
0.373,
0.481,
0.429
],
"angle": 0,
"content": "We find the minimum step \\(d\\), such that after \\(d\\) step of PGD, the adversarial variant can be misclassified by the network, i.e., \\(\\arg \\max_{c} f^{(c)}(x^{adv}) \\neq y\\), where \\(f^{(c)}(x^{adv})\\) is the logits of the \\(c\\)-th label."
},
{
"type": "text",
"bbox": [
0.084,
0.429,
0.48,
0.459
],
"angle": 0,
"content": "In this way, given a mini-batch samples \\(\\{(x_i,y_i)\\}_{i = 1}^m\\), then the weight list \\(\\rho\\) can be formulated as:"
},
{
"type": "equation",
"bbox": [
0.207,
0.461,
0.48,
0.494
],
"angle": 0,
"content": "\\[\n\\rho \\leftarrow 1 - \\left\\{\\frac {d _ {i}}{\\sum_ {i = 1} ^ {m} d _ {i}} \\right\\}. \\tag {5}\n\\]"
},
{
"type": "title",
"bbox": [
0.084,
0.501,
0.35,
0.517
],
"angle": 0,
"content": "Regularization with Global Model"
},
{
"type": "text",
"bbox": [
0.082,
0.518,
0.48,
0.631
],
"angle": 0,
"content": "Early work (Zhang et al. 2019; Cui et al. 2021) claims that there exists a trade-off between accuracy and robustness, standard adversarial training can hurt accuracy. To achieve a lower boundary error \\( A_{bdy} \\), we take advantage of logits from the global model \\( f^{glo} \\), which is trained after aggregation. Particularly, in federated learning, the model owns the information obtained from the averaged parameters on distributed clients."
},
{
"type": "text",
"bbox": [
0.082,
0.631,
0.48,
0.687
],
"angle": 0,
"content": "Let \\( f^{loc} \\) denote the adversarially trained model at each local client, \\( f^{glo} \\) has the most desirable classifier boundary for natural data. Then we can modify the local objective mentioned in Equation 3 as below:"
},
{
"type": "equation",
"bbox": [
0.092,
0.7,
0.478,
0.75
],
"angle": 0,
"content": "\\[\n\\min _ {\\text {f o r r o b u s t n e s s}} \\underbrace {\\ell_ {c e} (\\rho \\cdot f ^ {l o c} (x ^ {a d v}) , y)} _ {f o r a c c u r a c y r e g u l a r i z a t i o n} + \\beta \\cdot \\underbrace {\\ell_ {k l} \\left(f ^ {l o c} (x ^ {a d v}) , f ^ {g l o} (x)\\right)} _ {\\text {f o r a c c u r a c y r e g u l a r i z a t i o n}}. \\tag {6}\n\\]"
},
{
"type": "text",
"bbox": [
0.082,
0.751,
0.48,
0.847
],
"angle": 0,
"content": "Where \\(\\ell_{ce}\\) denotes the cross-entropy loss to improve the robustness, and \\(\\ell_{kl}\\) is the KL divergence loss to constrain the logits of global model and local model. Here, \\(\\ell_{kl}\\) appears as an additional regularization term, which is designed to reduce the boundary error \\(A_{bdy} = A_{cln} - A_{rob}\\). Additionally, \\(\\rho\\) is the weight calculated by Equation 5, \\(\\beta\\) is the parameter to be tuned."
},
{
"type": "text",
"bbox": [
0.082,
0.847,
0.48,
0.89
],
"angle": 0,
"content": "To show the difference between our DBFAT and existing defense methods, we list the loss functions of different adversarial training methods in Table 3."
},
{
"type": "image",
"bbox": [
0.522,
0.066,
0.916,
0.202
],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [
0.516,
0.213,
0.915,
0.284
],
"angle": 0,
"content": "Figure 5: Visualizations of IID and non-IID distribution (Dirichlet sampled and Sharding) across 5 clients on CIFAR10 dataset. Shards_5 is a type of non-IID setting, in which each client has five categories of data (McMahan et al. 2017). From left to right: client ID number #1-5."
},
{
"type": "title",
"bbox": [
0.622,
0.297,
0.808,
0.314
],
"angle": 0,
"content": "Experimental Results"
},
{
"type": "title",
"bbox": [
0.517,
0.319,
0.676,
0.336
],
"angle": 0,
"content": "Experimental Setup"
},
{
"type": "text",
"bbox": [
0.516,
0.339,
0.913,
0.424
],
"angle": 0,
"content": "Following the previous work of FL (McMahan et al. 2017), we distribute training data among 100 clients in both IID and non-IID fashion. For each communication round, we randomly select 10 clients to average the model parameters. All experiments are conducted with 8 Tesla V100 GPUs. More details can be referred to the supplemental material."
},
{
"type": "text",
"bbox": [
0.516,
0.433,
0.914,
0.587
],
"angle": 0,
"content": "Datasets In this section, we show that DBFAT improves the robust generalization and meanwhile maintains a high accuracy with extensive experiments on benchmark CV datasets, including MNIST (Lecun et al. 1998), FashionMNIST (Xiao, Rasul, and Vollgraf 2017) (FMNIST), CIFAR10 (Krizhevsky and Hinton 2009), CIFAR100 (Krizhevsky and Hinton 2009), Tiny-ImageNet (Le and Yang 2015), and ImageNet-12 (Deng et al. 2009). The ImageNet-12 is generated via (Li et al. 2021c), which consists of 12 classes. We resize the original image with size \\(224*224*3\\) to \\(64*64*3\\) for fast training."
},
{
"type": "text",
"bbox": [
0.516,
0.596,
0.915,
0.89
],
"angle": 0,
"content": "Data partitioning In the federated learning setup, we evaluate all algorithms on two types of non-IID data partitioning: Dirichlet sampled data and Sharding. For Dirichlet sampled data, each local client is allocated with a proportion of the samples of each label according to Dirichlet distribution (Li et al. 2020). Specifically, we follow the setting in (Yurochkin et al. 2019), for each label \\( c \\), we sample \\( p_c \\sim \\mathrm{Dir}_J(0.5) \\) and allocate \\( p_{c,j} \\) proportion of the whole dataset of label \\( c \\) to client \\( j \\). In this setting, some clients may entirely have no examples of a subset of classes. For Sharding (McMahan et al. 2017), each client owns data samples of a fixed number of labels. Let \\( K \\) be the number of total clients, and \\( q \\) is the number of labels we assign to each client. We divide the dataset by label into \\( K * q \\) shards, and the amount of samples in each shard is \\( \\frac{n}{K \\cdot q} \\). We denote this distribution as shards \\( _q \\), where \\( q \\) controls the level of difficulty. If \\( q \\) is set to a smaller value, then the partition is more unbalanced. An example of these partitioning strategies is shown in Fig. 5, in which we visualize IID and non-IID distribution (Dirichlet sampled with \\( p_c \\sim \\mathrm{Dir}_J(0.5) \\) and Sharding with shards_5) on five randomly selected clients."
}
],
[
{
"type": "table_caption",
"bbox": [
0.083,
0.065,
0.916,
0.109
],
"angle": 0,
"content": "Table 4: Accuracy and adversarial robustness on MNIST, FMNIST and CIFAR10 under both IID and non-IID distribution. An empirical study of FedAvg combined with several defense methods, more detailed comparisons are reported in the supplementary (Section B). Our method significantly outperforms other baselines."
},
{
"type": "table",
"bbox": [
0.14,
0.12,
0.862,
0.372
],
"angle": 0,
"content": "<table><tr><td>Type</td><td></td><td colspan=\"6\">IID</td><td colspan=\"6\">Non-IID</td></tr><tr><td>Dataset</td><td>Method</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td></tr><tr><td rowspan=\"6\">MNIST</td><td>Plain</td><td>99.01</td><td>28.35</td><td>8.65</td><td>5.29</td><td>3.84</td><td>3.02</td><td>98.45</td><td>11.78</td><td>14.06</td><td>8.44</td><td>9.51</td><td>7.45</td></tr><tr><td>PGD_AT</td><td>98.52</td><td>76.01</td><td>60.18</td><td>54.50</td><td>55.23</td><td>50.43</td><td>97.82</td><td>67.58</td><td>52.89</td><td>48.03</td><td>47.43</td><td>43.75</td></tr><tr><td>ALP</td><td>98.46</td><td>57.37</td><td>55.61</td><td>48.74</td><td>51.17</td><td>44.25</td><td>97.92</td><td>46.49</td><td>51.01</td><td>46.41</td><td>46.24</td><td>41.95</td></tr><tr><td>TRADES</td><td>97.89</td><td>76.79</td><td>63.29</td><td>58.25</td><td>57.24</td><td>53.72</td><td>92.03</td><td>48.45</td><td>51.56</td><td>47.21</td><td>45.81</td><td>42.36</td></tr><tr><td>AVMixup</td><td>98.63</td><td>61.41</td><td>53.34</td><td>42.33</td><td>46.95</td><td>37.78</td><td>97.47</td><td>56.50</td><td>51.86</td><td>46.28</td><td>44.46</td><td>41.84</td></tr><tr><td>Ours</td><td>98.86</td><td>78.06</td><td>70.97</td><td>68.39</td><td>63.09</td><td>59.39</td><td>97.95</td><td>68.54</td><td>54.18</td><td>50.33</td><td>49.12</td><td>44.32</td></tr><tr><td 
rowspan=\"6\">FMNIST</td><td>Plain</td><td>88.50</td><td>17.89</td><td>3.55</td><td>2.57</td><td>0.40</td><td>0.17</td><td>84.60</td><td>17.86</td><td>3.25</td><td>2.93</td><td>3.05</td><td>-1.40</td></tr><tr><td>PGD_AT</td><td>76.05</td><td>68.53</td><td>65.24</td><td>65.40</td><td>64.26</td><td>60.89</td><td>72.93</td><td>60.11</td><td>54.42</td><td>54.33</td><td>52.19</td><td>49.88</td></tr><tr><td>ALP</td><td>75.99</td><td>67.31</td><td>63.66</td><td>63.79</td><td>61.55</td><td>59.19</td><td>75.34</td><td>57.67</td><td>53.37</td><td>55.11</td><td>51.12</td><td>51.04</td></tr><tr><td>TRADES</td><td>78.13</td><td>59.33</td><td>52.65</td><td>52.78</td><td>51.44</td><td>48.78</td><td>74.93</td><td>56.53</td><td>44.01</td><td>44.01</td><td>31.80</td><td>39.61</td></tr><tr><td>AVMixup</td><td>79.34</td><td>61.22</td><td>54.93</td><td>54.67</td><td>49.48</td><td>50.07</td><td>72.06</td><td>56.26</td><td>49.21</td><td>49.72</td><td>47.99</td><td>45.15</td></tr><tr><td>Ours</td><td>81.49</td><td>69.23</td><td>66.22</td><td>66.24</td><td>65.71</td><td>61.49</td><td>76.19</td><td>63.11</td><td>56.45</td><td>58.31</td><td>56.96</td><td>53.91</td></tr><tr><td 
rowspan=\"6\">CIFAR10</td><td>Plain</td><td>78.80</td><td>6.87</td><td>1.15</td><td>1.06</td><td>1.30</td><td>1.23</td><td>61.10</td><td>7.58</td><td>2.94</td><td>2.67</td><td>2.87</td><td>1.28</td></tr><tr><td>PGD_AT</td><td>58.75</td><td>30.62</td><td>27.23</td><td>26.11</td><td>28.47</td><td>22.09</td><td>15.27</td><td>13.27</td><td>13.00</td><td>13.00</td><td>12.99</td><td>8.63</td></tr><tr><td>ALP</td><td>63.23</td><td>29.42</td><td>26.75</td><td>28.49</td><td>28.13</td><td>23.97</td><td>32.91</td><td>21.41</td><td>20.26</td><td>20.19</td><td>17.74</td><td>15.83</td></tr><tr><td>TRADES</td><td>68.58</td><td>31.53</td><td>25.92</td><td>25.49</td><td>23.07</td><td>20.89</td><td>46.30</td><td>24.81</td><td>22.20</td><td>22.05</td><td>19.59</td><td>17.85</td></tr><tr><td>AVMixup</td><td>70.28</td><td>29.51</td><td>26.22</td><td>26.34</td><td>24.07</td><td>22.25</td><td>48.23</td><td>25.29</td><td>21.42</td><td>24.25</td><td>20.25</td><td>19.43</td></tr><tr><td>Ours</td><td>72.21</td><td>31.47</td><td>28.57</td><td>29.03</td><td>29.31</td><td>24.25</td><td>52.24</td><td>27.03</td><td>24.12</td><td>27.02</td><td>22.13</td><td>21.20</td></tr></table>"
},
{
"type": "text",
"bbox": [
0.082,
0.396,
0.48,
0.549
],
"angle": 0,
"content": "MNIST and FMNIST setup We use a simple CNN with two convolutional layers, followed by two fully connected layers. Following the setting used in (Goodfellow, Shlens, and Szegedy 2014), for MNIST, we set perturbation bound \\(\\epsilon = 0.3\\), and step size \\(\\alpha = 0.01\\), and apply adversarial attacks for 20 iterations. For FMNIST, we set perturbation bound \\(\\epsilon = 32/255\\), and step size \\(\\alpha = 0.031\\), we adversarially train the network for 10 steps and apply adversarial attacks for 20 iterations. Due to the simplicity of MNIST and FMNIST, we mainly use non-IID data (Sharding), which is hard to train."
},
{
"type": "text",
"bbox": [
0.082,
0.566,
0.481,
0.651
],
"angle": 0,
"content": "CIFAR10, CIFAR100, Tiny-ImageNet and ImageNet-12 setup We apply a larger CNN architecture, and follow the setting used in (Madry et al. 2017), i.e., we set the perturbation bound \\(\\epsilon = 0.031\\), step size \\(\\alpha = 0.007\\). To evaluate the robustness, we conduct extensive experiments with various data partitioning."
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "text",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.082,
|
| 1040 |
+
0.667,
|
| 1041 |
+
0.481,
|
| 1042 |
+
0.89
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "Baselines For attack methods, we perform five popular attacks including FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017) and AA (Croce and Hein 2020). We further use Square (Andriushchenko et al. 2020) for black-box attack. To investigate the effectiveness of existing FL algorithms, we implement FedAvg(McMahan et al. 2017), FedProx(Li et al. 2018), FedNova(Wang et al. 2020) and Scaffold(Karimireddy et al. 2020). To defend against adversarial attacks, we implement four most prevailing methods including PGD_AT(Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020) and AVMixup (Lee, Lee, and Yoon 2020). We compare the performance of our DBFAT with various kinds of defense methods combined with FL methods."
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "image_caption",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.518,
|
| 1051 |
+
0.395,
|
| 1052 |
+
0.771,
|
| 1053 |
+
0.412
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "Convergence For Local Training"
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "image",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.526,
|
| 1062 |
+
0.434,
|
| 1063 |
+
0.905,
|
| 1064 |
+
0.533
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": null
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "image_caption",
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
0.516,
|
| 1073 |
+
0.545,
|
| 1074 |
+
0.913,
|
| 1075 |
+
0.587
|
| 1076 |
+
],
|
| 1077 |
+
"angle": 0,
|
| 1078 |
+
"content": "Figure 6: Left: Convergence rate for different local epochs. Right: Training curves of FedAvg combined with different AT methods."
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"bbox": [
|
| 1083 |
+
0.516,
|
| 1084 |
+
0.605,
|
| 1085 |
+
0.914,
|
| 1086 |
+
0.759
|
| 1087 |
+
],
|
| 1088 |
+
"angle": 0,
|
| 1089 |
+
"content": "To show the convergence rate of DBFAT, we use the Dirichlet sampled CIFAR10 dataset, where each client owns 500 samples from 5 classes. Fig. 6 (left sub-figure) shows the impact of local epoch \\( E \\) during adversarial training. Indeed, for a very small epoch (e.g., \\( E = 2 \\)), it has an extremely slow convergence rate, which may incur more communications. Besides, a large epoch (e.g., \\( E = 20 \\)) also leads to a slow convergence, as model may overfit to the local data. Considering both the communication cost and convergence issues, we set \\( E = 5 \\) in our experiments, which can maintain a proper communication efficiency and fast convergence."
|
| 1090 |
+
},
|
| 1091 |
+
{
|
| 1092 |
+
"type": "title",
|
| 1093 |
+
"bbox": [
|
| 1094 |
+
0.517,
|
| 1095 |
+
0.771,
|
| 1096 |
+
0.741,
|
| 1097 |
+
0.785
|
| 1098 |
+
],
|
| 1099 |
+
"angle": 0,
|
| 1100 |
+
"content": "Effectiveness of Our Method"
|
| 1101 |
+
},
|
| 1102 |
+
{
|
| 1103 |
+
"type": "text",
|
| 1104 |
+
"bbox": [
|
| 1105 |
+
0.516,
|
| 1106 |
+
0.791,
|
| 1107 |
+
0.913,
|
| 1108 |
+
0.861
|
| 1109 |
+
],
|
| 1110 |
+
"angle": 0,
|
| 1111 |
+
"content": "We verify the effectiveness of our method compared with several adversarial training techniques on Dirichlet sampled CIFAR10. Evaluation of model robustness is averaged under four attacks using the same setting for a fair comparison and all defense methods are combined with FedAvg."
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "text",
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
0.517,
|
| 1117 |
+
0.862,
|
| 1118 |
+
0.913,
|
| 1119 |
+
0.89
|
| 1120 |
+
],
|
| 1121 |
+
"angle": 0,
|
| 1122 |
+
"content": "To show the differences between DBFAT and above mentioned defense methods, we report the training curves on"
|
| 1123 |
+
}
|
| 1124 |
+
],
|
| 1125 |
+
[
|
| 1126 |
+
{
|
| 1127 |
+
"type": "table_caption",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.185,
|
| 1130 |
+
0.066,
|
| 1131 |
+
0.812,
|
| 1132 |
+
0.082
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "Table 5: Accuracy and adversarial robustness on CIFAR100, Tiny-ImageNet, and ImageNet-12."
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "table",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
0.139,
|
| 1141 |
+
0.092,
|
| 1142 |
+
0.859,
|
| 1143 |
+
0.194
|
| 1144 |
+
],
|
| 1145 |
+
"angle": 0,
|
| 1146 |
+
"content": "<table><tr><td>Dataset</td><td colspan=\"4\">CIFAR100</td><td colspan=\"4\">Tiny-ImageNet</td><td colspan=\"4\">ImageNet-12</td></tr><tr><td>Method</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td></tr><tr><td>PGD_AT</td><td>39.32</td><td>16.07</td><td>14.36</td><td>23.44</td><td>26.33</td><td>12.26</td><td>10.26</td><td>13.54</td><td>37.42</td><td>22.61</td><td>18.30</td><td>25.57</td></tr><tr><td>ALP</td><td>41.12</td><td>18.46</td><td>14.78</td><td>24.54</td><td>32.78</td><td>14.62</td><td>12.19</td><td>16.48</td><td>54.96</td><td>24.78</td><td>19.57</td><td>27.73</td></tr><tr><td>TRADES</td><td>43.39</td><td>20.05</td><td>16.85</td><td>26.43</td><td>37.81</td><td>15.49</td><td>13.26</td><td>19.38</td><td>58.82</td><td>25.49</td><td>21.81</td><td>28.96</td></tr><tr><td>AVMixup</td><td>46.64</td><td>23.56</td><td>19.46</td><td>29.16</td><td>36.19</td><td>15.28</td><td>13.18</td><td>19.25</td><td>59.63</td><td>25.81</td><td>21.92</td><td>29.28</td></tr><tr><td>Ours</td><td>48.31</td><td>24.47</td><td>22.46</td><td>31.57</td><td>38.24</td><td>16.17</td><td>13.96</td><td>20.26</td><td>61.38</td><td>26.47</td><td>22.08</td><td>30.91</td></tr></table>"
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "table_caption",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
0.094,
|
| 1152 |
+
0.203,
|
| 1153 |
+
0.468,
|
| 1154 |
+
0.218
|
| 1155 |
+
],
|
| 1156 |
+
"angle": 0,
|
| 1157 |
+
"content": "Table 6: Ablation Study by cutting off different modules."
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "table",
|
| 1161 |
+
"bbox": [
|
| 1162 |
+
0.089,
|
| 1163 |
+
0.23,
|
| 1164 |
+
0.486,
|
| 1165 |
+
0.32
|
| 1166 |
+
],
|
| 1167 |
+
"angle": 0,
|
| 1168 |
+
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">CIFAR10</td><td colspan=\"2\">FMNIST</td></tr><tr><td>Methods</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>Ours</td><td>52.16</td><td>27.80</td><td>75.89</td><td>59.63</td></tr><tr><td>Ours (w/o re-weighting)</td><td>48.44</td><td>25.89</td><td>72.35</td><td>56.34</td></tr><tr><td>Ours (w/o regularization)</td><td>51.04</td><td>26.84</td><td>73.96</td><td>58.23</td></tr></table>"
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "text",
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
0.082,
|
| 1174 |
+
0.335,
|
| 1175 |
+
0.48,
|
| 1176 |
+
0.516
|
| 1177 |
+
],
|
| 1178 |
+
"angle": 0,
|
| 1179 |
+
"content": "non-IID CIFAR10 dataset in the right sub-figure of Fig. 6. Fig. 6 confirms that our DBFAT achieves the highest clean accuracy. We speculate that this benefit is due to the regularization term and re-weighting strategy introduced in Equation 6. It is worth mentioning that in the training curves, the model trained with PGD_AT performs very poorly. It indicates that standard AT may not be a suitable choice for adversarial robustness in FL, as it only uses cross-entropy loss with adversarial examples, but ignores the negative impact on clean accuracy. We further report the results on various datasets under both IID and non-IID settings in Table 4, which indicates that DBFAT significantly outperforms other methods in terms of both accuracy and robustness."
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "text",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
0.082,
|
| 1185 |
+
0.524,
|
| 1186 |
+
0.48,
|
| 1187 |
+
0.734
|
| 1188 |
+
],
|
| 1189 |
+
"angle": 0,
|
| 1190 |
+
"content": "Performance on large datasets In Table 5, we show the accuracy and robustness of each method on large datasets (e.g., CIFAR100, Tiny-ImageNet, and ImageNet-12). All results are tested under PGD-20 attack (Madry et al. 2017), AutoAttack (Croce and Hein 2020), and Square attack (Andriushchenko et al. 2020) in non-IID settings. From the results reported in Table 5, we can find that our method still outperforms other baselines in terms of both clean accuracy and robustness. Note that our method can achieve the highest accuracy and robustness of \\(61.38\\%\\) and \\(22.08\\%\\) under AutoAttack, respectively. It thus proves that our method can also be used to improve the accuracy and robustness of the model on large datasets. We think that the higher clean accuracy is a result of the regularization term introduced in Equation 6, while maintaining a high robustness."
|
| 1191 |
+
},
|
| 1192 |
+
{
|
| 1193 |
+
"type": "title",
|
| 1194 |
+
"bbox": [
|
| 1195 |
+
0.084,
|
| 1196 |
+
0.745,
|
| 1197 |
+
0.204,
|
| 1198 |
+
0.76
|
| 1199 |
+
],
|
| 1200 |
+
"angle": 0,
|
| 1201 |
+
"content": "Ablation Study"
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "text",
|
| 1205 |
+
"bbox": [
|
| 1206 |
+
0.082,
|
| 1207 |
+
0.763,
|
| 1208 |
+
0.48,
|
| 1209 |
+
0.89
|
| 1210 |
+
],
|
| 1211 |
+
"angle": 0,
|
| 1212 |
+
"content": "Cutting off different modules As part of our ablation study, we first investigate the contributions of different modules introduced in DBFAT. As shown in Table 6, turning off both the re-weighting strategy and regularization term will lead to poor performance, which demonstrates the importance of both modules. Moreover, cut-offing the reweighting strategy can lead to a more severe degradation. We conjecture this is a reasonable phenomenon. As mentioned in Fig. 1, non-IID data can cause a serious accuracy"
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "table_caption",
|
| 1216 |
+
"bbox": [
|
| 1217 |
+
0.516,
|
| 1218 |
+
0.203,
|
| 1219 |
+
0.912,
|
| 1220 |
+
0.232
|
| 1221 |
+
],
|
| 1222 |
+
"angle": 0,
|
| 1223 |
+
"content": "Table 7: Effect of hyper-parameter \\( \\beta \\) . \"Avg \\( {A}_{rob} \\) \" refers to the average robustness under four attacks."
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "table",
|
| 1227 |
+
"bbox": [
|
| 1228 |
+
0.545,
|
| 1229 |
+
0.243,
|
| 1230 |
+
0.888,
|
| 1231 |
+
0.361
|
| 1232 |
+
],
|
| 1233 |
+
"angle": 0,
|
| 1234 |
+
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">MNIST</td><td colspan=\"2\">FMNIST</td></tr><tr><td>β</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>4</td><td>98.30</td><td>26.64</td><td>81.73</td><td>37.36</td></tr><tr><td>2</td><td>98.14</td><td>34.24</td><td>75.59</td><td>47.83</td></tr><tr><td>1.5</td><td>98.46</td><td>53.22</td><td>74.93</td><td>44.08</td></tr><tr><td>1</td><td>97.32</td><td>47.35</td><td>65.43</td><td>42.33</td></tr><tr><td>0.5</td><td>96.57</td><td>44.09</td><td>61.02</td><td>45.28</td></tr></table>"
|
| 1235 |
+
},
|
| 1236 |
+
{
|
| 1237 |
+
"type": "text",
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
0.516,
|
| 1240 |
+
0.375,
|
| 1241 |
+
0.912,
|
| 1242 |
+
0.403
|
| 1243 |
+
],
|
| 1244 |
+
"angle": 0,
|
| 1245 |
+
"content": "reduction. Our re-weighting strategy can alleviate the bias by taking the limited data on each client into account."
|
| 1246 |
+
},
|
| 1247 |
+
{
|
| 1248 |
+
"type": "text",
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
0.516,
|
| 1251 |
+
0.42,
|
| 1252 |
+
0.912,
|
| 1253 |
+
0.628
|
| 1254 |
+
],
|
| 1255 |
+
"angle": 0,
|
| 1256 |
+
"content": "Effects of Regularization The regularization parameter \\(\\beta\\) is an important hyperparameter in our proposed method. We show how the regularization parameter affects the performance of our robust classifiers by numerical experiments on two datasets, MNIST and FMNIST. In Equation 6, \\(\\beta\\) controls the accuracy obtained from the global model, which contains information from distributed clients. Since directly training on adversarial examples could hurt the clean accuracy, here we explore the effects of \\(\\beta\\) on both accuracy and robustness. As shown in Table 7, we report the clean accuracy and robustness by varying the value of \\(\\beta\\). We empirically choose the best \\(\\beta\\) for different datasets. For example, for MNIST, \\(\\beta = 1.5\\) can achieve better accuracy and robustness. For FMNIST, we let \\(\\beta = 2\\) for a proper trade-off in accuracy and robustness."
|
| 1257 |
+
},
|
| 1258 |
+
{
|
| 1259 |
+
"type": "title",
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
0.666,
|
| 1262 |
+
0.652,
|
| 1263 |
+
0.765,
|
| 1264 |
+
0.668
|
| 1265 |
+
],
|
| 1266 |
+
"angle": 0,
|
| 1267 |
+
"content": "Conclusion"
|
| 1268 |
+
},
|
| 1269 |
+
{
|
| 1270 |
+
"type": "text",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
0.516,
|
| 1273 |
+
0.681,
|
| 1274 |
+
0.912,
|
| 1275 |
+
0.889
|
| 1276 |
+
],
|
| 1277 |
+
"angle": 0,
|
| 1278 |
+
"content": "In this paper, we investigate an interesting yet not well explored problem in FL: the robustness against adversarial attacks. We first find that directly adopting adversarial training in federated learning can hurt accuracy significantly especially in non-IID setting. We then propose a novel and effective adversarial training method called DBFAT, which is based on the decision boundary of federated learning, and utilizes local re-weighting and global regularization to improve both accuracy and robustness of FL systems. Comprehensive experiments on various datasets and detailed comparisons with the state-of-the-art adversarial training methods demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. This work would potentially benefit researchers who are interested in adversarial robustness of FL."
|
| 1279 |
+
}
|
| 1280 |
+
],
[
{
"type": "title",
"bbox": [0.235, 0.068, 0.331, 0.083],
"angle": 0,
"content": "References"
},
{
"type": "ref_text",
"bbox": [0.085, 0.086, 0.48, 0.127],
"angle": 0,
"content": "Andriushchenko, M.; Croce, F.; Flamarion, N.; and Hein, M. 2020. Square Attack: a query-efficient black-box adversarial attack via random search. arXiv:1912.00049."
},
{
"type": "ref_text",
"bbox": [0.085, 0.129, 0.48, 0.171],
"angle": 0,
"content": "Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (sp), 39-57. IEEE."
},
{
"type": "ref_text",
"bbox": [0.085, 0.173, 0.478, 0.187],
"angle": 0,
"content": "Chen, C.; Kailkhura, B.; Goldhahn, R.; and Zhou, Y. 2021a."
},
{
"type": "ref_text",
"bbox": [0.085, 0.188, 0.478, 0.215],
"angle": 0,
"content": "Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. arXiv:2103.16031."
},
{
"type": "ref_text",
"bbox": [0.085, 0.217, 0.478, 0.259],
"angle": 0,
"content": "Chen, C.; Liu, Y.; Ma, X.; and Lyu, L. 2022a. CalFAT: Calibrated Federated Adversarial Training with Label Skewness. In Advances in Neural Information Processing Systems."
},
{
"type": "ref_text",
"bbox": [0.085, 0.261, 0.48, 0.316],
"angle": 0,
"content": "Chen, C.; Zhang, J.; and Lyu, L. 2022. Gear: a margin-based federated adversarial training approach. In International Workshop on Trustable, Verifiable, and Auditable Federated Learning in Conjunction with AAAI, volume 2022."
},
{
"type": "ref_text",
"bbox": [0.085, 0.318, 0.478, 0.374],
"angle": 0,
"content": "Chen, Z.; Li, B.; Wu, S.; Xu, J.; Ding, S.; and Zhang, W. 2022b. Shape Matters: Deformable Patch Attack. In Avidan, S.; Brostow, G. J.; Cisse, M.; Farinella, G. M.; and Hassner, T., eds., Computer Vision - ECCV 2022. Springer."
},
{
"type": "ref_text",
"bbox": [0.084, 0.376, 0.478, 0.432],
"angle": 0,
"content": "Chen, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Zhang, W. 2022c. Towards Practical Certifiable Patch Defense With Vision Transformer. In Proceedings of the IEEE/CVF Conference on CVPR, 15148-15158."
},
{
"type": "ref_text",
"bbox": [0.085, 0.434, 0.48, 0.558],
"angle": 0,
"content": "Chen, Z.; Zhu, M.; Yang, C.; and Yuan, Y. 2021b. Personalized Retrogress-Resilient Framework for Real-World Medical Federated Learning. In de Bruijne, M.; Cattin, P. C.; Cotin, S.; Padoy, N.; Speidel, S.; Zheng, Y.; and Essert, C., eds., Medical Image Computing and Computer Assisted Intervention - MICCAI 2021 - 24th International Conference, Strasbourg, France, September 27 - October 1, 2021, Proceedings, Part III, volume 12903 of Lecture Notes in Computer Science, 347-356. Springer."
},
{
"type": "ref_text",
"bbox": [0.085, 0.56, 0.478, 0.601],
"angle": 0,
"content": "Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv:2003.01690."
},
{
"type": "ref_text",
"bbox": [0.085, 0.604, 0.48, 0.658],
"angle": 0,
"content": "Cui, J.; Liu, S.; Wang, L.; and Jia, J. 2021. Learnable boundary guided adversarial training. In Proceedings of the IEEE/CVF international conference on computer vision, 15721-15730."
},
{
"type": "ref_text",
"bbox": [0.085, 0.661, 0.478, 0.717],
"angle": 0,
"content": "Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248-255."
},
{
"type": "ref_text",
"bbox": [0.085, 0.719, 0.478, 0.761],
"angle": 0,
"content": "Ding, G. W.; Sharma, Y.; Lui, K. Y. C.; and Huang, R. 2020. MMA Training: Direct Input Space Margin Maximization through Adversarial Training. arXiv:1812.02637."
},
{
"type": "ref_text",
"bbox": [0.085, 0.763, 0.478, 0.832],
"angle": 0,
"content": "Dong, J.; Cong, Y.; Sun, G.; Fang, Z.; and Ding, Z. 2021. Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1): 1-17."
},
{
"type": "ref_text",
"bbox": [0.085, 0.834, 0.48, 0.89],
"angle": 0,
"content": "Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, 9185-9193."
},
{
"type": "list",
"bbox": [0.084, 0.086, 0.48, 0.89],
"angle": 0,
"content": null
},
{
"type": "ref_text",
"bbox": [0.518, 0.068, 0.913, 0.11],
"angle": 0,
"content": "Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572."
},
{
"type": "ref_text",
"bbox": [0.518, 0.115, 0.913, 0.157],
"angle": 0,
"content": "Heo, B.; Lee, M.; Yun, S.; and Choi, J. Y. 2018. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary. arXiv:1805.05532."
},
{
"type": "ref_text",
"bbox": [0.519, 0.161, 0.913, 0.204],
"angle": 0,
"content": "Hong, J.; Wang, H.; Wang, Z.; and Zhou, J. 2021. Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. arXiv preprint arXiv:2106.10196."
},
{
"type": "ref_text",
"bbox": [0.519, 0.207, 0.913, 0.276],
"angle": 0,
"content": "Huang, R.; Cui, C.; Chen, F.; Ren, Y.; Liu, J.; Zhao, Z.; Huai, B.; and Wang, Z. 2022a. Singgan: Generative adversarial network for high-fidelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, 2525-2535."
},
{
"type": "ref_text",
"bbox": [0.519, 0.281, 0.913, 0.336],
"angle": 0,
"content": "Huang, R.; Lam, M. W.; Wang, J.; Su, D.; Yu, D.; Ren, Y.; and Zhao, Z. 2022b. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. arXiv preprint arXiv:2204.09934."
},
{
"type": "ref_text",
"bbox": [0.519, 0.34, 0.913, 0.369],
"angle": 0,
"content": "Kairouz, P. 2021. Advances and Open Problems in Federated Learning. arXiv:1912.04977."
},
{
"type": "ref_text",
"bbox": [0.519, 0.373, 0.913, 0.402],
"angle": 0,
"content": "Kannan, H.; Kurakin, A.; and Goodfellow, I. 2018. Adversarial logit pairing. arXiv preprint arXiv:1803.06373."
},
{
"type": "ref_text",
"bbox": [0.519, 0.405, 0.913, 0.461],
"angle": 0,
"content": "Karimireddy, S. P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; and Suresh, A. T. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, 5132-5143. PMLR."
},
{
"type": "ref_text",
"bbox": [0.519, 0.465, 0.913, 0.507],
"angle": 0,
"content": "Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario."
},
{
"type": "ref_text",
"bbox": [0.519, 0.511, 0.913, 0.539],
"angle": 0,
"content": "Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533."
},
{
"type": "ref_text",
"bbox": [0.519, 0.543, 0.913, 0.572],
"angle": 0,
"content": "Le, Y.; and Yang, X. 2015. Tiny imagenet visual recognition challenge. CS 231N, 7(7): 3."
},
{
"type": "ref_text",
"bbox": [0.519, 0.576, 0.913, 0.617],
"angle": 0,
"content": "Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278-2324."
},
{
"type": "ref_text",
"bbox": [0.519, 0.622, 0.913, 0.677],
"angle": 0,
"content": "Lee, S.; Lee, H.; and Yoon, S. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 272-281."
},
{
"type": "ref_text",
"bbox": [0.519, 0.681, 0.913, 0.724],
"angle": 0,
"content": "Li, B.; Sun, Z.; and Guo, Y. 2019. SuperVAE: Superpixelwise Variational Autoencoder for Salient Object Detection. In The Thirty-Third AAAI Conference."
},
{
"type": "ref_text",
"bbox": [0.519, 0.728, 0.913, 0.769],
"angle": 0,
"content": "Li, B.; Sun, Z.; Tang, L.; Sun, Y.; and Shi, J. 2019. Detecting Robust Co-Saliency with Recurrent Co-Attention Neural Network. In Kraus, S., ed., IJCAI."
},
{
"type": "ref_text",
"bbox": [0.519, 0.773, 0.913, 0.842],
"angle": 0,
"content": "Li, B.; Xu, J.; Wu, S.; Ding, S.; Li, J.; and Huang, F. 2021a. Detecting Adversarial Patch Attacks through Global-local Consistency. In ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, 35-41. ACM."
},
{
"type": "ref_text",
"bbox": [0.519, 0.847, 0.913, 0.89],
"angle": 0,
"content": "Li, Q.; Diao, Y.; Chen, Q.; and He, B. 2021b. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv preprint arXiv:2102.02079."
},
{
"type": "list",
"bbox": [0.518, 0.068, 0.913, 0.89],
"angle": 0,
"content": null
}
],
[
{
"type": "ref_text",
"bbox": [0.084, 0.069, 0.48, 0.113],
"angle": 0,
"content": "Li, T.; Sahu, A. K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; and Smith, V. 2018. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127."
},
{
"type": "ref_text",
"bbox": [0.085, 0.114, 0.48, 0.157],
"angle": 0,
"content": "Li, X.; Huang, K.; Yang, W.; Wang, S.; and Zhang, Z. 2020. On the Convergence of FedAvg on Non-IID Data. arXiv:1907.02189."
},
{
"type": "ref_text",
"bbox": [0.085, 0.16, 0.48, 0.203],
"angle": 0,
"content": "Li, Y.; Lyu, X.; Koren, N.; Lyu, L.; Li, B.; and Ma, X. 2021c. Anti-backdoor learning: Training clean models on poisoned data. NeurIPS, 34."
},
{
"type": "ref_text",
"bbox": [0.084, 0.206, 0.48, 0.263],
"angle": 0,
"content": "Liang, F.; Pan, W.; and Ming, Z. 2021. FedRec++: Lossless Federated Recommendation with Explicit Feedback. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 4224-4231. AAAI Press."
},
{
"type": "ref_text",
"bbox": [0.084, 0.266, 0.48, 0.35],
"angle": 0,
"content": "Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P. 2021a. FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, 1013-1023. Computer Vision Foundation / IEEE."
},
{
"type": "ref_text",
"bbox": [0.084, 0.354, 0.48, 0.452],
"angle": 0,
"content": "Liu, S.; Xu, S.; Yu, W.; Fu, Z.; Zhang, Y.; and Marian, A. 2021b. FedCT: Federated Collaborative Transfer for Recommendation. In Diaz, F.; Shah, C.; Suel, T.; Castells, P.; Jones, R.; and Sakai, T., eds., SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, 716-725. ACM."
},
{
"type": "ref_text",
"bbox": [0.084, 0.455, 0.48, 0.512],
"angle": 0,
"content": "Lyu, L.; Yu, H.; Ma, X.; Chen, C.; Sun, L.; Zhao, J.; Yang, Q.; and Philip, S. Y. 2022. Privacy and robustness in federated learning: Attacks and defenses. IEEE transactions on neural networks and learning systems."
},
{
"type": "ref_text",
"bbox": [0.084, 0.515, 0.48, 0.559],
"angle": 0,
"content": "Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083."
},
{
"type": "ref_text",
"bbox": [0.084, 0.562, 0.48, 0.618],
"angle": 0,
"content": "McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, 1273-1282. PMLR."
},
{
"type": "ref_text",
"bbox": [0.084, 0.621, 0.48, 0.664],
"angle": 0,
"content": "Shah, D.; Dube, P.; Chakraborty, S.; and Verma, A. 2021. Adversarial training in communication constrained federated learning. arXiv preprint arXiv:2103.01319."
},
{
"type": "ref_text",
"bbox": [0.084, 0.667, 0.48, 0.724],
"angle": 0,
"content": "Wang, C.; Deng, J.; Meng, X.; Wang, Y.; Li, J.; Miao, F.; Rajasekaran, S.; and Ding, C. 2021. A Secure and Efficient Federated Learning Framework for NLP. In EMNLP 2021, 7676-7682. Association for Computational Linguistics."
},
{
"type": "ref_text",
"bbox": [0.084, 0.727, 0.48, 0.783],
"angle": 0,
"content": "Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; and Poor, H. V. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481."
},
{
"type": "ref_text",
"bbox": [0.084, 0.786, 0.48, 0.83],
"angle": 0,
"content": "Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747."
},
{
"type": "ref_text",
"bbox": [0.084, 0.833, 0.48, 0.889],
"angle": 0,
"content": "Yurochkin, M.; Agarwal, M.; Ghosh, S.; Greenewald, K.; Hoang, T. N.; and Khazaeni, Y. 2019. Bayesian Nonparametric Federated Learning of Neural Networks. arXiv:1905.12022."
},
{
"type": "list",
"bbox": [0.084, 0.069, 0.48, 0.889],
"angle": 0,
"content": null
},
{
"type": "ref_text",
"bbox": [0.518,
0.069,
|
| 1839 |
+
0.913,
|
| 1840 |
+
0.126
|
| 1841 |
+
],
|
| 1842 |
+
"angle": 0,
|
| 1843 |
+
"content": "Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; and Jordan, M. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 7472-7482. PMLR."
|
| 1844 |
+
},
|
| 1845 |
+
{
|
| 1846 |
+
"type": "ref_text",
|
| 1847 |
+
"bbox": [
|
| 1848 |
+
0.518,
|
| 1849 |
+
0.128,
|
| 1850 |
+
0.913,
|
| 1851 |
+
0.17
|
| 1852 |
+
],
|
| 1853 |
+
"angle": 0,
|
| 1854 |
+
"content": "Zhang, J.; Chen, C.; Li, B.; Lyu, L.; Wu, S.; Ding, S.; Shen, C.; and Wu, C. 2022a. DENSE: Data-Free One-Shot Federated Learning. In Advances in NeurIPS."
|
| 1855 |
+
},
|
| 1856 |
+
{
|
| 1857 |
+
"type": "ref_text",
|
| 1858 |
+
"bbox": [
|
| 1859 |
+
0.519,
|
| 1860 |
+
0.172,
|
| 1861 |
+
0.914,
|
| 1862 |
+
0.228
|
| 1863 |
+
],
|
| 1864 |
+
"angle": 0,
|
| 1865 |
+
"content": "Zhang, J.; Li, B.; Xu, J.; Wu, S.; Ding, S.; Zhang, L.; and Wu, C. 2022b. Towards Efficient Data Free Black-Box Adversarial Attack. In Proceedings of the IEEE/CVF Conference on CVPR, 15115–15125."
|
| 1866 |
+
},
|
| 1867 |
+
{
|
| 1868 |
+
"type": "ref_text",
|
| 1869 |
+
"bbox": [
|
| 1870 |
+
0.519,
|
| 1871 |
+
0.231,
|
| 1872 |
+
0.914,
|
| 1873 |
+
0.274
|
| 1874 |
+
],
|
| 1875 |
+
"angle": 0,
|
| 1876 |
+
"content": "Zhang, J.; Li, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Wu, C. 2022c. Federated Learning with Label Distribution Skew via Logits Calibration. In Proceedings of the ICML. PMLR."
|
| 1877 |
+
},
|
| 1878 |
+
{
|
| 1879 |
+
"type": "ref_text",
|
| 1880 |
+
"bbox": [
|
| 1881 |
+
0.519,
|
| 1882 |
+
0.276,
|
| 1883 |
+
0.913,
|
| 1884 |
+
0.332
|
| 1885 |
+
],
|
| 1886 |
+
"angle": 0,
|
| 1887 |
+
"content": "Zhang, J.; Zhu, J.; Niu, G.; Han, B.; Sugiyama, M.; and Kankanhalli, M. 2021. Geometry-aware Instance-reweighted Adversarial Training. In International Conference on Learning Representations."
|
| 1888 |
+
},
|
| 1889 |
+
{
|
| 1890 |
+
"type": "ref_text",
|
| 1891 |
+
"bbox": [
|
| 1892 |
+
0.519,
|
| 1893 |
+
0.334,
|
| 1894 |
+
0.913,
|
| 1895 |
+
0.376
|
| 1896 |
+
],
|
| 1897 |
+
"angle": 0,
|
| 1898 |
+
"content": "Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; and Chandra, V. 2018. Federated Learning with Non-IID Data. arXiv:1806.00582."
|
| 1899 |
+
},
|
| 1900 |
+
{
|
| 1901 |
+
"type": "ref_text",
|
| 1902 |
+
"bbox": [
|
| 1903 |
+
0.519,
|
| 1904 |
+
0.378,
|
| 1905 |
+
0.914,
|
| 1906 |
+
0.449
|
| 1907 |
+
],
|
| 1908 |
+
"angle": 0,
|
| 1909 |
+
"content": "Zhou, Y.; Wu, J.; Wang, H.; and He, J. 2022. Adversarial robustness through bias variance decomposition: A new perspective for federated learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2753-2762."
|
| 1910 |
+
},
|
| 1911 |
+
{
|
| 1912 |
+
"type": "ref_text",
|
| 1913 |
+
"bbox": [
|
| 1914 |
+
0.519,
|
| 1915 |
+
0.451,
|
| 1916 |
+
0.915,
|
| 1917 |
+
0.549
|
| 1918 |
+
],
|
| 1919 |
+
"angle": 0,
|
| 1920 |
+
"content": "Zhu, X.; Wang, J.; Hong, Z.; and Xiao, J. 2020. Empirical Studies of Institutional Federated Learning For Natural Language Processing. In Cohn, T.; He, Y.; and Liu, Y., eds., Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, 625-634. Association for Computational Linguistics."
|
| 1921 |
+
},
|
| 1922 |
+
{
|
| 1923 |
+
"type": "ref_text",
|
| 1924 |
+
"bbox": [
|
| 1925 |
+
0.519,
|
| 1926 |
+
0.551,
|
| 1927 |
+
0.913,
|
| 1928 |
+
0.58
|
| 1929 |
+
],
|
| 1930 |
+
"angle": 0,
|
| 1931 |
+
"content": "Zizzo, G.; Rawat, A.; Sinn, M.; and Buesser, B. 2020. FAT: Federated Adversarial Training. arXiv:2012.01791."
|
| 1932 |
+
},
|
| 1933 |
+
{
|
| 1934 |
+
"type": "list",
|
| 1935 |
+
"bbox": [
|
| 1936 |
+
0.518,
|
| 1937 |
+
0.069,
|
| 1938 |
+
0.915,
|
| 1939 |
+
0.58
|
| 1940 |
+
],
|
| 1941 |
+
"angle": 0,
|
| 1942 |
+
"content": null
|
| 1943 |
+
}
|
| 1944 |
+
]
|
| 1945 |
+
]
|
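The entries above follow a simple per-page layout schema (`type`, normalized `bbox`, `angle`, `content`). A minimal sketch of how such a content list might be consumed; `extract_references` is a hypothetical helper of ours, and the two-entry sample below is a shortened stand-in for the full file:

```python
import json

def extract_references(blocks):
    """Collect bibliography strings from a page's layout blocks.

    Each block is a dict like {"type": "ref_text", "bbox": [x0, y0, x1, y1],
    "angle": 0, "content": "..."}; bbox coordinates are normalized to [0, 1],
    and "list" blocks with content == null merely group a column, so they
    are skipped here.
    """
    refs = []
    for block in blocks:
        if block.get("type") == "ref_text" and block.get("content"):
            x0, y0, x1, y1 = block["bbox"]
            assert 0.0 <= x0 <= x1 <= 1.0 and 0.0 <= y0 <= y1 <= 1.0
            refs.append(block["content"])
    return refs

# A shortened, two-entry sample in the same schema as the file above.
blocks = json.loads("""[
  {"type": "ref_text", "bbox": [0.084, 0.455, 0.48, 0.512], "angle": 0,
   "content": "Lyu, L.; et al. 2022. Privacy and robustness in federated learning."},
  {"type": "list", "bbox": [0.084, 0.069, 0.48, 0.889], "angle": 0, "content": null}
]""")
print(extract_references(blocks))  # only the ref_text entry survives
```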
2302.09xxx/2302.09479/45788fdf-ede0-4bf6-9d03-22c2914ea5db_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d9a7df94dd1b2503f5716537a2e98402e67b3fb2ab1c857659610532e154b35
size 1480755
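The three lines of that file form a standard Git LFS pointer: the large PDF lives in LFS storage, and only this stub is committed. A small parsing sketch (our illustration; `parse_lfs_pointer` is a hypothetical helper, not part of any tool):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value lines.

    A pointer carries a `version` URL, an `oid` (hash algorithm plus digest
    of the real file), and a `size` in bytes.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6d9a7df94dd1b2503f5716537a2e98402e67b3fb2ab1c857659610532e154b35
size 1480755
"""
info = parse_lfs_pointer(pointer)
print(info["oid"].split(":", 1)[0], int(info["size"]))  # hash algorithm, byte size
```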
2302.09xxx/2302.09479/full.md ADDED
@@ -0,0 +1,289 @@
# Delving into the Adversarial Robustness of Federated Learning

Jie Zhang $^{1*}$ Bo Li $^{2*‡}$ Chen Chen $^{3}$ Lingjuan Lyu $^{3‡}$ Shuang Wu $^{2}$ Shouhong Ding $^{2}$ Chao Wu $^{1‡}$

$^{1}$ Zhejiang University $^{2}$ Youtu Lab, Tencent $^{3}$ Sony AI

{zj_zhangjie, chao.wu}@zju.edu.cn

{libraboli, calvinwu, ericshding}@tencent.com, {chen.chen, LingjuanLv}@sony.com

# Abstract

In Federated Learning (FL), models are as fragile as centrally trained models against adversarial examples. However, the adversarial robustness of federated learning remains largely unexplored. This paper casts light on the challenge of adversarial robustness of federated learning. To facilitate a better understanding of the adversarial vulnerability of the existing FL methods, we conduct comprehensive robustness evaluations on various attacks and adversarial training methods. Moreover, we reveal the negative impacts induced by directly adopting adversarial training in FL, which seriously hurts the test accuracy, especially in non-IID settings. In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both accuracy and robustness of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings.
# Introduction

Nowadays, end devices are generating massive amounts of potentially sensitive user data, raising practical concerns over security and privacy. Federated Learning (FL) (McMahan et al. 2017) emerges as a privacy-aware learning paradigm that allows multiple clients to collaboratively train neural networks without revealing their raw data. Recently, FL has attracted increasing attention from different areas, including medical image analysis (Liu et al. 2021a; Chen et al. 2021b), recommender systems (Liang, Pan, and Ming 2021; Liu et al. 2021b), and natural language processing (Zhu et al. 2020; Wang et al. 2021).

Prior studies have demonstrated that neural networks are vulnerable to evasion attacks by adversarial examples (Goodfellow, Shlens, and Szegedy 2014) at inference time. The goal of an inference-time adversarial attack (Li et al. 2021a; Chen et al. 2022c; Zhang et al. 2022b; Chen et al. 2022b) is to damage the global model by adding a carefully generated imperceptible perturbation to the test examples. As shown in Table 1, federated models are as fragile to adversarial examples as centrally trained models (i.e., zero accuracy under the PGD-40 attack (Madry et al. 2017)). Hence, it is also important to consider how to defend against adversarial attacks in federated learning.

Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

*Equal contribution. Work done during Jie Zhang's internship at Tencent Youtu Lab and partly done at Sony AI.

$\ddagger$ Corresponding author.
There are several works that aim to deal with adversarial attacks in FL (Zhang et al. 2022c,a), i.e., federated adversarial training (FAT) (Zizzo et al. 2020; Hong et al. 2021; Shah et al. 2021; Chen, Zhang, and Lyu 2022; Chen et al. 2022a). Zizzo et al. (2020) and Hong et al. (2021) proposed conducting adversarial training (AT) on a proportion of clients while conducting plain training on the others. Shah et al. (2021) investigated the impact of the number of local training rounds in FAT. Nevertheless, these methods all overlook the issue that the clean accuracy of federated adversarial training is very low.

To further expose the problems of federated adversarial training, we begin with a comparison between plainly trained models and AT-trained (Madry et al. 2017) models in both the IID (Independent and Identically Distributed) and non-IID FL settings, measured by clean accuracy $A_{cln}$ and robust accuracy $A_{rob}$, respectively. We show the test accuracy of plain training and adversarial training (AT) on the CIFAR10 dataset under both IID and non-IID FL settings in Fig. 1 (left sub-figure). We summarize some valuable observations as follows: 1) Compared with plainly trained models, AT-trained models achieve lower accuracy, which indicates that directly adopting adversarial training in FL can hurt $A_{cln}$; 2) $A_{cln}$ drops heavily for both plainly trained and AT-trained models under the non-IID distribution, which is exactly the challenge that typical federated learning with heterogeneous data encounters (Zhao et al. 2018); 3) The performance of AT-trained models under the non-IID data distribution decreases significantly compared with the IID data distribution. Motivated by these observations, we focus on improving both the adversarial robustness and clean accuracy of adversarial training in FL, i.e., we aim to increase $A_{cln}$ while keeping $A_{rob}$ as high as possible.

To achieve this goal, we investigate the impact of the decision boundary, which can greatly influence the performance of the model in FAT. Specifically, 1) we apply adversarial training with a re-weighting strategy in the local update to obtain a better $A_{rob}$. Our method takes the limited data of each client into account: samples that are close to the decision boundary are assigned larger weights, while samples far from it are assigned smaller weights. 2) Moreover, since the global model in FL has a more accurate decision boundary through model aggregation, we take advantage of the logits from the global model and introduce a new regularization term to increase $A_{cln}$. This regularization term aims to alleviate the accuracy reduction across distributed clients.

Table 1: The accuracy (%) is tested under the PGD-40 attack (Madry et al. 2017). For MNIST, FMNIST, CIFAR10, ImageNet-12, CIFAR100, and Tiny-ImageNet, the perturbation bounds are $\{0.3, 32/255, 0.031, 0.031, 0.031, 0.031\}$, respectively. $A_{cln}$ and $A_{rob}$ refer to clean accuracy and robust accuracy.

<table><tr><td>Type</td><td>Dataset</td><td>MNIST</td><td>FMNIST</td><td>ImageNet-12</td><td>CIFAR10</td><td>CIFAR100</td><td>Tiny-ImageNet</td></tr><tr><td rowspan="2">Centralized</td><td>\(A_{cln}\)</td><td>99.42</td><td>92.47</td><td>78.96</td><td>94.26</td><td>86.93</td><td>57.93</td></tr><tr><td>\(A_{rob}\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td rowspan="2">Federated</td><td>\(A_{cln}\)</td><td>99.01</td><td>88.51</td><td>71.65</td><td>85.81</td><td>81.28</td><td>49.79</td></tr><tr><td>\(A_{rob}\)</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr></table>

Figure 1: Left: Test accuracy drops for both the plainly trained model and the adversarially trained model under non-IID data; meanwhile, adversarial training hurts performance. Right: Evaluations on CIFAR10 for both accuracy and robustness, including several state-of-the-art defense methods combined with FL. Our method outperforms existing baselines on both metric dimensions.
We summarize our major contributions as follows:

- We conduct systematic studies on the adversarial robustness of FL, and provide valuable observations from extensive experiments.
- We reveal the negative impacts of adopting adversarial training in FL, and then propose an effective algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.
- Extensive experiments on multiple datasets demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. We present the performance of our method in Fig. 1 (right sub-figure), which shows the improvement in both robustness and accuracy of adversarial training in FL.

# Related Works

Federated Learning. Following the success of DNNs in various tasks (Li et al. 2019; Li, Sun, and Guo 2019; ?; Huang et al. 2022b,a; Dong et al. 2021), FL has attracted increasing attention. A recent survey has pointed out that existing FL systems are vulnerable to various attacks that aim to either compromise data privacy or system robustness (Lyu et al. 2022). In particular, robustness attacks can be broadly classified into training-time attacks (data poisoning and model poisoning) and inference-time attacks (evasion attacks, i.e., using adversarial examples to attack the global model during the inference phase). In FL, the architectural design, distributed nature, and data constraints can bring new threats and failures (Kairouz 2021).
Adversarial Attacks. White-box attacks have access to the full details of the threat model, including its parameters and architecture. Goodfellow et al. (Goodfellow, Shlens, and Szegedy 2014) introduced the Fast Gradient Sign Method (FGSM) to generate adversarial examples, which uses a single-step first-order approximation to perform gradient ascent. Kurakin et al. (Kurakin, Goodfellow, and Bengio 2017) iteratively applied FGSM with a small step size to develop a significantly stronger multi-step variant, called Iterative FGSM (I-FGSM). Building on these findings, more powerful attacks have been proposed in recent years, including MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017), and AA (Croce and Hein 2020).

Adversarial Training. Adversarial training has been one of the most effective defense strategies against adversarial attacks. Madry et al. (Madry et al. 2017) formulated adversarial training as a min-max problem using empirical risk minimization under the PGD attack. Kannan et al. (Kannan, Kurakin, and Goodfellow 2018) presented adversarial logit pairing (ALP), a method that encourages the logits of pairs of examples to be similar, to improve robust accuracy. To quantify the trade-off between accuracy and robustness, Zhang et al. (Zhang et al. 2019) introduced the TRADES loss to achieve a tight upper bound on the gap between clean and robust error. Based on margin theory and soft-labeled data augmentation, Ding et al. (Ding et al. 2020) proposed Max-Margin Adversarial (MMA) training and Lee et al. (Lee, Lee, and Yoon 2020) introduced Adversarial Vertex mixup (AVmixup).

Federated Adversarial Training. In terms of adversarial robustness, Zizzo et al. (Zizzo et al. 2020) investigated the effectiveness of the federated adversarial training protocol in idealized federated settings, and showed the performance of their models in a traditional centralized setting and a distributed FL scenario. Zhou et al. (Zhou et al. 2022) decomposed the aggregation error of the central server into bias and variance. However, all these methods sacrifice clean accuracy (compared to plainly trained models) to gain robustness. In addition, certified defense (Chen et al. 2021a) against adversarial examples in FL is another interesting direction, which we leave for future work.

Table 2: An empirical study on the adversarial robustness of FL, measured by various combinations of defense methods and FL algorithms. We report the clean accuracy and robust accuracy, respectively. Best results are in bold.

<table><tr><td>Type</td><td colspan="8">IID</td><td colspan="8">Non-IID</td></tr><tr><td>Methods</td><td colspan="2">FedAvg</td><td colspan="2">FedProx</td><td colspan="2">FedNova</td><td colspan="2">Scaffold</td><td colspan="2">FedAvg</td><td colspan="2">FedProx</td><td colspan="2">FedNova</td><td colspan="2">Scaffold</td></tr><tr><td>Performance</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td><td>Acln</td><td>Arob</td></tr><tr><td>PGD-AT</td><td>57.99</td><td>31.95</td><td>58.17</td><td>32.06</td><td>58.45</td><td>31.74</td><td>56.84</td><td>29.26</td><td>46.84</td><td>26.79</td><td>48.03</td><td>27.46</td><td>46.95</td><td>26.54</td><td>42.44</td><td>27.19</td></tr><tr><td>ALP</td><td>62.81</td><td>31.84</td><td>62.88</td><td>31.20</td><td>62.91</td><td>31.79</td><td>60.30</td><td>29.58</td><td>56.16</td><td>28.78</td><td>55.79</td><td>29.06</td><td>55.80</td><td>29.18</td><td>48.29</td><td>26.56</td></tr><tr><td>TRADES</td><td>64.94</td><td>32.93</td><td>64.29</td><td>32.97</td><td>64.46</td><td>33.29</td><td>63.14</td><td>33.58</td><td>60.94</td><td>27.06</td><td>61.05</td><td>27.94</td><td>60.34</td><td>28.78</td><td>59.53</td><td>27.78</td></tr><tr><td>MMA</td><td>65.14</td><td>30.29</td><td>63.65</td><td>31.29</td><td>65.27</td><td>29.31</td><td>64.28</td><td>32.98</td><td>59.69</td><td>28.64</td><td>60.17</td><td>28.09</td><td>61.03</td><td>28.47</td><td>61.53</td><td>28.13</td></tr><tr><td>AVMixup</td><td>66.14</td><td>32.27</td><td>65.12</td><td>33.19</td><td>65.14</td><td>33.75</td><td>65.11</td><td>33.24</td><td>61.17</td><td>28.56</td><td>61.47</td><td>28.34</td><td>62.04</td><td>28.12</td><td>61.91</td><td>28.81</td></tr></table>
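The single-step FGSM update and its iterative variant described above can be sketched in a few lines. This is our illustrative pure-Python sketch, with a toy gradient standing in for a real network's input gradient; it is not code from the paper:

```python
def sign(v):
    """Elementwise sign of a vector, as used by FGSM-style attacks."""
    return [(x > 0) - (x < 0) for x in v]

def clip(v, lo, hi):
    """Elementwise clip of v between per-coordinate bounds lo and hi."""
    return [min(max(x, l), h) for x, l, h in zip(v, lo, hi)]

def fgsm(x, grad, epsilon):
    """FGSM (Goodfellow et al. 2014): one gradient-ascent step on the loss,
    moving each coordinate by epsilon in the sign of the input gradient."""
    x_adv = [xi + epsilon * s for xi, s in zip(x, sign(grad))]
    return clip(x_adv, [0.0] * len(x), [1.0] * len(x))  # keep valid pixel range

def i_fgsm(x, grad_fn, epsilon, alpha, steps):
    """I-FGSM (Kurakin et al. 2017): repeated small FGSM steps, projected
    back into the epsilon-ball around the original input after each step."""
    x_adv = list(x)
    for _ in range(steps):
        x_adv = [xi + alpha * s for xi, s in zip(x_adv, sign(grad_fn(x_adv)))]
        x_adv = clip(x_adv, [xi - epsilon for xi in x], [xi + epsilon for xi in x])
        x_adv = clip(x_adv, [0.0] * len(x), [1.0] * len(x))
    return x_adv

# Toy example: a fixed "input gradient" stands in for a real network.
print(fgsm([0.2, 0.5, 0.8], [0.5, -1.0, 2.0], epsilon=0.1))
```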
# Adversarial Robustness of FL

In this section, we first define the goal of federated adversarial training. We then conduct a systematic study of popular federated learning algorithms combined with various adversarial training methods, and evaluate their robustness under several attacks. Finally, we further reveal the challenges of adversarial training in non-IID FL.

# Problem Definition

In typical federated learning, training data are distributed across $K$ clients, and a central server manages model aggregation and communication with the clients. In general, federated learning attempts to solve the following optimization problem:

$$
\min _ {w} f (w) = \sum_ {k = 1} ^ {K} \frac {n _ {k}}{n} F _ {k} (w). \tag {1}
$$

Here, the global objective is a sum of local objectives weighted by the local data size $n_k$, where $n$ is the total data size of all clients that participate in a communication round. Each local objective measures the empirical risk over a possibly different data distribution $\mathcal{D}_k$:
$$
F _ {k} (w) := \mathbb {E} _ {x _ {k} \sim \mathcal {D} _ {k}} \left[ f _ {k} (w; x _ {k}) \right]. \tag {2}
$$

Let $x$ denote the original image, $x^{adv}$ the corresponding adversarial example, and $\delta$ the perturbation added to the original image, so that $x^{adv} = x + \delta$. To generate powerful adversarial examples, we maximize the loss $L(x + \delta; w)$, where $L$ is the loss function for the local update.

To improve the robustness of neural networks, many adversarial defense methods have been proposed. Among them, adversarial training (Madry et al. 2017) is one of the most prevailing and effective. Combined with adversarial training, the local objective becomes the following min-max optimization problem:

$$
F _ {k} (w) = \min \mathbb {E} _ {x _ {k} \sim \mathcal {D} _ {k}} \left[ \max _ {\| x ^ {a d v} - x \| _ {\infty} \leq \epsilon} L (w, x ^ {a d v}, y) \right]. \tag {3}
$$

The inner maximization finds effective adversarial examples that achieve a high loss, while the outer minimization updates the local model to minimize the training loss.

In this work, we conduct a systematic study of several state-of-the-art FL algorithms, including FedAvg (McMahan et al. 2017), FedProx (Li et al. 2018), FedNova (Wang et al. 2020), and Scaffold (Karimireddy et al. 2020), and explore their combinations with AT methods to defend against adversarial attacks. We report detailed results in Table 2, where robustness is averaged over four popular attacks (FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), and CW (Carlini and Wagner 2017)). Besides, we implement several prevailing adversarial training methods, including PGD-AT (Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020), and AVMixup (Lee, Lee, and Yoon 2020). We observe that no federated adversarial learning algorithm outperforms all the others in all cases. Moreover, the clean accuracy drops heavily under the non-IID distribution. As such, we are motivated to develop a more effective method. Given the similar performance of these FL methods in Table 2, we design our method on top of FedAvg, a representative FL algorithm.
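The data-size weighting of Equation 1, as realized by FedAvg-style server aggregation, can be made concrete with a small sketch (ours, not the authors' code); models are represented as plain parameter lists for simplicity:

```python
def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client parameter vectors with weights n_k / n,
    matching the objective weighting in Equation 1."""
    n = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w_k, n_k in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n_k / n) * w_k[i]
    return global_weights

# Two clients: the first holds 3x more data, so it dominates the average.
w = fedavg_aggregate([[1.0, 2.0], [5.0, 6.0]], client_sizes=[300, 100])
print(w)  # -> [2.0, 3.0]
```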
# Adversarial Training with non-IID Data

Federated learning faces a statistical challenge in real-world scenarios. IID data makes the stochastic gradient an unbiased estimate of the full gradient (McMahan et al. 2017). However, clients are typically highly heterogeneous, with various kinds of non-IID settings such as label skewness and feature skewness (Li et al. 2021b). According to previous studies (Wang et al. 2020; Karimireddy et al. 2020), non-IID data settings can degrade the effectiveness of the deployed model.

Figure 2: Test accuracy on a randomly selected client.

Figure 3: Plain training and adversarial training under the non-IID setting. Compared with the plainly trained case, the aggregation of adversarially trained models can lead to a more biased model that enlarges the accuracy gap. Consequently, it results in poor consistency between different clients.

Similarly, due to the non-IID data, the performance of AT may vary widely across clients. To better understand the challenge of adversarial training with non-IID data, we examine both clean accuracy and robustness on a randomly selected client and report the results in Fig. 2. From Fig. 2, we find that: 1) $A_{cln}$ of the plainly trained model drops from majority classes to minority classes, which is exactly what traditional imbalanced learning attempts to solve; 2) a similar decreasing tendency occurs in $A_{rob}$. Adopting adversarial training in federated learning with non-IID data is thus evidently more challenging.

According to the above observations, we conjecture that AT-trained local models with imbalanced data lead to a more biased decision boundary than plainly trained ones. Since adversarial examples need a larger number of epochs to achieve near-zero error (Zhang et al. 2021), it is harder to fit adversarial examples than clean data. Moreover, for the local client itself, imbalanced clean data generates imbalanced adversarial examples, making training more difficult and enlarging the accuracy gap, which reduces performance in both accuracy and robustness. In Fig. 3, we also show the differences between plain training and adversarial training in federated settings. Compared with plainly trained models, the aggregation of adversarially trained models can enlarge the accuracy gap, which results in poor consistency between different clients. To overcome this problem, we propose a novel method that utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.

Figure 4: Left panel: decision boundary of the plainly trained model. Middle panel: decision boundary of the AT-trained model. Right panel: decision boundary of the DBFAT-trained model. The dotted line represents the boundary of the clean model, and the solid line the boundary of the robust model. The size of each shape represents the value of its weight: samples close to the boundary are assigned larger weights, and samples far from it smaller weights. The decision boundary of the DBFAT-trained model (right sub-figure) achieves a higher $A_{rob}$ while maintaining $A_{cln}$.
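Label-skew settings like those discussed above are commonly simulated by sampling each class's per-client proportions from a Dirichlet distribution (built here from normalized gamma draws). This is a generic sketch of that protocol under our own assumptions, not necessarily the paper's exact partitioning code:

```python
import random

def dirichlet_label_skew(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with Dirichlet(alpha) label
    skew: small alpha -> highly non-IID, large alpha -> close to IID."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idxs = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idxs)
        # Dirichlet proportions for this class: normalized Gamma(alpha, 1) draws.
        props = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(props)
        props = [p / total for p in props]
        start = 0
        for k in range(num_clients):
            if k == num_clients - 1:
                take = len(idxs) - start          # last client gets the remainder
            else:
                take = min(int(round(props[k] * len(idxs))), len(idxs) - start)
            clients[k].extend(idxs[start:start + take])
            start += take
    return clients

parts = dirichlet_label_skew([0] * 50 + [1] * 50, num_clients=4, alpha=0.5)
print([len(p) for p in parts])  # uneven split; every sample assigned exactly once
```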
# Methodology

The generalization performance of a neural network is closely related to its decision boundary. However, models trained in the federated setting are biased compared with centrally trained models, mainly because of heterogeneous data and objective inconsistency between clients (Kairouz 2021). Moreover, a highly skewed data distribution can lead to an extremely biased boundary (Wang et al. 2020). We tackle this problem in two ways: 1) locally, we take full advantage of the limited data on each distributed client; 2) globally, we utilize the information obtained from the global model to alleviate the biases between clients.

Subsequently, we propose a simple yet effective approach called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components. For local training, we re-weight adversarial examples to improve robustness; for global aggregation, we utilize the global model to regularize the accuracy for a lower boundary error $A_{bdy}$. We show the training process of DBFAT in the supplementary material and illustrate an example of the decision boundary of our approach in Fig. 4.

# Re-weighting with Limited Data

Adversarial examples can approximately measure the distance from an original input to a classifier's decision boundary (Heo et al. 2018), computed as the least number of steps an iterative attack (e.g., the PGD attack (Madry et al. 2017)) needs to find a misclassified adversarial variant. To better utilize limited adversarial examples, we re-weight them to guide adversarial training: clean examples that are close to the decision boundary are assigned larger weights, while examples far from the boundary are assigned smaller weights.

Table 3: Loss functions of different adversarial training methods.

<table><tr><td>Defense</td><td>Loss Function</td></tr><tr><td>PGD_AT</td><td>CE (f (xadv), y)</td></tr><tr><td>ALP</td><td>CE (f (xadv), y) + β · ||f (xadv) - f (x)||2</td></tr><tr><td>TRADES</td><td>CE (f (x), y) + β · KL (f (xadv) ||f (x))</td></tr><tr><td>MMA</td><td>CE (f (xadv), y) · R(hθ(x) = y) + CE (f (x), y) · R(hθ(x) ≠ y)</td></tr><tr><td>AVMixup</td><td>CE (f (xadv), yadv)</td></tr><tr><td>DBFAT(ours)</td><td>ρ · CE(f(xadv), y) + β · KL (f (xadv) ||f glo (x))</td></tr></table>

In this paper, we use PGD-$S$ to approximately measure the geometric distance to the decision boundary, where $S$ denotes the maximum number of iterations. We generate adversarial examples as follows (Madry et al. 2017):
$$
|
| 140 |
+
x ^ {a d v} \leftarrow \Pi_ {\mathcal {B} [ x, \epsilon ]} \left(x ^ {a d v} + \alpha \cdot \operatorname {s i g n} \left(\nabla_ {x ^ {a d v}} \ell \left(x ^ {a d v}, y\right)\right)\right). \tag {4}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
Here $\Pi_{\mathcal{B}[x,\epsilon]}$ is the projection function that projects the adversarial data back into the $\epsilon$ -ball centered at natural data, $\alpha$ is the steps size, $\epsilon$ is perturbation bound.
|
| 144 |
+
|
| 145 |
+
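As an illustration, the PGD update of Eq. (4) can be sketched on a toy binary linear classifier, where the input gradient of the logistic loss is available in closed form (the model, loss, and all parameter values below are our own minimal example, not the paper's actual setup):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.01, steps=20):
    """PGD on a toy linear classifier with logistic loss
    loss = log(1 + exp(-y * (w @ x + b))), following Eq. (4):
    ascend the loss by alpha * sign(grad), then project back
    into the L_inf eps-ball around the natural input x."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (x_adv @ w + b)
        # d loss / d x_adv = -y * sigmoid(-margin) * w
        grad = -y * w / (1.0 + np.exp(margin))
        x_adv = x_adv + alpha * np.sign(grad)
        # projection Pi onto B[x, eps]
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Each step moves the input in the direction that increases the loss, and the final clip keeps the perturbation within the bound $\epsilon$.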
We find the minimum step count $d$ such that after $d$ steps of PGD the adversarial variant is misclassified by the network, i.e., $\arg \max_{c} f^{(c)}(x^{adv}) \neq y$, where $f^{(c)}(x^{adv})$ is the logit of the $c$-th label.

In this way, given a mini-batch of samples $\{(x_i,y_i)\}_{i = 1}^m$, the weight list $\rho$ can be formulated as:

$$
\rho \leftarrow 1 - \left\{\frac{d_i}{\sum_{i = 1}^{m} d_i}\right\}. \tag{5}
$$
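The minimal-step distance $d$ and the re-weighting of Eq. (5) can be sketched as follows, reusing the same toy linear model as above (function names and constants are ours):

```python
import numpy as np

def minimal_attack_steps(x, y, w, b, eps=0.3, alpha=0.01, max_steps=20):
    """Smallest d such that d PGD steps flip the toy linear classifier's
    prediction; returns max_steps if the attack never succeeds."""
    x_adv = x.copy()
    step_dir = np.sign(-y * w)  # for a linear model the ascent direction is constant
    for d in range(1, max_steps + 1):
        x_adv = np.clip(x_adv + alpha * step_dir, x - eps, x + eps)
        if np.sign(x_adv @ w + b) != y:
            return d
    return max_steps

def reweight(d):
    """Eq. (5): rho_i = 1 - d_i / sum(d_i); near-boundary samples
    (small d) receive larger weights."""
    d = np.asarray(d, dtype=float)
    return 1.0 - d / d.sum()
```

For example, `reweight([1, 2, 3, 4])` assigns the largest weight to the sample that an attack flips in a single step.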
# Regularization with Global Model

Early work (Zhang et al. 2019; Cui et al. 2021) shows that there exists a trade-off between accuracy and robustness: standard adversarial training can hurt accuracy. To achieve a lower boundary error $A_{bdy}$, we take advantage of the logits from the global model $f^{glo}$, which is obtained after aggregation. In particular, in federated learning, this model carries the information contained in the averaged parameters of the distributed clients.

Let $f^{loc}$ denote the adversarially trained model at each local client; $f^{glo}$ has the most desirable classification boundary for natural data. We can then modify the local objective in Equation 3 as follows:

$$
\min \; \rho \cdot \underbrace{\ell_{ce}\left(f^{loc}(x^{adv}), y\right)}_{\text{for robustness}} + \beta \cdot \underbrace{\ell_{kl}\left(f^{loc}(x^{adv}), f^{glo}(x)\right)}_{\text{for accuracy regularization}}. \tag{6}
$$

Here, $\ell_{ce}$ denotes the cross-entropy loss that improves robustness, and $\ell_{kl}$ is the KL divergence loss that constrains the logits of the global and local models. $\ell_{kl}$ acts as an additional regularization term designed to reduce the boundary error $A_{bdy} = A_{cln} - A_{rob}$. Additionally, $\rho$ is the weight calculated by Equation 5, and $\beta$ is a parameter to be tuned.
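A minimal sketch of the local objective in Eq. (6), under our own assumptions (logits as numpy arrays, and the KL taken from the local adversarial distribution to the frozen global clean-input distribution, mirroring the TRADES-style direction in Table 3; the function and variable names are ours):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dbfat_loss(local_adv_logits, global_clean_logits, labels, rho, beta=1.5):
    """Eq. (6): rho-weighted cross-entropy on the local model's adversarial
    logits, plus beta-scaled KL pulling the local adversarial distribution
    toward the global model's clean-input distribution."""
    p_loc = softmax(local_adv_logits)
    p_glo = softmax(global_clean_logits)
    n = len(labels)
    ce = -np.log(p_loc[np.arange(n), labels] + 1e-12)
    kl = np.sum(p_loc * (np.log(p_loc + 1e-12) - np.log(p_glo + 1e-12)), axis=-1)
    return np.mean(rho * ce + beta * kl)
```

When the local and global predictions agree, the KL term vanishes and only the weighted cross-entropy remains; any disagreement with the global model is penalized in proportion to $\beta$.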
To show the difference between our DBFAT and existing defense methods, we list the loss functions of different adversarial training methods in Table 3.

Figure 5: Visualizations of IID and non-IID distribution (Dirichlet sampled and Sharding) across 5 clients on CIFAR10 dataset. Shards_5 is a type of non-IID setting, in which each client has five categories of data (McMahan et al. 2017). From left to right: client ID number #1-5.
# Experimental Results

# Experimental Setup

Following previous work on FL (McMahan et al. 2017), we distribute training data among 100 clients in both IID and non-IID fashion. For each communication round, we randomly select 10 clients and average their model parameters. All experiments are conducted on 8 Tesla V100 GPUs. More details are provided in the supplementary material.
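The sampling-and-averaging loop above can be sketched as follows (the local update rule here is a hypothetical stand-in, not the paper's DBFAT training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, CLIENTS_PER_ROUND = 100, 10

def fedavg_round(global_w, local_train):
    """One communication round: sample 10 of the 100 clients, run local
    training from the current global weights, then average the results."""
    selected = rng.choice(NUM_CLIENTS, size=CLIENTS_PER_ROUND, replace=False)
    updates = [local_train(global_w.copy(), cid) for cid in selected]
    return np.mean(updates, axis=0)

def toy_local_train(w, cid):
    # hypothetical stand-in for DBFAT local training: nudge the weights
    # toward a per-client optimum encoded by the client id
    return w + 0.1 * (cid / NUM_CLIENTS - w)

w = np.zeros(3)
for _ in range(5):
    w = fedavg_round(w, toy_local_train)
```

In DBFAT, the averaged parameters produced by each round also serve as the global model $f^{glo}$ used for regularization in the next round of local training.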
Datasets In this section, we show through extensive experiments that DBFAT improves robust generalization while maintaining high accuracy on benchmark CV datasets, including MNIST (Lecun et al. 1998), FashionMNIST (Xiao, Rasul, and Vollgraf 2017) (FMNIST), CIFAR10 (Krizhevsky and Hinton 2009), CIFAR100 (Krizhevsky and Hinton 2009), Tiny-ImageNet (Le and Yang 2015), and ImageNet-12 (Deng et al. 2009). ImageNet-12 is generated via (Li et al. 2021c) and consists of 12 classes. We resize the original $224*224*3$ images to $64*64*3$ for fast training.
Data partitioning In the federated learning setup, we evaluate all algorithms on two types of non-IID data partitioning: Dirichlet sampled data and Sharding. For Dirichlet sampled data, each local client is allocated a proportion of the samples of each label according to a Dirichlet distribution (Li et al. 2020). Specifically, we follow the setting in (Yurochkin et al. 2019): for each label $c$, we sample $p_c \sim \mathrm{Dir}_J(0.5)$ and allocate a $p_{c,j}$ proportion of the samples of label $c$ to client $j$. In this setting, some clients may have no examples at all for a subset of classes. For Sharding (McMahan et al. 2017), each client owns data samples of a fixed number of labels. Let $K$ be the total number of clients and $q$ the number of labels we assign to each client. We divide the dataset by label into $K * q$ shards, and the number of samples in each shard is $\frac{n}{K \cdot q}$. We denote this distribution as shards$_q$, where $q$ controls the level of difficulty: the smaller $q$ is, the more unbalanced the partition. An example of these partitioning strategies is shown in Fig. 5, in which we visualize the IID and non-IID distributions (Dirichlet sampled with $p_c \sim \mathrm{Dir}_J(0.5)$ and Sharding with shards_5) on five randomly selected clients.
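The two partitioning schemes can be sketched as follows (a minimal reimplementation under our own assumptions; function names are ours, and ties at shard boundaries may let a shard span two labels):

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, num_clients, alpha=0.5):
    """For each label c, draw p_c ~ Dir(alpha, ..., alpha) over clients and
    hand a p_{c,j} proportion of that label's samples to client j."""
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        p = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for j, part in enumerate(np.split(idx, cuts)):
            clients[j].extend(part.tolist())
    return clients

def shard_partition(labels, num_clients, q):
    """shards_q: sort indices by label, cut into K*q shards, and give each
    client q randomly chosen shards (so it sees roughly q labels)."""
    order = np.argsort(labels, kind="stable")
    shards = np.array_split(order, num_clients * q)
    perm = rng.permutation(num_clients * q)
    return [np.concatenate([shards[s] for s in perm[j * q:(j + 1) * q]]).tolist()
            for j in range(num_clients)]
```

With a small $\alpha$ or a small $q$, individual clients end up dominated by a few labels, reproducing the skew visualized in Fig. 5.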
Table 4: Accuracy and adversarial robustness on MNIST, FMNIST and CIFAR10 under both IID and non-IID distribution: an empirical study of FedAvg combined with several defense methods; more detailed comparisons are reported in the supplementary (Section B). Our method significantly outperforms the other baselines.
<table><tr><td>Type</td><td></td><td colspan="6">IID</td><td colspan="6">Non-IID</td></tr><tr><td>Dataset</td><td>Method</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td><td>Clean</td><td>FGSM</td><td>MIM</td><td>PGD-20</td><td>CW</td><td>AA</td></tr><tr><td rowspan="6">MNIST</td><td>Plain</td><td>99.01</td><td>28.35</td><td>8.65</td><td>5.29</td><td>3.84</td><td>3.02</td><td>98.45</td><td>11.78</td><td>14.06</td><td>8.44</td><td>9.51</td><td>7.45</td></tr><tr><td>PGD_AT</td><td>98.52</td><td>76.01</td><td>60.18</td><td>54.50</td><td>55.23</td><td>50.43</td><td>97.82</td><td>67.58</td><td>52.89</td><td>48.03</td><td>47.43</td><td>43.75</td></tr><tr><td>ALP</td><td>98.46</td><td>57.37</td><td>55.61</td><td>48.74</td><td>51.17</td><td>44.25</td><td>97.92</td><td>46.49</td><td>51.01</td><td>46.41</td><td>46.24</td><td>41.95</td></tr><tr><td>TRADES</td><td>97.89</td><td>76.79</td><td>63.29</td><td>58.25</td><td>57.24</td><td>53.72</td><td>92.03</td><td>48.45</td><td>51.56</td><td>47.21</td><td>45.81</td><td>42.36</td></tr><tr><td>AVMixup</td><td>98.63</td><td>61.41</td><td>53.34</td><td>42.33</td><td>46.95</td><td>37.78</td><td>97.47</td><td>56.50</td><td>51.86</td><td>46.28</td><td>44.46</td><td>41.84</td></tr><tr><td>Ours</td><td>98.86</td><td>78.06</td><td>70.97</td><td>68.39</td><td>63.09</td><td>59.39</td><td>97.95</td><td>68.54</td><td>54.18</td><td>50.33</td><td>49.12</td><td>44.32</td></tr><tr><td 
rowspan="6">FMNIST</td><td>Plain</td><td>88.50</td><td>17.89</td><td>3.55</td><td>2.57</td><td>0.40</td><td>0.17</td><td>84.60</td><td>17.86</td><td>3.25</td><td>2.93</td><td>3.05</td><td>-1.40</td></tr><tr><td>PGD_AT</td><td>76.05</td><td>68.53</td><td>65.24</td><td>65.40</td><td>64.26</td><td>60.89</td><td>72.93</td><td>60.11</td><td>54.42</td><td>54.33</td><td>52.19</td><td>49.88</td></tr><tr><td>ALP</td><td>75.99</td><td>67.31</td><td>63.66</td><td>63.79</td><td>61.55</td><td>59.19</td><td>75.34</td><td>57.67</td><td>53.37</td><td>55.11</td><td>51.12</td><td>51.04</td></tr><tr><td>TRADES</td><td>78.13</td><td>59.33</td><td>52.65</td><td>52.78</td><td>51.44</td><td>48.78</td><td>74.93</td><td>56.53</td><td>44.01</td><td>44.01</td><td>31.80</td><td>39.61</td></tr><tr><td>AVMixup</td><td>79.34</td><td>61.22</td><td>54.93</td><td>54.67</td><td>49.48</td><td>50.07</td><td>72.06</td><td>56.26</td><td>49.21</td><td>49.72</td><td>47.99</td><td>45.15</td></tr><tr><td>Ours</td><td>81.49</td><td>69.23</td><td>66.22</td><td>66.24</td><td>65.71</td><td>61.49</td><td>76.19</td><td>63.11</td><td>56.45</td><td>58.31</td><td>56.96</td><td>53.91</td></tr><tr><td 
rowspan="6">CIFAR10</td><td>Plain</td><td>78.80</td><td>6.87</td><td>1.15</td><td>1.06</td><td>1.30</td><td>1.23</td><td>61.10</td><td>7.58</td><td>2.94</td><td>2.67</td><td>2.87</td><td>1.28</td></tr><tr><td>PGD_AT</td><td>58.75</td><td>30.62</td><td>27.23</td><td>26.11</td><td>28.47</td><td>22.09</td><td>15.27</td><td>13.27</td><td>13.00</td><td>13.00</td><td>12.99</td><td>8.63</td></tr><tr><td>ALP</td><td>63.23</td><td>29.42</td><td>26.75</td><td>28.49</td><td>28.13</td><td>23.97</td><td>32.91</td><td>21.41</td><td>20.26</td><td>20.19</td><td>17.74</td><td>15.83</td></tr><tr><td>TRADES</td><td>68.58</td><td>31.53</td><td>25.92</td><td>25.49</td><td>23.07</td><td>20.89</td><td>46.30</td><td>24.81</td><td>22.20</td><td>22.05</td><td>19.59</td><td>17.85</td></tr><tr><td>AVMixup</td><td>70.28</td><td>29.51</td><td>26.22</td><td>26.34</td><td>24.07</td><td>22.25</td><td>48.23</td><td>25.29</td><td>21.42</td><td>24.25</td><td>20.25</td><td>19.43</td></tr><tr><td>Ours</td><td>72.21</td><td>31.47</td><td>28.57</td><td>29.03</td><td>29.31</td><td>24.25</td><td>52.24</td><td>27.03</td><td>24.12</td><td>27.02</td><td>22.13</td><td>21.20</td></tr></table>
MNIST and FMNIST setup We use a simple CNN with two convolutional layers followed by two fully connected layers. Following the setting of (Goodfellow, Shlens, and Szegedy 2014), for MNIST we set the perturbation bound $\epsilon = 0.3$ and step size $\alpha = 0.01$, and apply adversarial attacks for 20 iterations. For FMNIST, we set the perturbation bound $\epsilon = 32/255$ and step size $\alpha = 0.031$; we adversarially train the network for 10 steps and apply adversarial attacks for 20 iterations. Due to the simplicity of MNIST and FMNIST, we mainly use non-IID data (Sharding), which is harder to train on.
CIFAR10, CIFAR100, Tiny-ImageNet and ImageNet-12 setup We apply a larger CNN architecture, and follow the setting used in (Madry et al. 2017), i.e., we set the perturbation bound $\epsilon = 0.031$ , step size $\alpha = 0.007$ . To evaluate the robustness, we conduct extensive experiments with various data partitioning.
Baselines For attack methods, we perform five popular attacks: FGSM (Kurakin, Goodfellow, and Bengio 2017), MIM (Dong et al. 2018), PGD (Madry et al. 2017), CW (Carlini and Wagner 2017) and AA (Croce and Hein 2020). We further use Square (Andriushchenko et al. 2020) as a black-box attack. To investigate the effectiveness of existing FL algorithms, we implement FedAvg (McMahan et al. 2017), FedProx (Li et al. 2018), FedNova (Wang et al. 2020) and Scaffold (Karimireddy et al. 2020). To defend against adversarial attacks, we implement five of the most prevailing methods: PGD_AT (Madry et al. 2017), TRADES (Zhang et al. 2019), ALP (Kannan, Kurakin, and Goodfellow 2018), MMA (Ding et al. 2020) and AVMixup (Lee, Lee, and Yoon 2020). We compare the performance of our DBFAT with these defense methods combined with the FL methods.

# Convergence for Local Training
Figure 6: Left: Convergence rate for different local epochs. Right: Training curves of FedAvg combined with different AT methods.
To show the convergence rate of DBFAT, we use the Dirichlet sampled CIFAR10 dataset, where each client owns 500 samples from 5 classes. Fig. 6 (left sub-figure) shows the impact of the local epoch count $E$ during adversarial training. A very small value (e.g., $E = 2$) yields an extremely slow convergence rate, which may incur more communication rounds. A large value (e.g., $E = 20$) also leads to slow convergence, as the model may overfit the local data. Considering both communication cost and convergence, we set $E = 5$ in our experiments, which maintains proper communication efficiency and fast convergence.
# Effectiveness of Our Method
We verify the effectiveness of our method compared with several adversarial training techniques on Dirichlet sampled CIFAR10. For a fair comparison, model robustness is averaged over four attacks under the same settings, and all defense methods are combined with FedAvg.
To show the differences between DBFAT and the above-mentioned defense methods, we report the training curves on the
Table 5: Accuracy and adversarial robustness on CIFAR100, Tiny-ImageNet, and ImageNet-12.
<table><tr><td>Dataset</td><td colspan="4">CIFAR100</td><td colspan="4">Tiny-ImageNet</td><td colspan="4">ImageNet-12</td></tr><tr><td>Method</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td><td>Clean</td><td>PGD-20</td><td>AA</td><td>Square</td></tr><tr><td>PGD_AT</td><td>39.32</td><td>16.07</td><td>14.36</td><td>23.44</td><td>26.33</td><td>12.26</td><td>10.26</td><td>13.54</td><td>37.42</td><td>22.61</td><td>18.30</td><td>25.57</td></tr><tr><td>ALP</td><td>41.12</td><td>18.46</td><td>14.78</td><td>24.54</td><td>32.78</td><td>14.62</td><td>12.19</td><td>16.48</td><td>54.96</td><td>24.78</td><td>19.57</td><td>27.73</td></tr><tr><td>TRADES</td><td>43.39</td><td>20.05</td><td>16.85</td><td>26.43</td><td>37.81</td><td>15.49</td><td>13.26</td><td>19.38</td><td>58.82</td><td>25.49</td><td>21.81</td><td>28.96</td></tr><tr><td>AVMixup</td><td>46.64</td><td>23.56</td><td>19.46</td><td>29.16</td><td>36.19</td><td>15.28</td><td>13.18</td><td>19.25</td><td>59.63</td><td>25.81</td><td>21.92</td><td>29.28</td></tr><tr><td>Ours</td><td>48.31</td><td>24.47</td><td>22.46</td><td>31.57</td><td>38.24</td><td>16.17</td><td>13.96</td><td>20.26</td><td>61.38</td><td>26.47</td><td>22.08</td><td>30.91</td></tr></table>
Table 6: Ablation study of removing different modules.
<table><tr><td>Dataset</td><td colspan="2">CIFAR10</td><td colspan="2">FMNIST</td></tr><tr><td>Methods</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>Ours</td><td>52.16</td><td>27.80</td><td>75.89</td><td>59.63</td></tr><tr><td>Ours (w/o re-weighting)</td><td>48.44</td><td>25.89</td><td>72.35</td><td>56.34</td></tr><tr><td>Ours (w/o regularization)</td><td>51.04</td><td>26.84</td><td>73.96</td><td>58.23</td></tr></table>
non-IID CIFAR10 dataset in the right sub-figure of Fig. 6. Fig. 6 confirms that our DBFAT achieves the highest clean accuracy. We attribute this benefit to the regularization term and re-weighting strategy introduced in Equation 6. It is worth noting that the model trained with PGD_AT performs very poorly in the training curves, which indicates that standard AT may not be a suitable choice for adversarial robustness in FL: it only uses the cross-entropy loss on adversarial examples and ignores the negative impact on clean accuracy. We further report results on various datasets under both IID and non-IID settings in Table 4, which shows that DBFAT significantly outperforms the other methods in terms of both accuracy and robustness.
Performance on large datasets In Table 5, we show the accuracy and robustness of each method on larger datasets (CIFAR100, Tiny-ImageNet, and ImageNet-12). All results are tested under the PGD-20 attack (Madry et al. 2017), AutoAttack (Croce and Hein 2020), and the Square attack (Andriushchenko et al. 2020) in non-IID settings. The results in Table 5 show that our method still outperforms the other baselines in terms of both clean accuracy and robustness. Notably, on ImageNet-12 our method achieves the highest clean accuracy (61.38%) and the highest robustness under AutoAttack (22.08%). This demonstrates that our method also improves accuracy and robustness on large datasets. We attribute the higher clean accuracy to the regularization term introduced in Equation 6, which preserves accuracy while maintaining high robustness.
# Ablation Study
Cutting off different modules As part of our ablation study, we first investigate the contributions of the different modules introduced in DBFAT. As shown in Table 6, turning off either the re-weighting strategy or the regularization term leads to poorer performance, which demonstrates the importance of both modules. Moreover, removing the re-weighting strategy causes a more severe degradation. This is expected: as mentioned in Fig. 1, non-IID data can cause a serious accuracy
Table 7: Effect of hyper-parameter $\beta$ . "Avg ${A}_{rob}$ " refers to the average robustness under four attacks.
<table><tr><td>Dataset</td><td colspan="2">MNIST</td><td colspan="2">FMNIST</td></tr><tr><td>β</td><td>Acln</td><td>Avg Arob</td><td>Acln</td><td>Avg Arob</td></tr><tr><td>4</td><td>98.30</td><td>26.64</td><td>81.73</td><td>37.36</td></tr><tr><td>2</td><td>98.14</td><td>34.24</td><td>75.59</td><td>47.83</td></tr><tr><td>1.5</td><td>98.46</td><td>53.22</td><td>74.93</td><td>44.08</td></tr><tr><td>1</td><td>97.32</td><td>47.35</td><td>65.43</td><td>42.33</td></tr><tr><td>0.5</td><td>96.57</td><td>44.09</td><td>61.02</td><td>45.28</td></tr></table>
reduction. Our re-weighting strategy alleviates this bias by taking the limited data on each client into account.
Effects of Regularization The regularization parameter $\beta$ is an important hyperparameter in our proposed method. We show how it affects the performance of our robust classifiers through numerical experiments on two datasets, MNIST and FMNIST. In Equation 6, $\beta$ controls how strongly the local model follows the global model, which contains information from the distributed clients. Since directly training on adversarial examples can hurt clean accuracy, we explore the effect of $\beta$ on both accuracy and robustness. As shown in Table 7, we report the clean accuracy and robustness for varying values of $\beta$, and empirically choose the best $\beta$ for each dataset. For example, for MNIST, $\beta = 1.5$ achieves better accuracy and robustness; for FMNIST, we set $\beta = 2$ for a proper trade-off between accuracy and robustness.
# Conclusion
In this paper, we investigate an interesting yet under-explored problem in FL: robustness against adversarial attacks. We first find that directly adopting adversarial training in federated learning can hurt accuracy significantly, especially in non-IID settings. We then propose a novel and effective adversarial training method called DBFAT, which builds on the decision boundary in federated learning and utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems. Comprehensive experiments on various datasets and detailed comparisons with state-of-the-art adversarial training methods demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. This work can benefit researchers interested in the adversarial robustness of FL.
# References
Andriushchenko, M.; Croce, F.; Flammarion, N.; and Hein, M. 2020. Square Attack: a query-efficient black-box adversarial attack via random search. arXiv:1912.00049.

Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39-57. IEEE.

Chen, C.; Kailkhura, B.; Goldhahn, R.; and Zhou, Y. 2021a. Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing. arXiv:2103.16031.

Chen, C.; Liu, Y.; Ma, X.; and Lyu, L. 2022a. CalFAT: Calibrated Federated Adversarial Training with Label Skewness. In Advances in Neural Information Processing Systems.

Chen, C.; Zhang, J.; and Lyu, L. 2022. Gear: a margin-based federated adversarial training approach. In International Workshop on Trustable, Verifiable, and Auditable Federated Learning in Conjunction with AAAI, volume 2022.

Chen, Z.; Li, B.; Wu, S.; Xu, J.; Ding, S.; and Zhang, W. 2022b. Shape Matters: Deformable Patch Attack. In Avidan, S.; Brostow, G. J.; Cisse, M.; Farinella, G. M.; and Hassner, T., eds., Computer Vision - ECCV 2022. Springer.

Chen, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Zhang, W. 2022c. Towards Practical Certifiable Patch Defense With Vision Transformer. In Proceedings of the IEEE/CVF Conference on CVPR, 15148-15158.

Chen, Z.; Zhu, M.; Yang, C.; and Yuan, Y. 2021b. Personalized Retrogress-Resilient Framework for Real-World Medical Federated Learning. In de Bruijne, M.; Cattin, P. C.; Cotin, S.; Padoy, N.; Speidel, S.; Zheng, Y.; and Essert, C., eds., Medical Image Computing and Computer Assisted Intervention - MICCAI 2021 - 24th International Conference, Strasbourg, France, September 27 - October 1, 2021, Proceedings, Part III, volume 12903 of Lecture Notes in Computer Science, 347-356. Springer.

Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv:2003.01690.

Cui, J.; Liu, S.; Wang, L.; and Jia, J. 2021. Learnable boundary guided adversarial training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 15721-15730.

Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248-255.

Ding, G. W.; Sharma, Y.; Lui, K. Y. C.; and Huang, R. 2020. MMA Training: Direct Input Space Margin Maximization through Adversarial Training. arXiv:1812.02637.

Dong, J.; Cong, Y.; Sun, G.; Fang, Z.; and Ding, Z. 2021. Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(1): 1-17.

Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9185-9193.

Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Heo, B.; Lee, M.; Yun, S.; and Choi, J. Y. 2018. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary. arXiv:1805.05532.

Hong, J.; Wang, H.; Wang, Z.; and Zhou, J. 2021. Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning. arXiv preprint arXiv:2106.10196.

Huang, R.; Cui, C.; Chen, F.; Ren, Y.; Liu, J.; Zhao, Z.; Huai, B.; and Wang, Z. 2022a. SingGAN: Generative adversarial network for high-fidelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, 2525-2535.

Huang, R.; Lam, M. W.; Wang, J.; Su, D.; Yu, D.; Ren, Y.; and Zhao, Z. 2022b. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. arXiv preprint arXiv:2204.09934.

Kairouz, P. 2021. Advances and Open Problems in Federated Learning. arXiv:1912.04977.

Kannan, H.; Kurakin, A.; and Goodfellow, I. 2018. Adversarial logit pairing. arXiv preprint arXiv:1803.06373.

Karimireddy, S. P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; and Suresh, A. T. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, 5132-5143. PMLR.

Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario.

Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533.

Le, Y.; and Yang, X. 2015. Tiny ImageNet visual recognition challenge. CS 231N, 7(7): 3.

Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278-2324.

Lee, S.; Lee, H.; and Yoon, S. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 272-281.

Li, B.; Sun, Z.; and Guo, Y. 2019. SuperVAE: Superpixelwise Variational Autoencoder for Salient Object Detection. In The Thirty-Third AAAI Conference.

Li, B.; Sun, Z.; Tang, L.; Sun, Y.; and Shi, J. 2019. Detecting Robust Co-Saliency with Recurrent Co-Attention Neural Network. In Kraus, S., ed., IJCAI.

Li, B.; Xu, J.; Wu, S.; Ding, S.; Li, J.; and Huang, F. 2021a. Detecting Adversarial Patch Attacks through Global-local Consistency. In ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, 35-41. ACM.

Li, Q.; Diao, Y.; Chen, Q.; and He, B. 2021b. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv preprint arXiv:2102.02079.

Li, T.; Sahu, A. K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; and Smith, V. 2018. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127.

Li, X.; Huang, K.; Yang, W.; Wang, S.; and Zhang, Z. 2020. On the Convergence of FedAvg on Non-IID Data. arXiv:1907.02189.

Li, Y.; Lyu, X.; Koren, N.; Lyu, L.; Li, B.; and Ma, X. 2021c. Anti-backdoor learning: Training clean models on poisoned data. NeurIPS, 34.

Liang, F.; Pan, W.; and Ming, Z. 2021. FedRec++: Lossless Federated Recommendation with Explicit Feedback. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, 4224-4231. AAAI Press.

Liu, Q.; Chen, C.; Qin, J.; Dou, Q.; and Heng, P. 2021a. FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, 1013-1023. Computer Vision Foundation / IEEE.

Liu, S.; Xu, S.; Yu, W.; Fu, Z.; Zhang, Y.; and Marian, A. 2021b. FedCT: Federated Collaborative Transfer for Recommendation. In Diaz, F.; Shah, C.; Suel, T.; Castells, P.; Jones, R.; and Sakai, T., eds., SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, 716-725. ACM.

Lyu, L.; Yu, H.; Ma, X.; Chen, C.; Sun, L.; Zhao, J.; Yang, Q.; and Philip, S. Y. 2022. Privacy and robustness in federated learning: Attacks and defenses. IEEE Transactions on Neural Networks and Learning Systems.

Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, 1273-1282. PMLR.

Shah, D.; Dube, P.; Chakraborty, S.; and Verma, A. 2021. Adversarial training in communication constrained federated learning. arXiv preprint arXiv:2103.01319.

Wang, C.; Deng, J.; Meng, X.; Wang, Y.; Li, J.; Miao, F.; Rajasekaran, S.; and Ding, C. 2021. A Secure and Efficient Federated Learning Framework for NLP. In EMNLP 2021, 7676-7682. Association for Computational Linguistics.

Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; and Poor, H. V. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481.

Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747.

Yurochkin, M.; Agarwal, M.; Ghosh, S.; Greenewald, K.; Hoang, T. N.; and Khazaeni, Y. 2019. Bayesian Nonparametric Federated Learning of Neural Networks. arXiv:1905.12022.

Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; and Jordan, M. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 7472-7482. PMLR.

Zhang, J.; Chen, C.; Li, B.; Lyu, L.; Wu, S.; Ding, S.; Shen, C.; and Wu, C. 2022a. DENSE: Data-Free One-Shot Federated Learning. In Advances in NeurIPS.

Zhang, J.; Li, B.; Xu, J.; Wu, S.; Ding, S.; Zhang, L.; and Wu, C. 2022b. Towards Efficient Data Free Black-Box Adversarial Attack. In Proceedings of the IEEE/CVF Conference on CVPR, 15115-15125.

Zhang, J.; Li, Z.; Li, B.; Xu, J.; Wu, S.; Ding, S.; and Wu, C. 2022c. Federated Learning with Label Distribution Skew via Logits Calibration. In Proceedings of the ICML. PMLR.

Zhang, J.; Zhu, J.; Niu, G.; Han, B.; Sugiyama, M.; and Kankanhalli, M. 2021. Geometry-aware Instance-reweighted Adversarial Training. In International Conference on Learning Representations.

Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; and Chandra, V. 2018. Federated Learning with Non-IID Data. arXiv:1806.00582.

Zhou, Y.; Wu, J.; Wang, H.; and He, J. 2022. Adversarial robustness through bias variance decomposition: A new perspective for federated learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2753-2762.

Zhu, X.; Wang, J.; Hong, Z.; and Xiao, J. 2020. Empirical Studies of Institutional Federated Learning For Natural Language Processing. In Cohn, T.; He, Y.; and Liu, Y., eds., Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, 625-634. Association for Computational Linguistics.

Zizzo, G.; Rawat, A.; Sinn, M.; and Buesser, B. 2020. FAT: Federated Adversarial Training. arXiv:2012.01791.
|
2302.09xxx/2302.09479/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ef7515ccf86bceb1b67a750c897d3513c5ccd6faf5ba83de1189c822b5fe8ca1
|
| 3 |
+
size 551210
|
2302.09xxx/2302.09479/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2302.09xxx/2302.09483/d004cf8e-ad29-4991-a6d0-2f48be625fa3_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:59ce4145cef7696896c474dd51b8b6d7561b2c0f1f965553c7a000825e94aed9
|
| 3 |
+
size 1031766
|
2302.09xxx/2302.09483/full.md
ADDED
@@ -0,0 +1,480 @@
# Why Is Public Pretraining Necessary for Private Model Training?

Arun Ganesh*, Mahdi Haghifam†, Milad Nasr*, Sewoong Oh*, Thomas Steinke*, Om Thakkar*, Abhradeep Thakurta*, Lun Wang*

# Abstract

In the privacy-utility tradeoff of a model trained on benchmark language and vision tasks, remarkable improvements have been widely reported with the use of pretraining on publicly available data. This is in part due to the benefits of transfer learning, which is the standard motivation for pretraining in non-private settings. However, the stark contrast between the improvement achieved through pretraining under privacy and that in non-private settings suggests that there may be a deeper, distinct cause driving these gains. To explain this phenomenon, we hypothesize that the non-convex loss landscape of model training requires an optimization algorithm to go through two phases. In the first, the algorithm needs to select a good "basin" in the loss landscape. In the second, the algorithm solves an easy optimization within that basin. The former is a harder problem to solve with private data, while the latter is harder to solve with public data due to a distribution shift or data scarcity. Guided by this intuition, we provide theoretical constructions that provably demonstrate the separation between private training with and without public pretraining. Further, systematic experiments on CIFAR10 and LibriSpeech provide supporting evidence for our hypothesis.

# 1 Introduction

As modern machine learning models are increasingly capable of memorizing their training data, membership inference attacks and data reconstruction attacks have successfully demonstrated the vulnerability of sharing models trained on sensitive data. Differential Privacy (DP), introduced in [DMNS06], is now a gold-standard measure of privacy leakage in training a model; it is parameterized by two scalars, $\varepsilon > 0$ and $\delta \in [0,1]$. By introducing enough randomness into training, one can ensure that the model does not depend too much on any individual training example. This provides plausible deniability to the participants [KOV17] and evades privacy attacks, achieving strong DP with small values of $(\varepsilon, \delta)$. We give a formal definition in Definition 1.1.

One of the main challenges in training on private data is that utility and privacy trade off unfavorably on standard benchmark tasks. Given a target task, such as table-to-text generation, on a private dataset, say the E2E dataset [NDR17], state-of-the-art techniques suffer significant performance degradation to achieve even an acceptable level of privacy. For example, a weak privacy guarantee of $\varepsilon = 8$ significantly deteriorates the performance of the trained model compared to one trained without privacy, i.e., $\varepsilon = \infty$ (second row of Table 1). Perhaps surprisingly, there is one simple change to the training algorithm that can significantly reduce this cost of privacy: pretraining the model on some public data (first row of Table 1).

Such remarkable gains from public pretraining have been widely observed on standard benchmark vision and language tasks, which we survey in Appendix B. This includes CIFAR-10, MNIST, and Fashion MNIST in [TB20], CIFAR-100, ImageNet, and Places-365 in [DBH+22], text generation with E2E and DART in [LTLH22], and next-word prediction on the Reddit dataset in [KST20]. Note that in all these cases, the public data distribution differs from the target task distribution. Nevertheless, we expect some gain from public pretraining, drawing an analogy from its success in non-private training of large models (e.g., first column in Table 1). However, the stark difference in the gain of pretraining between the non-private case, i.e., $\varepsilon = \infty$, and the weakly private case, say $\varepsilon = 8$, is striking. This suggests that the benefit of public pretraining in differentially private machine learning is a fundamentally different phenomenon from the typical benefits of standard transfer learning [BF76, SRASC14, BHA+21]. Our goal is to give insight into when such a phenomenon can be observed by carefully constructing synthetic public and private tasks. Recently, in a closely related work, [LLH+22] formally demonstrated that public data mitigates the curse of dimensionality when fine-tuning with privacy. However, to the best of our knowledge, ours is the first work to understand the necessity of public data in private model training.

<table><tr><td></td><td>ε = ∞</td><td>ε = 8</td><td>cost of privacy</td></tr><tr><td>with public pretrain</td><td>69.46</td><td>63.19</td><td>6.27</td></tr><tr><td>without public pretrain</td><td>65.73</td><td>24.25</td><td>41.48</td></tr><tr><td>gain of public pretraining</td><td>3.73</td><td>38.94</td><td></td></tr></table>

Table 1: BLEU score for generating descriptions of table entries on the E2E dataset, as reported in [LTLH22, Table 2], with $\delta = 10^{-5}$. The first row uses GPT-2 [RWC+19] as the pretrained model.

Figure 1: An example of a non-convex loss function. While the overall function is non-convex, it consists of many locally convex "basins", some better than others.

In this paper, we provide a theoretical example of a loss function that requires pretraining on public data and fine-tuning with private data. Our construction is guided by our hypothesis that the typical population loss landscape of standard machine learning tasks forces gradient-based algorithms to go through two stages. A conceptual two-dimensional sketch of the landscape we envision is shown in Fig. 1. We start from a random initialization close to the origin. In the first stage, the algorithm is directed by the data towards a good basin with a small local minimum. This is followed by the second stage, where the algorithm solves what is effectively a convex optimization in the selected basin to arrive at the local minimum. The key insight is that the first stage of selection should require significantly more samples to solve privately than are required to solve it without privacy. Concretely, for the example in Fig. 1, the gradient at the origin points toward the correct basin containing the global minimum, but the gradient is small. Private gradient descent adds noise to the update, increasing the chance of ending up in a worse basin. Hence, a significantly larger private dataset is needed to overcome the privacy noise. This construction is motivated by private hypothesis selection problems, where a similar fundamental separation in sample complexity is known [SU17]. This intuition would explain the widely observed failure of private training when starting from a random initialization. We turn this hypothesis into concrete constructions in Section 2, where we formally prove the separation in sample complexity.

Main contributions: In Section 2, we construct theoretical tasks to demonstrate the fundamental separation in sample complexity. First, we construct a theoretical loss function and a corresponding data distribution such that, given $n_{pub}$ public samples and $n_{priv}$ private samples from this distribution with $n_{pub} \ll n_{priv}$, pretraining on the public data and fine-tuning with the private data achieves a much better loss than any algorithm with access to either alone. Next, we extend our result to a more relevant setting where $n_{pub}$ is large but the public data is out of distribution. This construction exhibits the need to have little to no privacy noise in the first "phase" of non-convex optimization. To the best of our knowledge, this is the first theoretical analysis demonstrating the need for public pretraining.

In Section 3, we empirically validate our two-phase hypothesis. First, treating CIFAR-10 as our target private task, we consider a setup where we are allowed $T$ epochs of pre- or post-training on in-distribution public data, out-of-distribution public data, or private data with low noise. In all settings, we demonstrate that it is best to use all of these low- or non-private training epochs on pretraining (as opposed to post-training). This demonstrates that early rounds of training are more sensitive to privacy noise, as conjectured in our two-phase hypothesis. Second, we look at a manifold of the loss landscape interpolated between three models trained on LibriSpeech. We show that a publicly pretrained and privately fine-tuned model ends up in the same basin as a fully publicly trained model. On the other hand, a fully privately trained model ends up in a different basin. This provides evidence that public pretraining's benefits are in part due to selecting a better basin for fine-tuning.

# 1.1 Other Related Work

Pretraining on public data is now a default choice in large-scale private training for NLP tasks [YNB+22, HLY+22, BWZK, GvdMZG22], including the 175-billion-parameter GPT-3 with $\varepsilon = 1$, and vision tasks [GAW+22, LWAFF21, KCS+22, BWZK22, DBH+22]. Motivated by pretraining providing good feature representations, [TB20] propose using handcrafted features, as opposed to learned features, to improve the utility-privacy tradeoff on small-scale problems. On the other hand, [TKC22] cautions against the indiscriminate use of large-scale public data in DP training, which we discuss in depth in Section 4.

Besides the aforementioned empirical results, public data has been used to show theoretical improvements for problems such as query release [ABM19, BCM+20, LVS+21], mean estimation [ADK20, BKS22], and optimization [ZWB21, KRRT20, ALD21, AGM+22]. In the optimization case, besides pretraining, these papers use public data to learn the geometry of the private loss in various ways and use geometry-aware gradient descent methods, rather than vanilla DP-SGD.

[SU17] showed that for the problem of selecting the $k$ coins, out of $d$ coins, that land heads with the highest probability, any $(\varepsilon, \delta)$-DP algorithm with constant error requires $n = \Omega(\sqrt{k} \log d)$ samples from each coin. This is in contrast with the non-private case, where $n = O(\log d)$ suffices for any $k$. Selection and non-convex optimization are tightly connected: [GTU22] show a reduction from selection to non-convex optimization by designing a loss with $d$ locally convex basins, each corresponding to a different coin in the selection problem. This gives a different perspective on why the first stage of non-convex optimization may be difficult privately but not with public data: it effectively involves solving a selection problem on the basins in the loss function.
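The noise-versus-signal tradeoff behind this gap can be sketched numerically. The simulation below is our own illustration (not the mechanism or bounds of [SU17]): it selects the best of $d$ coins by an argmax over heads counts, with and without additive Laplace noise, where the noise scale is illustrative rather than calibrated to a specific $(\varepsilon, \delta)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 200, 100, 200
p = np.full(d, 0.5)
p[7] = 0.95                  # coin 7 is clearly the best
scale = 20.0                 # illustrative noise scale; a calibrated mechanism would
                             # derive this from epsilon and the sensitivity of the counts

correct_plain = correct_noisy = 0
for _ in range(trials):
    heads = rng.binomial(n, p)                           # n flips of each coin
    correct_plain += int(np.argmax(heads) == 7)          # non-private selection
    noisy = heads + rng.laplace(0.0, scale, size=d)      # report-noisy-max flavor
    correct_noisy += int(np.argmax(noisy) == 7)          # private-style selection

print(correct_plain / trials, correct_noisy / trials)
```

With this separation between the best coin and the rest, the plain argmax is essentially always correct, while the noisy argmax fails most of the time; recovering the non-private accuracy under noise requires many more flips per coin, mirroring the $\Omega(\sqrt{k}\log d)$ versus $O(\log d)$ separation.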

# 1.2 Background on differential privacy and DP-SCO

Differential privacy is a privacy guarantee for algorithms that can be viewed as random functions of datasets:

Definition 1.1 (Differential Privacy [DMNS06]). Let $\mathcal{D}$ be a data domain, and $\mathcal{C}$ be a set of outputs. An algorithm $\mathcal{A}: \mathcal{D}^* \to \mathcal{C}$ is $(\varepsilon, \delta)$-differentially private if for any $D, D' \in \mathcal{D}^*$ such that $D$ and $D'$ differ in at most one element, and any set of outputs $S \subseteq \mathcal{C}$: $\mathbf{Pr}_{\theta \sim \mathcal{A}(D)}[\theta \in S] \leq e^{\varepsilon} \mathbf{Pr}_{\theta \sim \mathcal{A}(D')}[\theta \in S] + \delta$.
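As a minimal sanity check of this definition (our own illustration, not from the paper), consider randomized response on a single bit: the output equals the input with probability $e^{\varepsilon}/(1+e^{\varepsilon})$. The two neighboring "datasets" are the bit values 0 and 1, and the definition holds with $\delta = 0$:

```python
import math

def randomized_response(bit, eps):
    """Output distribution over {0, 1} when the true bit is kept w.p. e^eps / (1 + e^eps)."""
    p_keep = math.exp(eps) / (1.0 + math.exp(eps))
    return {bit: p_keep, 1 - bit: 1.0 - p_keep}

eps = 1.0
P = randomized_response(0, eps)   # output distribution on neighboring dataset D = {0}
Q = randomized_response(1, eps)   # output distribution on neighboring dataset D' = {1}
for out in (0, 1):
    # Definition 1.1 with delta = 0 (tiny slack for floating point)
    assert P[out] <= math.exp(eps) * Q[out] + 1e-12
    assert Q[out] <= math.exp(eps) * P[out] + 1e-12
print("randomized response is (1, 0)-DP on this pair")
```

The likelihood ratio between the two output distributions is exactly $e^{\varepsilon}$ at both outputs, so the guarantee is tight for this mechanism.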

A well-studied problem in the differential privacy literature is differentially private stochastic (convex) optimization (DP-SCO) [BST14, BFTT19, FKT20, BFGT20, KLL21, ALD21, GLL22]. In DP-SCO, there is a loss function $\ell: \mathcal{C} \times \mathcal{D} \to \mathbb{R}$ and an unknown distribution $\tau$ over $\mathcal{D}$. Given $n$ i.i.d. samples from $\tau$, we wish to find $\theta \in \mathcal{C}$ minimizing the population loss $\mathcal{L}(\theta) := \mathbb{E}_{d \sim \tau}[\ell(\theta; d)]$. For any $\tau$ we denote the population minimizer by $\theta^{*}(\tau) := \arg\min_{\theta \in \mathcal{C}} \mathcal{L}(\theta)$. The performance of a DP-SCO algorithm is measured by its risk, $\mathbb{E}_{D \sim \tau^{n}, \theta \sim \mathcal{A}(D)}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^{*}(\tau))$. DP-SCO captures most machine learning tasks we are interested in. The most widely studied algorithm in the DP-SCO literature is DP-SGD [SCS13, BST14, ACG+16, BFTT19, BFGT20], which minimizes the empirical loss $\ell(\theta; D) = (1/|D|) \sum_{d \in D} \ell(\theta; d)$ over $\mathcal{C} \subseteq \mathbb{R}^{p}$ as follows: DP-SGD starts with $\theta_{0}$, and for $t$ iterations computes $\theta_{t+1} = \theta_{t} - \eta_{t} \nabla \ell(\theta_{t}; D) + \xi_{t}$, where $\xi_{t} \sim N(0, \sigma^{2}\mathbb{I})$ and $\sigma^{2}$ is chosen to satisfy $(\varepsilon, \delta)$-DP.
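A minimal sketch of the DP-SGD update above, on the quadratic loss $\ell(\theta; d) = \frac{1}{2}\|\theta - d\|_2^2$ (our own illustration; $\sigma$ is fixed for readability rather than calibrated to a target $(\varepsilon, \delta)$, and clipping is omitted since this toy gradient is well behaved):

```python
import numpy as np

def dp_sgd(D, steps=200, eta=0.5, sigma=0.05, seed=0):
    """theta_{t+1} = theta_t - eta * grad(theta_t; D) + xi_t,  xi_t ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = theta - D.mean(axis=0)   # gradient of (1/|D|) sum_d 0.5 * ||theta - d||^2
        theta = theta - eta * grad + rng.normal(0.0, sigma, size=theta.shape)
    return theta

rng = np.random.default_rng(1)
D = 0.3 + 0.1 * rng.standard_normal((1000, 5))   # synthetic data with mean near 0.3
theta = dp_sgd(D)
print(np.abs(theta - D.mean(axis=0)).max())      # close to the empirical mean, up to noise
```

Without the noise term, this recursion converges exactly to the empirical mean; the injected $\xi_t$ leaves a residual error of order $\sigma$, which is precisely the privacy cost the lower bounds in this section quantify.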

Perhaps the simplest problem captured by DP-SCO is private mean estimation with identity covariance. The following lemma gives a lower bound for private mean estimation. It follows from Theorem 5.5 of [BST14] and the standard translation of ERM lower bounds to SCO lower bounds (see Appendix C of [BFTT19]):

Lemma 1.2. For $\ell(\theta; d) = (1/2)\|\theta - d\|_2^2$, $\mathcal{C} = \mathbb{R}^p$, and $\mathcal{D} = B_p(0,1)$ (the $p$-dimensional $\ell_2$-ball of radius 1 centered at the origin), let $\theta^*(\tau) := \arg\min_{\theta \in \mathcal{C}} \mathcal{L}(\theta)$ for a distribution $\tau$ over $\mathcal{D}$. For $p \leq \varepsilon^2 n^2$ and $\delta = o(1/n)$, there exists a set of distributions, $\mathcal{T}_1$, over $\mathcal{D}$, such that the following is true. For every $(\varepsilon, \delta)$-DP algorithm $\mathcal{A}: \mathcal{D}^n \to \mathcal{C}$, there exists $\tau(\mathcal{A}) \in \mathcal{T}_1$ such that:

$$
\mathbb{E}_{D \sim \tau(\mathcal{A})^{n}, \theta \sim \mathcal{A}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^{*}(\tau(\mathcal{A}))) + \Omega\left(\frac{p}{\varepsilon^{2} n^{2}} + \frac{1}{n}\right).
$$

Furthermore, for some $M = \Omega\left(\frac{\sqrt{p}}{\varepsilon n}\right)$ and all such $\tau \in \mathcal{T}_1$, $\left| \left\|\theta^{*}(\tau)\right\|_{2} - M \right| \leq 1/n$.

Non-privately, this translates to:

Lemma 1.3. For $\ell(\theta; d) = \frac{1}{2}\|\theta - d\|_2^2$, $\mathcal{C} = \mathbb{R}^p$, and $\mathcal{D} = B_p(0,1)$, there exists a set of distributions, $\mathcal{T}_2$, over $\mathcal{D}$ such that the following is true. For every $\mathcal{A}: \mathcal{D}^n \to \mathcal{C}$ there exists $\tau(\mathcal{A}) \in \mathcal{T}_2$ such that:

$$
\mathbb{E}_{D \sim \tau(\mathcal{A})^{n}, \theta \sim \mathcal{A}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^{*}(\tau(\mathcal{A}))) + \Omega\left(\frac{1}{n}\right).
$$

These lemmas are the basis of the results in Section 2. Results in [BST14] and standard translations from empirical loss bounds to population loss bounds via uniform stability (see e.g. [HRS16]) show that DP-SGD achieves upper bounds for mean estimation that match these lower bounds up to polylogarithmic factors.
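For intuition, the matching upper bound for mean estimation can even be achieved in one shot with the Gaussian mechanism: release the empirical mean plus noise scaled to its $\ell_2$-sensitivity $2/n$. A sketch (our own illustration), using the standard calibration $\sigma = \frac{2}{n}\cdot\frac{\sqrt{2\ln(1.25/\delta)}}{\varepsilon}$, valid for $\varepsilon \le 1$:

```python
import numpy as np

def private_mean(D, eps, delta, rng):
    """Gaussian mechanism for the mean of n vectors in the unit l2-ball.
    Replacing one vector moves the mean by at most 2/n in l2-norm."""
    n, p = D.shape
    sigma = (2.0 / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return D.mean(axis=0) + rng.normal(0.0, sigma, size=p)

rng = np.random.default_rng(0)
n, p, eps, delta = 500, 20, 1.0, 1e-5
D = 0.4 + 0.05 * rng.standard_normal((n, p))
D /= np.maximum(1.0, np.linalg.norm(D, axis=1, keepdims=True))  # project into unit ball
est = private_mean(D, eps, delta, rng)
print(np.linalg.norm(est - D.mean(axis=0)))   # privacy error, of order sqrt(p)/(eps * n)
```

The expected squared norm of the added noise is $p\sigma^2 = \tilde{O}(p/\varepsilon^2 n^2)$, which together with the $O(1/n)$ sampling error of the empirical mean matches the lower bound of Lemma 1.2 up to logarithmic factors.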

# 2 Necessity of public pretraining

A typical scenario in pretraining on public data is when the public dataset is large but Out-Of-Distribution (OOD); there is a potentially large distribution shift between the public and the private dataset [YNB+22, HLY+22, BWZK, GvdMZG22, GAW+22, LWAFF21, KCS+22, BWZK22, DBH+22]. In this section, we start with a simpler scenario where a small number of In-Distribution (ID) samples are used in public pretraining. This simplifies the explanation of our construction and also corresponds to realistic scenarios where public data comes from users who consented. The more common OOD case is addressed in Section 2.4.

# 2.1 Pretraining on in-distribution public data

When a small number of in-distribution samples are publicly available, several techniques have been proposed to improve the accuracy-privacy trade-off. An immediate use is to reduce the sensitivity of a minibatch gradient by including the public data in the minibatch. The public data can also be used to compute useful statistics; one can reduce the privacy noise by projecting the gradient onto a low-dimensional subspace computed from public data [KRRT20, YZCL21, ZWB21, GAW+22] and by improving the adaptive clipping method with the geometry of the gradients estimated from public data [GAW+22, ADF+21, NMT+22]. However, by far the most dominant technique in terms of accuracy gain is pretraining on the in-distribution public data. For example, on the CIFAR-10 dataset, one can train an $(\varepsilon = 2, \delta = 10^{-5})$-DP model that achieves $64.9\%$ test accuracy. Treating $4\%$ of the training dataset as public data, the accuracy can be improved by $7.1\%$ [NMT+22, Table 1]. All the other techniques give only $2.8\%$ extra gain, which includes using public data in fine-tuning, public-data-assisted adaptive clipping, and averaging past iterates. Such pretraining with in-distribution public data has also been successful in training variational autoencoders [JZK+22]. We provide a systematic study of these gains with numerical experiments on benchmark datasets in Section 3.

Motivated by these practical successes, we first consider the following setup. We are given $n_{pub}$ public examples, $D_{pub}$, and $n_{priv}$ private examples, $D_{priv}$, both drawn i.i.d. from the same distribution $\tau$, where $n_{pub} \ll n_{priv}$. We construct $\tau$ such that pretraining on the small ID public data can significantly improve the performance of private training. Concretely, we will show that for any integer $p$, there exist a loss function $\ell$, sample sizes $n_{pub}$ and $n_{priv}$, and a data distribution $\tau$ such that (i) any non-private algorithm $\mathcal{A}_{pub}$ given only $D_{pub}$ has worst-case excess population loss lower bounded by $\Omega(1)$; (ii) any $(\varepsilon, \delta)$-DP algorithm $\mathcal{A}_{priv}$ given only $D_{priv}$ has worst-case excess population loss lower bounded by $\Omega(1)$; and (iii) a gradient-based algorithm $\mathcal{A}_{mixed}$ that pretrains on $D_{pub}$ and privately fine-tunes on $D_{priv}$ achieves excess population loss upper bounded by $O(1/p)$. In particular, the dimensionality of $\ell$, $n_{pub}$, and $n_{priv}$ are polynomial functions of $p$. We focus on the unconstrained case where $\mathcal{C} = \mathbb{R}^p$, as it aligns with how differentially private learning models are trained in practice.

# 2.2 Construction

We first give a high-level overview of the construction for our main theorem and defer details to Appendix C.1. While our construction builds on upper/lower bounds for public/private mean estimation, one can build a similar construction using upper/lower bounds for linear regression instead; this follows via standard reductions from mean estimation to linear regression. We focus here on mean estimation for simplicity of presentation. A reference for notation is in Appendix A.

Our strategy is to concatenate the two known lower bounds for mean estimation, with private data in Lemma 1.2 and with public data in Lemma 1.3. We consider a distribution $\tau$ over a data point $d = (d_1, d_2) \in \mathbb{R}^{p^4} \times \mathbb{R}^p$ whose population mean is $\theta^*(\tau) = (\theta_1^*(\tau), \theta_2^*(\tau)) \in \mathbb{R}^{p^4} \times \mathbb{R}^p$. The first $p^4$ coordinates are used to construct a hard distribution for private mean estimation with a loss function $\ell_1: \mathbb{R}^{p^4} \times \mathbb{R}^{p^4} \to \mathbb{R}$, and the remaining $p$ coordinates are used to construct a hard distribution for public mean estimation with a loss function $\ell_2: \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$. We assume we have $n_{pub}$ public samples and $n_{priv}$ private samples from the same distribution, with $n_{pub} \ll n_{priv}$.

We will define an appropriately chosen basin $S \subset \mathbb{R}^{p^4}$ and eventually combine our loss functions in Eq. (1) such that if $\theta_1$ is far from $S$, then $\ell((\theta_1, \theta_2)) = \ell_1(\theta_1)$, but inside of $S$, $\ell((\theta_1, \theta_2)) = \ell_1(\theta_1) + p \cdot \ell_2(\theta_2)$. In particular, we will choose $\ell_2$ to be non-positive everywhere, so that it is desirable to be in $S$ with respect to minimizing $\ell$.

Starting outside of $S$, the algorithm first needs to minimize $\ell_1$ to reach $S$. We use $\ell_1$ from the private lower bound (Lemma 1.2), such that a private algorithm fails just on optimizing $\ell_1$. On the other hand, an algorithm with a small amount of public data can easily optimize $\ell_1$. We will eventually choose $S$ to contain all points close to the optimum of $\ell_1$, so any public algorithm will reach $S$ after optimizing $\ell_1$, and will not touch $\theta_2$ in doing so. Once inside the basin $S$, the algorithm needs to also minimize $\ell_2$ to reach a small total loss. We use $\ell_2$ from the public lower bound (Lemma 1.3), such that a small public dataset alone is not sufficient to (approximately) reach the global minimum, but a large private dataset is. Precisely, we combine the two loss functions and define

$$
\ell\left(\left(\theta_1, \theta_2\right); \left(d_1, d_2\right)\right) = \ell_1\left(\theta_1; d_1\right) + p\, q\left(\theta_1\right) \cdot \ell_2\left(\theta_2; d_2\right), \tag{1}
$$

where

$$
q(\theta_1) := \left\{\begin{array}{ll} 0, & \|\theta_1 - \Pi_S(\theta_1)\|_2 > R_2 \\ 1 - \frac{\|\theta_1 - \Pi_S(\theta_1)\|_2}{R_2}, & 0 < \|\theta_1 - \Pi_S(\theta_1)\|_2 \leq R_2 \\ 1, & \|\theta_1 - \Pi_S(\theta_1)\|_2 = 0 \ (\text{i.e., } \theta_1 \in S) \end{array}\right.
$$

for some $S \subset \mathbb{R}^{p^4}$ and $R_2 > 0$ to be defined later. Here $\Pi_S$ denotes Euclidean projection onto $S$. If $\theta_1$ is far from $S$, $\ell$ is just $\ell_1(\theta_1)$. If $\theta_1$ is in $S$, then $\ell$ is just $\ell_1(\theta_1) + p \cdot \ell_2(\theta_2)$. In between these two regimes, $\ell$ interpolates between these two loss functions; this interpolation is technically not necessary for our eventual theorem and proof, but gives a more realistic loss function. Note that $\ell_2$ is non-positive, so a larger $q(\theta_1)$ (i.e., being in or close to $S$) is advantageous with respect to minimizing the term depending on $\theta_2$.

Figure 2 shows an example of our eventual construction. $S$ consists of two basins, centered at $-0.5$ and $0.5$. If $\theta_1$ is near one of these points, then $\ell$ is a quadratic centered at $0.005$ with respect to $\theta_2$. If $\theta_1$ is far from these points, $\ell$ is a constant with respect to $\theta_2$. So, if we start at the origin, using gradient-based methods we would first have to optimize $\theta_1$ to get to one of the basins, and then optimize $\theta_2$. With private data, choosing the right basin is hard; with public data, optimizing $\theta_2$ within a basin is hard.

Figure 2: (a) A 3-D visualization of the toy example of our construction for $\ell$, for one-dimensional $\theta_1$ and $\theta_2$. (b) A heatmap of the same example.

The loss functions: In the initial stage of the algorithm (outside of $S$), the lower bound for private algorithms follows from the choice of $\ell_1(\theta_1; d_1) := \min\{(1/2)\|\theta_1 - d_1\|_2^2, \frac{9}{2}\}$, defined over the first $p^4$ coordinates. Note that as long as $\|\theta_1\|_2 \leq 2$ and $d_1$ is in $\mathcal{D}_1$, this is equivalent to the loss function $(1/2)\|\theta_1 - d_1\|_2^2$, i.e., we can still apply Lemma 1.2 to $\ell_1$. The minimum is used in our upper bound to keep $\ell_1$ bounded in the low-probability event that DP-SGD adds a large amount of noise to $\theta_1$.

For any $\mathcal{A}_{priv}$, we define our basin to include the global minimum of $\ell_1$ on the distribution $\tau(\mathcal{A}_{priv})$ in Lemma 1.2. Since we know $\left| \left\|\theta^*(\tau(\mathcal{A}_{priv}))\right\|_2 - M \right| = o_n(1)$, we let

$$
S := B_{p^4}(0, M + R_1) \backslash B_{p^4}(0, M - R_1), \tag{2}
$$

where $M = \Omega(1)$ is defined as in Lemma 1.2 for the case where the dimension is $p^4$, $\varepsilon = 1$, and $n_{priv} = p^2$, and we choose some $R_1 < M$. Note that $S$ is the set of all points where the $\ell_2$-norm of $\theta_1$ is close to $M$; the basin is a single non-convex set. Our construction seamlessly generalizes to the case of numerous disconnected basins, which resembles more realistic landscapes. If $R_1$ is sufficiently large, then Lemma 1.2 guarantees that the population minimizer of $\ell_1$ is contained in $S$, and far from the boundary of $S$, for distributions in $\mathcal{T}_1$ as defined in that lemma. Further, by a vector Azuma inequality [Hay03], the same is true of the empirical minimizer of $\ell_1$ over the public data with high probability. We specify a value of $R_1$ in Appendix C.1.

In the next stage of the algorithm (inside $S$), the loss is dominated by $\ell_2(\theta_2; d_2) := \min\{0, \frac{\|\theta_2 - d_2\|_2^2}{2r^2} - \frac{9}{2}\}$, where we use $r$ to scale the domain of $\ell_2$. In particular, let $\mathcal{T}_2'$ be the set of $p$-dimensional data distributions over $\mathcal{D}_2' := B_p(0, r)$ obtained by shrinking the support of each distribution in $\mathcal{T}_2$ (as defined in Lemma 1.3) by a factor of $r < 1$. We will specify the value of $r$ in Appendix C.1; for now, one can think of $r \ll 1$. Since rescaling does not fundamentally change the problem, Lemma 1.3 (up to a $1/r^2$ rescaling) again holds in $\mathcal{T}_2'$.

Note that as long as $\|\theta_2\|_2 \leq 2r$ and $d_2 \in \mathcal{D}_2'$, minimizing $\ell_2$ is equivalent to minimizing $\frac{\|\theta_2 - d_2\|_2^2}{2r^2}$, which is just a rescaling of minimizing $\frac{\|\theta_2 - d_2\|_2^2}{2}$. In other words, we can still apply Lemma 1.3 to $\ell_2$. Putting $\ell_1$ and $\ell_2$ together, our loss is defined in Eq. (1) with a choice of $R_2 < M - R_1$, which implies $q(0) = 0$; the exact value of $R_2$ is immaterial to our construction and eventual theorem statement.

Figure 3: A projection of our two-dimensional toy example loss onto $\theta_2 = 0.005$ and $\theta_2 = -0.005$.

Toy example of the loss function: In Figure 2 we provide a visualization of our loss $\ell(\cdot; d)$ for a single data point $d = (0.5, 0.005)$, as defined in Eq. (1) for $p = 1$. Here, to simplify the visualization, we have chosen $r = 0.01$, $M = 0.5$, $R_1 = 0.1$, $R_2 = 0.2$, which may not correspond to the actual values we choose in our construction. This gives $S = [-0.6, -0.4] \cup [0.4, 0.6]$, and $q(\theta_1) = 0$ if $\theta_1 \in (-\infty, -0.8] \cup [-0.2, 0.2] \cup [0.8, \infty)$. Since $0.5 \in S$ and thus $q(0.5) = 1$, the minimizer is $(0.5, 0.005)$. We can observe the following.
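The toy loss is easy to evaluate directly. The sketch below (our own code, hard-coding the illustrative parameters above with $p = 1$) reproduces the quantities just discussed: $q$ vanishes near the origin, and the minimizer $(0.5, 0.005)$ attains loss $-4.5$:

```python
# Illustrative toy parameters from the text (p = 1, single data point d = (0.5, 0.005)).
M, R1, R2, r = 0.5, 0.1, 0.2, 0.01

def dist_S(t1):
    """Distance from t1 to S = [-0.6, -0.4] U [0.4, 0.6]."""
    return max(0.0, abs(abs(t1) - M) - R1)

def q(t1):
    ds = dist_S(t1)
    return 0.0 if ds > R2 else 1.0 - ds / R2

def loss(t1, t2, d1=0.5, d2=0.005, p=1):
    ell1 = min(0.5 * (t1 - d1) ** 2, 4.5)                   # l1, capped at 9/2
    ell2 = min(0.0, (t2 - d2) ** 2 / (2.0 * r * r) - 4.5)   # l2, non-positive
    return ell1 + p * q(t1) * ell2                          # Eq. (1)

print(q(0.0), q(0.5))    # 0.0 1.0
print(loss(0.0, 0.0))    # 0.125: at the origin only the quadratic in theta_1 is felt
print(loss(0.5, 0.005))  # -4.5: the global minimum, inside the basin
```

At the origin the $\theta_2$-term contributes nothing at all, which is exactly why a first optimization phase over $\theta_1$ is unavoidable here.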

The first stage of the optimization (which corresponds to pretraining) tries to find the right part of the basin $S$ with small $\ell_1(\theta_1)$. For a fixed $\theta_2$, $\ell$ is a quadratic with respect to $\theta_1$, except for the "wells" centered at $\theta_1 = 0.5$ and $-0.5$ (Figure 3). In our construction, the population minimizer of $\theta_1$ always lies in one of the basins in $S = [-0.6, -0.4] \cup [0.4, 0.6]$. Note that the two basins are disconnected only because of the choice of $p = 1$.

The second stage of the optimization (which corresponds to fine-tuning) tries to minimize the second loss $\ell_2(\theta_2)$. For a fixed $\theta_1$, $\ell$ is a quadratic with respect to $\theta_2$. The strong convexity of this quadratic increases with $q$; when $q(\theta_1) = 0$ (e.g., at $\theta_1 = 0$), $\ell$ is a constant with respect to $\theta_2$.
In particular, we can see from Figure 2 that at the origin, a (non-noisy) gradient step will only optimize over $\theta_{1}$ , but once $\theta_{1}$ is inside $S$ , gradient steps will optimize both $\theta_{1}$ and $\theta_{2}$ . Furthermore, if we start at $\theta_{1}$ in $S$ and run (DP) gradient descent, the scale of $\theta_{2}$ , which is controlled by the choice of $r$ in the definition of $\ell_{2}$ , is much smaller than the scale of $\theta_{1}$ . So it should be possible to optimize $\theta_{2}$ using gradient descent once $\theta_{1}$ is inside $S$ , without causing $\theta_{1}$ to move very far. This roughly corresponds to fine-tuning staying within a basin in our hypothesis.
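To make the two-stage behavior concrete, here is a minimal numerical sketch of the toy loss, assuming a simplified stand-in for Eq. (1): the exact well shape and the smooth activation $q$ from the construction are not reproduced, and `q` below is a hard indicator of $S$.

```python
import numpy as np

# Hypothetical stand-in for the two-dimensional toy loss (p = 1), modeling only
# the two qualitative ingredients: a quadratic first-stage loss l1 over theta1
# with basins S = [-0.6, -0.4] u [0.4, 0.6], and a second-stage quadratic l2
# over theta2 that is "activated" by q(theta1).

r = 0.01  # scale of the second coordinate, as in the visualization

def q(theta1):
    """Simplified activation: 1 inside the basins S, 0 elsewhere (hard version)."""
    return 1.0 if 0.4 <= abs(theta1) <= 0.6 else 0.0

def loss(theta, d):
    theta1, theta2 = theta
    d1, d2 = d
    l1 = 0.5 * (theta1 - d1) ** 2           # pretraining component
    l2 = (theta2 - d2) ** 2 / (2 * r ** 2)  # fine-tuning component (rescaled)
    return l1 + q(theta1) * l2              # p = 1 multiplier on l2

d = (0.5, 0.005)
# At the origin q(0) = 0, so the finite-difference gradient w.r.t. theta2 vanishes:
eps = 1e-6
g2 = (loss((0.0, eps), d) - loss((0.0, -eps), d)) / (2 * eps)
print(g2)  # 0.0 -- only theta1 moves until it enters S
```

Running this confirms the qualitative picture: the gradient in $\theta_2$ at the origin is exactly zero, so early (pretraining) steps move only $\theta_1$.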
# 2.3 Analysis
With the above construction, we formally guarantee that for certain sizes of public and private datasets, both datasets are necessary to optimize the loss to a desired level. We defer the proof to Appendix C.1.
Theorem 2.1. For every integer $p \geq 1$ , for some $r > 0$ , $\mathcal{C} = \mathbb{R}^{p^4} \times \mathbb{R}^p$ , $\mathcal{D} = B_{p^4}(0,1) \times B_p(0,r)$ there exists $\ell$ and a set of distributions $\mathcal{T}$ over $\mathcal{D}$ such that:
(1) For $\delta = o(1 / p^2)$ , any $(1, \delta)$ -DP algorithm $\mathcal{A}_{priv} : \mathcal{D}^{p^2} \to \mathcal{C}$ , and any $\mathcal{A}_{pub} : \mathcal{D}^p \to \mathcal{C}$ there exists $\tau \in \mathcal{T}$ such that:
$$
\mathbb{E}_{D \sim \tau^{p^2},\, \theta \sim \mathcal{A}_{priv}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \Omega(1),
$$

$$
\mathbb{E}_{D \sim \tau^{p},\, \theta \sim \mathcal{A}_{pub}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \Omega(1)
$$
(2) For any $\delta \geq 2^{-p}$ , there exists an algorithm $\mathcal{A}_{\text{mixed}}: \mathcal{D}^{p+p^2} \to \mathcal{C}$ which runs gradient descent on the first $p$ examples, followed by $(1, \delta)$ -DP-SGD on the last $p^2$ examples, such that for any $\tau \in \mathcal{T}$ :
$$
\mathbb{E}_{D \sim \tau^{p+p^2},\, \theta \sim \mathcal{A}_{mixed}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \widetilde{O}(1/p)
$$
This demonstrates that there exist data distributions where a small amount of public in-distribution data is necessary to achieve small loss, and pretraining on that public data is sufficient for DP-SGD to achieve the desired level of loss. The first part of the theorem shows that there are data distributions where neither a small public dataset ( $n_{pub} = p$ ) nor a large private dataset ( $n_{priv} = p^2$ ) alone can reach the desired loss. However, on the same data distribution, pretraining on the small public dataset, followed by fine-tuning on the large private dataset, achieves the desired level $O(1 / p)$ of excess loss.
Proof Sketch of Theorem 2.1. The high-level idea behind the construction is: Using private data alone cannot achieve risk $o(1)$ on $\ell_1$ , because $\ell_1$ has a high dimension, but using public data can achieve risk $O(1 / p)$ because the public mean estimation risk guarantees are dimension-independent. Similarly, using public data alone cannot achieve risk $o(1)$ on $\ell_2$ , because $\ell_2$ has a multiplier of $p$ and the amount of public data we are allowed to use is small. However, using private data can achieve risk $O(1 / p)$ on $\ell_2$ because $\ell_2$ has low dimension, and there is more private data to use.
To prove (1) using these observations, we show that the risk guarantee of $\mathcal{A}$ on $\ell$ is at least its risk on $\ell_{1}$ or $\ell_{2}$ alone. If $\mathcal{A}_{pub}$ only uses public data, this implies a lower bound on $\mathcal{A}_{pub}$ 's risk on $\ell$ from Lemma 1.3, which holds for some distribution $\tau_{2} \in \mathcal{T}_{2}^{\prime}$ . Similarly, if $\mathcal{A}_{priv}$ only uses private data, this implies a lower bound on $\mathcal{A}_{priv}$ 's risk on $\ell$ from Lemma 1.2, for some distribution $\tau_{1} \in \mathcal{T}_{1}$ . Then, the product distribution $\tau = \tau_{1} \times \tau_{2}$ gives a simultaneous lower bound on the risk of $\mathcal{A}_{pub}$ and $\mathcal{A}_{priv}$ , as desired.
To prove (2), we observe that a single step of (full-batch) gradient descent on the public data takes $\theta_{1}$ to the empirical minimizer of $\ell_{1}$ , which achieves risk $O(1 / p)$ for $\ell_{1}$ . If we use an initialization such that $q(\theta_1) = 0$ , a single step of gradient descent has no effect on $\theta_{2}$ , since the gradient of $\ell$ with respect to $\theta_{2}$ at the initialization is zero. Furthermore, if $R_{1}$ is sufficiently large, then with high probability after this single step $\theta_{1} \in S$ and is far from the boundary of $S$ , i.e. $q(\theta_{1}) = 1$ and we have $\ell = \ell_{1} + p \cdot \ell_{2}$ . Then, running DP-SGD with optimal parameters from this point will take $\theta_{2}$ to a point achieving risk $O(1 / p)$ on $p \cdot \ell_{2}$ . However, DP-SGD will also move $\theta_{1}$ , which could worsen our risk on $\ell_{1}$ substantially. We show that if $r$ is sufficiently small, then for DP-SGD with optimal parameters, the amount by which $\theta_{1}$ moves is $O(1 / p)$ , and in turn $\theta_{1}$ remains in $S$ and the risk guarantee on $\ell_{1}$ does not worsen by more than $O(1 / p)$ . Then, our overall risk guarantee is at most the sum of the risk guarantee on $\ell_{1}$ and $p \cdot \ell_{2}$ individually, which is $O(1 / p)$ .
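The two-phase procedure $\mathcal{A}_{mixed}$ sketched above can be written out as follows. This is an illustrative implementation for a generic per-example gradient oracle, not the paper's exact algorithm: the optimal step sizes, clipping norms, and the calibration of the noise to a $(1, \delta)$-DP guarantee are omitted.

```python
import numpy as np

def dp_sgd_step(theta, batch, grad, clip_norm, noise_mult, lr, rng):
    """One DP-SGD step: clip per-example gradients, average, add Gaussian noise."""
    clipped = []
    for d in batch:
        g = grad(theta, d)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g)
    mean_g = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(batch), size=theta.shape)
    return theta - lr * (mean_g + noise)

def a_mixed(theta0, public, private, grad, lr_pub, lr_priv,
            clip_norm, noise_mult, steps, rng):
    # Phase 1: a (noiseless) full-batch gradient step on the public examples,
    # which in the construction lands theta1 inside a basin of S.
    theta = theta0 - lr_pub * np.mean([grad(theta0, d) for d in public], axis=0)
    # Phase 2: DP-SGD on the private examples, which optimizes theta2 at the
    # small scale r without moving theta1 far.
    for _ in range(steps):
        theta = dp_sgd_step(theta, private, grad, clip_norm, noise_mult,
                            lr_priv, rng)
    return theta
```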
# 2.4 Pretraining on out-of-distribution public data
A more common setting in practice is when out-of-distribution large-scale public data is used in pretraining, as we surveyed in the introduction and at the beginning of Section 2. We modify our previous construction in Theorem 2.1 so that (i) there is a distribution mismatch between the public and private examples and (ii) an arbitrarily large amount, $n_{pub}$ , of public data is available.
Theorem 2.2. For every integer $p \geq 1$ and $n_{pub} \geq p$ , for some $r > 0$ , $\mathcal{C} = \mathbb{R}^{p^4} \times \mathbb{R}^p$ , $\mathcal{D} = B_{p^4}(0,1) \times B_p(0,r)$ there exists $\ell$ and a set $\mathcal{T}$ of pairs of distributions $(\tau_{pub},\tau_{priv})$ over $\mathcal{D}$ such that:
(1) For $\delta = o(1 / p^2)$ , any $(1, \delta)$ -DP algorithm $\mathcal{A}_{priv} : \mathcal{D}^{p^2} \to \mathcal{C}$ , and any $\mathcal{A}_{pub} : \mathcal{D}^p \to \mathcal{C}$ there exists $(\tau_{pub}, \tau_{priv}) \in \mathcal{T}$ such that:
$$
\mathbb{E}_{D \sim \tau_{priv}^{p^2},\, \theta \sim \mathcal{A}_{priv}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \Omega(1),
$$

$$
\mathbb{E}_{D \sim \tau_{pub}^{n_{pub}},\, \theta \sim \mathcal{A}_{pub}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \Omega(1)
$$
(2) For any $\delta \geq 2^{-p}$ , there exists an algorithm $\mathcal{A}_{\text{mixed}}: \mathcal{D}^{n_{\text{pub}} + p^2} \to \mathcal{C}$ which runs gradient descent on the first $n_{\text{pub}}$ examples, followed by $(1, \delta)$ -DP-SGD on the last $p^2$ examples, such that for any $\tau \in \mathcal{T}$ :
$$
\mathbb{E}_{D \sim \tau_{pub}^{n_{pub}} \times \tau_{priv}^{p^2},\, \theta \sim \mathcal{A}_{mixed}(D)}[\mathcal{L}(\theta)] = \mathcal{L}(\theta^*(\tau)) + \widetilde{O}(1/p)
$$
Here, $\mathcal{L}$ refers to the population loss over $\tau_{priv}$ .
This demonstrates that there exist data distributions where out-of-distribution public data is necessary to achieve small test loss on the target private task, and pretraining on the OOD public data is sufficient for DP-SGD to achieve the desired test loss. Note that all three cases are evaluated on the same private population loss, as is the case in real-world scenarios where we care about the performance on the private task.
We prove Theorem 2.2 in Appendix C.1; here we sketch the modifications needed to the proof of Theorem 2.1. In particular, the value of $d_{2}$ in the public examples is irrelevant to the upper bound in Theorem 2.1. For example, we could have $d_{2} = 0$ in all public examples, and the upper bound is unaffected. With the extra freedom this distribution mismatch gives us in the construction, showing the lower bound on the private-task risk for algorithms using only public data is easy: the value of $d_{2}$ in the public examples encodes no information about the distribution of $d_{2}$ in the private examples, so no algorithm with access only to public data can achieve good risk on $\ell_{2}$ alone, regardless of how much public data it has access to.
Data abundance: If we had $p^2$ ID public examples or $p^5$ private examples in Theorem 2.1, we could achieve risk $O(1 / p)$ in the above construction using only public data or only private data. Of course, if we also have the distribution mismatch in the preceding paragraph, no amount of public data achieves low risk on the private population. In light of this, Theorem 2.1 should not be interpreted as saying that both public and private data are strictly necessary to optimize some loss functions. Instead, a better interpretation might be that a small amount of public data greatly reduces the amount of private data needed to solve an optimization problem. This can be seen as theoretical backing for an empirical observation made in [TB20, DBH⁺22, LTLH22, KST20].
Convex losses: Our construction is inherently non-convex, due to the term $q(\theta_1)$ we use to "activate" $\ell_2$ only after optimizing over the public data. Surprisingly, in Appendix D we show that Theorem 2.1 can be proven even for (non-isotropic) quadratic losses, at the cost of operating in a constrained setting (i.e. $\mathcal{C}$ is finite). The constrained requirement is necessary since unlike in the construction in this section, we cannot guarantee that gradient descent on the public data does not affect $\theta_2$ . However, in the constrained setting we have the guarantee that $\theta_2$ cannot leave the constraint set, so it is okay to take (arbitrarily large) public gradient steps that affect $\theta_2$ .

Figure 4: On CIFAR10, pretraining on the public data significantly improves accuracy compared to posttraining on the public data for both ID public data (left) and OOD public data (right).

# 3 Experiments
In this section, we conduct experiments to verify our hypothesis about the two-stage optimization phenomenon.
# 3.1 CIFAR10 Experiments
Setup: For the ID public data experiment in Figure 4 (left), we train a ConvNet model on CIFAR10 using DP-SGD. We train for 60 epochs with a clipping norm of one, a learning rate of 0.001, a batch size of 256, and the Adam optimizer. To simulate an ID public data setting, we split CIFAR10 (60,000 images) into a public dataset of size 2,000 and a private dataset of size 58,000. We use the Adam optimizer with a learning rate of 0.002 for the public dataset. For the large OOD public dataset in Figure 4 (right), we use 20,000 images from the training split of CINIC10 as the public data.
Results: In Figure 4, we allow a limited number of epochs $T_{pub}$ on the public data. We show test accuracy as a function of $t$ (the x-axis), the number of epochs used in public pretraining; the remaining $T_{pub} - t$ epochs are used in public post-training after the private training. For ID public data in the left panel, we choose $T_{pub} = 200$ . Using this entire budget for pretraining gives the highest accuracy. This demonstrates that the initial rounds of training are the most sensitive to noise, as is the case in both our hypothesis from Section 1 and our theoretical construction in Section 2. Note that the benefit of longer pretraining is small after $t = 100$ : it is possible that after around 100 epochs, pretraining converges to a good basin, and the benefits of public pretraining plateau afterwards. We see the same trend with OOD public data using the CINIC10 dataset with $T_{pub} = 30$ , shown in Figure 4 (right). Again, we observe that reducing privacy noise in the earlier rounds of training is more beneficial.
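The x-axis of Figure 4 can be read as a split of the public epoch budget; a minimal sketch of the resulting schedule, with hypothetical phase names (not code from the experiments):

```python
# Split a public epoch budget T_pub into t pretraining epochs and T_pub - t
# post-training epochs around a private DP-SGD phase of T_priv epochs.
def training_schedule(t, T_pub, T_priv):
    """Return the ordered (phase, epochs) pairs for a given pretraining split t."""
    assert 0 <= t <= T_pub
    return [("public_pretrain", t),
            ("private_dpsgd", T_priv),
            ("public_posttrain", T_pub - t)]
```

For the ID experiment, `training_schedule(t, 200, 60)` sweeps the x-axis of Figure 4 (left), with $t = T_{pub}$ corresponding to pure pretraining (the private epoch count 60 is taken from the Setup above).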
To further demonstrate the importance of the earlier iterations of training, we designed an experiment where, instead of using the same privacy budget for all iterations of private training, we train the first epoch with a lower noise multiplier (using more privacy budget) and compare this to a setting where we train the last epoch with the lower noise multiplier. Table 2 compares the results for various choices of the end-to-end $\varepsilon$ . Again, we observe that reducing privacy noise in the earlier rounds of training is more beneficial.
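The Table 2 setup corresponds to a per-epoch noise-multiplier schedule like the following sketch. The function and its defaults are illustrative; calibrating the per-epoch noise levels to a fixed end-to-end $(\varepsilon, \delta)$ requires a privacy accountant, which is not shown.

```python
# Hypothetical schedule: one epoch (first or last) gets the lower noise level
# (sigma^2 = 0.6 in Table 2), i.e. a larger share of the privacy budget, while
# the remaining epochs use a fixed higher noise multiplier.
def noise_schedule(num_epochs, sigma_low=0.6, sigma_high=1.0, boost_first=True):
    """Per-epoch noise multipliers; boost_first picks which epoch is boosted."""
    sigmas = [sigma_high] * num_epochs
    sigmas[0 if boost_first else -1] = sigma_low
    return sigmas
```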
<table><tr><td>ε</td><td>first epoch</td><td>last epoch</td></tr><tr><td>1</td><td>46.7%±0.3</td><td>46.3%±0.3</td></tr><tr><td>3</td><td>49.6%±0.6</td><td>48.0%±0.5</td></tr><tr><td>8</td><td>54.0%±0.8</td><td>52.0%±0.9</td></tr></table>

Table 2: Effect of having a higher budget ( $\sigma^2 = 0.6$ ) on the first epoch compared to the last epoch on CIFAR10.

# 3.2 Manifold on Large Speech Model

Setup: To better understand the geometry of the loss function for training machine learning models, we evaluate training a ConformerM $\left[\mathrm{GQC}^{+}20\right]$ model on the Librispeech [PCPK15] dataset with and without public data pretraining, using DP-Adam. Specifically, we train the following three models:
- Oracle model: We train a ConformerM model on the complete Librispeech dataset for 100k steps. This is considered the global minimum of the manifold.
- Private model: We train a ConformerM model on $90\%$ of the samples, drawn uniformly from the Librispeech dataset, using DP-Adam for 20k steps.
- Private model with public pretraining: We pretrain a ConformerM model on the remaining $10\%$ of the samples with Adam for 10k steps and then fine-tune on the $90\%$ of the samples with privacy for 1k steps.
Note that the hyper-parameters for the latter two settings are tuned to optimize the test word error rate under the same privacy budget $\varepsilon = 9.8$ . We fix the privacy parameter $\delta$ to $10^{-6}$ , ensuring that $\delta < n^{-1}$ where $n$ is the number of private samples.
Results: As shown in Figure 5, we interpolate the three models above to draw a projected slice of the manifold. From both the heatmap and the contour figures, we can tell that the private model with public pretraining falls into the same "basin" as the oracle model, which we refer to as the global-minimum basin. The private model without pretraining falls into a different basin, separated from the global-minimum basin by a "hill". This is evidence for our hypothesis that public pretraining is useful specifically because it picks a good basin. The $\ell_2$ -distance between the oracle model and the private model with public pretraining is 671.22, much smaller than the distance between the oracle model and the private model, which is 1738.27. This parallels the construction in Section 2, in which private fine-tuning takes place on a smaller scale than pretraining on public data.
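A standard way to draw such a projected slice is to evaluate the loss on the affine plane spanned by the three parameter vectors. The sketch below assumes the models are flattened into vectors and that a `loss_fn` evaluating the test loss is available; it is an illustration of the technique, not the experiment code.

```python
import numpy as np

# Plane through three models, matching the Figure 5 coordinates:
# (0,0) -> oracle model, (1,0) -> privately fine-tuned model with public
# pretraining, (0,1) -> private model trained from scratch.
def plane_point(theta_oracle, theta_priv, theta_pretrain, x, y):
    """Affine combination of three flattened parameter vectors."""
    return (theta_oracle
            + x * (theta_pretrain - theta_oracle)
            + y * (theta_priv - theta_oracle))

def loss_surface(loss_fn, thetas, xs, ys):
    """Evaluate loss_fn on a grid over the plane; thetas = (oracle, priv, pretrain)."""
    oracle, priv, pretrain = thetas
    return np.array([[loss_fn(plane_point(oracle, priv, pretrain, x, y))
                      for x in xs] for y in ys])
```

The returned grid can then be rendered as the heatmap and contour plots in Figure 5.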

(a)

(b)
Figure 5: Projected manifold of ConformerM on Librispeech, obtained by interpolating 3 models. $(0,0)$ is the oracle model, $(0,1)$ is the private model, and $(1,0)$ is the private model with public pretraining. We can see that $(0,0)$ and $(1,0)$ lie within the same basin, while $(0,1)$ is in a different basin, separated by a hill on the manifold. The manifold is constructed by computing the cross-entropy loss on a 128-sample subset of Librispeech's test-other set.
# 4 Discussion
In this paper, we show that there exist natural learning tasks where public data is necessary and sufficient to achieve a target accuracy under DP model training. This conclusion is independent of whether the public data is in-distribution with the private training data or not. Recently, [TKC22] discussed the perils of indiscriminate use of public data in DP training, two of which are: (i) publicly available data does not necessarily mean that one can use that dataset for training models without privacy considerations, as the trained model can release information from the dataset verbatim, and (ii) many existing empirical works on achieving better accuracy for DP training by using public data do not necessarily reflect realistic scenarios for model training; in particular, in real-world settings the available public data can be far out-of-distribution from the private dataset. The authors provide prescriptive recommendations on being judicious in the choice of public data for DP training. Our work is complementary to [TKC22], and we concur with the concerns in their paper. Given our impossibility result and the concerns in [TKC22], an important research question for future exploration is: given a public dataset which may be far out-of-distribution from the private training data, what is the best DP training procedure that exploits the public dataset to obtain higher accuracy? To the best of our knowledge, the current works on the use of public data in DP training (see Section 1 for references) do not provide an answer.
Another question raised by our work is whether one can use the insight that the geometry changes after public pretraining to improve private fine-tuning in practice. That is, in practice the ideal algorithm for training from scratch on private data may not be the ideal algorithm for fine-tuning a pretrained model. While some works [SSTT21, LLH$^{+}$22] observe that DP-SGD can inherently benefit from the geometry of the loss having certain properties, and others [ADF$^{+}$21, AGM$^{+}$22] develop algorithms that adapt to the local geometry, we are not aware of any work that uses properties of the geometry specific to the fine-tuning phase. One possibility is that if the loss function is locally convex after public pretraining, theoretical techniques whose utility guarantees depend on convexity might offer larger improvements for private fine-tuning in practice than for training from scratch.
In our experiments, we observed that the first phase of model training is more sensitive to noise when we do fully private training from random initialization. As our two-phase hypothesis suggests, this phenomenon only occurs in non-convex optimization. In convex optimization, the opposite strategy of reducing the privacy noise towards the end of training helps more, as theoretically analyzed and empirically demonstrated in [LK18]. For non-convex optimization, [HWZ22] proposes a decreasing noise multiplier under a strong condition on the loss function known as the Polyak-Lojasiewicz condition, which implies that vanilla gradient descent converges fast and does not hold for the typical loss landscapes of deep learning problems. For the typical non-convex optimization experiments in our setting, smaller privacy noise at the beginning of training improves performance (compared to smaller privacy noise at the end of training). Of course, the need to choose a strategy for scheduling the privacy budget adds a hyperparameter to the training process; this hyperparameter is not needed if we do public pretraining instead. It would be useful for practitioners to have guidelines on how to schedule the privacy budget across training rounds so as to improve over a fixed noise multiplier, while minimally increasing the number of additional hyperparameters to tune.
# References
[ABM19] Noga Alon, Raef Bassily, and Shay Moran. Limits of private learning with access to public data. Advances in Neural Information Processing Systems, 32, 2019.

[ACG+16] Martin Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proc. of the 2016 ACM SIGSAC Conf. on Computer and Communications Security (CCS'16), pages 308-318, 2016.

[ADF+21] Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, and Kunal Talwar. Private adaptive gradient methods for convex optimization. In International Conference on Machine Learning, pages 383-392. PMLR, 2021.

[ADK20] Brendan Avent, Yatharth Dubey, and Aleksandra Korolova. The power of the hybrid model for mean estimation. Proceedings on Privacy Enhancing Technologies, 2020:48-68, 2020.

[AGM+22] Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, and Abhradeep Thakurta. Public data-assisted mirror descent for private model training. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 517-535. PMLR, 2022.

[ALD21] Hilal Asi, Daniel Asher Nathan Levy, and John Duchi. Adapting to function difficulty and growth conditions in private optimization. In Advances in Neural Information Processing Systems, 2021.

[BCM+20] Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, and Steven Wu. Private query release assisted by public data. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 695-703. PMLR, 13-18 Jul 2020.

[BF76] Stevo Bozinovski and Ante Fulgosi. The influence of pattern similarity and transfer learning upon training of a base perceptron B2. In Proceedings of Symposium Informatica, volume 3, pages 121-126, 1976.

[BFGT20] Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. Advances in Neural Information Processing Systems, 33:4381-4391, 2020.

[BFTT19] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems 32, pages 11279-11288, 2019.

[BHA+21] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.

[BKS22] Alex Bie, Gautam Kamath, and Vikrant Singhal. Private estimation with public data. Advances in Neural Information Processing Systems, 35, 2022.

[BST14] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS), pages 464-473, 2014.

[BWZK] Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis. Differentially private bias-term only fine-tuning of foundation models. In Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022.

[BWZK22] Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis. Differentially private optimization on large model at small cost. arXiv preprint arXiv:2210.00038, 2022.

[DBH+22] Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale. arXiv preprint arXiv:2204.13650, 2022.

[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proc. of the Third Conf. on Theory of Cryptography (TCC), pages 265-284, 2006.

[FKT20] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in linear time. In Proc. of the Fifty-Second ACM Symp. on Theory of Computing (STOC'20), 2020.

[GAW+22] Aditya Golatkar, Alessandro Achille, Yu-Xiang Wang, Aaron Roth, Michael Kearns, and Stefano Soatto. Mixed differential privacy in computer vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8376-8386, 2022.

[GLL22] Sivakanth Gopi, Yin Tat Lee, and Daogao Liu. Private convex optimization via exponential mechanism. In Conference on Learning Theory, pages 1948-1989. PMLR, 2022.

[GQC+20] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. Proc. Interspeech 2020, pages 5036-5040, 2020.

[GTU22] Arun Ganesh, Abhradeep Thakurta, and Jalaj Upadhyay. On the universality of Langevin diffusion for private Euclidean (convex) optimization, 2022. https://openreview.net/forum?id=ZrJPdY5k6sg.

[GvdMZG22] Antonio Ginart, Laurens van der Maaten, James Zou, and Chuan Guo. Submix: Practical private prediction for large-scale language models. arXiv preprint arXiv:2201.00971, 2022.

[Hay03] Thomas P. Hayes. A large-deviation inequality for vector-valued martingales. 2003. http://agl.cs.unm.edu/hayes/papers/VectorAzuma/VectorAzuma20030207.pdf.

[HLY+22] Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Yin Tat Lee, Arturs Backurs, Nenghai Yu, and Jiang Bian. Exploring the limits of differentially private deep learning with group-wise clipping. arXiv preprint arXiv:2212.01539, 2022.

[HRS16] Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 1225-1234. JMLR.org, 2016.

[HWZ22] Junyuan Hong, Zhangyang Wang, and Jiayu Zhou. Dynamic privacy budget allocation improves data efficiency of differentially private gradient descent. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, pages 11-35, New York, NY, USA, 2022. Association for Computing Machinery.

[JZK+22] Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, and Yaoliang Yu. DP$^2$-VAE: Differentially private pre-trained variational autoencoders. arXiv preprint arXiv:2208.03409, 2022.

[KCS+22] Alexey Kurakin, Steve Chien, Shuang Song, Roxana Geambasu, Andreas Terzis, and Abhradeep Thakurta. Toward training at ImageNet scale with differential privacy. arXiv preprint arXiv:2201.12328, 2022.

[KLL21] Janardhan Kulkarni, Yin Tat Lee, and Daogao Liu. Private non-smooth ERM and SCO in subquadratic steps. Advances in Neural Information Processing Systems, 34, 2021.

[KOV17] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential privacy. IEEE Trans. Inf. Theory, 63(6):4037-4049, 2017.

[KRRT20] Peter Kairouz, Mónica Ribero, Keith Rush, and Abhradeep Thakurta. Fast dimension independent private AdaGrad on publicly estimated subspaces. arXiv preprint arXiv:2008.06570, 2020.

[KST20] Gavin Kerrigan, Dylan Slack, and Jens Tuyls. Differentially private language models benefit from public pre-training. In Proceedings of the Second Workshop on Privacy in NLP, pages 39-45, 2020.

[LK18] Jaewoo Lee and Daniel Kifer. Concentrated differentially private gradient descent with adaptive per-iteration privacy budget. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1656-1665, 2018.

[LLH+22] Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A Inan, Janardhan Kulkarni, Yin Tat Lee, and Abhradeep Guha Thakurta. When does differentially private learning not suffer in high dimensions? In Advances in Neural Information Processing Systems, 2022.

[LTLH22] Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. Large language models can be strong differentially private learners. In International Conference on Learning Representations, 2022.

[LVS+21] Terrance Liu, Giuseppe Vietri, Thomas Steinke, Jonathan Ullman, and Steven Wu. Leveraging public data for practical private query release. In International Conference on Machine Learning, pages 6968-6977. PMLR, 2021.

[LWAFF21] Zelun Luo, Daniel J Wu, Ehsan Adeli, and Li Fei-Fei. Scalable differential privacy with sparse network finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5059-5068, 2021.

[NDR17] Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. The E2E dataset: New challenges for end-to-end generation. arXiv preprint arXiv:1706.09254, 2017.

[NMT+22] Milad Nasr, Saeed Mahloujifar, Xinyu Tang, Prateek Mittal, and Amir Houmansadr. Effectively using public data in privacy preserving machine learning, 2022. https://openreview.net/pdf?id=5R96mIU85IW.

[NRZ+20] Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. DART: Open-domain structured data record to text generation. arXiv preprint arXiv:2007.02871, 2020.

[PCPK15] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE, 2015.

[RWC+19] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.

[SCS13] Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, pages 245-248. IEEE, 2013.

[SRASC14] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 806-813, 2014.

[SSTT21] Shuang Song, Thomas Steinke, Om Thakkar, and Abhradeep Thakurta. Evading the curse of dimensionality in unconstrained private GLMs. In Arindam Banerjee and Kenji Fukumizu, editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pages 2638-2646. PMLR, 13-15 Apr 2021.

[SU17] Thomas Steinke and Jonathan Ullman. Tight lower bounds for differentially private selection. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 552-563. IEEE, 2017.

[TB20] Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In International Conference on Learning Representations, 2020.

[TKC22] Florian Tramèr, Gautam Kamath, and Nicholas Carlini. Considerations for differentially private learning with large-scale public pretraining. arXiv preprint arXiv:2212.06470, 2022.

[YNB+22] Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, et al. Differentially private fine-tuning of language models. In International Conference on Learning Representations, 2022.

[YZCL21] Da Yu, Huishuai Zhang, Wei Chen, and Tie-Yan Liu. Do not let privacy overbill utility: Gradient embedding perturbation for private learning. In International Conference on Learning Representations, 2021.

[ZWB21] Yingxue Zhou, Steven Wu, and Arindam Banerjee. Bypassing the ambient dimension: Private SGD with gradient subspace identification. In International Conference on Learning Representations, 2021.
# A Notation Reference
<table><tr><td>Notation</td><td>Meaning</td></tr><tr><td>A</td><td>algorithm, i.e. a randomized map from datasets to outputs</td></tr><tr><td>Bp(c,r)</td><td>p-dimensional ball of radius r centered at c</td></tr><tr><td>C</td><td>constraint set</td></tr><tr><td>D</td><td>data set (∈D*)</td></tr><tr><td>d</td><td>(singular) data point (∈D)</td></tr><tr><td>D</td><td>data domain</td></tr><tr><td>ε,δ</td><td>privacy parameters</td></tr><tr><td>η</td><td>step size in gradient descent</td></tr><tr><td>l</td><td>(per-example) loss function</td></tr><tr><td>L</td><td>population loss function</td></tr><tr><td>n</td><td>dataset size</td></tr><tr><td>N(μ,Σ)</td><td>normal distribution with mean μ and covariance matrix Σ</td></tr><tr><td>p</td><td>dimension</td></tr><tr><td>Π</td><td>projection operator</td></tr><tr><td>q</td><td>the “activation function” in Section 2</td></tr><tr><td>R,r</td><td>radii of various sets in our construction</td></tr><tr><td>τ</td><td>data (population) distribution</td></tr><tr><td>T</td><td>set of data distributions</td></tr><tr><td>θ</td><td>model</td></tr><tr><td>θ*(τ)</td><td>population minimizer on the distribution τ</td></tr></table>

Table 3: Summary of notation
In Table 3, we give a summary of the notation used throughout the paper.
# B Survey of the gain of pretraining
The stark difference in the gain of public pretraining between private and non-private model training has been widely observed in several tasks both in natural language and vision.
Table-To-Text Generation. The experiment details can be found in [LTLH22].
<table><tr><td></td><td colspan="3">BLEU</td><td colspan="3">ROUGE-L</td></tr><tr><td>ε</td><td>∞</td><td>8</td><td>3</td><td>∞</td><td>8</td><td>3</td></tr><tr><td>with public pretrain</td><td>69.46</td><td>63.19</td><td>61.52</td><td>71.36</td><td>66.43</td><td>65.67</td></tr><tr><td>without public pretrain</td><td>65.73</td><td>24.25</td><td>15.46</td><td>68.75</td><td>39.95</td><td>35.24</td></tr><tr><td>gain of public pretraining</td><td>3.73</td><td>38.94</td><td>46.06</td><td>2.61</td><td>26.48</td><td>30.43</td></tr></table>
Table 4: BLEU and ROUGE-L scores for generating natural language descriptions of table entries on the E2E dataset [NDR17], reported in [LTLH22, Table 2] with $\delta = 10^{-5}$. The pretrained model in the first row is GPT-2.
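The "gain of public pretraining" row in each of these tables is simply the elementwise difference of the two rows above it. As a quick sanity check, the following snippet (variable names are ours, not from the paper) recomputes the E2E BLEU gains:

```python
# Gain = (score with pretraining) − (score without), per privacy level.
# Columns correspond to ε = ∞, 8, 3 in the BLEU half of Table 4.
with_pretrain = [69.46, 63.19, 61.52]
without_pretrain = [65.73, 24.25, 15.46]
gains = [round(w - wo, 2) for w, wo in zip(with_pretrain, without_pretrain)]
assert gains == [3.73, 38.94, 46.06]
```

The same arithmetic reproduces the gain rows of Tables 5 and 6.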
<table><tr><td></td><td colspan="3">BLEU</td><td colspan="3">ROUGE-L</td></tr><tr><td>ε</td><td>∞</td><td>8</td><td>3</td><td>∞</td><td>8</td><td>3</td></tr><tr><td>with public pretrain</td><td>42.78</td><td>35.06</td><td>31.03</td><td>56.72</td><td>54.58</td><td>52.06</td></tr><tr><td>without public pretrain</td><td>26.79</td><td>7.77</td><td>3.00</td><td>37.86</td><td>21.68</td><td>17.14</td></tr><tr><td>gain of public pretraining</td><td>15.99</td><td>27.29</td><td>28.03</td><td>18.86</td><td>32.90</td><td>34.92</td></tr></table>
Table 5: BLEU and ROUGE-L scores for generating natural language descriptions of table entries on the DART dataset [NRZ+20], reported in [LTLH22, Table 8] with $\delta = 10^{-5}$. The pretrained model in the first row is GPT-2.

Image classification. The experimental details can be found in [DBH+22].
<table><tr><td></td><td colspan="4">CIFAR-10</td><td colspan="4">ImageNet</td></tr><tr><td>ε</td><td>8</td><td>4</td><td>2</td><td>1</td><td>8</td><td>4</td><td>2</td><td>1</td></tr><tr><td>with public pretrain</td><td>96.7</td><td>96.1</td><td>95.4</td><td>94.7</td><td>81.8</td><td>79.2</td><td>74.7</td><td>70.3</td></tr><tr><td>without public pretrain</td><td>81.4</td><td>73.5</td><td>65.9</td><td>56.8</td><td>32.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>gain of public pretraining</td><td>15.3</td><td>22.6</td><td>29.5</td><td>37.9</td><td>49.4</td><td>-</td><td>-</td><td>-</td></tr></table>
Table 6: Test accuracy for image classification on the CIFAR-10 and ImageNet datasets reported in [DBH+22, Table 1] with $\delta = 10^{-5}$ and $\delta = 8\cdot 10^{-7}$, respectively. The pretraining public data for CIFAR-10 is ImageNet, and for ImageNet it is JFT-4B. Without pretraining, private training on ImageNet failed to converge, indicated by "-".
# C Missing Details from Section 2
Before turning to the proof, we fill in the details of the construction given in Section 2: We choose $R_{1} = 1/p^{2} + \kappa \log(p)/\sqrt{p}$, where $\kappa$ is a sufficiently large constant. Any $R_{2} < M - R_{1}$ suffices for our proof. We choose $r = O\left(\frac{1}{p^{5/2}\sqrt{\log(1/\delta)}}\right)$. We formally define the range of data distributions used in our construction as a set of products of two distributions: $\mathcal{T} := \{\tau_{1} \times \tau_{2} \mid \tau_{1} \in \mathcal{T}_{1}, \tau_{2} \in \mathcal{T}_{2}'\}$, where $\mathcal{T}_{1}$ is defined as in Lemma 1.2 and $\mathcal{T}_{2}'$ is defined as in Section 2.
# C.1 Proof of Theorem 2.1
In our proof we will use DP-SGD as instantiated in [BST14]. Combined with results on uniform stability of gradient descent on strongly convex losses (see e.g. [HRS16]), Theorem 2.4 of [BST14] and its proof implies the following:
Theorem C.1. Suppose $\ell$ has Hessian $m\mathbb{I}_p$ and, for any $d, d'$, $\|\nabla \ell(\theta; d) - \nabla \ell(\theta; d')\|_2 \leq L$. Then for $T = n^2$, $\eta_t = \frac{1}{mt}$, and $\sigma^2 = O\left(\frac{L^2 \log(1/\delta)}{\varepsilon^2 n^2}\right)$, if $\|\theta_0 - \theta^*\|_2 \leq O(L/m)$, then running $T$ steps of DP-SGD with step size $\eta_t$ in iteration $t$ and noise variance $\sigma^2$ is $(\varepsilon, \delta)$-DP and achieves excess population loss:
$$
O\left(\frac{L^2 p \log(n) \log(1/\delta)}{m \varepsilon^2 n^2} + \frac{L^2}{mn}\right).
$$
We also have the following lemma, which effectively says that unconstrained DP-SGD stays within a ball with high probability.
Lemma C.2. With probability $1 - T2^{-\Omega(p)}$ over (unconstrained) DP-SGD using the parameters in Theorem C.1, for all $0 \leq t \leq T$ , we have $\| \theta_t - \theta^* \|_2 \leq \max \{2\sqrt{p}\sigma / m, \| \theta_0 - \theta^* \|_2\}$ .
Proof. By a multivariate Gaussian tail bound, with probability $1 - T2^{-\Omega(p)}$, in each iteration of DP-SGD the noise we add has $\ell_2$-norm at most $2\sqrt{p}\eta_t\sigma$. Conditioned on this event, since we have an identity quadratic loss and $\eta_t \leq 1/m$ for all $t$, each step of gradient descent is $(1 - m\eta_t)$-contractive. So we have:
$$
\forall t: \|\theta_t - \theta^*\|_2 \leq (1 - m\eta_t) \|\theta_{t-1} - \theta^*\|_2 + 2\sqrt{p}\,\eta_t \sigma.
$$
The lemma follows by induction.
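The induction can also be checked numerically: iterating the recursion above never pushes the distance past $\max\{2\sqrt{p}\sigma/m, \|\theta_0 - \theta^*\|_2\}$. The sketch below (function and parameter names are ours; it assumes the per-step recursion, not a full DP-SGD run) verifies this for a couple of parameter settings:

```python
import math

def dpsgd_norm_bound(dist0, m, sigma, p, T):
    """Iterate the Lemma C.2 recursion
    ||θ_t − θ*|| ≤ (1 − mη_t)||θ_{t−1} − θ*|| + 2√p η_t σ
    with η_t = 1/(mt), and return the largest distance seen."""
    dist = dist0
    worst = dist
    for t in range(1, T + 1):
        eta = 1.0 / (m * t)
        dist = (1 - m * eta) * dist + 2 * math.sqrt(p) * eta * sigma
        worst = max(worst, dist)
    return worst

# Claimed invariant: distances never exceed max{2√p σ/m, dist0}.
for dist0, m, sigma, p in [(1.0, 2.0, 0.1, 16), (0.01, 5.0, 1.0, 4)]:
    bound = max(2 * math.sqrt(p) * sigma / m, dist0)
    assert dpsgd_norm_bound(dist0, m, sigma, p, T=1000) <= bound + 1e-12
```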
Proof of Theorem 2.1. We use $S, \ell$ as defined in Section 2 (and the associated definitions of $\mathcal{D}, \mathcal{T}, q$, etc.).
We will prove (1) in two parts. First, we will show that for any $\mathcal{A}_{priv}$ , there exists $\tau_{1} \in \mathcal{T}_{1}$ such that the desired lower bound holds for $\tau_{1} \times \tau_{2}$ for all $\tau_{2} \in \mathcal{T}_{2}'$ . Second, we show that for any $\mathcal{A}_{pub}$ there exists $\tau_{2} \in \mathcal{T}_{2}'$ such that the desired lower bound holds for $\tau_{1} \times \tau_{2}$ for all $\tau_{1} \in \mathcal{T}_{1}$ . Then taking $\tau_{1}$ from the first statement and $\tau_{2}$ from the second statement, both lower bounds hold for $\tau_{1} \times \tau_{2}$ as desired.
Proof of (1) for $\mathcal{A}_{\text{priv}}$ : Let $\tau_{2}$ be an arbitrary, fixed member of $\mathcal{T}_2'$ . Fix any $\mathcal{A}_{\text{priv}}$ and take any distribution $\tau_{1} \in \mathcal{T}_{1}$ . Let $\tau(\tau_{1}) \in \mathcal{T}$ be the distribution over $(d_{1}, d_{2})$ given by sampling $d_{1} \sim \tau_{1}$ and $d_{2} \sim \tau_{2}$ . Let $\mathcal{L}$ refer to the population loss over $\ell$ , and let $\mathcal{L}_{1}$ refer to the population loss over $\ell_{1}(\theta_{1}, d_{1})$ for $d_{1} \sim \tau_{1}$ . Consider the following algorithm $\mathcal{A}_{\text{priv}}'$ for minimizing $\ell_{1}$ given $p^{2}$ samples from $\tau_{1}$ : For each of these samples $d_{1}$ , $\mathcal{A}_{\text{priv}}'$ draws an i.i.d. sample $d_{2}$ from $\tau_{2}$ and pads $d_{1}$ with $d_{2}$ , giving $p^{2}$ i.i.d samples from $\tau(\tau_{1})$ . $\mathcal{A}_{\text{priv}}'$ then runs $\mathcal{A}_{\text{priv}}$ on these samples, and takes $\theta_{1}$ from the output of $\mathcal{A}_{\text{priv}}$ . Note that $\mathcal{A}_{\text{priv}}'$ is allowed to know the distribution $\tau_{2}$ since it is fixed and independent of the data $\mathcal{A}_{\text{priv}}$ receives. We observe a few facts about $\mathcal{L}$ . First:
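The padding reduction in this paragraph is mechanical; a minimal sketch (all names are ours, with $\mathcal{A}_{priv}$ abstracted as a black-box function) is:

```python
def a_priv_prime(samples1, tau2_sampler, a_priv):
    """Reduction sketch: pad each private point d1 with an independent
    draw d2 ~ τ2, run the black-box A_priv on the padded dataset, and
    keep only the θ1 part of its output."""
    padded = [(d1, tau2_sampler()) for d1 in samples1]
    theta1, _theta2 = a_priv(padded)
    return theta1
```

Since the padding uses data-independent draws and the final step is post-processing, $\mathcal{A}_{priv}'$ inherits the $(\varepsilon, \delta)$-DP guarantee of $\mathcal{A}_{priv}$.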
$$
\mathcal{L}((\theta_1, \theta_2)) - \mathcal{L}((\theta_1, \theta_2')) = q(\theta_1)\left(\mathcal{L}_2(\theta_2) - \mathcal{L}_2(\theta_2')\right). \tag{3}
$$
Since $q$ is non-negative, this gives:
$$
\mathcal{L}_2(\theta_2) \leq \mathcal{L}_2(\theta_2') \leftrightarrow \mathcal{L}((\theta_1, \theta_2)) \leq \mathcal{L}((\theta_1, \theta_2')). \tag{4}
$$
This implies that replacing $\theta_{2}$ with the population minimizer of $\mathcal{L}_2$ can only improve our risk on $\ell$. Next, note that for any $\tau_{1} \in \mathcal{T}_{1}$, since its population minimizer is in $S$, $q(\theta^{*}(\tau_{1})) = 1$. In turn, $\theta^{*}(\tau_{1} \times \tau_{2}) = (\theta^{*}(\tau_{1}), \theta^{*}(\tau_{2}))$ for all $\tau_{1} \in \mathcal{T}_{1}$. Finally, since $q$ takes values in $[0,1]$ and $\mathcal{L}_2$ is non-positive, this gives:
$$
\mathcal{L}((\theta_1, \theta^*(\tau_2))) - \mathcal{L}(\theta^*(\tau(\tau_1))) = \mathcal{L}_1(\theta_1) - \mathcal{L}_1(\theta^*(\tau_1)) - (1 - q(\theta_1)) \cdot \mathcal{L}_2(\theta^*(\tau_2)) \geq \mathcal{L}_1(\theta_1) - \mathcal{L}_1(\theta^*(\tau_1)). \tag{5}
$$
In other words, if we choose $\theta_{2}$ to be the minimizer of $\mathcal{L}_2$ , then our risk on $\mathcal{L}$ is at least our risk on $\mathcal{L}_1$ alone. Putting it all together:
$$
\begin{aligned}
&\mathbb{E}_{D \sim \tau(\tau_1)^{p^2}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{priv}(D)}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau(\tau_1)))\right]\\
&\stackrel{(4)}{\geq} \mathbb{E}_{D \sim \tau(\tau_1)^{p^2}}\left[\mathbb{E}_{(\theta_1, \theta_2) \sim \mathcal{A}_{priv}(D)}\left[\mathcal{L}((\theta_1, \theta^*(\tau_2)))\right] - \mathcal{L}(\theta^*(\tau(\tau_1)))\right]\\
&\stackrel{(5)}{\geq} \mathbb{E}_{D \sim \tau_1^{p^2}}\left[\mathbb{E}_{\theta_1 \sim \mathcal{A}_{priv}'(D)}\left[\mathcal{L}_1(\theta_1)\right] - \mathcal{L}_1(\theta^*(\tau_1))\right].
\end{aligned}
$$
Using Lemma 1.2, since we are solving a $p^4$-dimensional mean estimation problem with $p^2$ samples under $(1, o(1/n))$-DP, we know that the final expression (the risk of $\mathcal{A}_{priv}'$) is $\Omega(1)$ for some $\tau_1 \in \mathcal{T}_1$, which implies the same lower bound on the risk of $\mathcal{A}_{priv}$ for $\tau_1 \times \tau_2$.
Proof of (1) for $\mathcal{A}_{pub}$: Fix an arbitrary $\tau_{1} \in \mathcal{T}_{1}$. Let $\tau(\tau_{2})$ denote $\tau_{1} \times \tau_{2}$. Given any $\mathcal{A}_{pub}$, consider the algorithm $\mathcal{A}_{pub}'$ that takes $p$ samples from $\tau_{2}$ and pads them with i.i.d. samples from $\tau_{1}$ to get $p$ samples from $\tau(\tau_{2})$. It then runs $\mathcal{A}_{pub}$ on these samples, clips the norm of the $\theta_{2}$ in $\mathcal{A}_{pub}$'s output to be at most $r$, and uses the result as its output.
We again make some observations on $\ell$ . First, since the population minimizer $\theta^{*}(\tau_{1})$ is always in $S$ by definition, we have $q(\theta^{*}(\tau_{1})) = 1$ . Then, since $\ell_{2}$ is non-positive:
$$
\mathcal{L}(\theta_1, \theta_2) \geq \mathcal{L}(\theta^*(\tau_1), \theta_2). \tag{6}
$$
Next, by definition of $\ell_2$ and non-expansiveness of Euclidean projection, clipping $\theta_{2}$ to a ball of radius $r$ can only decrease the loss, i.e., if we define $\mathrm{CLIP}(\theta_2,r)\coloneqq \frac{\theta_2}{\max\{1,\| \theta_2\|_2 / r\}}$ :
$$
\mathcal{L}_2(\mathrm{CLIP}(\theta_2, r)) \leq \mathcal{L}_2(\theta_2). \tag{7}
$$
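Both facts about CLIP used here — that it maps into $B_p(0,r)$, and that, being the Euclidean projection onto a convex set, it is non-expansive — can be checked directly. A small sketch with illustrative names:

```python
import math
import random

def clip(v, r):
    """CLIP(v, r) = v / max{1, ||v||₂ / r}: Euclidean projection onto B(0, r)."""
    norm = math.sqrt(sum(x * x for x in v))
    scale = max(1.0, norm / r)
    return [x / scale for x in v]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

random.seed(0)
r = 0.5
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(8)]
    y = [random.gauss(0, 1) for _ in range(8)]
    # Non-expansiveness: projection onto a convex set is 1-Lipschitz.
    assert dist(clip(x, r), clip(y, r)) <= dist(x, y) + 1e-9
    # The output always lies in the ball of radius r.
    assert math.sqrt(sum(a * a for a in clip(x, r))) <= r + 1e-9
```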
Then we have:
$$
\begin{aligned}
&\mathbb{E}_{D \sim \tau(\tau_2)^{p}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{pub}(D)}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau(\tau_2)))\right]\\
&\stackrel{(6)}{\geq} \mathbb{E}_{D \sim \tau(\tau_2)^{p}}\left[\mathbb{E}_{(\theta_1, \theta_2) \sim \mathcal{A}_{pub}(D)}\left[\mathcal{L}((\theta^*(\tau_1), \theta_2))\right] - \mathcal{L}(\theta^*(\tau(\tau_2)))\right]\\
&\stackrel{(3)}{=} \mathbb{E}_{D \sim \tau(\tau_2)^{p}}\left[\mathbb{E}_{(\theta_1, \theta_2) \sim \mathcal{A}_{pub}(D)}\left[\mathcal{L}_2(\theta_2)\right] - \mathcal{L}_2(\theta^*(\tau_2))\right]\\
&\stackrel{(7)}{\geq} \mathbb{E}_{D \sim \tau_2^{p}}\left[\mathbb{E}_{\theta_2 \sim \mathcal{A}_{pub}'(D)}\left[\mathcal{L}_2(\theta_2)\right] - \mathcal{L}_2(\theta^*(\tau_2))\right].
\end{aligned}
$$
Since $\theta_{2}$ in the last line has norm at most $r$ , the last expression (the risk of $\mathcal{A}_{pub}^{\prime}$ on $\ell_2$ ) is equal to $p$ times the risk of $\mathcal{A}_{pub}^{\prime}$ on a rescaling of the mean-estimation problem in Lemma 1.3, which is $\Omega(1)$ for some $\tau_{2} \in \mathcal{T}_{2}^{\prime}$ . This implies the same lower bound on the risk of $\mathcal{A}_{pub}$ for $\tau_{1} \times \tau_{2}$ .
Proof of (2): We initialize $\theta_{1} = \theta_{2} = 0$ for simplicity<sup>1</sup>. Then, note that the term $p \cdot q(\theta_{1}) \cdot \ell_{2}(\theta_{2})$ is a constant in the region where $q(\theta_{1}) = 0$ , which includes the origin. So, the gradient of this term is 0 at the origin, and a single step of gradient descent on the public data sets $\theta_{1}$ to the empirical minimizer of $\ell_{1}$ (which achieves risk $O(1/p)$ on $\ell_{1}$ alone in expectation), and does not affect $\theta_{2}$ . We will then run DP-SGD from this point.
By a vector Azuma inequality, with probability at least $1 - p^{-\Omega(\kappa)}$, $\theta_1 \in S$ and is at distance at least $\Omega(\frac{\log p}{\sqrt{p}})$ from the boundary of $S$. In the $p^{-\Omega(\kappa)}$-probability event that this does not happen, since both $\ell_1$ and $\ell_2$ take values in an interval of length $O(p)$, our risk is at most $O(p)$, so for a sufficiently large constant $\kappa$ the contribution of this event to our expected risk is negligible. So we just need to show that our overall risk is $O(1/p)$ in expectation when, after a single step of gradient descent on the public data, $\theta_1 \in S$ and is at distance $\Omega(\frac{\log p}{\sqrt{p}})$ from the boundary.
Note that as long as $\theta_{1} \in S$, $q(\theta_{1}) = 1$ and thus the Lipschitz constant of $\ell$ with respect to $\theta_{1}$ is $O(1)$. We will argue that if $r$ is sufficiently small, then $\theta_{1}$ does not move by more than $O(1/p)$ with probability $1 - p^{-\Omega(1)}$ (and, as before, if this high-probability event does not occur, the contribution to the risk is negligible). As long as $\theta_{1}$ does not move by more than $O(1/p)$ while we run DP-SGD, it will remain in $S$, since we assume that at the start of DP-SGD $\theta_{1}$ is at distance $\Omega\left(\frac{\log p}{\sqrt{p}}\right)$ from the boundary of $S$. Putting it all together, this implies that (i) the excess loss on $\ell_{1}$ does not increase by more than $O(1/p)$, since $\ell_{1}$ is $O(1)$-Lipschitz with respect to $\theta_{1}$, and (ii) since $\theta_{1}$ stays within $S$ and thus $q(\theta_{1}) = 1$, the change in $\theta_{2}$ is the same as if we ran DP-SGD on $\ell_{2}(\theta_{2})$ alone. Lemma C.2 implies that $\theta_{2}$ stays within $B_{p}(0,2r)$, and thus DP-SGD on $\ell_{2}(\theta_{2})$ is the same as running DP-SGD on the purely quadratic loss $\frac{p}{2r^{2}}\|\theta_{2} - d_{2}\|_{2}^{2}$, with high probability. So by Theorem C.1, DP-SGD with the optimal parameters gives $\theta_{2}$ achieving risk $\widetilde{O}(1/p)$ on $\ell_{2}$ alone. This gives an overall risk bound of $\widetilde{O}(1/p)$, completing the proof.
Now, the idea is that in DP-SGD as instantiated in Theorem C.1, the total movement of $\theta_{1}$ due to both gradient steps and noise is an increasing function of $r$, so we can set $r$ sufficiently small to guarantee that $\theta_{1}$ does not move by more than $O(1/p)$. Specifically, as long as $\theta_{1} \in S$ we have $\ell = \ell_{1} + \ell_{2}$, and so the loss $\ell$ satisfies $\|\nabla \ell(\theta; d) - \nabla \ell(\theta; d')\|_{2} = O(p/r)$ for all $d, d'$ (the gradient-difference bound on $\ell_{2}$) as long as $\theta_{1} \in S$. We use the optimal setting of parameters in DP-SGD corresponding to $L = O\left(\frac{p}{r}\right)$ and $m = \frac{p}{r^{2}}$, noting that the initialization condition in Theorem C.1 is satisfied by $\theta_{2} = 0$. Plugging these into the parameter settings of Theorem C.1, we should use $T = \Theta(p^{2})$ iterations with step size $\eta_t = \frac{r^2}{pt}$ and per-iteration noise scale $\sigma = \Theta\left(\frac{\sqrt{\log(1/\delta)}}{r^2}\right)$, which achieves risk $\widetilde{O}(1/p)$ on $\ell_2$ alone. Then, the movement of $\theta_{1}$ due to the unnoised gradients in DP-SGD is at most $O(1) \cdot \sum_t \eta_t = \Theta(r^2 p \log p)$ (here we use the fact that the Lipschitz constant with respect to $\theta_{1}$ is $O(1)$ within $S$, not $O(p/r)$), and with high probability the movement due to the noise is $O(\sqrt{p^4} \sqrt{\sum_t \eta_t^2}\, \sigma) = O(p^{3/2} r \sqrt{\log(1/\delta)})$. So if $r = O\left(\frac{1}{p^{5/2} \sqrt{\log(1/\delta)}}\right)$, we get the desired upper bound on the movement of $\theta_{1}$ during DP-SGD.
Proof of Theorem 2.2. The proof is almost exactly the same as Theorem 2.1, so we only highlight the changes to that proof.
We define $\mathcal{T}$ similarly to Theorem 2.1: for each $\tau = \tau_{1} \times \tau_{2}$ in $\mathcal{T}$ as defined in Theorem 2.1, we replace it with the pair $(\tau_{\text{pub}} = \tau_{1} \times Z, \tau_{\text{priv}} = \tau_{1} \times \tau_{2})$, where $Z$ is a point distribution on the origin.
Now, the lower bound on $\mathcal{A}_{priv}$ in (1) can be proven exactly as in Theorem 2.1, since we are using the same set of private distributions. The lower bound on $\mathcal{A}_{pub}$ follows similarly to Theorem 2.1, since we proved it holds for $\tau_{1} \times \tau_{2}$ where $\tau_{1}$ can be arbitrary. Alternatively, one can note that $\mathcal{A}_{pub}$ learns no information about $\tau_{2}$ from the public data, so it cannot do better on $\ell_{2}$ than outputting a fixed point, which has risk $\Omega(1)$.
The upper bound in (2) follows since the algorithm only evaluates gradients on the public data where $q(\theta_1) = 0$ , i.e. it never uses the coordinates in the public data that are changed between this theorem and Theorem 2.1.
# D Quadratic Example
In this section, we show that our construction holds even if the loss function is quadratic, as long as we allow a constrained optimization problem.
Theorem D.1. For every integer $p \geq 1$ , for $\mathcal{C} = \mathcal{D} = B_{p^4}(0,1) \times B_p(0,1)$ there exists $\ell$ such that:
(1) For $\delta = o(1 / p^2)$ , any (non-private) algorithm $\mathcal{A}_{pub}:\mathcal{D}^p\to \mathcal{C}$ , and any $(1,\delta)$ -DP algorithm $\mathcal{A}_{priv}:\mathcal{D}^{p^2}\to \mathcal{C}$ there exists $\tau$ such that:
$$
\mathbb{E}_{D_{priv} \sim \tau^{p^2}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{priv}(D_{priv})}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau))\right] = \Omega(1)
$$
$$
\mathbb{E}_{D_{pub} \sim \tau^{p}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{pub}(D_{pub})}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau))\right] = \Omega(1)
$$
(2) For any $\delta \geq 2^{-p}$ , there exists an algorithm $\mathcal{A}_{\text{mixed}}: \mathcal{D}^{p+p^2} \to \mathcal{C}$ which runs projected gradient descent on the first $p$ examples, followed by $(1, \delta)$ -DP-SGD on the last $p^2$ examples, such that for any $\tau$ :
$$
\mathbb{E}_{D \sim \tau^{p + p^2}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{mixed}(D)}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau))\right] = O(1/p)
$$
Proof. We will first state $\ell$ and then prove each item in the theorem statement. We use the following construction: $\mathcal{C} = \mathcal{D} = B_{p^4}(0,1)\times B_p(0,r)$ . Let $(\theta_{1},\theta_{2})$ denote an element of $\mathcal{C}$ , with $\theta_{1}\in B_{p^{4}}(0,1)$ and $\theta_{2}\in B_{p}(0,r)$ for $r = O(\frac{1}{p^{5 / 2}\sqrt{\log(1 / \delta)}})$ , and similarly with $(d_1,d_2)$ and $\mathcal{D}$ . We let $\ell ((\theta_1,\theta_2),(d_1,d_2)) = \frac{1}{2}\| \theta_1 - d_1\| _2^2 +\frac{p}{2r^2}\| \theta_2 - d_2\| _2^2$ .
As in the proof of Theorem 2.1, we will show (1) in two parts: for any $\mathcal{A}_{priv}$ , there exists $\tau_{1} \in \mathcal{T}_{1}$ such that the desired lower bound holds for $\tau_{1} \times \tau_{2}$ for all $\tau_{2} \in \mathcal{T}_{2}^{\prime}$ , and that for any $\mathcal{A}_{pub}$ there exists $\tau_{2} \in \mathcal{T}_{2}^{\prime}$ such that the desired lower bound holds for $\tau_{1} \times \tau_{2}$ for all $\tau_{1} \in \mathcal{T}_{1}$ .
Proof of (1) for $\mathcal{A}_{priv}$: Fix $\mathcal{A}_{priv}$ and take any distribution $\tau_{1}$ over $B_{p^4}(0,1)$. Let $\tau(\tau_1)$ be the distribution over $(d_1, d_2)$ given by sampling $d_{1} \sim \tau_{1}$ and letting $d_{2}$ be the origin with probability 1. Let $\mathcal{L}$ refer to the population loss over $\ell$, and let $\mathcal{L}_1$ refer to the population loss over $\ell_1(\theta_1, d_1) := \frac{1}{2}\|\theta_1 - d_1\|_2^2$ for $d_1 \sim \tau_1$.
Now, consider an algorithm $\mathcal{A}_{priv}'$ that takes $p^2$ samples from $\tau_{1}$, pads them with the origin to get $p^2$ samples from the distribution $\tau(\tau_{1})$ defined in the preceding paragraph, runs $\mathcal{A}_{priv}$ on these samples, and then takes $\theta_{1}$ from the output of $\mathcal{A}_{priv}$. Notice that:
$$
\begin{aligned}
&\mathbb{E}_{D \sim \tau(\tau_1)^{p^2}}\left[\mathbb{E}_{\theta \sim \mathcal{A}_{priv}(D)}[\mathcal{L}(\theta)] - \mathcal{L}(\theta^*(\tau(\tau_1)))\right]\\
&\geq \mathbb{E}_{D \sim \tau(\tau_1)^{p^2}}\left[\mathbb{E}_{(\theta_1, \theta_2) \sim \mathcal{A}_{priv}(D)}\left[\mathcal{L}((\theta_1, 0))\right] - \mathcal{L}(\theta^*(\tau(\tau_1)))\right]\\
&= \mathbb{E}_{D \sim \tau_1^{p^2}}\left[\mathbb{E}_{(\theta_1, \theta_2) \sim \mathcal{A}_{priv}(D),\, d_1 \sim \tau_1}\left[\tfrac{1}{2}\|\theta_1 - d_1\|_2^2\right] - \mathcal{L}_1(\theta^*(\tau_1))\right]\\
&= \mathbb{E}_{D \sim \tau_1^{p^2}}\left[\mathbb{E}_{\theta_1 \sim \mathcal{A}_{priv}'(D)}\left[\mathcal{L}_1(\theta_1)\right] - \mathcal{L}_1(\theta^*(\tau_1))\right].
\end{aligned}
$$
By Lemma 1.2, the final expression is $\Omega(1)$ for some distribution $\tau_1(\mathcal{A}_{priv})$ . In turn, for the corresponding $\tau(\tau_1(\mathcal{A}_{priv}))$ , $\mathcal{A}_{priv}$ has excess population loss $\Omega(1)$ in expectation as desired.
Proof of (1) for $\mathcal{A}_{pub}$: This follows by an argument symmetric to the previous part, except that we use Lemma 1.3 instead of Lemma 1.2, together with the observation that minimizing $\frac{p}{2r^2}\|\theta_2 - d_2\|_2^2$ over $B_p(0,r)$ is equivalent to minimizing $\frac{p}{2}\|\theta_2 - d_2\|_2^2$ over $B_{p}(0,1)$. In particular, the lower bound on just $\|\theta_2 - d_2\|_2^2$ given by Lemma 1.3 is $\Omega(1/p)$, and the lower bound of $\Omega(1)$ on $\mathcal{L}$ follows after the same reduction as in the proof of (1), taking into account the multiplier $\frac{p}{2}$.
Proof of (2): This follows similarly to Theorem 2.1, so we only sketch the high-level argument and the major changes. A single step of projected gradient descent on the public data gets us to the empirical minimizer of $\theta_{1}$, which achieves excess risk $O(1/p)$ on $\frac{1}{2}\|\theta_1 - d_1\|_2^2$. Then, since we are using projected gradient descent, we know $\theta_{2}$ is at distance $O(r)$ from the population minimizer of $\theta_{2}$, so projected DP-SGD on the private data reaches a point achieving risk $O(1/p)$ on $\frac{p}{2r^2}\|\theta_2 - d_2\|_2^2$. By a similar argument to Theorem 2.1, projected DP-SGD does not cause $\theta_{1}$ to move by more than $O(1/p)$ with high probability if $r = O\left(\frac{1}{p^{5/2}\sqrt{\log(1/\delta)}}\right)$.
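A toy version of $\mathcal{A}_{mixed}$ for this quadratic construction can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it uses one full-batch public step, cyclic private sampling, and all names and parameters are ours.

```python
import math
import random

def mixed_quadratic(pub, priv, r, sigma, steps):
    """Toy sketch of A_mixed for the quadratic loss
    ℓ((θ1, θ2), (d1, d2)) = ½||θ1 − d1||² + (p / 2r²)||θ2 − d2||².
    One full-batch gradient step on the public data (step size 1) lands θ1
    exactly on the public empirical mean; θ2 is then fit by noisy projected
    SGD on the private data."""
    dim1 = len(pub[0][0])
    p = len(pub[0][1])
    # Public phase: from θ1 = 0, θ1 − 1·(θ1 − mean) = mean of the public d1's.
    theta1 = [sum(d1[i] for d1, _ in pub) / len(pub) for i in range(dim1)]
    theta2 = [0.0] * p
    m = p / r**2  # strong-convexity constant of the θ2 part of the loss
    for t in range(1, steps + 1):
        _, d2 = priv[(t - 1) % len(priv)]
        eta = 1.0 / (m * t)
        grad = [m * (theta2[i] - d2[i]) for i in range(p)]
        theta2 = [theta2[i] - eta * (grad[i] + sigma * random.gauss(0, 1))
                  for i in range(p)]
        # Project θ2 back onto B_p(0, r).
        norm = math.sqrt(sum(x * x for x in theta2))
        if norm > r:
            theta2 = [x * r / norm for x in theta2]
    return theta1, theta2
```

With a small noise scale, the private phase recovers the private mean to within the $O(r)$ ball while leaving the public phase's $\theta_1$ untouched.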
If we want to take this same example and make it unconstrained, an issue arises: a single gradient step with step size 1 will cause $\theta_{2}$ to move by $1/r^2$, which is far larger than the radius of the ball that $\theta_{2}$ was restricted to in the constrained setting; in turn, the DP-SGD guarantees worsen. We can remedy this by taking smaller step sizes on the public data so that each step is non-expansive, i.e., $\theta_{2}$ does not leave the ball and the DP-SGD guarantees still hold. However, to do so we need step sizes $\eta = O(r^{2})$, which means we need $\Omega(1/r^2)$ steps to reduce our distance to the minimizing $\theta_{1}$ by a constant factor. Since $r$ is set to a small value, this is a large number of steps. In other words, it is possible to take this example and make it unconstrained while still ensuring that public-then-private gradient descent achieves the desired excess loss, but the algorithm will not be efficient.