Add Batch e22e69ea-235f-45ae-bfd4-ea58019c33ec
This view is limited to 50 files because it contains too many changes. See the raw diff for the complete set of changes.
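Because the web view truncates large commits, the complete file list for this batch is easier to pull programmatically. A minimal sketch follows; it assumes the repository is hosted on the Hugging Face Hub (suggested by the page layout, not stated in the diff itself), and `REPO_ID` is a placeholder for the actual dataset id.

```python
from huggingface_hub import HfApi

REPO_ID = "<namespace>/<dataset-name>"  # placeholder: the real repo id is not shown in this diff

api = HfApi()
# List every file in the dataset repository, not just the 50 shown in this view.
files = api.list_repo_files(repo_id=REPO_ID, repo_type="dataset")
batch_files = [f for f in files if f.startswith(("2401.06xxx/", "2401.07xxx/"))]
print(f"{len(batch_files)} files under the 2401.06xxx/2401.07xxx shards")
```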
- .gitattributes +64 -0
- 2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_content_list.json +1894 -0
- 2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_model.json +0 -0
- 2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_origin.pdf +3 -0
- 2401.06xxx/2401.06341/full.md +414 -0
- 2401.06xxx/2401.06341/images.zip +3 -0
- 2401.06xxx/2401.06341/layout.json +0 -0
- 2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_content_list.json +0 -0
- 2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_model.json +0 -0
- 2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_origin.pdf +3 -0
- 2401.06xxx/2401.06373/full.md +0 -0
- 2401.06xxx/2401.06373/images.zip +3 -0
- 2401.06xxx/2401.06373/layout.json +0 -0
- 2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_content_list.json +0 -0
- 2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_model.json +0 -0
- 2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_origin.pdf +3 -0
- 2401.06xxx/2401.06385/full.md +582 -0
- 2401.06xxx/2401.06385/images.zip +3 -0
- 2401.06xxx/2401.06385/layout.json +0 -0
- 2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_content_list.json +0 -0
- 2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_model.json +0 -0
- 2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_origin.pdf +3 -0
- 2401.06xxx/2401.06387/full.md +430 -0
- 2401.06xxx/2401.06387/images.zip +3 -0
- 2401.06xxx/2401.06387/layout.json +0 -0
- 2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_content_list.json +0 -0
- 2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_model.json +0 -0
- 2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_origin.pdf +3 -0
- 2401.06xxx/2401.06391/full.md +547 -0
- 2401.06xxx/2401.06391/images.zip +3 -0
- 2401.06xxx/2401.06391/layout.json +0 -0
- 2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_content_list.json +0 -0
- 2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_model.json +0 -0
- 2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_origin.pdf +3 -0
- 2401.06xxx/2401.06416/full.md +541 -0
- 2401.06xxx/2401.06416/images.zip +3 -0
- 2401.06xxx/2401.06416/layout.json +0 -0
- 2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_content_list.json +1637 -0
- 2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_model.json +0 -0
- 2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_origin.pdf +3 -0
- 2401.06xxx/2401.06431/full.md +359 -0
- 2401.06xxx/2401.06431/images.zip +3 -0
- 2401.06xxx/2401.06431/layout.json +0 -0
- 2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_content_list.json +1261 -0
- 2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_model.json +1861 -0
- 2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_origin.pdf +3 -0
- 2401.06xxx/2401.06432/full.md +235 -0
- 2401.06xxx/2401.06432/images.zip +3 -0
- 2401.06xxx/2401.06432/layout.json +0 -0
- 2401.06xxx/2401.06468/0fc0ae98-47c2-47d6-8b41-aa05f4217658_content_list.json +0 -0
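Each paper directory in the listing above follows the same layout: the original PDF (`*_origin.pdf`), extraction outputs (`*_content_list.json`, `*_model.json`, `layout.json`), a Markdown rendering (`full.md`), and the extracted figures (`images.zip`). Below is a minimal sketch of reading one paper's content list; the helper names are illustrative, and the schema fields (`type`, `text`, `text_level`) are taken from the `content_list.json` shown later in this diff.

```python
import json
from pathlib import Path

def load_content_list(paper_dir: str) -> list[dict]:
    """Load the per-paper content list (one '<uuid>_content_list.json' per directory)."""
    path = next(Path(paper_dir).glob("*_content_list.json"))
    return json.loads(path.read_text(encoding="utf-8"))

def plain_text(blocks: list[dict]) -> str:
    """Join text blocks in reading order; 'text_level' == 1 marks section headings."""
    parts = []
    for block in blocks:
        if block.get("type") == "text" and block.get("text", "").strip():
            prefix = "# " if block.get("text_level") == 1 else ""
            parts.append(prefix + block["text"].strip())
    return "\n\n".join(parts)

if __name__ == "__main__":
    blocks = load_content_list("2401.06xxx/2401.06341")
    print(plain_text(blocks)[:400])
```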
.gitattributes
CHANGED
@@ -10582,3 +10582,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2401.12xxx/2401.12987/62ebfa05-bbf4-4429-9a21-3c5fb3c3fcfc_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2402.01xxx/2402.01666/b3f58bd6-a85a-4cd3-a8c9-4f047ea5a2b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2402.12xxx/2402.12381/e7117c9c-a5e2-4e26-b7a7-62d00b68da6b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06468/0fc0ae98-47c2-47d6-8b41-aa05f4217658_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06506/d01592c4-249e-4179-81b5-2b709d1d7586_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06532/a86a0072-d81a-4071-886f-e0f2648b1ad6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06561/61f8ed5e-2c99-426d-a3b1-353c50420913_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06578/f8c4a8fd-dc77-4363-a9f7-5f5ef2a00d01_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06580/ff4d4d0a-81a5-4482-a5fe-50d021ae2d58_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06591/a24fe38f-1809-45d8-858c-04427ed530a1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06654/966d48b7-9f81-4b7f-8514-35bf307d6ddd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06659/d123ee52-1abb-45a1-8545-2759ea2f8319_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06706/328e1c2f-3cca-44a5-af27-352127687da4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06712/37f91707-800f-40c1-8c8b-09387f10aa11_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06730/bfb9d340-65b7-48ba-81f9-6450d0ca83e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06751/fd1c2532-c8e7-44b1-9629-c9656d59a9e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06760/1ce67e04-c85b-4dc3-ab33-dfa945c2b625_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06766/7584a5dd-514e-4cf9-a1c1-a58c6e60363a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06836/2e6ab5a8-bbe5-4c1f-b3eb-8e6ed4d51bec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06838/a1d1a0c9-eb05-47a9-8216-5870bb5bc037_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06853/a1d2b804-c1f8-48a0-ada9-c87cd8ac8f59_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06855/85972f26-b74c-4187-b7df-e7092012de0f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06866/f49ff9b9-bfb0-4c75-bf2f-eb0ef4fdd05a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06883/f10db9d7-43b3-43ca-a115-e6ff40ad1ef5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06915/fc0c016d-fac9-4a01-b9a0-eea1fe415139_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06949/07bb5b04-c80c-4635-a21c-9b00692bae16_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06954/ad8dbbae-6f26-4827-91c3-688204c7b61f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06960/03fea434-5016-4eaf-8f62-d9687d5696b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06961/f763f99e-e38d-47cf-b8e3-4df308c68dc0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06xxx/2401.06982/70563287-acd0-486e-ac67-a244f8c1eef1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07037/cbbb28ba-fde5-4c29-b203-6b287f4e04ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07078/b8711823-2648-476d-9b2b-b95e21046da9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07102/5035c15b-bfae-4469-81e5-5f3831823c09_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07103/7f880470-56f8-487a-b041-98f1381103fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07128/6ab19e86-3380-4178-adb5-fc4644be6153_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07139/5c1c017f-d642-4525-a15a-bb40fb472ecb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07286/216b718f-544a-468c-99d9-fa8bc7889588_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07301/5ef3c8aa-d303-41b5-acd5-24843a32d333_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07310/f899b4d0-ffb7-48f5-a2cc-e6e00aeee7f3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07312/e302fa9a-42ed-451b-b15f-1bf74560ae09_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07314/59152651-a828-42ca-89c4-91b727702d04_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07324/d54c1cb6-5ae9-4c29-956a-44170a62d77d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07333/b4ef8de3-9d3c-41d8-8533-61668f956467_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07339/46fadb1f-5ce7-47ea-9b4a-4237ad7ac6f5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07348/b9202070-50bc-4b95-8308-3330f6f242ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07364/dfc0e3d0-9cb5-4f87-bad1-d3d74511a3b2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07389/769cb784-45cb-4af8-aafd-9995d03bd20a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07402/cbf8d546-833b-48b7-b251-10d9889fd412_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07408/ef22a72a-6456-443f-9858-8fca81090e6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07453/e5a10dd7-eb9e-4548-8830-4d4f5926d28f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07457/1f7fe7a8-ec70-49b2-9663-83b30f0a5755_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07487/6ec6c850-9de7-4175-84f9-f7912682cfdf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07xxx/2401.07519/0587cf56-d7df-4709-98a5-8eb8955f35cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2401.10xxx/2401.10917/ad177b30-039b-4ee4-822f-bdf71a73c06d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.01xxx/2402.01659/b3d33600-bf50-477e-800e-33c2a54fd58d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.01xxx/2402.01662/30241957-50c3-4619-aa06-e21b151ad400_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.01xxx/2402.01664/e0786831-a747-4436-8f85-0803b8ad8c41_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.01xxx/2402.01665/b58f9c6c-1805-47d6-82cb-6cbc6f1ecb18_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.10xxx/2402.10908/728d3487-c81c-4f27-a8ea-36f2cd09ee49_origin.pdf filter=lfs diff=lfs merge=lfs -text
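Every binary added in this batch gets its own rule in `.gitattributes`; the `filter=lfs diff=lfs merge=lfs -text` attributes route the file through Git LFS and disable text normalization, so only a small pointer is stored in Git history. Such rules are normally written by `git lfs track <path>`; a rough Python equivalent for appending per-PDF rules (the function name is illustrative, not part of any existing tool) might look like this:

```python
from pathlib import Path

LFS_RULE = "{path} filter=lfs diff=lfs merge=lfs -text"

def append_lfs_rules(repo_root: str, paths: list[str]) -> None:
    """Append one Git LFS attribute line per new binary, skipping paths already tracked.

    Mirrors the 64 lines this commit adds to .gitattributes; in day-to-day use,
    `git lfs track` produces equivalent entries.
    """
    attrs = Path(repo_root) / ".gitattributes"
    existing = set(attrs.read_text(encoding="utf-8").splitlines()) if attrs.exists() else set()
    with attrs.open("a", encoding="utf-8") as fh:
        for p in paths:
            line = LFS_RULE.format(path=p)
            if line not in existing:
                fh.write(line + "\n")
```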
2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_content_list.json
ADDED
@@ -0,0 +1,1894 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "AffordanceLLM: Grounding Affordance from Vision Language Models",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
122,
|
| 8 |
+
130,
|
| 9 |
+
846,
|
| 10 |
+
152
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Shengyi Qian*",
|
| 17 |
+
"bbox": [
|
| 18 |
+
116,
|
| 19 |
+
180,
|
| 20 |
+
235,
|
| 21 |
+
199
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Weifeng Chen",
|
| 28 |
+
"bbox": [
|
| 29 |
+
258,
|
| 30 |
+
181,
|
| 31 |
+
372,
|
| 32 |
+
198
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Min Bai",
|
| 39 |
+
"bbox": [
|
| 40 |
+
401,
|
| 41 |
+
181,
|
| 42 |
+
470,
|
| 43 |
+
196
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Xiong Zhou",
|
| 50 |
+
"bbox": [
|
| 51 |
+
500,
|
| 52 |
+
181,
|
| 53 |
+
598,
|
| 54 |
+
198
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Zhuowen Tu",
|
| 61 |
+
"bbox": [
|
| 62 |
+
627,
|
| 63 |
+
181,
|
| 64 |
+
730,
|
| 65 |
+
196
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "Li Erran Li",
|
| 72 |
+
"bbox": [
|
| 73 |
+
759,
|
| 74 |
+
181,
|
| 75 |
+
851,
|
| 76 |
+
196
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "AWS AI, Amazon",
|
| 83 |
+
"bbox": [
|
| 84 |
+
411,
|
| 85 |
+
199,
|
| 86 |
+
557,
|
| 87 |
+
215
|
| 88 |
+
],
|
| 89 |
+
"page_idx": 0
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"text": "https://jasonqsy.github.io/AffordanceLLM",
|
| 94 |
+
"bbox": [
|
| 95 |
+
305,
|
| 96 |
+
219,
|
| 97 |
+
661,
|
| 98 |
+
232
|
| 99 |
+
],
|
| 100 |
+
"page_idx": 0
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "image",
|
| 104 |
+
"img_path": "images/af6f182f967247bc9a9eb690ec4ec53bee7fa949a0bd9609cbcc238c8095cf83.jpg",
|
| 105 |
+
"image_caption": [
|
| 106 |
+
"Figure 1. Illustration for the affordance grounding task. The input is a single image and the corresponding action (e.g., \"hold\"). The output is a heatmap which highlights regions one can interact. We aim to enhance the generalization capability of affordance grounding to in-the-wild objects that are unseen during training, by developing a new approach, AffordanceLLM, that takes the advantage of the rich knowledge from large-scale vision language models [40] beyond the supervision from the training images."
|
| 107 |
+
],
|
| 108 |
+
"image_footnote": [],
|
| 109 |
+
"bbox": [
|
| 110 |
+
81,
|
| 111 |
+
243,
|
| 112 |
+
887,
|
| 113 |
+
436
|
| 114 |
+
],
|
| 115 |
+
"page_idx": 0
|
| 116 |
+
},
|
| 117 |
+
{
|
| 118 |
+
"type": "text",
|
| 119 |
+
"text": "Abstract",
|
| 120 |
+
"text_level": 1,
|
| 121 |
+
"bbox": [
|
| 122 |
+
233,
|
| 123 |
+
510,
|
| 124 |
+
313,
|
| 125 |
+
526
|
| 126 |
+
],
|
| 127 |
+
"page_idx": 0
|
| 128 |
+
},
|
| 129 |
+
{
|
| 130 |
+
"type": "text",
|
| 131 |
+
"text": "Affordance grounding refers to the task of finding the area of an object with which one can interact. It is a fundamental but challenging task, as a successful solution requires the comprehensive understanding of a scene in multiple aspects including detection, localization, and recognition of objects with their parts, of geo-spatial configuration/layout of the scene, of 3D shapes and physics, as well as of the functionality and potential interaction of the objects and humans. Much of the knowledge is hidden and beyond the image content with the supervised labels from a limited training set. In this paper, we make an attempt to improve the generalization capability of the current affordance grounding by taking the advantage of the rich world, abstract, and human-object-interaction knowledge from pretrained large-scale vision language models [40]. Under the AGD20K benchmark, our proposed model demonstrates a significant performance gain over the competing methods for in-the-wild object affordance grounding. We further demonstrate it can ground affordance for objects from random Internet images, even if both objects and actions are unseen during training.",
|
| 132 |
+
"bbox": [
|
| 133 |
+
73,
|
| 134 |
+
542,
|
| 135 |
+
472,
|
| 136 |
+
861
|
| 137 |
+
],
|
| 138 |
+
"page_idx": 0
|
| 139 |
+
},
|
| 140 |
+
{
|
| 141 |
+
"type": "text",
|
| 142 |
+
"text": "1. Introduction",
|
| 143 |
+
"text_level": 1,
|
| 144 |
+
"bbox": [
|
| 145 |
+
501,
|
| 146 |
+
510,
|
| 147 |
+
630,
|
| 148 |
+
526
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "Grounding affordance from a single image is a fundamental problem in computer vision. It forms the stepping stone to downstream tasks such as understanding human-object interaction [8, 17, 60], visual navigation [31], and object manipulation [1, 23]. Past approaches generally use human demonstrations as supervision to advance this field with tremendous success [34, 44, 45, 50]. While such approaches perform well on objects and actions seen during training, they struggle when generalizing in the wild, i.e. on novel objects unseen during training (Fig. 1).",
|
| 155 |
+
"bbox": [
|
| 156 |
+
496,
|
| 157 |
+
536,
|
| 158 |
+
890,
|
| 159 |
+
686
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 0
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "The difficulties in generalization arise from the fact that affordance grounding is a challenging task that requires comprehensive understanding of an image from multiple aspects. A successful solution requires an understanding of the 3D geometry and functionality of objects and parts, of the actions and intentions of the executing agent, of the potential interaction between object and human, as well as of the spatial configuration of the environment. Much of these knowledge lies beyond the ground-truth localization/recognition of objects and parts provided as heatmaps in a limited training set.",
|
| 166 |
+
"bbox": [
|
| 167 |
+
496,
|
| 168 |
+
688,
|
| 169 |
+
892,
|
| 170 |
+
853
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 0
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "In this paper we make attempts to improve affordance grounding in the wild by leveraging the rich world, abstract, and human-object-interaction knowledge embedded",
|
| 177 |
+
"bbox": [
|
| 178 |
+
498,
|
| 179 |
+
854,
|
| 180 |
+
893,
|
| 181 |
+
900
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 0
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "page_footnote",
|
| 187 |
+
"text": "*The work was done during an Amazon internship.",
|
| 188 |
+
"bbox": [
|
| 189 |
+
94,
|
| 190 |
+
886,
|
| 191 |
+
367,
|
| 192 |
+
900
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 0
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "aside_text",
|
| 198 |
+
"text": "arXiv:2401.06341v2 [cs.CV] 17 Apr 2024",
|
| 199 |
+
"bbox": [
|
| 200 |
+
22,
|
| 201 |
+
260,
|
| 202 |
+
58,
|
| 203 |
+
705
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 0
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "image",
|
| 209 |
+
"img_path": "images/d88982dcba9746f466e9f95f1c0c2cc6bb744776efffbafa0a6ff867af9671d6.jpg",
|
| 210 |
+
"image_caption": [],
|
| 211 |
+
"image_footnote": [],
|
| 212 |
+
"bbox": [
|
| 213 |
+
101,
|
| 214 |
+
108,
|
| 215 |
+
133,
|
| 216 |
+
136
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "image",
|
| 222 |
+
"img_path": "images/7d9728093c112b2f3899e333b574ac5a4a873ae68ffa14ae027ee11268527e4d.jpg",
|
| 223 |
+
"image_caption": [
|
| 224 |
+
"How do I ride with this motorcycle?"
|
| 225 |
+
],
|
| 226 |
+
"image_footnote": [],
|
| 227 |
+
"bbox": [
|
| 228 |
+
145,
|
| 229 |
+
99,
|
| 230 |
+
240,
|
| 231 |
+
143
|
| 232 |
+
],
|
| 233 |
+
"page_idx": 1
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"type": "image",
|
| 237 |
+
"img_path": "images/af600c535ec6e83a108a549dcbc11aef20ad1cdbbf74d501af10d182bf70a9ba.jpg",
|
| 238 |
+
"image_caption": [
|
| 239 |
+
"LLaVA",
|
| 240 |
+
"Figure 2. State-of-the-art vision language models, such as LLaVA [40], has rich human-object-interaction knowledge, thanks to the large-scale text pretraining. Given a question about how to interact with an object, it typically gives a reasonable solution."
|
| 241 |
+
],
|
| 242 |
+
"image_footnote": [],
|
| 243 |
+
"bbox": [
|
| 244 |
+
398,
|
| 245 |
+
146,
|
| 246 |
+
450,
|
| 247 |
+
183
|
| 248 |
+
],
|
| 249 |
+
"page_idx": 1
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"type": "text",
|
| 253 |
+
"text": "To ride the motorcycle, you should interact with the handlebars, which are located at the front of the motorcycle. The handlebars are used to steer the motorcycle and control its direction and speed.",
|
| 254 |
+
"bbox": [
|
| 255 |
+
96,
|
| 256 |
+
165,
|
| 257 |
+
359,
|
| 258 |
+
207
|
| 259 |
+
],
|
| 260 |
+
"page_idx": 1
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"type": "text",
|
| 264 |
+
"text": "Additionally, you should also ensure that the motorcycle is parked in a safe and legal location, and that you have the necessary safety gear, such as a helmet and protective clothing, before attempting to ride it.",
|
| 265 |
+
"bbox": [
|
| 266 |
+
96,
|
| 267 |
+
210,
|
| 268 |
+
383,
|
| 269 |
+
258
|
| 270 |
+
],
|
| 271 |
+
"page_idx": 1
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"type": "text",
|
| 275 |
+
"text": "in large-scale Vision Language Models (VLMs). With large-scale text pretraining, modern VLMs such as GPT-4 [52], LLaVA [40] and Blip-2 [35] have a rich reservoir of world knowledge, as demonstrated by their extraordinary capabilities in answering visually grounded common sense questions [4]. World knowledge is instrumental to affordance reasoning — when presented with an image of a motorcycle and questioned about “How do I ride with this motorcycle?” LLaVA answers “To ride the motorcycle, you should interact with the handlebars...” (Fig. 2), which exhibits commonsensical understanding of affordance. Affordance models equipped with similar world knowledge have a better chance generalizing to the wild than a model that purely learns from limited affordance demonstration data.",
|
| 276 |
+
"bbox": [
|
| 277 |
+
75,
|
| 278 |
+
354,
|
| 279 |
+
468,
|
| 280 |
+
565
|
| 281 |
+
],
|
| 282 |
+
"page_idx": 1
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"type": "text",
|
| 286 |
+
"text": "Beside world knowledge, another novel factor we introduce to improve affordance reasoning is 3D geometry, as it holds rich information of object functionality. A cylindrical area, for example a handle or a stick, is closely related to the action of grabbing or holding, regardless of its color or texture. Similarly, a flat part, for example the surface of a chair or a bench, might indicate areas suitable for sitting or lying. Relating 3D geometries to actions allows us to bypass the difficulties in handling variations in visual appearances, and thus facilitates generalization.",
|
| 287 |
+
"bbox": [
|
| 288 |
+
75,
|
| 289 |
+
566,
|
| 290 |
+
468,
|
| 291 |
+
715
|
| 292 |
+
],
|
| 293 |
+
"page_idx": 1
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"type": "text",
|
| 297 |
+
"text": "We propose a novel approach, AffordanceLLM, that reflects the above intuitions. Our approach builds upon a VLM backbone (LLaVA [40]) to tap into the its world knowledge. We achieve it by extending the backbone with a mask decoder and a special token <mask_token>, which are used to predict an affordance map. The whole model can be trained end-to-end. Additionally, we introduce depth maps as 3D information in parallel to RGB images as input to our network, with the goal of eliciting geometric reasoning capability from the network. We found both designs significantly improve performance.",
|
| 298 |
+
"bbox": [
|
| 299 |
+
75,
|
| 300 |
+
718,
|
| 301 |
+
468,
|
| 302 |
+
883
|
| 303 |
+
],
|
| 304 |
+
"page_idx": 1
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"type": "text",
|
| 308 |
+
"text": "We evaluate our method on the AGD20K [45] bench",
|
| 309 |
+
"bbox": [
|
| 310 |
+
96,
|
| 311 |
+
885,
|
| 312 |
+
468,
|
| 313 |
+
900
|
| 314 |
+
],
|
| 315 |
+
"page_idx": 1
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"type": "text",
|
| 319 |
+
"text": "mark, as this is the only large-scale affordance grounding dataset with accurate action and object labels. We re-split the benchmark to test capability of models to generalize to object categories unseen during training. We show that our approach outperforms all state-of-the-art baselines by a large margin. We take a further step to validate the generalization ability by testing our approach on random Internet images. It produces reasonable affordance maps on object categories very different from the ones in training set. Moreover, it even possesses some capability of generalizing to completely novel actions.",
|
| 320 |
+
"bbox": [
|
| 321 |
+
496,
|
| 322 |
+
90,
|
| 323 |
+
890,
|
| 324 |
+
256
|
| 325 |
+
],
|
| 326 |
+
"page_idx": 1
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"type": "text",
|
| 330 |
+
"text": "In summary, our contributions are as follows:",
|
| 331 |
+
"bbox": [
|
| 332 |
+
517,
|
| 333 |
+
258,
|
| 334 |
+
818,
|
| 335 |
+
272
|
| 336 |
+
],
|
| 337 |
+
"page_idx": 1
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"type": "list",
|
| 341 |
+
"sub_type": "text",
|
| 342 |
+
"list_items": [
|
| 343 |
+
"1. We introduce the first-ever affordance grounding approach that leverages the rich world knowledge embedded in pretrained VLMs, enabling the model to generalize beyond training data;",
|
| 344 |
+
"2. We demonstrate the importance of 3D information in affordance grounding;",
|
| 345 |
+
"3. Our proposed approach generalizes to novel objects and outperforms all state-of-the-art approaches on AGD20K. It even shows evidence that it could generalize to novel actions."
|
| 346 |
+
],
|
| 347 |
+
"bbox": [
|
| 348 |
+
500,
|
| 349 |
+
275,
|
| 350 |
+
890,
|
| 351 |
+
424
|
| 352 |
+
],
|
| 353 |
+
"page_idx": 1
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"type": "text",
|
| 357 |
+
"text": "2. Related Work",
|
| 358 |
+
"text_level": 1,
|
| 359 |
+
"bbox": [
|
| 360 |
+
500,
|
| 361 |
+
444,
|
| 362 |
+
640,
|
| 363 |
+
459
|
| 364 |
+
],
|
| 365 |
+
"page_idx": 1
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"type": "text",
|
| 369 |
+
"text": "Eliciting World Knowledge from Vision Language Models. Foundational Vision Language Models that bridge images and language have a rich reservoir of world knowledge, and recent researches have been tapping into it to make advancement in vision tasks. The joint visual-language embedding space learnt from simple image-text pair [13, 57] has made it possible to improve open-world detection [42, 47, 48, 61], and segmentation [33, 43, 64, 67]. The world knowledge here is the correspondence between visual and language concepts.",
|
| 370 |
+
"bbox": [
|
| 371 |
+
496,
|
| 372 |
+
472,
|
| 373 |
+
890,
|
| 374 |
+
625
|
| 375 |
+
],
|
| 376 |
+
"page_idx": 1
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"type": "text",
|
| 380 |
+
"text": "Large language models (LLMs) trained on trillions of tokens contain even richer world knowledge and are capable of answering common-sense questions. Coupled with vision inputs, the resulting multi-modal LLMs are brought in to solve complex vision problems. For example, Kosmos-2 and Groundhog incorporate the reasoning skill of LLMs to generate bounding boxes and segmentation masks [32, 53, 75]. 3D LLM further extend LLMs to reason about 3D scenes, including visual grounding and navigation [12, 22, 24, 25, 37, 68, 77]. For robotics, PaLM-E and RT2 transfer the knowledge from visual-language domains into motion planing and manipulation [3, 14]. Our approach embodies the same idea to transfer the world knowledge from VLMs, but applies it on a novel setting - solving visual affordance grounding.",
|
| 381 |
+
"bbox": [
|
| 382 |
+
496,
|
| 383 |
+
625,
|
| 384 |
+
890,
|
| 385 |
+
851
|
| 386 |
+
],
|
| 387 |
+
"page_idx": 1
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"type": "text",
|
| 391 |
+
"text": "Affordance Grounding. Understanding object affordance from a single image is an important step towards embodied visual intelligence, and researchers have built many dif",
|
| 392 |
+
"bbox": [
|
| 393 |
+
496,
|
| 394 |
+
854,
|
| 395 |
+
890,
|
| 396 |
+
900
|
| 397 |
+
],
|
| 398 |
+
"page_idx": 1
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"type": "image",
|
| 402 |
+
"img_path": "images/4f325426ec86cdeacf412ebb3d96fc62012ee481ac4d9d61fe418e383fd2bdf9.jpg",
|
| 403 |
+
"image_caption": [
|
| 404 |
+
"Figure 3. Overview of AffordanceLLM. The inputs of our model include a single image and a text prompt related to interaction. We use OWL-ViT [48] as the image encoder to generate image features and project it into the same hidden dimension as the large language model. As well, we use a tokenizer to encode the text prompt. The text features and image features are concatenated together and feed into the LLM. The LLM is fine-tuned to predict a special token, which is used as a query to the mask decoder to generate the final affordance map."
|
| 405 |
+
],
|
| 406 |
+
"image_footnote": [],
|
| 407 |
+
"bbox": [
|
| 408 |
+
83,
|
| 409 |
+
88,
|
| 410 |
+
893,
|
| 411 |
+
251
|
| 412 |
+
],
|
| 413 |
+
"page_idx": 2
|
| 414 |
+
},
|
| 415 |
+
{
|
| 416 |
+
"type": "text",
|
| 417 |
+
"text": "ferent approaches to endow machines to have this ability. Nagarajan et al. first proposes to ground object affordance from Internet videos [50]. Fang et al. constructs an object affordance dataset based on product review videos [16]. Luo et al. annotates the first large-scale affordance dataset and call it AGD20K [45]. LOCATE [34] is the state-of-the-art approach on AGD20K. More recently, researchers further extends the scope of the affordance grounding problem, including extending it to scene understanding [9, 30, 54], 3D models [69], egocentric videos [49], hand pose generation [27, 70], or associating it with human parts [46]. We use AGD20K as our primary benchmark, and compare our approach with state-of-the-art methods [21, 34, 44, 50, 54].",
|
| 418 |
+
"bbox": [
|
| 419 |
+
75,
|
| 420 |
+
324,
|
| 421 |
+
472,
|
| 422 |
+
521
|
| 423 |
+
],
|
| 424 |
+
"page_idx": 2
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"type": "text",
|
| 428 |
+
"text": "Incorporating 3D Information for Vision Tasks. 3D information has been shown to be critical in certain vision and robotics tasks. For example, Zhou et al. [76] found that visual navigation in mobile sensorimotor systems can benefit from 3D input. Kerr et al. [28] found the NeRF-rendered depth map can help grasping in robotics. Similarly, grounding affordance could benefit from 3D information as well, as 3D shapes of objects and their parts hold a lot of hints on their utility and the proper ways to interact with them. While 3D information is not usually available for an arbitrary image, fortunately, researchers have built a series of robust approaches to estimate the 3D of an image, ranging from surface normal estimation [15, 65], depth estimation [10, 36, 59, 72, 73], to 3D reconstruction [18, 39, 51] and few-image NeRF [5, 41, 56, 74]. In our paper, we mainly use DPT [59] to generate pseudo depth maps to help VLMs to build 3D understanding.",
|
| 429 |
+
"bbox": [
|
| 430 |
+
75,
|
| 431 |
+
529,
|
| 432 |
+
472,
|
| 433 |
+
787
|
| 434 |
+
],
|
| 435 |
+
"page_idx": 2
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"type": "text",
|
| 439 |
+
"text": "Robotics Manipulation. Manipulation of in-the-wild objects is an important but challenging task in robotics due to the difficulty of data collection. Researchers have developed many methods for different objects in different scenes, such as tabletop objects [19, 26, 62] and mobile manipulation [71]. While manipulation is not our goal, learning affordance can be a solution for manipulation [2, 23, 66].",
|
| 440 |
+
"bbox": [
|
| 441 |
+
75,
|
| 442 |
+
794,
|
| 443 |
+
470,
|
| 444 |
+
902
|
| 445 |
+
],
|
| 446 |
+
"page_idx": 2
|
| 447 |
+
},
|
| 448 |
+
{
|
| 449 |
+
"type": "text",
|
| 450 |
+
"text": "3. Approach",
|
| 451 |
+
"text_level": 1,
|
| 452 |
+
"bbox": [
|
| 453 |
+
498,
|
| 454 |
+
323,
|
| 455 |
+
609,
|
| 456 |
+
340
|
| 457 |
+
],
|
| 458 |
+
"page_idx": 2
|
| 459 |
+
},
|
| 460 |
+
{
|
| 461 |
+
"type": "text",
|
| 462 |
+
"text": "We now introduce our approach, AffordanceLLM, which takes a single image $I$ and an affordance text query $T$ , and generates an affordance map $M$ . We use a template of \"What part of the <object_name> should we interact with in order to <action_name> it?\" as the text query $T$ . We then train LLM to generate a special token <mask_token> and use its hidden state to decode a dense affordance map $M$ . A brief overview is shown in Fig 3.",
|
| 463 |
+
"bbox": [
|
| 464 |
+
496,
|
| 465 |
+
348,
|
| 466 |
+
893,
|
| 467 |
+
470
|
| 468 |
+
],
|
| 469 |
+
"page_idx": 2
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"text": "3.1. Overview",
|
| 474 |
+
"text_level": 1,
|
| 475 |
+
"bbox": [
|
| 476 |
+
500,
|
| 477 |
+
481,
|
| 478 |
+
609,
|
| 479 |
+
496
|
| 480 |
+
],
|
| 481 |
+
"page_idx": 2
|
| 482 |
+
},
|
| 483 |
+
{
|
| 484 |
+
"type": "text",
|
| 485 |
+
"text": "Large language model. We choose LLaVA-7B as our backbone multimodal large language model. We refer the reader for a fuller explanation in [40], but briefly, LLaVA contains an image encoder, a text tokenizer, and a large language model LLM. The image encoder is typically a CLIP pretrained ViT, with a linear layer to project the hidden dimension. It encodes the image $I$ into image features $F_{I}$ . At the same time, the tokenizer encodes the text $T$ into text features $F_{T}$ . They are concatenated and feed into the language model. The LLM produces text output $A$ as:",
|
| 486 |
+
"bbox": [
|
| 487 |
+
496,
|
| 488 |
+
507,
|
| 489 |
+
893,
|
| 490 |
+
659
|
| 491 |
+
],
|
| 492 |
+
"page_idx": 2
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"type": "equation",
|
| 496 |
+
"text": "\n$$\nA = \\operatorname {L L M} \\left(F _ {I}, F _ {T}\\right). \\tag {1}\n$$\n",
|
| 497 |
+
"text_format": "latex",
|
| 498 |
+
"bbox": [
|
| 499 |
+
625,
|
| 500 |
+
672,
|
| 501 |
+
890,
|
| 502 |
+
689
|
| 503 |
+
],
|
| 504 |
+
"page_idx": 2
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "text",
|
| 508 |
+
"text": "Predicting affordance. How do we perform affordance reasoning while leveraging the world knowledge embedded in LLM? We propose to treat affordance as an implicit text token predicted from the LLM, which could be further decoded into a 2D map. Specifically, we train the LLM to predict a special token $\\langle \\text{mask_token} \\rangle$ , the hidden state of which is first projected into a query embedding $q$ and then fed into a Decoder to generate a dense affordance map. Decoder shares a similar architecture as the ones in SAM [29] and 3DOI [54]. It takes in $q$ and image features $F_{I}$ to produce an affordance map $M$ , i.e.,",
|
| 509 |
+
"bbox": [
|
| 510 |
+
496,
|
| 511 |
+
704,
|
| 512 |
+
893,
|
| 513 |
+
872
|
| 514 |
+
],
|
| 515 |
+
"page_idx": 2
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "equation",
|
| 519 |
+
"text": "\n$$\nM = \\operatorname {D e c o d e r} \\left(F _ {I}, q\\right). \\tag {2}\n$$\n",
|
| 520 |
+
"text_format": "latex",
|
| 521 |
+
"bbox": [
|
| 522 |
+
617,
|
| 523 |
+
885,
|
| 524 |
+
890,
|
| 525 |
+
901
|
| 526 |
+
],
|
| 527 |
+
"page_idx": 2
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"type": "text",
|
| 531 |
+
"text": "Pseudodepth as additional inputs. Besides images, the affordance reasoning task could benefit from 3D information (we will validate the benefits in Sec 4). However, modern VLMs are typically only trained with text and 2D images [35, 40]. Therefore, we also include a pseudo depth map as additional inputs to the large language model. For each image, we use the state-of-the-art depth estimation model DPT [59] to generate a pseudo depth map $D$ . We use the same image encoder to encode the depth map $D$ to produce depth features $F_{D}$ , and concatenate it with image features. Our final model is thus",
|
| 532 |
+
"bbox": [
|
| 533 |
+
75,
|
| 534 |
+
90,
|
| 535 |
+
472,
|
| 536 |
+
257
|
| 537 |
+
],
|
| 538 |
+
"page_idx": 3
|
| 539 |
+
},
|
| 540 |
+
{
|
| 541 |
+
"type": "equation",
|
| 542 |
+
"text": "\n$$\nA, M = \\text {A f f o r d a n c e L L M} \\left(F _ {I}, F _ {D}, F _ {T}\\right). \\tag {3}\n$$\n",
|
| 543 |
+
"text_format": "latex",
|
| 544 |
+
"bbox": [
|
| 545 |
+
138,
|
| 546 |
+
271,
|
| 547 |
+
468,
|
| 548 |
+
287
|
| 549 |
+
],
|
| 550 |
+
"page_idx": 3
|
| 551 |
+
},
|
| 552 |
+
{
|
| 553 |
+
"type": "text",
|
| 554 |
+
"text": "Training objectives. Following the same practice as [54], we train the affordance map using a binary focal loss [38], and call it affordance loss $L_{\\mathrm{aff}}$ . We set the weight of positive examples to be 0.95 and that of negative ones to be 0.05 to balance positives and negatives, as there are more negatives than positives in ground truth affordance map. We follow the standard cross entropy loss for the text output of language models. Our final loss function is a linear combination of affordance loss and text loss, given by",
|
| 555 |
+
"bbox": [
|
| 556 |
+
75,
|
| 557 |
+
301,
|
| 558 |
+
472,
|
| 559 |
+
439
|
| 560 |
+
],
|
| 561 |
+
"page_idx": 3
|
| 562 |
+
},
|
| 563 |
+
{
|
| 564 |
+
"type": "equation",
|
| 565 |
+
"text": "\n$$\nL = L _ {\\text {a f f}} + \\lambda \\cdot L _ {\\text {t e x t}}. \\tag {4}\n$$\n",
|
| 566 |
+
"text_format": "latex",
|
| 567 |
+
"bbox": [
|
| 568 |
+
204,
|
| 569 |
+
452,
|
| 570 |
+
468,
|
| 571 |
+
469
|
| 572 |
+
],
|
| 573 |
+
"page_idx": 3
|
| 574 |
+
},
|
| 575 |
+
{
|
| 576 |
+
"type": "text",
|
| 577 |
+
"text": "In practice, we set $\\lambda = 0.01$ to balance two losses, as the affordance loss can be quite small due to the imbalance of positive and negative values.",
|
| 578 |
+
"bbox": [
|
| 579 |
+
76,
|
| 580 |
+
482,
|
| 581 |
+
470,
|
| 582 |
+
527
|
| 583 |
+
],
|
| 584 |
+
"page_idx": 3
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"type": "text",
|
| 588 |
+
"text": "3.2. Network Architecture",
|
| 589 |
+
"text_level": 1,
|
| 590 |
+
"bbox": [
|
| 591 |
+
76,
|
| 592 |
+
539,
|
| 593 |
+
282,
|
| 594 |
+
554
|
| 595 |
+
],
|
| 596 |
+
"page_idx": 3
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"type": "text",
|
| 600 |
+
"text": "Next, we discuss the network architecture, and the training details of our model.",
|
| 601 |
+
"bbox": [
|
| 602 |
+
75,
|
| 603 |
+
563,
|
| 604 |
+
468,
|
| 605 |
+
593
|
| 606 |
+
],
|
| 607 |
+
"page_idx": 3
|
| 608 |
+
},
|
| 609 |
+
{
|
| 610 |
+
"type": "text",
|
| 611 |
+
"text": "Image encoder. The standard LLaVA uses CLIP image encoder and a linear projection layer [40, 67]. In practice, we find that the CLIP image encoder has low resolution (224x224) and does not capture sufficient information about grounding. Therefore, we use OWL-ViT [48] to replace the standard CLIP-ViT [57]. OWL-ViT has an input resolution of 768x768, which is significantly higher than CLIP. At the same time, OWL-ViT is pretrained to extract features that include precise location information of objects. As we will empirically show in experiments, using OWL-ViT is significantly better than CLIP. However, we note that our approach is general, and will benefit from any future improvements in pretrained VLM backbones.",
|
| 612 |
+
"bbox": [
|
| 613 |
+
75,
|
| 614 |
+
595,
|
| 615 |
+
468,
|
| 616 |
+
792
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 3
|
| 619 |
+
},
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "Projection. Another problem of using OWL-ViT is about its embedding space. With a much higher input resolution, OWL-ViT produces 576 tokens with a hidden dimension of 768 for each image. In comparison, CLIP only produces 256 tokens for each image. Projecting each individual token into the hidden dimension of LLM (4096) consumes a lot of GPU memory. In practice, we project each token",
|
| 623 |
+
"bbox": [
|
| 624 |
+
75,
|
| 625 |
+
794,
|
| 626 |
+
470,
|
| 627 |
+
902
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 3
|
| 630 |
+
},
|
| 631 |
+
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "of OWL-ViT to 1024, and concatenating four neighboring tokens into a single token.",
|
| 634 |
+
"bbox": [
|
| 635 |
+
496,
|
| 636 |
+
90,
|
| 637 |
+
890,
|
| 638 |
+
121
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 3
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "text",
|
| 644 |
+
"text": "Language model. We follow LLaVA [40] and LLama [63] to use the standard text tokenizer to encode our text query. We use LLama-7B [63] as the large language model.",
|
| 645 |
+
"bbox": [
|
| 646 |
+
496,
|
| 647 |
+
123,
|
| 648 |
+
890,
|
| 649 |
+
167
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 3
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "Affordance decoder. We aim to keep a lightweight decoder, as it has been proved to produce good segmentation masks and affordance maps [6, 11, 29, 54]. However, we find the vanilla mask decoder is too lightweight in our case and does not produce high-resolution affordance map. Therefore, we add an additional transposed convolution layer to increase its output resolution.",
|
| 656 |
+
"bbox": [
|
| 657 |
+
496,
|
| 658 |
+
170,
|
| 659 |
+
890,
|
| 660 |
+
276
|
| 661 |
+
],
|
| 662 |
+
"page_idx": 3
|
| 663 |
+
},
|
| 664 |
+
{
|
| 665 |
+
"type": "text",
|
| 666 |
+
"text": "Implementation. We implement our model using PyTorch and HuggingFace. We initialize our model with LLama-7B pretrained weights. Following LLaVA [40], we freeze the image encoder, pretrain the image projection layer to align OWL-ViT and LLama features, and then use GPT instructions to tune the language model. Finally, we add the mask encoder [29, 54] and tune the whole model on AGD20K [45], which has annotations of object affordance. We use eight NVIDIA A100 (40GB) to train our model, with Fully Sharded Data Parallel. We use a batch size of 4 and set the learning rate as 2e-5.",
|
| 667 |
+
"bbox": [
|
| 668 |
+
496,
|
| 669 |
+
277,
|
| 670 |
+
890,
|
| 671 |
+
445
|
| 672 |
+
],
|
| 673 |
+
"page_idx": 3
|
| 674 |
+
},
|
| 675 |
+
{
|
| 676 |
+
"type": "text",
|
| 677 |
+
"text": "4. Experiments",
|
| 678 |
+
"text_level": 1,
|
| 679 |
+
"bbox": [
|
| 680 |
+
498,
|
| 681 |
+
459,
|
| 682 |
+
633,
|
| 683 |
+
477
|
| 684 |
+
],
|
| 685 |
+
"page_idx": 3
|
| 686 |
+
},
|
| 687 |
+
{
|
| 688 |
+
"type": "text",
|
| 689 |
+
"text": "In experiments, we aim to systematically evaluate the performance of our approach. In particular, we are interested in answering these questions: (1) How well does it generalize, compared with state-of-the-art methods? (2) How does each design choice contribute to the final performance, including prompts, visual encoders, and depth?",
|
| 690 |
+
"bbox": [
|
| 691 |
+
496,
|
| 692 |
+
484,
|
| 693 |
+
890,
|
| 694 |
+
575
|
| 695 |
+
],
|
| 696 |
+
"page_idx": 3
|
| 697 |
+
},
|
| 698 |
+
{
|
| 699 |
+
"type": "text",
|
| 700 |
+
"text": "4.1. Experimental Setup",
|
| 701 |
+
"text_level": 1,
|
| 702 |
+
"bbox": [
|
| 703 |
+
498,
|
| 704 |
+
585,
|
| 705 |
+
689,
|
| 706 |
+
602
|
| 707 |
+
],
|
| 708 |
+
"page_idx": 3
|
| 709 |
+
},
|
| 710 |
+
{
|
| 711 |
+
"type": "text",
|
| 712 |
+
"text": "Metrics. We evaluate primarily on AGD20K [45] and follow its metrics to evaluate our model, which is KLD, SIM and NSS [34, 45, 50, 54]. For KLD, the lower the better. And for SIM and NSS, the higher, the better. A full explanation is available in the supplemental.",
|
| 713 |
+
"bbox": [
|
| 714 |
+
496,
|
| 715 |
+
609,
|
| 716 |
+
890,
|
| 717 |
+
686
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 3
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "text",
|
| 723 |
+
"text": "Baselines. We compare our approach against state-of-the-art baselines. In general, affordance grounding methods belong to two categories: weakly supervised and fully supervised methods. We report performance of both categories.",
|
| 724 |
+
"bbox": [
|
| 725 |
+
496,
|
| 726 |
+
689,
|
| 727 |
+
890,
|
| 728 |
+
750
|
| 729 |
+
],
|
| 730 |
+
"page_idx": 3
|
| 731 |
+
},
|
| 732 |
+
{
|
| 733 |
+
"type": "text",
|
| 734 |
+
"text": "(Weakly supervised methods): They do not train on explicit labels of the affordance map. Instead, they are trained on a human demonstration of the same object. These approaches include InteractionHotspots [50], Cross-View-AG [45], Cross-View-AG+ [44], AffCorrs [21], and LOCATE [34]. Among them, LOCATE is the most recent model and has the best results on AGD20K. We use the reported number in LOCATE for the easy split and retrain them for the hard split. Among them, we cannot run AffCorrs, as it focuses on one-shot affordance learning. The",
|
| 735 |
+
"bbox": [
|
| 736 |
+
496,
|
| 737 |
+
750,
|
| 738 |
+
890,
|
| 739 |
+
901
|
| 740 |
+
],
|
| 741 |
+
"page_idx": 3
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "image",
|
| 745 |
+
"img_path": "images/9cdd04c15a1a50864b605d8367f79f681cc4e1d0531737b8403b962ad0925a29.jpg",
|
| 746 |
+
"image_caption": [
|
| 747 |
+
"Figure 4. Qualitative results on the test set of the hard split. LOCATE-Sup fails to learn a reasonable affordance map due to limited training data. LOCATE [34] typically predicts an affordance map which covers the whole object. 3DOI [54] focuses on a small area of the object. Overall, our approach produces the best-quality affordance predictions."
|
| 748 |
+
],
|
| 749 |
+
"image_footnote": [],
|
| 750 |
+
"bbox": [
|
| 751 |
+
81,
|
| 752 |
+
88,
|
| 753 |
+
890,
|
| 754 |
+
542
|
| 755 |
+
],
|
| 756 |
+
"page_idx": 4
|
| 757 |
+
},
|
| 758 |
+
{
|
| 759 |
+
"type": "text",
|
| 760 |
+
"text": "reported model on the easy split is adapted by [34] and not publicly available. We also do not run InteractionHotspots because the pretrained model only supports 7 actions. The reported model is retrained by [45] but lacks sufficient implementation details to be reproduced. Therefore, we retrain Cross-View-AG [45], Cross-View-AG+ [44], and LOCATE [34] on the hard split. We maintain the object/action split, but allow them to use more images for weak supervision. Therefore, they have 11,889 images for training.",
|
| 761 |
+
"bbox": [
|
| 762 |
+
75,
|
| 763 |
+
601,
|
| 764 |
+
470,
|
| 765 |
+
738
|
| 766 |
+
],
|
| 767 |
+
"page_idx": 4
|
| 768 |
+
},
|
| 769 |
+
{
|
| 770 |
+
"type": "text",
|
| 771 |
+
"text": "(Fully supervised methods): Affordance map can also be learned from explicit labels, and we call it supervised methods. This includes 3DOI [54] and ours. We also adapt LOCATE to a fully supervised version for fair comparison.",
|
| 772 |
+
"bbox": [
|
| 773 |
+
75,
|
| 774 |
+
743,
|
| 775 |
+
468,
|
| 776 |
+
805
|
| 777 |
+
],
|
| 778 |
+
"page_idx": 4
|
| 779 |
+
},
|
| 780 |
+
{
|
| 781 |
+
"type": "text",
|
| 782 |
+
"text": "- 3DOI [54]: 3DOI is a SAM-based model [29], which takes a single image and a query point and predicts the segmentation mask and affordance map. Therefore, We randomly sample a pixel with score $>0.9$ as the query point from the affordance map. We use the 3DOI pretrained model, which has never seen any images in AGD20K, in",
|
| 783 |
+
"bbox": [
|
| 784 |
+
75,
|
| 785 |
+
809,
|
| 786 |
+
470,
|
| 787 |
+
901
|
| 788 |
+
],
|
| 789 |
+
"page_idx": 4
|
| 790 |
+
},
|
| 791 |
+
{
|
| 792 |
+
"type": "text",
|
| 793 |
+
"text": "cluding the training set.",
|
| 794 |
+
"bbox": [
|
| 795 |
+
498,
|
| 796 |
+
601,
|
| 797 |
+
658,
|
| 798 |
+
617
|
| 799 |
+
],
|
| 800 |
+
"page_idx": 4
|
| 801 |
+
},
|
| 802 |
+
{
|
| 803 |
+
"type": "text",
|
| 804 |
+
"text": "- LOCATE-Sup [34]: To ensure fair comparison, we also adopt LOCATE and train it using the same binary focal loss as our model. We call it LOCATE-Sup. LOCATE uses a Dino-ViT as its visual encoder [7]. To eliminate the effect of different pretrained visual encoders, we also report the performance of LOCATE-Sup-OWL, which uses the exact same pretrained visual encoder as ours.",
|
| 805 |
+
"bbox": [
|
| 806 |
+
496,
|
| 807 |
+
618,
|
| 808 |
+
890,
|
| 809 |
+
724
|
| 810 |
+
],
|
| 811 |
+
"page_idx": 4
|
| 812 |
+
},
|
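For reference, the binary focal loss [38] used to supervise both LOCATE-Sup and our model on dense affordance maps can be written as below. This is a minimal sketch of the standard formulation; the alpha and gamma values are common defaults, not numbers taken from the paper.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss [38] over a dense affordance map.

    logits, targets: tensors of shape (B, H, W), targets in [0, 1].
    alpha/gamma are the usual defaults, not values reported in the paper.
    """
    probs = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = probs * targets + (1 - probs) * (1 - targets)        # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```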
| 813 |
+
{
|
| 814 |
+
"type": "text",
|
| 815 |
+
"text": "4.2. Dataset",
|
| 816 |
+
"text_level": 1,
|
| 817 |
+
"bbox": [
|
| 818 |
+
500,
|
| 819 |
+
738,
|
| 820 |
+
594,
|
| 821 |
+
753
|
| 822 |
+
],
|
| 823 |
+
"page_idx": 4
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"text": "We follow LOCATE [34] to evaluate primarily on AGD20K [45], as it is the only large-scale dataset for affordance with action and object labels. Since our approach is not weakly supervised and requires dense annotations, we only use AGD20K images with dense annotations.",
|
| 828 |
+
"bbox": [
|
| 829 |
+
496,
|
| 830 |
+
762,
|
| 831 |
+
890,
|
| 832 |
+
838
|
| 833 |
+
],
|
| 834 |
+
"page_idx": 4
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "text",
|
| 838 |
+
"text": "In this paper, we primarily evaluate the ability of an affordance model to generalize to unseen object categories, and thus evaluate on the Unseen split of the AGD20K benchmark. This split ensures that there is no overlap be",
|
| 839 |
+
"bbox": [
|
| 840 |
+
496,
|
| 841 |
+
839,
|
| 842 |
+
890,
|
| 843 |
+
900
|
| 844 |
+
],
|
| 845 |
+
"page_idx": 4
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "table",
|
| 849 |
+
"img_path": "images/e263b46b6d60887c5889d992e77c3d8ca2c304e2220256c14d8d352c971220f8.jpg",
|
| 850 |
+
"table_caption": [
|
| 851 |
+
"Table 1. Difficulty score of different splits. The lower the score, the more similar are the object categories in the train and test set."
|
| 852 |
+
],
|
| 853 |
+
"table_footnote": [],
|
| 854 |
+
"table_body": "<table><tr><td>Splits</td><td>Same</td><td>Easy</td><td>Hard</td><td>Random</td></tr><tr><td>Difficulty Score ↑</td><td>0.000</td><td>0.356</td><td>0.412</td><td>0.491</td></tr></table>",
|
| 855 |
+
"bbox": [
|
| 856 |
+
99,
|
| 857 |
+
128,
|
| 858 |
+
446,
|
| 859 |
+
171
|
| 860 |
+
],
|
| 861 |
+
"page_idx": 5
|
| 862 |
+
},
|
| 863 |
+
{
|
| 864 |
+
"type": "table",
|
| 865 |
+
"img_path": "images/ea4bec7c595cb0d018cf92d108ac3f7e3c6e8af385eca3da796278e52b034ac5.jpg",
|
| 866 |
+
"table_caption": [
|
| 867 |
+
"Table 2. Quantitative results on the Easy split of AGD20K [45]. InteractionHotspots, Cross-View-AG(+), AffCorrs and LOCATE are trained on AGD20K images with weak supervision (13,323 images). LOCATE-Sup and LOCATE-Sup-OWL, and AffordanceLLM are trained on AGD20K images with dense annotation (1,135 images). 3DOI is trained on their own dataset with dense annotation (10,000 images) [54]. AffordanceLLM is comparable to LOCATE [34] on the easy split, where test objects have similar counterparts in the training set. The best and second-best results are highlighted in bold and underlined, respectively."
|
| 868 |
+
],
|
| 869 |
+
"table_footnote": [],
|
| 870 |
+
"table_body": "<table><tr><td>Methods</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>InteractionHotspots [50]</td><td>1.994</td><td>0.237</td><td>0.577</td></tr><tr><td>Cross-View-AG [45]</td><td>1.787</td><td>0.285</td><td>0.829</td></tr><tr><td>Cross-View-AG+ [44]</td><td>1.765</td><td>0.279</td><td>0.882</td></tr><tr><td>AffCorrs [21]</td><td>1.618</td><td>0.348</td><td>1.021</td></tr><tr><td>LOCATE [34]</td><td>1.405</td><td>0.372</td><td>1.157</td></tr><tr><td>LOCATE-Sup [34]</td><td>1.907</td><td>0.236</td><td>0.641</td></tr><tr><td>LOCATE-Sup-OWL [34, 48]</td><td>1.927</td><td>0.234</td><td>0.624</td></tr><tr><td>3DOI [54]</td><td>3.565</td><td>0.227</td><td>0.657</td></tr><tr><td>AffordanceLLM (Ours)</td><td>1.463</td><td>0.377</td><td>1.070</td></tr></table>",
|
| 871 |
+
"bbox": [
|
| 872 |
+
81,
|
| 873 |
+
325,
|
| 874 |
+
464,
|
| 875 |
+
484
|
| 876 |
+
],
|
| 877 |
+
"page_idx": 5
|
| 878 |
+
},
|
| 879 |
+
{
|
| 880 |
+
"type": "text",
|
| 881 |
+
"text": "tween the object categories in the train and test set.",
|
| 882 |
+
"bbox": [
|
| 883 |
+
76,
|
| 884 |
+
501,
|
| 885 |
+
413,
|
| 886 |
+
516
|
| 887 |
+
],
|
| 888 |
+
"page_idx": 5
|
| 889 |
+
},
|
| 890 |
+
{
|
| 891 |
+
"type": "text",
|
| 892 |
+
"text": "However, the Unseen split has one major issue: there are still a lot of similarities between the objects in the train and test set. Objects in the test set typically have similar counterparts in the training set, leaving models room for memorizing. For example, \"skis\" in the test set maps to \"snowboards\" and \"skateboards\" in training set, \"basketball\" maps to \"baseball\", \"knife\" maps to \"fork\", etc. To make the benchmark more reflective of the generalization ability of a model, we provide a more challenging split. We thus have the following two splits.",
|
| 893 |
+
"bbox": [
|
| 894 |
+
75,
|
| 895 |
+
517,
|
| 896 |
+
468,
|
| 897 |
+
667
|
| 898 |
+
],
|
| 899 |
+
"page_idx": 5
|
| 900 |
+
},
|
| 901 |
+
{
|
| 902 |
+
"type": "text",
|
| 903 |
+
"text": "Easy split. This is the original Unseen split of AGD20K. We have 1135/540 images for train and test with dense annotations for the fully supervised setting, or 13,323/540 images for the weakly supervised setting. The test set remains the same for both settings.",
|
| 904 |
+
"bbox": [
|
| 905 |
+
75,
|
| 906 |
+
670,
|
| 907 |
+
468,
|
| 908 |
+
746
|
| 909 |
+
],
|
| 910 |
+
"page_idx": 5
|
| 911 |
+
},
|
| 912 |
+
{
|
| 913 |
+
"type": "text",
|
| 914 |
+
"text": "Hard split. We randomly put $50\\%$ AGD20K object classes into the training set and the remaining classes into the test set to simulate in-the-wild generalization (details in the supplemental). The training and test object are not overlapping, and most objects in the test set do not have a similar counterpart in the training set, and is thus harder to generalize than the Easy split. We have 868/807 images for train and test with dense annotations for the fully supervised setting, and 11,889/807 images for weakly supervised setting. The test set is the same for both settings.",
|
| 915 |
+
"bbox": [
|
| 916 |
+
75,
|
| 917 |
+
750,
|
| 918 |
+
468,
|
| 919 |
+
900
|
| 920 |
+
],
|
| 921 |
+
"page_idx": 5
|
| 922 |
+
},
|
| 923 |
+
{
|
| 924 |
+
"type": "text",
|
| 925 |
+
"text": "Table 3. Quantitative results on the Hard split of AGD20K [45]. Cross-View-AG $(+)$ and LOCATE are trained on AGD20K images with weak supervision (11,889 images). LOCATE-Sup and LOCATE-Sup-OWL, and AffordanceLLM are trained on AGD20K images with dense annotation (868 images). 3DOI is trained on their own dataset with dense annotation (10,000 images) [54]. On the hard split, AffordanceLLM outperforms all baselines by a large margin, which demonstrates the superior generalization ability of our model. We do not run Interaction-Hotspots [50] and AffCorrs [21], as the reported model has ambiguous implementation details, or is not publicly available. The best and second-best results are highlighted in bold and underlined, respectively.",
|
| 926 |
+
"bbox": [
|
| 927 |
+
496,
|
| 928 |
+
88,
|
| 929 |
+
890,
|
| 930 |
+
268
|
| 931 |
+
],
|
| 932 |
+
"page_idx": 5
|
| 933 |
+
},
|
| 934 |
+
{
|
| 935 |
+
"type": "table",
|
| 936 |
+
"img_path": "images/8e7d924791ce551dff28806411a0db9425e905c12c16937ed3671546faede1a7.jpg",
|
| 937 |
+
"table_caption": [],
|
| 938 |
+
"table_footnote": [],
|
| 939 |
+
"table_body": "<table><tr><td>Methods</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>Cross-View-AG [45]</td><td>2.092</td><td>0.209</td><td>0.138</td></tr><tr><td>Cross-View-AG+ [44]</td><td>2.034</td><td>0.218</td><td>0.342</td></tr><tr><td>LOCATE [34]</td><td>1.829</td><td>0.282</td><td>0.276</td></tr><tr><td>LOCATE-Sup [34]</td><td>2.003</td><td>0.224</td><td>0.435</td></tr><tr><td>LOCATE-Sup-OWL [34, 48]</td><td>2.127</td><td>0.206</td><td>0.314</td></tr><tr><td>3DOI [54]</td><td>4.017</td><td>0.200</td><td>0.549</td></tr><tr><td>AffordanceLLM (Ours)</td><td>1.661</td><td>0.361</td><td>0.947</td></tr></table>",
|
| 940 |
+
"bbox": [
|
| 941 |
+
504,
|
| 942 |
+
270,
|
| 943 |
+
887,
|
| 944 |
+
401
|
| 945 |
+
],
|
| 946 |
+
"page_idx": 5
|
| 947 |
+
},
|
| 948 |
+
{
|
| 949 |
+
"type": "text",
|
| 950 |
+
"text": "Measuring split difficulty. We propose a metric to quantify the generalization difficulty of a split. Intuitively, the difficulty to generalize to an object class in the test set is defined by how different it is from the classes in the training set, which could be measured by its semantic distance to the most similar class in the training set [55]. The greater the distance, the harder it is to generalize to this test class. Therefore, for each semantic class $c$ in the test set, we compute its distance $d$ to the most similar class in the training set. We use the CLIP [58] text encoder to obtain an embedding to represent each object class. Assume train classes are $C_{\\mathrm{train}}$ and test classes are $C_{\\mathrm{test}}$ , the difficulty of this split is",
|
| 951 |
+
"bbox": [
|
| 952 |
+
496,
|
| 953 |
+
414,
|
| 954 |
+
890,
|
| 955 |
+
597
|
| 956 |
+
],
|
| 957 |
+
"page_idx": 5
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "equation",
|
| 961 |
+
"text": "\n$$\nD \\left(C _ {\\text {t r a i n}}, C _ {\\text {t e s t}}\\right) = 1 - \\frac {1}{\\left| C _ {\\text {t e s t}} \\right|} \\sum_ {c \\in C _ {\\text {t e s t}}} \\max _ {c ^ {\\prime} \\in C _ {\\text {t r a i n}}} d \\left(c, c ^ {\\prime}\\right). \\tag {5}\n$$\n",
|
| 962 |
+
"text_format": "latex",
|
| 963 |
+
"bbox": [
|
| 964 |
+
513,
|
| 965 |
+
606,
|
| 966 |
+
890,
|
| 967 |
+
656
|
| 968 |
+
],
|
| 969 |
+
"page_idx": 5
|
| 970 |
+
},
|
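A small sketch of the difficulty score in Eq. 5. It assumes an `embed` callable that returns a unit-norm CLIP text embedding for a class name, and takes d(c, c') to be the cosine similarity between the two embeddings; both are assumptions consistent with, but not stated verbatim in, the text above.

```python
import numpy as np

def split_difficulty(train_classes, test_classes, embed):
    """Difficulty score of Eq. 5; higher means harder to generalize.

    `embed(name)` is assumed to return a unit-norm text embedding (1D np.ndarray),
    so the dot products below are cosine similarities d(c, c').
    """
    train_emb = np.stack([embed(c) for c in train_classes])   # (N_train, D)
    test_emb = np.stack([embed(c) for c in test_classes])     # (N_test, D)
    sim = test_emb @ train_emb.T                               # pairwise similarities
    max_sim = sim.max(axis=1)                                  # most similar train class per test class
    return 1.0 - float(max_sim.mean())
```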
| 971 |
+
{
|
| 972 |
+
"type": "text",
|
| 973 |
+
"text": "We compare the difficulty metric among four settings: (1) Same: train and test share the same classes; (2) Easy split; (3) Hard split; (4) Random: constructed by randomly even-splitting 50 object classes from LVIS [20], which serves as a lower bound.",
|
| 974 |
+
"bbox": [
|
| 975 |
+
496,
|
| 976 |
+
657,
|
| 977 |
+
890,
|
| 978 |
+
731
|
| 979 |
+
],
|
| 980 |
+
"page_idx": 5
|
| 981 |
+
},
|
| 982 |
+
{
|
| 983 |
+
"type": "text",
|
| 984 |
+
"text": "We show the difficulty metrics in Tab 1. The Same split has a similarity metric of 0.0, as all object classes in the test are present during training. The Easy split has a similarity metric of 0.356. The Random split has a score of 0.491. The Hard split has a higher score than Easy, meaning that the difference between test and train is more significance in Hard than in Easy, and is thus harder to generalize.",
|
| 985 |
+
"bbox": [
|
| 986 |
+
496,
|
| 987 |
+
732,
|
| 988 |
+
890,
|
| 989 |
+
838
|
| 990 |
+
],
|
| 991 |
+
"page_idx": 5
|
| 992 |
+
},
|
| 993 |
+
{
|
| 994 |
+
"type": "text",
|
| 995 |
+
"text": "4.3. Results",
|
| 996 |
+
"text_level": 1,
|
| 997 |
+
"bbox": [
|
| 998 |
+
500,
|
| 999 |
+
847,
|
| 1000 |
+
591,
|
| 1001 |
+
861
|
| 1002 |
+
],
|
| 1003 |
+
"page_idx": 5
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "text",
|
| 1007 |
+
"text": "Figure 4 shows qualitative results on the test set of the hard split. In this split, the objects in the test set bear little to none",
|
| 1008 |
+
"bbox": [
|
| 1009 |
+
496,
|
| 1010 |
+
869,
|
| 1011 |
+
890,
|
| 1012 |
+
900
|
| 1013 |
+
],
|
| 1014 |
+
"page_idx": 5
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "image",
|
| 1018 |
+
"img_path": "images/cdd8722e73a55110957393332cb416106625cb741cb37141d0da63a2f93cd853.jpg",
|
| 1019 |
+
"image_caption": [
|
| 1020 |
+
"Figure 5. Ablation of different text prompts and depth. Ours w/o depth is our approach without pseudodepth as additional inputs. Ours is our full approach. We find constructing the correct text prompt typically helps our model to focus on the correct area. We believe it is because the correct text prompt would activate the world knowledge related to affordance embedded in the VLM."
|
| 1021 |
+
],
|
| 1022 |
+
"image_footnote": [],
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
78,
|
| 1025 |
+
89,
|
| 1026 |
+
888,
|
| 1027 |
+
377
|
| 1028 |
+
],
|
| 1029 |
+
"page_idx": 6
|
| 1030 |
+
},
|
| 1031 |
+
{
|
| 1032 |
+
"type": "text",
|
| 1033 |
+
"text": "resemblance to the ones in the training set. We compare our approach, AffordanceLLM, with a set of state-of-the-art baselines. LOCATE [34] tends to predicts an affordance map that covers the entire object, indicating poor generalization performance. 3DOI [54] typically focuses on a small area of the object, and sometimes fails to ground the correct region. LOCATE-Sup fails to predict reasonable affordance map, probably due to the small amount of training data. Despite being trained on the same training set as LOCATE-Sup, our approach is able to produce the best affordance map among all methods, showcasing superior generalization capability.",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
75,
|
| 1036 |
+
436,
|
| 1037 |
+
472,
|
| 1038 |
+
618
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 6
|
| 1041 |
+
},
|
| 1042 |
+
{
|
| 1043 |
+
"type": "text",
|
| 1044 |
+
"text": "We further compare our model with baselines quantitatively and the results are summarized in Tab. 2, 3. On the Hard split, where the test set objects differ semantically from the training set, our method outperforms all baselines significantly. This improvement can be attributed to the extensive world knowledge and understanding embedded within the large language model. On the Easy split, our model is comparable to LOCATE [34] and outperform all other baselines. We hypothesize that the advantage of our approach is less pronounced when the test and train objects exhibit similarity, as the generalization capability becomes less critical. It is also worth noting that unlike LOCATE which is weakly supervised on $10\\mathrm{k + }$ images, our model was fully supervised on some 1k images with dense annotations, which renders a more meaningful comparison with LOCATE-Sup that is trained on the same data. Our method significantly outperforms LOCATE-Sup on both splits, indicating the effectiveness of our approach.",
|
| 1045 |
+
"bbox": [
|
| 1046 |
+
75,
|
| 1047 |
+
628,
|
| 1048 |
+
472,
|
| 1049 |
+
902
|
| 1050 |
+
],
|
| 1051 |
+
"page_idx": 6
|
| 1052 |
+
},
|
| 1053 |
+
{
|
| 1054 |
+
"type": "table",
|
| 1055 |
+
"img_path": "images/0cbc007d3a6dc8f66b412d33f96d74f91204b07af14aa7fc6656dce138a4e0df.jpg",
|
| 1056 |
+
"table_caption": [
|
| 1057 |
+
"Table 4. Ablation on the hard split. We validate the importance of text prompts, image encoders and pseudo depth to performance."
|
| 1058 |
+
],
|
| 1059 |
+
"table_footnote": [],
|
| 1060 |
+
"table_body": "<table><tr><td>Depth</td><td>Text Prompt</td><td>Img Encoder</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>Yes</td><td>Full</td><td>OWL-ViT</td><td>1.661</td><td>0.361</td><td>0.947</td></tr><tr><td>-</td><td>Full</td><td>OWL-ViT</td><td>1.713</td><td>0.352</td><td>0.881</td></tr><tr><td>-</td><td>Full</td><td>CLIP-ViT</td><td>1.759</td><td>0.286</td><td>0.776</td></tr><tr><td>-</td><td>Object, Action</td><td>OWL-ViT</td><td>1.769</td><td>0.329</td><td>0.827</td></tr><tr><td>-</td><td>Action</td><td>OWL-ViT</td><td>1.843</td><td>0.336</td><td>0.815</td></tr><tr><td>-</td><td>Hi</td><td>OWL-ViT</td><td>1.836</td><td>0.325</td><td>0.793</td></tr></table>",
|
| 1061 |
+
"bbox": [
|
| 1062 |
+
503,
|
| 1063 |
+
463,
|
| 1064 |
+
892,
|
| 1065 |
+
565
|
| 1066 |
+
],
|
| 1067 |
+
"page_idx": 6
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "text",
|
| 1071 |
+
"text": "4.4. Ablation",
|
| 1072 |
+
"text_level": 1,
|
| 1073 |
+
"bbox": [
|
| 1074 |
+
500,
|
| 1075 |
+
580,
|
| 1076 |
+
602,
|
| 1077 |
+
594
|
| 1078 |
+
],
|
| 1079 |
+
"page_idx": 6
|
| 1080 |
+
},
|
| 1081 |
+
{
|
| 1082 |
+
"type": "text",
|
| 1083 |
+
"text": "We conduct a few ablation studies to understand how different components of the model contribute to the final performance. We test different text prompts, different image encoders, and the effect of pseudo depth as inputs. The results are summarized in Tab 4 and Fig 5.",
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
496,
|
| 1086 |
+
604,
|
| 1087 |
+
890,
|
| 1088 |
+
680
|
| 1089 |
+
],
|
| 1090 |
+
"page_idx": 6
|
| 1091 |
+
},
|
| 1092 |
+
{
|
| 1093 |
+
"type": "text",
|
| 1094 |
+
"text": "Text prompts. Prompt tuning is known to have major effects on large VLMs. We test four different text prompts to understand the effect of text content on model performance:",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
496,
|
| 1097 |
+
683,
|
| 1098 |
+
890,
|
| 1099 |
+
728
|
| 1100 |
+
],
|
| 1101 |
+
"page_idx": 6
|
| 1102 |
+
},
|
| 1103 |
+
{
|
| 1104 |
+
"type": "list",
|
| 1105 |
+
"sub_type": "text",
|
| 1106 |
+
"list_items": [
|
| 1107 |
+
"- Hi: We use \"Hi\" as our text prompt.",
|
| 1108 |
+
"- Action: We use the action (e.g. \"hold\") as the prompt.",
|
| 1109 |
+
"- Object + Action: We use the object name and action label as our text prompt, for example \"hold, knife\".",
|
| 1110 |
+
"- Full: We use a complete question as the text prompt — \"What part of the motorcycle should we interact with in order to push it?\""
|
| 1111 |
+
],
|
| 1112 |
+
"bbox": [
|
| 1113 |
+
500,
|
| 1114 |
+
729,
|
| 1115 |
+
890,
|
| 1116 |
+
838
|
| 1117 |
+
],
|
| 1118 |
+
"page_idx": 6
|
| 1119 |
+
},
|
| 1120 |
+
{
|
| 1121 |
+
"type": "text",
|
| 1122 |
+
"text": "We notice that the Full prompt yields a higher performance compared with other simple text prompts. It demonstrates that specific question prompt is helpful for extracting the knowledge from pretrained large language models.",
|
| 1123 |
+
"bbox": [
|
| 1124 |
+
496,
|
| 1125 |
+
839,
|
| 1126 |
+
890,
|
| 1127 |
+
900
|
| 1128 |
+
],
|
| 1129 |
+
"page_idx": 6
|
| 1130 |
+
},
|
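The four prompt variants can be built mechanically from an (action, object) pair; a small sketch is below. The full-question template is an illustration consistent with the example in the list above, not necessarily the exact training-time wording.

```python
def build_prompt(action, obj, variant="full"):
    """Construct the text prompt variants compared in the ablation."""
    if variant == "hi":
        return "Hi"
    if variant == "action":
        return action                         # e.g. "hold"
    if variant == "object_action":
        return f"{action}, {obj}"             # e.g. "hold, knife"
    # "full": a complete question, assumed to follow the example above.
    return f"What part of the {obj} should we interact with in order to {action} it?"

print(build_prompt("push", "motorcycle"))
# What part of the motorcycle should we interact with in order to push it?
```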
| 1131 |
+
{
|
| 1132 |
+
"type": "image",
|
| 1133 |
+
"img_path": "images/c45f4e42528f4cc2573551673b27d6cd1c76546a877426d12316710aae39b620.jpg",
|
| 1134 |
+
"image_caption": [
|
| 1135 |
+
"Input"
|
| 1136 |
+
],
|
| 1137 |
+
"image_footnote": [],
|
| 1138 |
+
"bbox": [
|
| 1139 |
+
81,
|
| 1140 |
+
99,
|
| 1141 |
+
176,
|
| 1142 |
+
172
|
| 1143 |
+
],
|
| 1144 |
+
"page_idx": 7
|
| 1145 |
+
},
|
| 1146 |
+
{
|
| 1147 |
+
"type": "image",
|
| 1148 |
+
"img_path": "images/d312b5924ae3b900fb7c24f299a269d73313278b0632a408bc75ea7dc0b57032.jpg",
|
| 1149 |
+
"image_caption": [
|
| 1150 |
+
"LOCATE"
|
| 1151 |
+
],
|
| 1152 |
+
"image_footnote": [],
|
| 1153 |
+
"bbox": [
|
| 1154 |
+
176,
|
| 1155 |
+
99,
|
| 1156 |
+
271,
|
| 1157 |
+
172
|
| 1158 |
+
],
|
| 1159 |
+
"page_idx": 7
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "image",
|
| 1163 |
+
"img_path": "images/68eda05a7164d7bc29f6de5bafb885f14377b3e81eec96d7dce11d31d39038ac.jpg",
|
| 1164 |
+
"image_caption": [
|
| 1165 |
+
"AffordanceLLM"
|
| 1166 |
+
],
|
| 1167 |
+
"image_footnote": [],
|
| 1168 |
+
"bbox": [
|
| 1169 |
+
272,
|
| 1170 |
+
99,
|
| 1171 |
+
367,
|
| 1172 |
+
172
|
| 1173 |
+
],
|
| 1174 |
+
"page_idx": 7
|
| 1175 |
+
},
|
| 1176 |
+
{
|
| 1177 |
+
"type": "image",
|
| 1178 |
+
"img_path": "images/04ff063fa51646108ba1df5aa9bed48e18478d2c02d8d3efcad7a639a2eac24a.jpg",
|
| 1179 |
+
"image_caption": [
|
| 1180 |
+
"Most similar in training"
|
| 1181 |
+
],
|
| 1182 |
+
"image_footnote": [],
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
369,
|
| 1185 |
+
98,
|
| 1186 |
+
462,
|
| 1187 |
+
172
|
| 1188 |
+
],
|
| 1189 |
+
"page_idx": 7
|
| 1190 |
+
},
|
| 1191 |
+
{
|
| 1192 |
+
"type": "image",
|
| 1193 |
+
"img_path": "images/e7dd47ab77fbe969ac6837c39e371bc5c36f43963ca29fe3652654d8ea10a80c.jpg",
|
| 1194 |
+
"image_caption": [],
|
| 1195 |
+
"image_footnote": [],
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
81,
|
| 1198 |
+
172,
|
| 1199 |
+
176,
|
| 1200 |
+
248
|
| 1201 |
+
],
|
| 1202 |
+
"page_idx": 7
|
| 1203 |
+
},
|
| 1204 |
+
{
|
| 1205 |
+
"type": "image",
|
| 1206 |
+
"img_path": "images/4ce68197da29901ce2789f3352ab325f82b42ed83ff4a9e0f9a12a4c252d2e75.jpg",
|
| 1207 |
+
"image_caption": [],
|
| 1208 |
+
"image_footnote": [],
|
| 1209 |
+
"bbox": [
|
| 1210 |
+
176,
|
| 1211 |
+
172,
|
| 1212 |
+
271,
|
| 1213 |
+
248
|
| 1214 |
+
],
|
| 1215 |
+
"page_idx": 7
|
| 1216 |
+
},
|
| 1217 |
+
{
|
| 1218 |
+
"type": "image",
|
| 1219 |
+
"img_path": "images/85f5255fdc0dc3381ebaac39538c7a217f56292fe7509ea82685523798a03f3c.jpg",
|
| 1220 |
+
"image_caption": [],
|
| 1221 |
+
"image_footnote": [],
|
| 1222 |
+
"bbox": [
|
| 1223 |
+
272,
|
| 1224 |
+
172,
|
| 1225 |
+
367,
|
| 1226 |
+
248
|
| 1227 |
+
],
|
| 1228 |
+
"page_idx": 7
|
| 1229 |
+
},
|
| 1230 |
+
{
|
| 1231 |
+
"type": "image",
|
| 1232 |
+
"img_path": "images/5a43b0301fe5b214fc8e8200194231fdb5b06b53e11f138e9f0ff6b552606293.jpg",
|
| 1233 |
+
"image_caption": [
|
| 1234 |
+
"Figure 7. Failure examples. (Row 1:) AffordanceLLM sometimes fails due to multiple objects present in the scene and it fails to refer to the correct object. (Row 2:) AffordanceLLM thinks humans should hold the handle to cut something using the knife, while AGD20K annotators think \"cut with\" should refer to the blade."
|
| 1235 |
+
],
|
| 1236 |
+
"image_footnote": [],
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
369,
|
| 1239 |
+
172,
|
| 1240 |
+
462,
|
| 1241 |
+
248
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 7
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "image",
|
| 1247 |
+
"img_path": "images/9ba738e94934fb240d906a505b373b9f0ad76c4c8d68f9a925251dfa2b14725d.jpg",
|
| 1248 |
+
"image_caption": [],
|
| 1249 |
+
"image_footnote": [],
|
| 1250 |
+
"bbox": [
|
| 1251 |
+
81,
|
| 1252 |
+
250,
|
| 1253 |
+
176,
|
| 1254 |
+
323
|
| 1255 |
+
],
|
| 1256 |
+
"page_idx": 7
|
| 1257 |
+
},
|
| 1258 |
+
{
|
| 1259 |
+
"type": "image",
|
| 1260 |
+
"img_path": "images/c9517e8791c1404b2a7b1d85e363402fd171ee87858b7f55b370fc15441e2d9e.jpg",
|
| 1261 |
+
"image_caption": [],
|
| 1262 |
+
"image_footnote": [],
|
| 1263 |
+
"bbox": [
|
| 1264 |
+
176,
|
| 1265 |
+
250,
|
| 1266 |
+
271,
|
| 1267 |
+
323
|
| 1268 |
+
],
|
| 1269 |
+
"page_idx": 7
|
| 1270 |
+
},
|
| 1271 |
+
{
|
| 1272 |
+
"type": "image",
|
| 1273 |
+
"img_path": "images/c7b49575f6f309df4342a7b999e5d2dd38af7312a86a6a7207132a00e22e13d6.jpg",
|
| 1274 |
+
"image_caption": [],
|
| 1275 |
+
"image_footnote": [],
|
| 1276 |
+
"bbox": [
|
| 1277 |
+
272,
|
| 1278 |
+
250,
|
| 1279 |
+
367,
|
| 1280 |
+
323
|
| 1281 |
+
],
|
| 1282 |
+
"page_idx": 7
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "image",
|
| 1286 |
+
"img_path": "images/e0e7515bbf2a8ca1e3b8086ee29e82168df181e18e8ec56fc35f9a5a7ad98b5e.jpg",
|
| 1287 |
+
"image_caption": [],
|
| 1288 |
+
"image_footnote": [],
|
| 1289 |
+
"bbox": [
|
| 1290 |
+
369,
|
| 1291 |
+
250,
|
| 1292 |
+
462,
|
| 1293 |
+
323
|
| 1294 |
+
],
|
| 1295 |
+
"page_idx": 7
|
| 1296 |
+
},
|
| 1297 |
+
{
|
| 1298 |
+
"type": "image",
|
| 1299 |
+
"img_path": "images/742fee758cceb471caa1645422b87f9b1d0db2bc1d5521c62699ecdd2501f346.jpg",
|
| 1300 |
+
"image_caption": [
|
| 1301 |
+
"Figure 6. Generalization results on random Internet images. We show the most similar objects in the training set to demonstrate how different the objects are from the ones in the training set. (Row 1, 2): AffordanceLLM generalizes to novel objects from random Internet images, while LOCATE [34] fails. (Row 3, 4): AffordanceLLM generalizes to novel actions plus novel objects. LOCATE cannot infer novel actions thus we left it blank."
|
| 1302 |
+
],
|
| 1303 |
+
"image_footnote": [],
|
| 1304 |
+
"bbox": [
|
| 1305 |
+
81,
|
| 1306 |
+
324,
|
| 1307 |
+
176,
|
| 1308 |
+
398
|
| 1309 |
+
],
|
| 1310 |
+
"page_idx": 7
|
| 1311 |
+
},
|
| 1312 |
+
{
|
| 1313 |
+
"type": "image",
|
| 1314 |
+
"img_path": "images/6839e64929ec6c9129ca81d44da4bdde65262db6644bbb7976f09f90bfd6e6f8.jpg",
|
| 1315 |
+
"image_caption": [],
|
| 1316 |
+
"image_footnote": [],
|
| 1317 |
+
"bbox": [
|
| 1318 |
+
176,
|
| 1319 |
+
324,
|
| 1320 |
+
271,
|
| 1321 |
+
398
|
| 1322 |
+
],
|
| 1323 |
+
"page_idx": 7
|
| 1324 |
+
},
|
| 1325 |
+
{
|
| 1326 |
+
"type": "image",
|
| 1327 |
+
"img_path": "images/b4ef3d949aff543b8fa24ed9d50355157051a36fc765542f01e010c9a6b60f5f.jpg",
|
| 1328 |
+
"image_caption": [],
|
| 1329 |
+
"image_footnote": [],
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
272,
|
| 1332 |
+
324,
|
| 1333 |
+
367,
|
| 1334 |
+
398
|
| 1335 |
+
],
|
| 1336 |
+
"page_idx": 7
|
| 1337 |
+
},
|
| 1338 |
+
{
|
| 1339 |
+
"type": "image",
|
| 1340 |
+
"img_path": "images/32825329857a4ec1fb68519d6f82ed4c6c262b428ce5f154382afb79458db296.jpg",
|
| 1341 |
+
"image_caption": [],
|
| 1342 |
+
"image_footnote": [],
|
| 1343 |
+
"bbox": [
|
| 1344 |
+
367,
|
| 1345 |
+
324,
|
| 1346 |
+
462,
|
| 1347 |
+
398
|
| 1348 |
+
],
|
| 1349 |
+
"page_idx": 7
|
| 1350 |
+
},
|
| 1351 |
+
{
|
| 1352 |
+
"type": "text",
|
| 1353 |
+
"text": "Vision encoders. Although LLaVA [40] uses CLIP-ViT, it may not be the optimal vision encoder for our affordance grounding task — CLIP-ViT is trained with an objective to align text-image pairs and is not explicitly optimized to perform localization, and therefore has limited visual grounding capability. We therefore switch to OWL-ViT [48], which is trained on detection datasets with 2M images, and achieves state-of-the-art open vocabulary detection performance. As shown in Tab 4, using OWL-ViT as vision backbone far excels using CLIP-ViT. It indicates the importance of grounding capability of visual backbone.",
|
| 1354 |
+
"bbox": [
|
| 1355 |
+
75,
|
| 1356 |
+
512,
|
| 1357 |
+
468,
|
| 1358 |
+
679
|
| 1359 |
+
],
|
| 1360 |
+
"page_idx": 7
|
| 1361 |
+
},
|
| 1362 |
+
{
|
| 1363 |
+
"type": "text",
|
| 1364 |
+
"text": "4.5. Pseudodepth as Inputs",
|
| 1365 |
+
"text_level": 1,
|
| 1366 |
+
"bbox": [
|
| 1367 |
+
76,
|
| 1368 |
+
690,
|
| 1369 |
+
287,
|
| 1370 |
+
705
|
| 1371 |
+
],
|
| 1372 |
+
"page_idx": 7
|
| 1373 |
+
},
|
| 1374 |
+
{
|
| 1375 |
+
"type": "text",
|
| 1376 |
+
"text": "Our model is trained with pseudo depth map produced by DPT [59]. To verify whether the additional depth inputs are effective, we compare the model trained with and without estimated depth (Tab 4 and Fig 5). With depth, our model can predict better affordance map, demonstrating the importance of 3D information in affordance reasoning.",
|
| 1377 |
+
"bbox": [
|
| 1378 |
+
75,
|
| 1379 |
+
713,
|
| 1380 |
+
467,
|
| 1381 |
+
804
|
| 1382 |
+
],
|
| 1383 |
+
"page_idx": 7
|
| 1384 |
+
},
|
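A sketch of producing the pseudo depth input with DPT [59], assuming the Hugging Face depth-estimation pipeline and the Intel/dpt-large checkpoint; the min-max normalization and the RGB-D stacking are assumptions about how the extra channel could be fed to the model, not details from the paper.

```python
import numpy as np
from PIL import Image
from transformers import pipeline

# DPT [59] via the Hugging Face depth-estimation pipeline (checkpoint assumed).
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("example.jpg").convert("RGB")
depth = np.array(depth_estimator(image)["depth"], dtype=np.float32)   # (H, W)

# Min-max normalize so the pseudo depth can be stacked as a fourth input channel.
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
rgbd = np.concatenate(
    [np.asarray(image, dtype=np.float32) / 255.0, depth[..., None]], axis=-1
)                                                                      # (H, W, 4)
```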
| 1385 |
+
{
|
| 1386 |
+
"type": "text",
|
| 1387 |
+
"text": "4.6. Generalization to Internet Images",
|
| 1388 |
+
"text_level": 1,
|
| 1389 |
+
"bbox": [
|
| 1390 |
+
76,
|
| 1391 |
+
816,
|
| 1392 |
+
372,
|
| 1393 |
+
832
|
| 1394 |
+
],
|
| 1395 |
+
"page_idx": 7
|
| 1396 |
+
},
|
| 1397 |
+
{
|
| 1398 |
+
"type": "text",
|
| 1399 |
+
"text": "We further test the generalization of our model on random Internet images in Fig. 6. All test objects are novel. To showcase how different these objects are from the ones in train set, for each test object, we retrieve the most similar",
|
| 1400 |
+
"bbox": [
|
| 1401 |
+
75,
|
| 1402 |
+
839,
|
| 1403 |
+
468,
|
| 1404 |
+
900
|
| 1405 |
+
],
|
| 1406 |
+
"page_idx": 7
|
| 1407 |
+
},
|
| 1408 |
+
{
|
| 1409 |
+
"type": "image",
|
| 1410 |
+
"img_path": "images/30361a77743967d7bcd2dd91e8c7f1a03e0276a20085d5dc3b3c21577665a431.jpg",
|
| 1411 |
+
"image_caption": [],
|
| 1412 |
+
"image_footnote": [],
|
| 1413 |
+
"bbox": [
|
| 1414 |
+
506,
|
| 1415 |
+
89,
|
| 1416 |
+
632,
|
| 1417 |
+
186
|
| 1418 |
+
],
|
| 1419 |
+
"page_idx": 7
|
| 1420 |
+
},
|
| 1421 |
+
{
|
| 1422 |
+
"type": "image",
|
| 1423 |
+
"img_path": "images/422e96556673925dbcede314fbedaa412983551d8df100ca78e127dc89f6bf40.jpg",
|
| 1424 |
+
"image_caption": [],
|
| 1425 |
+
"image_footnote": [],
|
| 1426 |
+
"bbox": [
|
| 1427 |
+
632,
|
| 1428 |
+
89,
|
| 1429 |
+
756,
|
| 1430 |
+
186
|
| 1431 |
+
],
|
| 1432 |
+
"page_idx": 7
|
| 1433 |
+
},
|
| 1434 |
+
{
|
| 1435 |
+
"type": "image",
|
| 1436 |
+
"img_path": "images/037d80695348fc340de955370bae6ed85d5a0be8da293842b45aee872a019188.jpg",
|
| 1437 |
+
"image_caption": [],
|
| 1438 |
+
"image_footnote": [],
|
| 1439 |
+
"bbox": [
|
| 1440 |
+
756,
|
| 1441 |
+
89,
|
| 1442 |
+
885,
|
| 1443 |
+
186
|
| 1444 |
+
],
|
| 1445 |
+
"page_idx": 7
|
| 1446 |
+
},
|
| 1447 |
+
{
|
| 1448 |
+
"type": "image",
|
| 1449 |
+
"img_path": "images/23557dc59ae147d78c6267dc3c449fff9d768f973b52ab07016f5ac8fc089415.jpg",
|
| 1450 |
+
"image_caption": [
|
| 1451 |
+
"Input"
|
| 1452 |
+
],
|
| 1453 |
+
"image_footnote": [],
|
| 1454 |
+
"bbox": [
|
| 1455 |
+
506,
|
| 1456 |
+
189,
|
| 1457 |
+
625,
|
| 1458 |
+
284
|
| 1459 |
+
],
|
| 1460 |
+
"page_idx": 7
|
| 1461 |
+
},
|
| 1462 |
+
{
|
| 1463 |
+
"type": "image",
|
| 1464 |
+
"img_path": "images/956901f9b293e9d2c9060c835acd280147343b522fe32139ed9e25a3e7f96a28.jpg",
|
| 1465 |
+
"image_caption": [
|
| 1466 |
+
"Groundtruth"
|
| 1467 |
+
],
|
| 1468 |
+
"image_footnote": [],
|
| 1469 |
+
"bbox": [
|
| 1470 |
+
630,
|
| 1471 |
+
189,
|
| 1472 |
+
750,
|
| 1473 |
+
284
|
| 1474 |
+
],
|
| 1475 |
+
"page_idx": 7
|
| 1476 |
+
},
|
| 1477 |
+
{
|
| 1478 |
+
"type": "image",
|
| 1479 |
+
"img_path": "images/16c4e173dfb629d7bb0bddc60a732d6de5395a64fe58e697ecac2895afa11724.jpg",
|
| 1480 |
+
"image_caption": [
|
| 1481 |
+
"Prediction"
|
| 1482 |
+
],
|
| 1483 |
+
"image_footnote": [],
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
751,
|
| 1486 |
+
189,
|
| 1487 |
+
885,
|
| 1488 |
+
285
|
| 1489 |
+
],
|
| 1490 |
+
"page_idx": 7
|
| 1491 |
+
},
|
| 1492 |
+
{
|
| 1493 |
+
"type": "text",
|
| 1494 |
+
"text": "object in the train set using the metric defined in Eq 5. As we can see, these objects are not similar to any objects in the train set. We go even further to test if our approach is able to generalize to novel actions in addition to novel objects, such as \"blow\" and \"play\". Generalization to novel actions is even more challenging, requiring open vocabulary understanding of actions, which is beyond the capability of LOCATE [34]. Despite these challenges, our approach not only produces very reasonable affordance maps for novel objects, but is also able to handle novel actions plus novel objects, once again demonstrating the extraordinary capability to generalize.",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
496,
|
| 1497 |
+
391,
|
| 1498 |
+
890,
|
| 1499 |
+
571
|
| 1500 |
+
],
|
| 1501 |
+
"page_idx": 7
|
| 1502 |
+
},
|
| 1503 |
+
{
|
| 1504 |
+
"type": "text",
|
| 1505 |
+
"text": "4.7. Failure Examples",
|
| 1506 |
+
"text_level": 1,
|
| 1507 |
+
"bbox": [
|
| 1508 |
+
500,
|
| 1509 |
+
582,
|
| 1510 |
+
669,
|
| 1511 |
+
598
|
| 1512 |
+
],
|
| 1513 |
+
"page_idx": 7
|
| 1514 |
+
},
|
| 1515 |
+
{
|
| 1516 |
+
"type": "text",
|
| 1517 |
+
"text": "Finally, we show our failure examples in Fig 7. First, we find AffordanceLLM fails on some ambiguous questions. For example, in AGD20K, \"cut with\" refers to the blade of a knife. However, AffordanceLLM thinks humans should hold the handle of the knife to cut anything. Second, when there are multiple objects in the image, it sometimes cannot refer to the correct object.",
|
| 1518 |
+
"bbox": [
|
| 1519 |
+
496,
|
| 1520 |
+
604,
|
| 1521 |
+
890,
|
| 1522 |
+
710
|
| 1523 |
+
],
|
| 1524 |
+
"page_idx": 7
|
| 1525 |
+
},
|
| 1526 |
+
{
|
| 1527 |
+
"type": "text",
|
| 1528 |
+
"text": "5. Conclusion",
|
| 1529 |
+
"text_level": 1,
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
500,
|
| 1532 |
+
724,
|
| 1533 |
+
617,
|
| 1534 |
+
739
|
| 1535 |
+
],
|
| 1536 |
+
"page_idx": 7
|
| 1537 |
+
},
|
| 1538 |
+
{
|
| 1539 |
+
"type": "text",
|
| 1540 |
+
"text": "We have presented AffordanceLLM, a novel approach which can ground affordance for in-the-wild objects unseen during training. By tapping into the world knowledge embedded in a Vision Language Model, our proposed approach generalizes much better to in-the-wild objects, compared with state-of-the-art affordance grounding models.",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
496,
|
| 1543 |
+
750,
|
| 1544 |
+
890,
|
| 1545 |
+
839
|
| 1546 |
+
],
|
| 1547 |
+
"page_idx": 7
|
| 1548 |
+
},
|
| 1549 |
+
{
|
| 1550 |
+
"type": "text",
|
| 1551 |
+
"text": "Our approach can have positive impacts by helping build intelligence robots which can manipulate in-the-wild objects. On the other hand, it can be misused to cause physical damage or harm if applied in an adversarial manner.",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
496,
|
| 1554 |
+
840,
|
| 1555 |
+
890,
|
| 1556 |
+
900
|
| 1557 |
+
],
|
| 1558 |
+
"page_idx": 7
|
| 1559 |
+
},
|
| 1560 |
+
{
|
| 1561 |
+
"type": "text",
|
| 1562 |
+
"text": "References",
|
| 1563 |
+
"text_level": 1,
|
| 1564 |
+
"bbox": [
|
| 1565 |
+
78,
|
| 1566 |
+
89,
|
| 1567 |
+
173,
|
| 1568 |
+
104
|
| 1569 |
+
],
|
| 1570 |
+
"page_idx": 8
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "list",
|
| 1574 |
+
"sub_type": "ref_text",
|
| 1575 |
+
"list_items": [
|
| 1576 |
+
"[1] Shikhar Bahl, Abhinav Gupta, and Deepak Pathak. Human-to-robot imitation in the wild. In RSS, 2022. 1",
|
| 1577 |
+
"[2] Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, and Deepak Pathak. Affordances from human videos as a versatile representation for robotics. In CVPR, 2023. 3",
|
| 1578 |
+
"[3] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. 2",
|
| 1579 |
+
"[4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. 2",
|
| 1580 |
+
"[5] Ang Cao, Chris Rockwell, and Justin Johnson. Fwd: Real-time novel view synthesis with forward warping and depth. In CVPR, 2022. 3",
|
| 1581 |
+
"[6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 4",
|
| 1582 |
+
"[7] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021. 5",
|
| 1583 |
+
"[8] Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. Hico: A benchmark for recognizing human-object interactions in images. In ICCV, 2015. 1",
|
| 1584 |
+
"[9] Joya Chen, Difei Gao, Kevin Qinghong Lin, and Mike Zheng Shou. Affordance grounding from demonstration video to target image. In CVPR, 2023. 3",
|
| 1585 |
+
"[10] Weifeng Chen, Shengyi Qian, and Jia Deng. Learning single-image depth from videos using quality assessment networks. In CVPR, 2019. 3",
|
| 1586 |
+
"[11] Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864-17875, 2021. 4",
|
| 1587 |
+
"[12] Yinpei Dai, Run Peng, Sikai Li, and Joyce Chai. Think, act, and ask: Open-world interactive personalized robot navigation. arXiv preprint arXiv:2310.07968, 2023. 2",
|
| 1588 |
+
"[13] Karan Desai and Justin Johnson. Virtex: Learning visual representations from textual annotations. In CVPR, 2021. 2",
|
| 1589 |
+
"[14] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Thompson, Quan Vuong, Tianhe Yu, et al. Palm-: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. 2",
|
| 1590 |
+
"[15] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015. 3",
|
| 1591 |
+
"[16] Kuan Fang, Te-Lin Wu, Daniel Yang, Silvio Savarese, and Joseph J Lim. Demo2vec: Reasoning object affordances from online videos. In CVPR, 2018. 3"
|
| 1592 |
+
],
|
| 1593 |
+
"bbox": [
|
| 1594 |
+
78,
|
| 1595 |
+
114,
|
| 1596 |
+
467,
|
| 1597 |
+
898
|
| 1598 |
+
],
|
| 1599 |
+
"page_idx": 8
|
| 1600 |
+
},
|
| 1601 |
+
{
|
| 1602 |
+
"type": "list",
|
| 1603 |
+
"sub_type": "ref_text",
|
| 1604 |
+
"list_items": [
|
| 1605 |
+
"[17] Georgia Gkioxari, Ross Girshick, Piotr Dollar, and Kaiming He. Detecting and recognizing human-object interactions. In CVPR, 2018. 1",
|
| 1606 |
+
"[18] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In ICCV, 2019. 3",
|
| 1607 |
+
"[19] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. arXiv preprint arXiv:2306.14896, 2023. 3",
|
| 1608 |
+
"[20] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In CVPR, 2019. 6",
|
| 1609 |
+
"[21] Denis Hadjivelichkov, Sicelukwanda Zwane, Lourdes Agapito, Marc Peter Deisenroth, and Dimitrios Kanoulas. One-shot transfer of affordance regions? affcorrs! In Conference on Robot Learning, pages 550-560. PMLR, 2023. 3, 4, 6",
|
| 1610 |
+
"[22] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. arXiv preprint arXiv:2307.12981, 2023. 2",
|
| 1611 |
+
"[23] Cheng-Chun Hsu, Zhenyu Jiang, and Yuke Zhu. Ditto in the house: Building articulation models of indoor scenes through interactive perception. In ICRA, 2023. 1, 3",
|
| 1612 |
+
"[24] Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, and Zhou Zhao. Chat-3d v2: Bridging 3d scene and large language models with object identifiers. arXiv preprint arXiv:2312.08168, 2023. 2",
|
| 1613 |
+
"[25] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. arXiv preprint arXiv:2311.12871, 2023. 2",
|
| 1614 |
+
"[26] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020. 3",
|
| 1615 |
+
"[27] Juntao Jian, Xiuping Liu, Manyi Li, Ruizhen Hu, and Jian Liu. Affordpose: A large-scale dataset of hand-object interactions with affordance-driven hand pose. In ICCV, 2023. 3",
|
| 1616 |
+
"[28] Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, and Ken Goldberg. Evo-nerf: Evolving nerf for sequential robot grasping of transparent objects. In 6th Annual Conference on Robot Learning, 2022. 3",
|
| 1617 |
+
"[29] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In ICCV, 2023. 3, 4, 5",
|
| 1618 |
+
"[30] Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A Efros, and Krishna Kumar Singh. Putting people in their place: Affordance-aware human insertion into scenes. In CVPR, 2023. 3",
|
| 1619 |
+
"[31] Ashish Kumar, Saurabh Gupta, David F. Fouhey, Sergey Levine, and Jitendra Malik. Visual memory for robust path following. In NIPS, 2018. 1"
|
| 1620 |
+
],
|
| 1621 |
+
"bbox": [
|
| 1622 |
+
501,
|
| 1623 |
+
92,
|
| 1624 |
+
890,
|
| 1625 |
+
898
|
| 1626 |
+
],
|
| 1627 |
+
"page_idx": 8
|
| 1628 |
+
},
|
| 1629 |
+
{
|
| 1630 |
+
"type": "list",
|
| 1631 |
+
"sub_type": "ref_text",
|
| 1632 |
+
"list_items": [
|
| 1633 |
+
"[32] Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. Lisa: Reasoning segmentation via large language model. arXiv preprint arXiv:2308.00692, 2023. 2",
|
| 1634 |
+
"[33] Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and Rene Ranftl. Language-driven semantic segmentation. In ICLR, 2022. 2",
|
| 1635 |
+
"[34] Gen Li, Varun Jampani, Deqing Sun, and Laura Sevilla-Lara. Locate: Localize and transfer object parts for weakly supervised affordance grounding. In CVPR, 2023. 1, 3, 4, 5, 6, 7, 8",
|
| 1636 |
+
"[35] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, 2023. 2, 4",
|
| 1637 |
+
"[36] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566, 2018. 3",
|
| 1638 |
+
"[37] Zeju Li, Chao Zhang, Xiaoyan Wang, Ruilong Ren, Yifan Xu, Ruifei Ma, and Xiangde Liu. 3dmit: 3d multi-modal instruction tuning for scene understanding. arXiv preprint arXiv:2401.03201, 2024. 2",
|
| 1639 |
+
"[38] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dólár. Focal loss for dense object detection. In ICCV, 2017. 4",
|
| 1640 |
+
"[39] Chen Liu, Kihwan Kim, Jinwei Gu, Yasutaka Furukawa, and Jan Kautz. PlaneRCNN: 3D plane detection and reconstruction from a single image. In CVPR, 2019. 3",
|
| 1641 |
+
"[40] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. 1, 2, 3, 4, 8",
|
| 1642 |
+
"[41] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In ICCV, 2023. 3",
|
| 1643 |
+
"[42] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 2",
|
| 1644 |
+
"[43] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In CVPR, 2022. 2",
|
| 1645 |
+
"[44] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Grounded affordance from exocentric view. arXiv preprint arXiv:2208.13196, 2022. 1, 3, 4, 5, 6",
|
| 1646 |
+
"[45] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Learning affordance grounding from exocentric images. In CVPR, 2022. 1, 2, 3, 4, 5, 6, 12",
|
| 1647 |
+
"[46] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Leverage interactive affinity for affordance learning. In CVPR, 2023. 3",
|
| 1648 |
+
"[47] Chuofan Ma, Yi Jiang, Xin Wen, Zehuan Yuan, and Xiaojuan Qi. Codet: Co-occurrence guided region-word alignment for open-vocabulary object detection. In Advances in Neural Information Processing Systems, 2023. 2",
|
| 1649 |
+
"[48] Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran"
|
| 1650 |
+
],
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
78,
|
| 1653 |
+
90,
|
| 1654 |
+
468,
|
| 1655 |
+
900
|
| 1656 |
+
],
|
| 1657 |
+
"page_idx": 9
|
| 1658 |
+
},
|
| 1659 |
+
{
|
| 1660 |
+
"type": "list",
|
| 1661 |
+
"sub_type": "ref_text",
|
| 1662 |
+
"list_items": [
|
| 1663 |
+
"Shen, et al. Simple open-vocabulary object detection. In ECCV, 2022. 2, 3, 4, 6, 8",
|
| 1664 |
+
"[49] Lorenzo Mur-Labadia, Jose J Guerrero, and Ruben Martinez-Cantin. Multi-label affordance mapping from egocentric vision. In ICCV, 2023. 3",
|
| 1665 |
+
"[50] Tushar Nagarajan, Christoph Feichtenhofer, and Kristen Grauman. Grounded human-object interaction hotspots from video. In ICCV, 2019. 1, 3, 4, 6",
|
| 1666 |
+
"[51] Yinyu Nie, Xiaoguang Han, Shihui Guo, Yujuan Zheng, Jian Chang, and Jian Jun Zhang. Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In CVPR, 2020. 3",
|
| 1667 |
+
"[52] OpenAI. Gpt-4 technical report, 2023. 2",
|
| 1668 |
+
"[53] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. In ICLR, 2023. 2",
|
| 1669 |
+
"[54] Shengyi Qian and David F Fouhey. Understanding 3d object interaction from a single image. In ICCV, 2023. 3, 4, 5, 6, 7",
|
| 1670 |
+
"[55] Shengyi Qian, Linyi Jin, and David F. Fouhey. Associative3d: Volumetric reconstruction from sparse views. In ECCV, 2020. 6",
|
| 1671 |
+
"[56] Shengyi Qian, Alexander Kirillov, Nikhila Ravi, Devendra Singh Chaplot, Justin Johnson, David F Fouhey, and Georgia Gkioxari. Recognizing scenes from novel viewpoints. arXiv preprint arXiv:2112.01520, 2021. 3",
|
| 1672 |
+
"[57] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 2, 4",
|
| 1673 |
+
"[58] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6",
|
| 1674 |
+
"[59] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, 2021. 3, 4, 8",
|
| 1675 |
+
"[60] Dandan Shan, Jiaqi Geng, Michelle Shu, and David Fouhey. Understanding human hands in contact at internet scale. In CVPR, 2020. 1",
|
| 1676 |
+
"[61] Cheng Shi and Sibei Yang. Edadet: Open-vocabulary object detection using early dense alignment. In ICCV, 2023. 2",
|
| 1677 |
+
"[62] Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. In Conference on Robot Learning, pages 785–799. PMLR, 2023. 3",
|
| 1678 |
+
"[63] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 4",
|
| 1679 |
+
"[64] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Ofa: Unifying architectures, tasks, and"
|
| 1680 |
+
],
|
| 1681 |
+
"bbox": [
|
| 1682 |
+
501,
|
| 1683 |
+
92,
|
| 1684 |
+
890,
|
| 1685 |
+
900
|
| 1686 |
+
],
|
| 1687 |
+
"page_idx": 9
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "list",
|
| 1691 |
+
"sub_type": "ref_text",
|
| 1692 |
+
"list_items": [
|
| 1693 |
+
"modalities through a simple sequence-to-sequence learning framework. In ICML, 2022. 2",
|
| 1694 |
+
"[65] Xiaolong Wang, David Fouhey, and Abhinav Gupta. Designing deep networks for surface normal estimation. In CVPR, 2015. 3",
|
| 1695 |
+
"[66] Ruihai Wu, Chuanruo Ning, and Hao Dong. Learning foresightful dense visual affordance for deformable object manipulation. In ICCV, 2023. 3",
|
| 1696 |
+
"[67] Xin Xu, Tianyi Xiong, Zheng Ding, and Zhuowen Tu. Masqclip for open-vocabulary universal image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 887-898, 2023. 2, 4",
|
| 1697 |
+
"[68] Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In ICRA, 2024. 2",
|
| 1698 |
+
"[69] Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, and Zheng-Jun Zha. Grounding 3d object affordance from 2d interactions in images. In ICCV, 2023. 3",
|
| 1699 |
+
"[70] Yufei Ye, Xueting Li, Abhinav Gupta, Shalini De Mello, Stan Birchfield, Jiaming Song, Shubham Tulsiani, and Sifei Liu. Affordance diffusion: Synthesizing hand-object interactions. In CVPR, 2023. 3",
|
| 1700 |
+
"[71] Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John Turner, et al. Homerobot: Open-vocabulary mobile manipulation. arXiv preprint arXiv:2306.11565, 2023. 3",
|
| 1701 |
+
"[72] Wei Yin, Yifan Liu, Chunhua Shen, and Youliang Yan. Enforcing geometric constraints of virtual normal for depth prediction. In ICCV, 2019. 3",
|
| 1702 |
+
"[73] Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, and Chunhua Shen. Learning to recover 3d scene shape from a single image. In CVPR, 2021. 3",
|
| 1703 |
+
"[74] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In CVPR, 2021. 3",
|
| 1704 |
+
"[75] Yichi Zhang, Ziqiao Ma, Xiaofeng Gao, Suhaila Shakiah, Qiaozi Gao, and Joyce Chai. Groundhog: Grounding large language models to holistic segmentation. In CVPR, 2024. 2",
|
| 1705 |
+
"[76] Brady Zhou, Philipp Krahenbuhl, and Vladlen Koltun. Does computer vision matter for action? Science Robotics, 4(30): eaaw6661, 2019. 3",
|
| 1706 |
+
"[77] Gengze Zhou, Yicong Hong, and Qi Wu. Navgpt: Explicit reasoning in vision-and-language navigation with large language models. arXiv preprint arXiv:2305.16986, 2023. 2"
|
| 1707 |
+
],
|
| 1708 |
+
"bbox": [
|
| 1709 |
+
78,
|
| 1710 |
+
90,
|
| 1711 |
+
468,
|
| 1712 |
+
758
|
| 1713 |
+
],
|
| 1714 |
+
"page_idx": 10
|
| 1715 |
+
},
|
| 1716 |
+
{
|
| 1717 |
+
"type": "text",
|
| 1718 |
+
"text": "A. Metrics",
|
| 1719 |
+
"text_level": 1,
|
| 1720 |
+
"bbox": [
|
| 1721 |
+
76,
|
| 1722 |
+
89,
|
| 1723 |
+
171,
|
| 1724 |
+
104
|
| 1725 |
+
],
|
| 1726 |
+
"page_idx": 11
|
| 1727 |
+
},
|
| 1728 |
+
{
|
| 1729 |
+
"type": "text",
|
| 1730 |
+
"text": "In the section, we explain the metrics (KLD, SIM, and NSS) to evaluate our model.",
|
| 1731 |
+
"bbox": [
|
| 1732 |
+
76,
|
| 1733 |
+
114,
|
| 1734 |
+
468,
|
| 1735 |
+
145
|
| 1736 |
+
],
|
| 1737 |
+
"page_idx": 11
|
| 1738 |
+
},
|
| 1739 |
+
{
|
| 1740 |
+
"type": "text",
|
| 1741 |
+
"text": "- Kullback-Leibler Divergence (KLD) measures distribution difference between the predicted affordance map $(M)$ and the ground truth $(M^{\\prime})$ , which is",
|
| 1742 |
+
"bbox": [
|
| 1743 |
+
76,
|
| 1744 |
+
156,
|
| 1745 |
+
468,
|
| 1746 |
+
200
|
| 1747 |
+
],
|
| 1748 |
+
"page_idx": 11
|
| 1749 |
+
},
|
| 1750 |
+
{
|
| 1751 |
+
"type": "equation",
|
| 1752 |
+
"text": "\n$$\n\\operatorname {K L D} \\left(M, M ^ {\\prime}\\right) = \\sum_ {i} M _ {i} ^ {\\prime} \\log \\left(\\epsilon + \\frac {M _ {i} ^ {\\prime}}{\\epsilon + M _ {i}}\\right), \\tag {6}\n$$\n",
|
| 1753 |
+
"text_format": "latex",
|
| 1754 |
+
"bbox": [
|
| 1755 |
+
127,
|
| 1756 |
+
212,
|
| 1757 |
+
468,
|
| 1758 |
+
244
|
| 1759 |
+
],
|
| 1760 |
+
"page_idx": 11
|
| 1761 |
+
},
|
| 1762 |
+
{
|
| 1763 |
+
"type": "text",
|
| 1764 |
+
"text": "- Similiary (SIM) is also called histogram intersection, which measures the intersection between the predicted affordance map $(M)$ and the ground truth $(M')$ . The final range is from 0 to 1. It is given by",
|
| 1765 |
+
"bbox": [
|
| 1766 |
+
76,
|
| 1767 |
+
257,
|
| 1768 |
+
468,
|
| 1769 |
+
318
|
| 1770 |
+
],
|
| 1771 |
+
"page_idx": 11
|
| 1772 |
+
},
|
| 1773 |
+
{
|
| 1774 |
+
"type": "equation",
|
| 1775 |
+
"text": "\n$$\n\\operatorname {S I M} \\left(M, M ^ {\\prime}\\right) = \\sum_ {i} \\min \\left(M _ {i}, M _ {i} ^ {\\prime}\\right), \\tag {7}\n$$\n",
|
| 1776 |
+
"text_format": "latex",
|
| 1777 |
+
"bbox": [
|
| 1778 |
+
148,
|
| 1779 |
+
332,
|
| 1780 |
+
468,
|
| 1781 |
+
364
|
| 1782 |
+
],
|
| 1783 |
+
"page_idx": 11
|
| 1784 |
+
},
|
| 1785 |
+
{
|
| 1786 |
+
"type": "text",
|
| 1787 |
+
"text": "where $\\sum_{i}M_{i} = \\sum_{i}M_{i}^{\\prime} = 1$",
|
| 1788 |
+
"bbox": [
|
| 1789 |
+
76,
|
| 1790 |
+
371,
|
| 1791 |
+
274,
|
| 1792 |
+
388
|
| 1793 |
+
],
|
| 1794 |
+
"page_idx": 11
|
| 1795 |
+
},
|
| 1796 |
+
{
|
| 1797 |
+
"type": "text",
|
| 1798 |
+
"text": "- Normalized Scanpath Saliency (NSS) measures the correspondence between the prediction map $(M)$ and the ground truth $(M^{\\prime})$ . It is given by",
|
| 1799 |
+
"bbox": [
|
| 1800 |
+
76,
|
| 1801 |
+
388,
|
| 1802 |
+
468,
|
| 1803 |
+
434
|
| 1804 |
+
],
|
| 1805 |
+
"page_idx": 11
|
| 1806 |
+
},
|
| 1807 |
+
{
|
| 1808 |
+
"type": "equation",
|
| 1809 |
+
"text": "\n$$\n\\mathrm {N S S} \\left(M, M ^ {\\prime}\\right) = \\frac {1}{N} \\sum_ {i} \\hat {M} \\times M _ {i} ^ {\\prime}, \\tag {8}\n$$\n",
|
| 1810 |
+
"text_format": "latex",
|
| 1811 |
+
"bbox": [
|
| 1812 |
+
163,
|
| 1813 |
+
445,
|
| 1814 |
+
468,
|
| 1815 |
+
477
|
| 1816 |
+
],
|
| 1817 |
+
"page_idx": 11
|
| 1818 |
+
},
|
| 1819 |
+
{
|
| 1820 |
+
"type": "text",
|
| 1821 |
+
"text": "where $N = \\sum_{i}M_{i}^{\\prime}$ , $\\hat{M} = \\frac{M - \\mu(M)}{\\sigma(M)}$ . $\\mu (M)$ and $\\sigma (M)$ are the mean and standard deviation, respectively.",
|
| 1822 |
+
"bbox": [
|
| 1823 |
+
76,
|
| 1824 |
+
484,
|
| 1825 |
+
467,
|
| 1826 |
+
518
|
| 1827 |
+
],
|
| 1828 |
+
"page_idx": 11
|
| 1829 |
+
},
|
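A numpy sketch of the three metrics as defined in Eqs. 6-8 above. KLD and SIM normalize both maps to sum to one before comparison, and NSS standardizes the prediction and averages it over the ground-truth mass; the epsilon value is an arbitrary small constant, not one specified in the paper.

```python
import numpy as np

EPS = 1e-12

def kld(pred, gt):
    """Eq. 6: KL divergence between normalized maps (lower is better)."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.sum(g * np.log(EPS + g / (EPS + p))))

def sim(pred, gt):
    """Eq. 7: histogram intersection of the normalized maps (higher is better)."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float(np.sum(np.minimum(p, g)))

def nss(pred, gt):
    """Eq. 8: standardized prediction averaged over the ground truth (higher is better)."""
    m_hat = (pred - pred.mean()) / (pred.std() + EPS)
    return float(np.sum(m_hat * gt) / (gt.sum() + EPS))
```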
| 1830 |
+
{
|
| 1831 |
+
"type": "text",
|
| 1832 |
+
"text": "B. Details of the Data Splits",
|
| 1833 |
+
"text_level": 1,
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
76,
|
| 1836 |
+
534,
|
| 1837 |
+
310,
|
| 1838 |
+
551
|
| 1839 |
+
],
|
| 1840 |
+
"page_idx": 11
|
| 1841 |
+
},
|
| 1842 |
+
{
|
| 1843 |
+
"type": "text",
|
| 1844 |
+
"text": "In the easy split, we follow the object split of the original AGD20K Unseen setting [45]. The easy split has 33 object classes for training and 14 for testing. We have 1135/540 images for train and test with dense annotations for the fully supervised setting, or 13,323/540 images for the weakly supervised setting.",
|
| 1845 |
+
"bbox": [
|
| 1846 |
+
76,
|
| 1847 |
+
560,
|
| 1848 |
+
468,
|
| 1849 |
+
651
|
| 1850 |
+
],
|
| 1851 |
+
"page_idx": 11
|
| 1852 |
+
},
|
| 1853 |
+
{
|
| 1854 |
+
"type": "list",
|
| 1855 |
+
"sub_type": "text",
|
| 1856 |
+
"list_items": [
|
| 1857 |
+
"- Train classes: scissors, badminton racket, surfboard, frisbee, hot dog, tennis racket, hammer, microwave, oven, punching bag, carrot, snowboard, book, suitcase, skateboard, wine glass, keyboard, javelin, motorcycle, discus, bench, toothbrush, bottle, cell phone, chair, orange, rugby ball, couch, baseball, fork, bowl, apple, baseball bat.",
|
| 1858 |
+
"- Test classes: camera, bed, bicycle, golf clubs, soccer ball, cup, laptop, banana, skis, knife, axe, broccoli, basketball, refrigerator."
|
| 1859 |
+
],
|
| 1860 |
+
"bbox": [
|
| 1861 |
+
76,
|
| 1862 |
+
655,
|
| 1863 |
+
468,
|
| 1864 |
+
790
|
| 1865 |
+
],
|
| 1866 |
+
"page_idx": 11
|
| 1867 |
+
},
|
| 1868 |
+
{
|
| 1869 |
+
"type": "text",
|
| 1870 |
+
"text": "In the hard split, we randomly put around $50\\%$ AGD20K object classes into the training set and the remaining classes into the test set to simulate in-the-wild generalization. The hard split has 28 object classes for training and 22 for testing. We have 868/807 images for train and test with dense annotations for the fully supervised setting, and 11,889/807 images for the weakly supervised setting.",
|
| 1871 |
+
"bbox": [
|
| 1872 |
+
76,
|
| 1873 |
+
795,
|
| 1874 |
+
468,
|
| 1875 |
+
901
|
| 1876 |
+
],
|
| 1877 |
+
"page_idx": 11
|
| 1878 |
+
},
|
| 1879 |
+
{
|
| 1880 |
+
"type": "list",
|
| 1881 |
+
"sub_type": "text",
|
| 1882 |
+
"list_items": [
|
| 1883 |
+
"- Training objects include carrot, cup, bowl, discus, book, camera, golf clubs, bottle, broccoli, binoculars, drum, baseball, apple, frisbee, cell phone, baseball bat, couch, hammer, bicycle, bench, fork, badminton racket, banana, hot dog, axe, bed, chair, basketball.",
|
| 1884 |
+
"- Test objects include soccer ball, laptop, punching bag, oven, suitcase, javelin, wine glass, motorcycle, scissors, snowboard, keyboard, rugby ball, orange, surfboard, knife, skateboard, pen, microwave, skis, tennis racket, refrigerator, toothbrush."
|
| 1885 |
+
],
|
| 1886 |
+
"bbox": [
|
| 1887 |
+
500,
|
| 1888 |
+
90,
|
| 1889 |
+
890,
|
| 1890 |
+
242
|
| 1891 |
+
],
|
| 1892 |
+
"page_idx": 11
|
| 1893 |
+
}
|
| 1894 |
+
]
|
2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06341/2613baa8-4bb2-4dd6-965c-c071af73e2d8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c45aa0a5e3148176c509c812b75599dcf0c56c70c0d38ecd854864c1262b5b69
|
| 3 |
+
size 1328617
|
2401.06xxx/2401.06341/full.md
ADDED
|
@@ -0,0 +1,414 @@
|
| 1 |
+
# AffordanceLLM: Grounding Affordance from Vision Language Models
|
| 2 |
+
|
| 3 |
+
Shengyi Qian*
|
| 4 |
+
|
| 5 |
+
Weifeng Chen
|
| 6 |
+
|
| 7 |
+
Min Bai
|
| 8 |
+
|
| 9 |
+
Xiong Zhou
|
| 10 |
+
|
| 11 |
+
Zhuowen Tu
|
| 12 |
+
|
| 13 |
+
Li Erran Li
|
| 14 |
+
|
| 15 |
+
AWS AI, Amazon
|
| 16 |
+
|
| 17 |
+
https://jasonqsy.github.io/AffordanceLLM
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1. Illustration of the affordance grounding task. The input is a single image and the corresponding action (e.g., "hold"). The output is a heatmap which highlights regions one can interact with. We aim to enhance the generalization capability of affordance grounding to in-the-wild objects that are unseen during training, by developing a new approach, AffordanceLLM, that takes advantage of the rich knowledge from large-scale vision language models [40] beyond the supervision from the training images.
|
| 21 |
+
|
| 22 |
+
# Abstract
|
| 23 |
+
|
| 24 |
+
Affordance grounding refers to the task of finding the area of an object with which one can interact. It is a fundamental but challenging task, as a successful solution requires the comprehensive understanding of a scene in multiple aspects including detection, localization, and recognition of objects with their parts, of geo-spatial configuration/layout of the scene, of 3D shapes and physics, as well as of the functionality and potential interaction of the objects and humans. Much of this knowledge is hidden and lies beyond the image content and the supervised labels from a limited training set. In this paper, we make an attempt to improve the generalization capability of current affordance grounding by taking advantage of the rich world, abstract, and human-object-interaction knowledge from pretrained large-scale vision language models [40]. Under the AGD20K benchmark, our proposed model demonstrates a significant performance gain over the competing methods for in-the-wild object affordance grounding. We further demonstrate it can ground affordance for objects from random Internet images, even if both objects and actions are unseen during training.
|
| 25 |
+
|
| 26 |
+
# 1. Introduction
|
| 27 |
+
|
| 28 |
+
Grounding affordance from a single image is a fundamental problem in computer vision. It forms the stepping stone to downstream tasks such as understanding human-object interaction [8, 17, 60], visual navigation [31], and object manipulation [1, 23]. Past approaches generally use human demonstrations as supervision to advance this field with tremendous success [34, 44, 45, 50]. While such approaches perform well on objects and actions seen during training, they struggle when generalizing in the wild, i.e. on novel objects unseen during training (Fig. 1).
|
| 29 |
+
|
| 30 |
+
The difficulties in generalization arise from the fact that affordance grounding is a challenging task that requires comprehensive understanding of an image from multiple aspects. A successful solution requires an understanding of the 3D geometry and functionality of objects and parts, of the actions and intentions of the executing agent, of the potential interaction between object and human, as well as of the spatial configuration of the environment. Much of this knowledge lies beyond the ground-truth localization/recognition of objects and parts provided as heatmaps in a limited training set.
|
| 31 |
+
|
| 32 |
+
In this paper we attempt to improve affordance grounding in the wild by leveraging the rich world, abstract, and human-object-interaction knowledge embedded
|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
|
| 36 |
+

|
| 37 |
+
How do I ride with this motorcycle?
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
LLaVA
|
| 41 |
+
Figure 2. State-of-the-art vision language models, such as LLaVA [40], have rich human-object-interaction knowledge, thanks to large-scale text pretraining. Given a question about how to interact with an object, they typically give a reasonable solution.
|
| 42 |
+
|
| 43 |
+
To ride the motorcycle, you should interact with the handlebars, which are located at the front of the motorcycle. The handlebars are used to steer the motorcycle and control its direction and speed.
|
| 44 |
+
|
| 45 |
+
Additionally, you should also ensure that the motorcycle is parked in a safe and legal location, and that you have the necessary safety gear, such as a helmet and protective clothing, before attempting to ride it.
|
| 46 |
+
|
| 47 |
+
in large-scale Vision Language Models (VLMs). With large-scale text pretraining, modern VLMs such as GPT-4 [52], LLaVA [40] and Blip-2 [35] have a rich reservoir of world knowledge, as demonstrated by their extraordinary capabilities in answering visually grounded common sense questions [4]. World knowledge is instrumental to affordance reasoning — when presented with an image of a motorcycle and questioned about “How do I ride with this motorcycle?” LLaVA answers “To ride the motorcycle, you should interact with the handlebars...” (Fig. 2), which exhibits commonsensical understanding of affordance. Affordance models equipped with similar world knowledge have a better chance generalizing to the wild than a model that purely learns from limited affordance demonstration data.
|
| 48 |
+
|
| 49 |
+
Besides world knowledge, another novel factor we introduce to improve affordance reasoning is 3D geometry, as it holds rich information about object functionality. A cylindrical area, for example a handle or a stick, is closely related to the action of grabbing or holding, regardless of its color or texture. Similarly, a flat part, for example the surface of a chair or a bench, might indicate areas suitable for sitting or lying. Relating 3D geometries to actions allows us to bypass the difficulties in handling variations in visual appearances, and thus facilitates generalization.
|
| 50 |
+
|
| 51 |
+
We propose a novel approach, AffordanceLLM, that reflects the above intuitions. Our approach builds upon a VLM backbone (LLaVA [40]) to tap into its world knowledge. We achieve this by extending the backbone with a mask decoder and a special token <mask_token>, which are used to predict an affordance map. The whole model can be trained end-to-end. Additionally, we introduce depth maps as 3D information in parallel to RGB images as input to our network, with the goal of eliciting geometric reasoning capability from the network. We found both designs significantly improve performance.
|
| 52 |
+
|
| 53 |
+
We evaluate our method on the AGD20K [45] benchmark, as this is the only large-scale affordance grounding dataset with accurate action and object labels.
|
| 54 |
+
|
| 55 |
+
We re-split the benchmark to test the ability of models to generalize to object categories unseen during training. We show that our approach outperforms all state-of-the-art baselines by a large margin. We take a further step to validate the generalization ability by testing our approach on random Internet images. It produces reasonable affordance maps on object categories very different from the ones in the training set. Moreover, it even possesses some capability of generalizing to completely novel actions.
|
| 56 |
+
|
| 57 |
+
In summary, our contributions are as follows:
|
| 58 |
+
|
| 59 |
+
1. We introduce the first-ever affordance grounding approach that leverages the rich world knowledge embedded in pretrained VLMs, enabling the model to generalize beyond training data;
|
| 60 |
+
2. We demonstrate the importance of 3D information in affordance grounding;
|
| 61 |
+
3. Our proposed approach generalizes to novel objects and outperforms all state-of-the-art approaches on AGD20K. It even shows evidence that it could generalize to novel actions.
|
| 62 |
+
|
| 63 |
+
# 2. Related Work
|
| 64 |
+
|
| 65 |
+
Eliciting World Knowledge from Vision Language Models. Foundational Vision Language Models that bridge images and language have a rich reservoir of world knowledge, and recent research has been tapping into it to make advances in vision tasks. The joint visual-language embedding space learnt from simple image-text pairs [13, 57] has made it possible to improve open-world detection [42, 47, 48, 61], and segmentation [33, 43, 64, 67]. The world knowledge here is the correspondence between visual and language concepts.
|
| 66 |
+
|
| 67 |
+
Large language models (LLMs) trained on trillions of tokens contain even richer world knowledge and are capable of answering common-sense questions. Coupled with vision inputs, the resulting multi-modal LLMs are brought in to solve complex vision problems. For example, Kosmos-2 and Groundhog incorporate the reasoning skills of LLMs to generate bounding boxes and segmentation masks [32, 53, 75]. 3D LLMs further extend LLMs to reason about 3D scenes, including visual grounding and navigation [12, 22, 24, 25, 37, 68, 77]. For robotics, PaLM-E and RT2 transfer the knowledge from visual-language domains into motion planning and manipulation [3, 14]. Our approach embodies the same idea to transfer the world knowledge from VLMs, but applies it to a novel setting: solving visual affordance grounding.
|
| 68 |
+
|
| 69 |
+
Affordance Grounding. Understanding object affordance from a single image is an important step towards embodied visual intelligence, and researchers have built many different approaches to endow machines with this ability.
|
| 70 |
+
|
| 71 |
+

|
| 72 |
+
Figure 3. Overview of AffordanceLLM. The inputs of our model include a single image and a text prompt related to interaction. We use OWL-ViT [48] as the image encoder to generate image features and project them into the same hidden dimension as the large language model. In parallel, we use a tokenizer to encode the text prompt. The text features and image features are concatenated together and fed into the LLM. The LLM is fine-tuned to predict a special token, which is used as a query to the mask decoder to generate the final affordance map.
|
| 73 |
+
|
| 74 |
+
Nagarajan et al. first propose to ground object affordance from Internet videos [50]. Fang et al. construct an object affordance dataset based on product review videos [16]. Luo et al. annotate the first large-scale affordance dataset and call it AGD20K [45]. LOCATE [34] is the state-of-the-art approach on AGD20K. More recently, researchers have further extended the scope of the affordance grounding problem, including extending it to scene understanding [9, 30, 54], 3D models [69], egocentric videos [49], hand pose generation [27, 70], or associating it with human parts [46]. We use AGD20K as our primary benchmark, and compare our approach with state-of-the-art methods [21, 34, 44, 50, 54].
|
| 75 |
+
|
| 76 |
+
Incorporating 3D Information for Vision Tasks. 3D information has been shown to be critical in certain vision and robotics tasks. For example, Zhou et al. [76] found that visual navigation in mobile sensorimotor systems can benefit from 3D input. Kerr et al. [28] found the NeRF-rendered depth map can help grasping in robotics. Similarly, grounding affordance could benefit from 3D information as well, as 3D shapes of objects and their parts hold many hints about their utility and the proper ways to interact with them. While 3D information is not usually available for an arbitrary image, fortunately, researchers have built a series of robust approaches to estimate the 3D structure of an image, ranging from surface normal estimation [15, 65], depth estimation [10, 36, 59, 72, 73], to 3D reconstruction [18, 39, 51] and few-image NeRF [5, 41, 56, 74]. In our paper, we mainly use DPT [59] to generate pseudo depth maps to help VLMs build 3D understanding.
|
| 77 |
+
|
| 78 |
+
Robotics Manipulation. Manipulation of in-the-wild objects is an important but challenging task in robotics due to the difficulty of data collection. Researchers have developed many methods for different objects in different scenes, such as tabletop objects [19, 26, 62] and mobile manipulation [71]. While manipulation is not our goal, learning affordance can be a solution for manipulation [2, 23, 66].
|
| 79 |
+
|
| 80 |
+
# 3. Approach
|
| 81 |
+
|
| 82 |
+
We now introduce our approach, AffordanceLLM, which takes a single image $I$ and an affordance text query $T$ , and generates an affordance map $M$ . We use a template of "What part of the <object_name> should we interact with in order to <action_name> it?" as the text query $T$ . We then train the LLM to generate a special token <mask_token> and use its hidden state to decode a dense affordance map $M$ . A brief overview is shown in Fig 3.
|
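A minimal sketch of how such a query can be assembled from an object/action pair (the helper name is illustrative, not from the released code):

```python
# Illustrative only: builds the text query T from an object/action pair,
# following the template quoted above.
def build_affordance_query(object_name: str, action_name: str) -> str:
    return (
        f"What part of the {object_name} should we interact with "
        f"in order to {action_name} it?"
    )

print(build_affordance_query("motorcycle", "push"))
# What part of the motorcycle should we interact with in order to push it?
```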
| 83 |
+
|
| 84 |
+
# 3.1. Overview
|
| 85 |
+
|
| 86 |
+
Large language model. We choose LLaVA-7B as our backbone multimodal large language model. We refer the reader to [40] for a fuller explanation, but briefly, LLaVA contains an image encoder, a text tokenizer, and a large language model (LLM). The image encoder is typically a CLIP-pretrained ViT, with a linear layer to project the hidden dimension. It encodes the image $I$ into image features $F_{I}$ . At the same time, the tokenizer encodes the text $T$ into text features $F_{T}$ . They are concatenated and fed into the language model. The LLM produces text output $A$ as:
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
A = \operatorname {L L M} \left(F _ {I}, F _ {T}\right). \tag {1}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
Predicting affordance. How do we perform affordance reasoning while leveraging the world knowledge embedded in LLM? We propose to treat affordance as an implicit text token predicted from the LLM, which could be further decoded into a 2D map. Specifically, we train the LLM to predict a special token $\langle \text{mask_token} \rangle$ , the hidden state of which is first projected into a query embedding $q$ and then fed into a Decoder to generate a dense affordance map. Decoder shares a similar architecture as the ones in SAM [29] and 3DOI [54]. It takes in $q$ and image features $F_{I}$ to produce an affordance map $M$ , i.e.,
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
M = \operatorname {D e c o d e r} \left(F _ {I}, q\right). \tag {2}
|
| 96 |
+
$$
|
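The following sketch illustrates Eqs. (1)-(2), assuming a HuggingFace-style causal LM that exposes hidden states and an injected SAM-style mask decoder; all module and variable names are illustrative rather than the paper's released implementation:

```python
import torch
import torch.nn as nn

# All names below are illustrative; the released implementation may differ.
class AffordanceHead(nn.Module):
    def __init__(self, llm, mask_decoder, hidden_dim=4096, query_dim=256):
        super().__init__()
        self.llm = llm                    # causal LM that returns hidden states
        self.mask_decoder = mask_decoder  # SAM-style decoder: (image_feats, query) -> map
        self.to_query = nn.Linear(hidden_dim, query_dim)

    def forward(self, inputs_embeds, target_ids, mask_token_id, image_feats):
        # Eq. (1): run the LLM on the concatenated image/text features.
        out = self.llm(inputs_embeds=inputs_embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1]            # [B, T, hidden_dim]
        # Take the hidden state at the <mask_token> position (teacher forcing is
        # assumed here, i.e. the target sequence containing the token is known).
        b, t = (target_ids == mask_token_id).nonzero(as_tuple=True)
        q = self.to_query(last_hidden[b, t])           # [B, query_dim]
        # Eq. (2): decode a dense affordance map from the query and image features.
        return self.mask_decoder(image_feats, q)
```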
| 97 |
+
|
| 98 |
+
Pseudodepth as additional inputs. Besides images, the affordance reasoning task could benefit from 3D information (we will validate the benefits in Sec 4). However, modern VLMs are typically only trained with text and 2D images [35, 40]. Therefore, we also include a pseudo depth map as additional inputs to the large language model. For each image, we use the state-of-the-art depth estimation model DPT [59] to generate a pseudo depth map $D$ . We use the same image encoder to encode the depth map $D$ to produce depth features $F_{D}$ , and concatenate it with image features. Our final model is thus
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
A, M = \text {A f f o r d a n c e L L M} \left(F _ {I}, F _ {D}, F _ {T}\right). \tag {3}
|
| 102 |
+
$$
|
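One plausible way to produce the pseudo depth map with an off-the-shelf DPT model is the HuggingFace depth-estimation pipeline; the checkpoint name below is an example, since the paper only states that DPT [59] is used:

```python
from PIL import Image
from transformers import pipeline

# Example checkpoint; the paper only states that DPT [59] produces the pseudo depth.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("motorcycle.jpg")   # any RGB input image
result = depth_estimator(image)
depth = result["depth"]                # depth map resized back to the input size
# The depth map is replicated to 3 channels so the same frozen image encoder used
# for the RGB image can encode it, producing F_D to concatenate with F_I and F_T.
depth_rgb = depth.convert("RGB")
```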
| 103 |
+
|
| 104 |
+
Training objectives. Following the same practice as [54], we train the affordance map using a binary focal loss [38], and call it affordance loss $L_{\mathrm{aff}}$ . We set the weight of positive examples to be 0.95 and that of negative ones to be 0.05 to balance positives and negatives, as there are more negatives than positives in ground truth affordance map. We follow the standard cross entropy loss for the text output of language models. Our final loss function is a linear combination of affordance loss and text loss, given by
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
L = L _ {\text {a f f}} + \lambda \cdot L _ {\text {t e x t}}. \tag {4}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
In practice, we set $\lambda = 0.01$ to balance two losses, as the affordance loss can be quite small due to the imbalance of positive and negative values.
|
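A minimal sketch of this objective, assuming per-pixel logits for the affordance map and standard teacher-forced token logits for the text; the focal-loss form follows [38] with the stated class weights, while the exact reduction in the released code may differ:

```python
import torch
import torch.nn.functional as F

def affordance_loss(logits, target, alpha_pos=0.95, alpha_neg=0.05, gamma=2.0):
    """Binary focal loss over the affordance map (sketch).
    logits, target: [B, H, W]; target is the ground-truth heatmap in [0, 1]."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = p * target + (1 - p) * (1 - target)                 # prob. of the true label
    alpha_t = alpha_pos * target + alpha_neg * (1 - target)   # 0.95 / 0.05 weighting
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def total_loss(aff_logits, aff_target, text_logits, text_target, lam=0.01):
    # Eq. (4): L = L_aff + lambda * L_text, with lambda = 0.01.
    l_aff = affordance_loss(aff_logits, aff_target)
    l_text = F.cross_entropy(text_logits.view(-1, text_logits.size(-1)),
                             text_target.view(-1))
    return l_aff + lam * l_text
```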
| 111 |
+
|
| 112 |
+
# 3.2. Network Architecture
|
| 113 |
+
|
| 114 |
+
Next, we discuss the network architecture, and the training details of our model.
|
| 115 |
+
|
| 116 |
+
Image encoder. The standard LLaVA uses a CLIP image encoder and a linear projection layer [40, 67]. In practice, we find that the CLIP image encoder has low resolution (224x224) and does not capture sufficient information about grounding. Therefore, we use OWL-ViT [48] to replace the standard CLIP-ViT [57]. OWL-ViT has an input resolution of 768x768, which is significantly higher than CLIP. At the same time, OWL-ViT is pretrained to extract features that include precise location information of objects. As we will empirically show in experiments, using OWL-ViT is significantly better than CLIP. However, we note that our approach is general, and will benefit from any future improvements in pretrained VLM backbones.
|
| 117 |
+
|
| 118 |
+
Projection. Another problem of using OWL-ViT is its embedding space. With a much higher input resolution, OWL-ViT produces 576 tokens with a hidden dimension of 768 for each image. In comparison, CLIP only produces 256 tokens for each image. Projecting each individual token into the hidden dimension of the LLM (4096) consumes a lot of GPU memory. In practice, we project each token
|
| 119 |
+
|
| 120 |
+
of OWL-ViT to 1024, and concatenate four neighboring tokens into a single token.
|
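The token bookkeeping described above can be sketched as follows; grouping four consecutive tokens is an assumption (the paper does not specify the exact neighborhood), and it conveniently yields 144 tokens of dimension 4096, matching the 7B LLM hidden size:

```python
import torch
import torch.nn as nn

# Sketch: OWL-ViT yields 576 tokens of dim 768; each token is projected to 1024
# and groups of 4 neighboring tokens are merged into one, giving [B, 144, 4096].
proj = nn.Linear(768, 1024)

def project_image_tokens(owlvit_tokens: torch.Tensor) -> torch.Tensor:
    b, n, d = owlvit_tokens.shape            # [B, 576, 768]
    x = proj(owlvit_tokens)                  # [B, 576, 1024]
    return x.reshape(b, n // 4, 4 * 1024)    # [B, 144, 4096]

tokens = torch.randn(2, 576, 768)
print(project_image_tokens(tokens).shape)    # torch.Size([2, 144, 4096])
```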
| 121 |
+
|
| 122 |
+
Language model. We follow LLaVA [40] and LLama [63] to use the standard text tokenizer to encode our text query. We use LLama-7B [63] as the large language model.
|
| 123 |
+
|
| 124 |
+
Affordance decoder. We aim to keep a lightweight decoder, as it has been shown to produce good segmentation masks and affordance maps [6, 11, 29, 54]. However, we find the vanilla mask decoder is too lightweight in our case and does not produce a high-resolution affordance map. Therefore, we add an additional transposed convolution layer to increase its output resolution.
|
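A sketch of such an upsampling head; the channel counts are assumptions, as the paper only states that one transposed convolution is added on top of the SAM-style decoder:

```python
import torch
import torch.nn as nn

# Hypothetical channel counts; the paper only specifies the extra transposed convolution.
upsample_head = nn.Sequential(
    nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # 2x spatial upsampling
    nn.GELU(),
    nn.Conv2d(16, 1, kernel_size=1),                       # single-channel affordance logits
)

feat = torch.randn(1, 32, 64, 64)       # hypothetical decoder feature map
print(upsample_head(feat).shape)        # torch.Size([1, 1, 128, 128])
```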
| 125 |
+
|
| 126 |
+
Implementation. We implement our model using PyTorch and HuggingFace. We initialize our model with LLama-7B pretrained weights. Following LLaVA [40], we freeze the image encoder, pretrain the image projection layer to align OWL-ViT and LLama features, and then use GPT instructions to tune the language model. Finally, we add the mask decoder [29, 54] and tune the whole model on AGD20K [45], which has annotations of object affordance. We use eight NVIDIA A100 (40GB) GPUs to train our model, with Fully Sharded Data Parallel. We use a batch size of 4 and set the learning rate to 2e-5.
|
| 127 |
+
|
| 128 |
+
# 4. Experiments
|
| 129 |
+
|
| 130 |
+
In experiments, we aim to systematically evaluate the performance of our approach. In particular, we are interested in answering these questions: (1) How well does it generalize, compared with state-of-the-art methods? (2) How does each design choice contribute to the final performance, including prompts, visual encoders, and depth?
|
| 131 |
+
|
| 132 |
+
# 4.1. Experimental Setup
|
| 133 |
+
|
| 134 |
+
Metrics. We evaluate primarily on AGD20K [45] and follow its metrics to evaluate our model, which are KLD, SIM and NSS [34, 45, 50, 54]. For KLD, lower is better; for SIM and NSS, higher is better. A full explanation is available in the supplemental.
|
| 135 |
+
|
| 136 |
+
Baselines. We compare our approach against state-of-the-art baselines. In general, affordance grounding methods belong to two categories: weakly supervised and fully supervised methods. We report performance of both categories.
|
| 137 |
+
|
| 138 |
+
(Weakly supervised methods): They do not train on explicit labels of the affordance map. Instead, they are trained on a human demonstration of the same object. These approaches include InteractionHotspots [50], Cross-View-AG [45], Cross-View-AG+ [44], AffCorrs [21], and LOCATE [34]. Among them, LOCATE is the most recent model and has the best results on AGD20K. We use the numbers reported in LOCATE for the easy split and retrain the baselines for the hard split. We cannot run AffCorrs, as it focuses on one-shot affordance learning. The
|
| 139 |
+
|
| 140 |
+

|
| 141 |
+
Figure 4. Qualitative results on the test set of the hard split. LOCATE-Sup fails to learn a reasonable affordance map due to limited training data. LOCATE [34] typically predicts an affordance map which covers the whole object. 3DOI [54] focuses on a small area of the object. Overall, our approach produces the best-quality affordance predictions.
|
| 142 |
+
|
| 143 |
+
reported model on the easy split is adapted by [34] and not publicly available. We also do not run InteractionHotspots because the pretrained model only supports 7 actions. The reported model is retrained by [45] but lacks sufficient implementation details to be reproduced. Therefore, we retrain Cross-View-AG [45], Cross-View-AG+ [44], and LOCATE [34] on the hard split. We maintain the object/action split, but allow them to use more images for weak supervision. Therefore, they have 11,889 images for training.
|
| 144 |
+
|
| 145 |
+
(Fully supervised methods): Affordance map can also be learned from explicit labels, and we call it supervised methods. This includes 3DOI [54] and ours. We also adapt LOCATE to a fully supervised version for fair comparison.
|
| 146 |
+
|
| 147 |
+
- 3DOI [54]: 3DOI is a SAM-based model [29], which takes a single image and a query point and predicts the segmentation mask and affordance map. Therefore, we randomly sample a pixel with score $>0.9$ as the query point from the affordance map. We use the 3DOI pretrained model, which has never seen any images in AGD20K, including the training set.
|
| 148 |
+
|
| 149 |
+
|
| 150 |
+
|
| 151 |
+
- LOCATE-Sup [34]: To ensure fair comparison, we also adapt LOCATE and train it using the same binary focal loss as our model. We call it LOCATE-Sup. LOCATE uses a DINO-ViT as its visual encoder [7]. To eliminate the effect of different pretrained visual encoders, we also report the performance of LOCATE-Sup-OWL, which uses the exact same pretrained visual encoder as ours.
|
| 152 |
+
|
| 153 |
+
# 4.2. Dataset
|
| 154 |
+
|
| 155 |
+
We follow LOCATE [34] to evaluate primarily on AGD20K [45], as it is the only large-scale dataset for affordance with action and object labels. Since our approach is not weakly supervised and requires dense annotations, we only use AGD20K images with dense annotations.
|
| 156 |
+
|
| 157 |
+
In this paper, we primarily evaluate the ability of an affordance model to generalize to unseen object categories, and thus evaluate on the Unseen split of the AGD20K benchmark. This split ensures that there is no overlap between the object categories in the train and test set.
|
| 158 |
+
|
| 159 |
+
Table 1. Difficulty score of different splits. The lower the score, the more similar are the object categories in the train and test set.
|
| 160 |
+
|
| 161 |
+
<table><tr><td>Splits</td><td>Same</td><td>Easy</td><td>Hard</td><td>Random</td></tr><tr><td>Difficulty Score ↑</td><td>0.000</td><td>0.356</td><td>0.412</td><td>0.491</td></tr></table>
|
| 162 |
+
|
| 163 |
+
Table 2. Quantitative results on the Easy split of AGD20K [45]. InteractionHotspots, Cross-View-AG(+), AffCorrs and LOCATE are trained on AGD20K images with weak supervision (13,323 images). LOCATE-Sup and LOCATE-Sup-OWL, and AffordanceLLM are trained on AGD20K images with dense annotation (1,135 images). 3DOI is trained on their own dataset with dense annotation (10,000 images) [54]. AffordanceLLM is comparable to LOCATE [34] on the easy split, where test objects have similar counterparts in the training set. The best and second-best results are highlighted in bold and underlined, respectively.
|
| 164 |
+
|
| 165 |
+
<table><tr><td>Methods</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>InteractionHotspots [50]</td><td>1.994</td><td>0.237</td><td>0.577</td></tr><tr><td>Cross-View-AG [45]</td><td>1.787</td><td>0.285</td><td>0.829</td></tr><tr><td>Cross-View-AG+ [44]</td><td>1.765</td><td>0.279</td><td>0.882</td></tr><tr><td>AffCorrs [21]</td><td>1.618</td><td>0.348</td><td>1.021</td></tr><tr><td>LOCATE [34]</td><td>1.405</td><td>0.372</td><td>1.157</td></tr><tr><td>LOCATE-Sup [34]</td><td>1.907</td><td>0.236</td><td>0.641</td></tr><tr><td>LOCATE-Sup-OWL [34, 48]</td><td>1.927</td><td>0.234</td><td>0.624</td></tr><tr><td>3DOI [54]</td><td>3.565</td><td>0.227</td><td>0.657</td></tr><tr><td>AffordanceLLM (Ours)</td><td>1.463</td><td>0.377</td><td>1.070</td></tr></table>
|
| 166 |
+
|
| 167 |
+
|
| 168 |
+
|
| 169 |
+
However, the Unseen split has one major issue: there are still a lot of similarities between the objects in the train and test set. Objects in the test set typically have similar counterparts in the training set, leaving models room for memorization. For example, "skis" in the test set maps to "snowboards" and "skateboards" in the training set, "basketball" maps to "baseball", "knife" maps to "fork", etc. To make the benchmark more reflective of the generalization ability of a model, we provide a more challenging split. We thus have the following two splits.
|
| 170 |
+
|
| 171 |
+
Easy split. This is the original Unseen split of AGD20K. We have 1135/540 images for train and test with dense annotations for the fully supervised setting, or 13,323/540 images for the weakly supervised setting. The test set remains the same for both settings.
|
| 172 |
+
|
| 173 |
+
Hard split. We randomly put $50\%$ of the AGD20K object classes into the training set and the remaining classes into the test set to simulate in-the-wild generalization (details in the supplemental). The training and test objects do not overlap, and most objects in the test set do not have a similar counterpart in the training set, making this split harder to generalize to than the Easy split. We have 868/807 images for train and test with dense annotations for the fully supervised setting, and 11,889/807 images for the weakly supervised setting. The test set is the same for both settings.
|
| 174 |
+
|
| 175 |
+
Table 3. Quantitative results on the Hard split of AGD20K [45]. Cross-View-AG $(+)$ and LOCATE are trained on AGD20K images with weak supervision (11,889 images). LOCATE-Sup and LOCATE-Sup-OWL, and AffordanceLLM are trained on AGD20K images with dense annotation (868 images). 3DOI is trained on their own dataset with dense annotation (10,000 images) [54]. On the hard split, AffordanceLLM outperforms all baselines by a large margin, which demonstrates the superior generalization ability of our model. We do not run Interaction-Hotspots [50] and AffCorrs [21], as the reported model has ambiguous implementation details, or is not publicly available. The best and second-best results are highlighted in bold and underlined, respectively.
|
| 176 |
+
|
| 177 |
+
<table><tr><td>Methods</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>Cross-View-AG [45]</td><td>2.092</td><td>0.209</td><td>0.138</td></tr><tr><td>Cross-View-AG+ [44]</td><td>2.034</td><td>0.218</td><td>0.342</td></tr><tr><td>LOCATE [34]</td><td>1.829</td><td>0.282</td><td>0.276</td></tr><tr><td>LOCATE-Sup [34]</td><td>2.003</td><td>0.224</td><td>0.435</td></tr><tr><td>LOCATE-Sup-OWL [34, 48]</td><td>2.127</td><td>0.206</td><td>0.314</td></tr><tr><td>3DOI [54]</td><td>4.017</td><td>0.200</td><td>0.549</td></tr><tr><td>AffordanceLLM (Ours)</td><td>1.661</td><td>0.361</td><td>0.947</td></tr></table>
|
| 178 |
+
|
| 179 |
+
Measuring split difficulty. We propose a metric to quantify the generalization difficulty of a split. Intuitively, the difficulty of generalizing to an object class in the test set is defined by how different it is from the classes in the training set, which could be measured by its semantic distance to the most similar class in the training set [55]. The greater the distance, the harder it is to generalize to this test class. Therefore, for each semantic class $c$ in the test set, we compute its distance $d$ to the most similar class in the training set. We use the CLIP [58] text encoder to obtain an embedding to represent each object class. Assuming the train classes are $C_{\mathrm{train}}$ and the test classes are $C_{\mathrm{test}}$ , the difficulty of this split is
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
D \left(C _ {\text {t r a i n}}, C _ {\text {t e s t}}\right) = 1 - \frac {1}{\left| C _ {\text {t e s t}} \right|} \sum_ {c \in C _ {\text {t e s t}}} \max _ {c ^ {\prime} \in C _ {\text {t r a i n}}} d \left(c, c ^ {\prime}\right). \tag {5}
|
| 183 |
+
$$
|
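A sketch of Eq. (5), assuming $d(c, c')$ is the cosine similarity between CLIP text embeddings of the class names (the checkpoint below is an example):

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

# Example checkpoint; any CLIP text encoder could play the same role.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed(classes):
    inputs = tokenizer(classes, padding=True, return_tensors="pt")
    feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def split_difficulty(train_classes, test_classes):
    e_train, e_test = embed(train_classes), embed(test_classes)
    sim = e_test @ e_train.T              # [|C_test|, |C_train|] cosine similarities
    best = sim.max(dim=1).values          # most similar training class per test class
    return 1.0 - best.mean().item()       # Eq. (5)

print(split_difficulty(["fork", "snowboard"], ["knife", "skis", "cup"]))
```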
| 184 |
+
|
| 185 |
+
We compare the difficulty metric among four settings: (1) Same: train and test share the same classes; (2) Easy split; (3) Hard split; (4) Random: constructed by randomly even-splitting 50 object classes from LVIS [20], which serves as a lower bound.
|
| 186 |
+
|
| 187 |
+
We show the difficulty metrics in Tab 1. The Same split has a difficulty score of 0.000, as all object classes in the test set are present during training. The Easy split has a difficulty score of 0.356. The Random split has a score of 0.491. The Hard split has a higher score than Easy, meaning that the difference between test and train is more significant in Hard than in Easy, and Hard is thus harder to generalize to.
|
| 188 |
+
|
| 189 |
+
# 4.3. Results
|
| 190 |
+
|
| 191 |
+
Figure 4 shows qualitative results on the test set of the hard split. In this split, the objects in the test set bear little to no
|
| 192 |
+
|
| 193 |
+

|
| 194 |
+
Figure 5. Ablation of different text prompts and depth. Ours w/o depth is our approach without pseudodepth as additional inputs. Ours is our full approach. We find constructing the correct text prompt typically helps our model to focus on the correct area. We believe it is because the correct text prompt would activate the world knowledge related to affordance embedded in the VLM.
|
| 195 |
+
|
| 196 |
+
resemblance to the ones in the training set. We compare our approach, AffordanceLLM, with a set of state-of-the-art baselines. LOCATE [34] tends to predict an affordance map that covers the entire object, indicating poor generalization performance. 3DOI [54] typically focuses on a small area of the object, and sometimes fails to ground the correct region. LOCATE-Sup fails to predict a reasonable affordance map, probably due to the small amount of training data. Despite being trained on the same training set as LOCATE-Sup, our approach is able to produce the best affordance map among all methods, showcasing superior generalization capability.
|
| 197 |
+
|
| 198 |
+
We further compare our model with baselines quantitatively and the results are summarized in Tab. 2, 3. On the Hard split, where the test set objects differ semantically from the training set, our method outperforms all baselines significantly. This improvement can be attributed to the extensive world knowledge and understanding embedded within the large language model. On the Easy split, our model is comparable to LOCATE [34] and outperforms all other baselines. We hypothesize that the advantage of our approach is less pronounced when the test and train objects exhibit similarity, as the generalization capability becomes less critical. It is also worth noting that unlike LOCATE which is weakly supervised on $10\mathrm{k + }$ images, our model was fully supervised on some 1k images with dense annotations, which renders a more meaningful comparison with LOCATE-Sup that is trained on the same data. Our method significantly outperforms LOCATE-Sup on both splits, indicating the effectiveness of our approach.
|
| 199 |
+
|
| 200 |
+
Table 4. Ablation on the hard split. We validate the importance of text prompts, image encoders and pseudo depth to performance.
|
| 201 |
+
|
| 202 |
+
<table><tr><td>Depth</td><td>Text Prompt</td><td>Img Encoder</td><td>KLD ↓</td><td>SIM ↑</td><td>NSS ↑</td></tr><tr><td>Yes</td><td>Full</td><td>OWL-ViT</td><td>1.661</td><td>0.361</td><td>0.947</td></tr><tr><td>-</td><td>Full</td><td>OWL-ViT</td><td>1.713</td><td>0.352</td><td>0.881</td></tr><tr><td>-</td><td>Full</td><td>CLIP-ViT</td><td>1.759</td><td>0.286</td><td>0.776</td></tr><tr><td>-</td><td>Object, Action</td><td>OWL-ViT</td><td>1.769</td><td>0.329</td><td>0.827</td></tr><tr><td>-</td><td>Action</td><td>OWL-ViT</td><td>1.843</td><td>0.336</td><td>0.815</td></tr><tr><td>-</td><td>Hi</td><td>OWL-ViT</td><td>1.836</td><td>0.325</td><td>0.793</td></tr></table>
|
| 203 |
+
|
| 204 |
+
# 4.4. Ablation
|
| 205 |
+
|
| 206 |
+
We conduct a few ablation studies to understand how different components of the model contribute to the final performance. We test different text prompts, different image encoders, and the effect of pseudo depth as inputs. The results are summarized in Tab 4 and Fig 5.
|
| 207 |
+
|
| 208 |
+
Text prompts. Prompt tuning is known to have major effects on large VLMs. We test four different text prompts to understand the effect of text content on model performance:
|
| 209 |
+
|
| 210 |
+
- Hi: We use "Hi" as our text prompt.
|
| 211 |
+
- Action: We use the action (e.g. "hold") as the prompt.
|
| 212 |
+
- Object + Action: We use the object name and action label as our text prompt, for example "hold, knife".
|
| 213 |
+
- Full: We use a complete question as the text prompt — "What part of the motorcycle should we interact with in order to push it?"
|
| 214 |
+
|
| 215 |
+
We notice that the Full prompt yields higher performance compared with the other, simpler text prompts. It demonstrates that a specific question prompt is helpful for extracting the knowledge from pretrained large language models.
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
Input
|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
LOCATE
|
| 222 |
+
|
| 223 |
+

|
| 224 |
+
AffordanceLLM
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
Most similar in training
|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
|
| 231 |
+

|
| 232 |
+
|
| 233 |
+

|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
Figure 7. Failure examples. (Row 1:) AffordanceLLM sometimes fails when multiple objects are present in the scene, referring to the wrong object. (Row 2:) AffordanceLLM thinks humans should hold the handle to cut something using the knife, while AGD20K annotators think "cut with" should refer to the blade.
|
| 237 |
+
|
| 238 |
+

|
| 239 |
+
|
| 240 |
+

|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
|
| 244 |
+

|
| 245 |
+
|
| 246 |
+

|
| 247 |
+
Figure 6. Generalization results on random Internet images. We show the most similar objects in the training set to demonstrate how different the objects are from the ones in the training set. (Row 1, 2): AffordanceLLM generalizes to novel objects from random Internet images, while LOCATE [34] fails. (Row 3, 4): AffordanceLLM generalizes to novel actions plus novel objects. LOCATE cannot infer novel actions, so we leave it blank.
|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
|
| 253 |
+

|
| 254 |
+
|
| 255 |
+
Vision encoders. Although LLaVA [40] uses CLIP-ViT, it may not be the optimal vision encoder for our affordance grounding task: CLIP-ViT is trained with an objective to align text-image pairs and is not explicitly optimized to perform localization, and therefore has limited visual grounding capability. We therefore switch to OWL-ViT [48], which is trained on detection datasets with 2M images, and achieves state-of-the-art open vocabulary detection performance. As shown in Tab 4, using OWL-ViT as the vision backbone clearly outperforms using CLIP-ViT. It indicates the importance of the grounding capability of the visual backbone.
|
| 256 |
+
|
| 257 |
+
# 4.5. Pseudodepth as Inputs
|
| 258 |
+
|
| 259 |
+
Our model is trained with pseudo depth maps produced by DPT [59]. To verify whether the additional depth inputs are effective, we compare the model trained with and without estimated depth (Tab 4 and Fig 5). With depth, our model predicts better affordance maps, demonstrating the importance of 3D information in affordance reasoning.
|
| 260 |
+
|
| 261 |
+
# 4.6. Generalization to Internet Images
|
| 262 |
+
|
| 263 |
+
We further test the generalization of our model on random Internet images in Fig. 6. All test objects are novel. To showcase how different these objects are from the ones in the train set, for each test object, we retrieve the most similar
|
| 264 |
+
|
| 265 |
+

|
| 266 |
+
|
| 267 |
+

|
| 268 |
+
|
| 269 |
+

|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
Input
|
| 273 |
+
|
| 274 |
+

|
| 275 |
+
Groundtruth
|
| 276 |
+
|
| 277 |
+

|
| 278 |
+
Prediction
|
| 279 |
+
|
| 280 |
+
object in the train set using the metric defined in Eq 5. As we can see, these objects are not similar to any objects in the train set. We go even further to test if our approach is able to generalize to novel actions in addition to novel objects, such as "blow" and "play". Generalization to novel actions is even more challenging, requiring open vocabulary understanding of actions, which is beyond the capability of LOCATE [34]. Despite these challenges, our approach not only produces very reasonable affordance maps for novel objects, but is also able to handle novel actions plus novel objects, once again demonstrating the extraordinary capability to generalize.
|
| 281 |
+
|
| 282 |
+
# 4.7. Failure Examples
|
| 283 |
+
|
| 284 |
+
Finally, we show our failure examples in Fig 7. First, we find AffordanceLLM fails on some ambiguous questions. For example, in AGD20K, "cut with" refers to the blade of a knife. However, AffordanceLLM thinks humans should hold the handle of the knife to cut anything. Second, when there are multiple objects in the image, it sometimes cannot refer to the correct object.
|
| 285 |
+
|
| 286 |
+
# 5. Conclusion
|
| 287 |
+
|
| 288 |
+
We have presented AffordanceLLM, a novel approach which can ground affordance for in-the-wild objects unseen during training. By tapping into the world knowledge embedded in a Vision Language Model, our proposed approach generalizes much better to in-the-wild objects, compared with state-of-the-art affordance grounding models.
|
| 289 |
+
|
| 290 |
+
Our approach can have positive impacts by helping build intelligent robots that can manipulate in-the-wild objects. On the other hand, it can be misused to cause physical damage or harm if applied in an adversarial manner.
|
| 291 |
+
|
| 292 |
+
# References
|
| 293 |
+
|
| 294 |
+
[1] Shikhar Bahl, Abhinav Gupta, and Deepak Pathak. Human-to-robot imitation in the wild. In RSS, 2022. 1
|
| 295 |
+
[2] Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, and Deepak Pathak. Affordances from human videos as a versatile representation for robotics. In CVPR, 2023. 3
|
| 296 |
+
[3] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. 2
|
| 297 |
+
[4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. 2
|
| 298 |
+
[5] Ang Cao, Chris Rockwell, and Justin Johnson. Fwd: Real-time novel view synthesis with forward warping and depth. In CVPR, 2022. 3
|
| 299 |
+
[6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 4
|
| 300 |
+
[7] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021. 5
|
| 301 |
+
[8] Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. Hico: A benchmark for recognizing human-object interactions in images. In ICCV, 2015. 1
|
| 302 |
+
[9] Joya Chen, Difei Gao, Kevin Qinghong Lin, and Mike Zheng Shou. Affordance grounding from demonstration video to target image. In CVPR, 2023. 3
|
| 303 |
+
[10] Weifeng Chen, Shengyi Qian, and Jia Deng. Learning single-image depth from videos using quality assessment networks. In CVPR, 2019. 3
|
| 304 |
+
[11] Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864-17875, 2021. 4
|
| 305 |
+
[12] Yinpei Dai, Run Peng, Sikai Li, and Joyce Chai. Think, act, and ask: Open-world interactive personalized robot navigation. arXiv preprint arXiv:2310.07968, 2023. 2
|
| 306 |
+
[13] Karan Desai and Justin Johnson. Virtex: Learning visual representations from textual annotations. In CVPR, 2021. 2
|
| 307 |
+
[14] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Thompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. 2
|
| 308 |
+
[15] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015. 3
|
| 309 |
+
[16] Kuan Fang, Te-Lin Wu, Daniel Yang, Silvio Savarese, and Joseph J Lim. Demo2vec: Reasoning object affordances from online videos. In CVPR, 2018. 3
|
| 310 |
+
|
| 311 |
+
[17] Georgia Gkioxari, Ross Girshick, Piotr Dollar, and Kaiming He. Detecting and recognizing human-object interactions. In CVPR, 2018. 1
|
| 312 |
+
[18] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In ICCV, 2019. 3
|
| 313 |
+
[19] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. arXiv preprint arXiv:2306.14896, 2023. 3
|
| 314 |
+
[20] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In CVPR, 2019. 6
|
| 315 |
+
[21] Denis Hadjivelichkov, Sicelukwanda Zwane, Lourdes Agapito, Marc Peter Deisenroth, and Dimitrios Kanoulas. One-shot transfer of affordance regions? affcorrs! In Conference on Robot Learning, pages 550-560. PMLR, 2023. 3, 4, 6
|
| 316 |
+
[22] Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. arXiv preprint arXiv:2307.12981, 2023. 2
|
| 317 |
+
[23] Cheng-Chun Hsu, Zhenyu Jiang, and Yuke Zhu. Ditto in the house: Building articulation models of indoor scenes through interactive perception. In ICRA, 2023. 1, 3
|
| 318 |
+
[24] Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, and Zhou Zhao. Chat-3d v2: Bridging 3d scene and large language models with object identifiers. arXiv preprint arXiv:2312.08168, 2023. 2
|
| 319 |
+
[25] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. arXiv preprint arXiv:2311.12871, 2023. 2
|
| 320 |
+
[26] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020. 3
|
| 321 |
+
[27] Juntao Jian, Xiuping Liu, Manyi Li, Ruizhen Hu, and Jian Liu. Affordpose: A large-scale dataset of hand-object interactions with affordance-driven hand pose. In ICCV, 2023. 3
|
| 322 |
+
[28] Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, and Ken Goldberg. Evo-nerf: Evolving nerf for sequential robot grasping of transparent objects. In 6th Annual Conference on Robot Learning, 2022. 3
|
| 323 |
+
[29] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In ICCV, 2023. 3, 4, 5
|
| 324 |
+
[30] Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A Efros, and Krishna Kumar Singh. Putting people in their place: Affordance-aware human insertion into scenes. In CVPR, 2023. 3
|
| 325 |
+
[31] Ashish Kumar, Saurabh Gupta, David F. Fouhey, Sergey Levine, and Jitendra Malik. Visual memory for robust path following. In NIPS, 2018. 1
|
| 326 |
+
|
| 327 |
+
[32] Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. Lisa: Reasoning segmentation via large language model. arXiv preprint arXiv:2308.00692, 2023. 2
|
| 328 |
+
[33] Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and Rene Ranftl. Language-driven semantic segmentation. In ICLR, 2022. 2
|
| 329 |
+
[34] Gen Li, Varun Jampani, Deqing Sun, and Laura Sevilla-Lara. Locate: Localize and transfer object parts for weakly supervised affordance grounding. In CVPR, 2023. 1, 3, 4, 5, 6, 7, 8
|
| 330 |
+
[35] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, 2023. 2, 4
|
| 331 |
+
[36] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566, 2018. 3
|
| 332 |
+
[37] Zeju Li, Chao Zhang, Xiaoyan Wang, Ruilong Ren, Yifan Xu, Ruifei Ma, and Xiangde Liu. 3dmit: 3d multi-modal instruction tuning for scene understanding. arXiv preprint arXiv:2401.03201, 2024. 2
|
| 333 |
+
[38] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017. 4
|
| 334 |
+
[39] Chen Liu, Kihwan Kim, Jinwei Gu, Yasutaka Furukawa, and Jan Kautz. PlaneRCNN: 3D plane detection and reconstruction from a single image. In CVPR, 2019. 3
|
| 335 |
+
[40] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. 1, 2, 3, 4, 8
|
| 336 |
+
[41] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In ICCV, 2023. 3
|
| 337 |
+
[42] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023. 2
|
| 338 |
+
[43] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In CVPR, 2022. 2
|
| 339 |
+
[44] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Grounded affordance from exocentric view. arXiv preprint arXiv:2208.13196, 2022. 1, 3, 4, 5, 6
|
| 340 |
+
[45] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Learning affordance grounding from exocentric images. In CVPR, 2022. 1, 2, 3, 4, 5, 6, 12
|
| 341 |
+
[46] Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. Leverage interactive affinity for affordance learning. In CVPR, 2023. 3
|
| 342 |
+
[47] Chuofan Ma, Yi Jiang, Xin Wen, Zehuan Yuan, and Xiaojuan Qi. Codet: Co-occurrence guided region-word alignment for open-vocabulary object detection. In Advances in Neural Information Processing Systems, 2023. 2
|
| 343 |
+
[48] Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran
|
| 344 |
+
|
| 345 |
+
Shen, et al. Simple open-vocabulary object detection. In ECCV, 2022. 2, 3, 4, 6, 8
|
| 346 |
+
[49] Lorenzo Mur-Labadia, Jose J Guerrero, and Ruben Martinez-Cantin. Multi-label affordance mapping from egocentric vision. In ICCV, 2023. 3
|
| 347 |
+
[50] Tushar Nagarajan, Christoph Feichtenhofer, and Kristen Grauman. Grounded human-object interaction hotspots from video. In ICCV, 2019. 1, 3, 4, 6
|
| 348 |
+
[51] Yinyu Nie, Xiaoguang Han, Shihui Guo, Yujuan Zheng, Jian Chang, and Jian Jun Zhang. Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image. In CVPR, 2020. 3
|
| 349 |
+
[52] OpenAI. Gpt-4 technical report, 2023. 2
|
| 350 |
+
[53] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. In ICLR, 2023. 2
|
| 351 |
+
[54] Shengyi Qian and David F Fouhey. Understanding 3d object interaction from a single image. In ICCV, 2023. 3, 4, 5, 6, 7
|
| 352 |
+
[55] Shengyi Qian, Linyi Jin, and David F. Fouhey. Associative3d: Volumetric reconstruction from sparse views. In ECCV, 2020. 6
|
| 353 |
+
[56] Shengyi Qian, Alexander Kirillov, Nikhila Ravi, Devendra Singh Chaplot, Justin Johnson, David F Fouhey, and Georgia Gkioxari. Recognizing scenes from novel viewpoints. arXiv preprint arXiv:2112.01520, 2021. 3
|
| 354 |
+
[57] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 2, 4
|
| 355 |
+
[58] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6
|
| 356 |
+
[59] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, 2021. 3, 4, 8
|
| 357 |
+
[60] Dandan Shan, Jiaqi Geng, Michelle Shu, and David Fouhey. Understanding human hands in contact at internet scale. In CVPR, 2020. 1
|
| 358 |
+
[61] Cheng Shi and Sibei Yang. Edadet: Open-vocabulary object detection using early dense alignment. In ICCV, 2023. 2
|
| 359 |
+
[62] Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. In Conference on Robot Learning, pages 785–799. PMLR, 2023. 3
|
| 360 |
+
[63] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 4
|
| 361 |
+
[64] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Ofa: Unifying architectures, tasks, and
|
| 362 |
+
|
| 363 |
+
modalities through a simple sequence-to-sequence learning framework. In ICML, 2022. 2
|
| 364 |
+
[65] Xiaolong Wang, David Fouhey, and Abhinav Gupta. Designing deep networks for surface normal estimation. In CVPR, 2015. 3
|
| 365 |
+
[66] Ruihai Wu, Chuanruo Ning, and Hao Dong. Learning foresightful dense visual affordance for deformable object manipulation. In ICCV, 2023. 3
|
| 366 |
+
[67] Xin Xu, Tianyi Xiong, Zheng Ding, and Zhuowen Tu. Masqclip for open-vocabulary universal image segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 887-898, 2023. 2, 4
|
| 367 |
+
[68] Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In ICRA, 2024. 2
|
| 368 |
+
[69] Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, and Zheng-Jun Zha. Grounding 3d object affordance from 2d interactions in images. In ICCV, 2023. 3
|
| 369 |
+
[70] Yufei Ye, Xueting Li, Abhinav Gupta, Shalini De Mello, Stan Birchfield, Jiaming Song, Shubham Tulsiani, and Sifei Liu. Affordance diffusion: Synthesizing hand-object interactions. In CVPR, 2023. 3
|
| 370 |
+
[71] Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John Turner, et al. Homerobot: Open-vocabulary mobile manipulation. arXiv preprint arXiv:2306.11565, 2023. 3
|
| 371 |
+
[72] Wei Yin, Yifan Liu, Chunhua Shen, and Youliang Yan. Enforcing geometric constraints of virtual normal for depth prediction. In ICCV, 2019. 3
|
| 372 |
+
[73] Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, and Chunhua Shen. Learning to recover 3d scene shape from a single image. In CVPR, 2021. 3
|
| 373 |
+
[74] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In CVPR, 2021. 3
|
| 374 |
+
[75] Yichi Zhang, Ziqiao Ma, Xiaofeng Gao, Suhaila Shakiah, Qiaozi Gao, and Joyce Chai. Groundhog: Grounding large language models to holistic segmentation. In CVPR, 2024. 2
|
| 375 |
+
[76] Brady Zhou, Philipp Krahenbuhl, and Vladlen Koltun. Does computer vision matter for action? Science Robotics, 4(30): eaaw6661, 2019. 3
|
| 376 |
+
[77] Gengze Zhou, Yicong Hong, and Qi Wu. Navgpt: Explicit reasoning in vision-and-language navigation with large language models. arXiv preprint arXiv:2305.16986, 2023. 2
|
| 377 |
+
|
| 378 |
+
# A. Metrics
|
| 379 |
+
|
| 380 |
+
In this section, we explain the metrics (KLD, SIM, and NSS) used to evaluate our model.
|
| 381 |
+
|
| 382 |
+
- Kullback-Leibler Divergence (KLD) measures the distribution difference between the predicted affordance map $(M)$ and the ground truth $(M^{\prime})$, which is given by
|
| 383 |
+
|
| 384 |
+
$$
|
| 385 |
+
\operatorname {K L D} \left(M, M ^ {\prime}\right) = \sum_ {i} M _ {i} ^ {\prime} \log \left(\epsilon + \frac {M _ {i} ^ {\prime}}{\epsilon + M _ {i}}\right), \tag {6}
|
| 386 |
+
$$
|
| 387 |
+
|
| 388 |
+
- Similarity (SIM), also called histogram intersection, measures the intersection between the predicted affordance map $(M)$ and the ground truth $(M')$. Its value ranges from 0 to 1. It is given by
|
| 389 |
+
|
| 390 |
+
$$
|
| 391 |
+
\operatorname {S I M} \left(M, M ^ {\prime}\right) = \sum_ {i} \min \left(M _ {i}, M _ {i} ^ {\prime}\right), \tag {7}
|
| 392 |
+
$$
|
| 393 |
+
|
| 394 |
+
where $\sum_{i}M_{i} = \sum_{i}M_{i}^{\prime} = 1$
|
| 395 |
+
|
| 396 |
+
- Normalized Scanpath Saliency (NSS) measures the correspondence between the prediction map $(M)$ and the ground truth $(M^{\prime})$ . It is given by
|
| 397 |
+
|
| 398 |
+
$$
|
| 399 |
+
\mathrm {N S S} \left(M, M ^ {\prime}\right) = \frac {1}{N} \sum_ {i} \hat {M} _ {i} \times M _ {i} ^ {\prime}, \tag {8}
|
| 400 |
+
$$
|
| 401 |
+
|
| 402 |
+
where $N = \sum_{i}M_{i}^{\prime}$ , $\hat{M} = \frac{M - \mu(M)}{\sigma(M)}$ . $\mu (M)$ and $\sigma (M)$ are the mean and standard deviation, respectively.
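For concreteness, the following NumPy sketch implements the three metrics as defined above. The function names, the $\epsilon$ value, and the assumption that the NSS ground truth is a binarized fixation map are our own choices and are not taken from the released evaluation code.

```python
import numpy as np

EPS = 1e-12  # small constant; the exact epsilon used by the paper is not specified here

def kld(pred, gt, eps=EPS):
    """Kullback-Leibler Divergence (Eq. 6); both maps are normalized to sum to 1."""
    pred = pred / (pred.sum() + eps)
    gt = gt / (gt.sum() + eps)
    return float(np.sum(gt * np.log(eps + gt / (eps + pred))))

def sim(pred, gt, eps=EPS):
    """Similarity / histogram intersection (Eq. 7); both maps normalized to sum to 1."""
    pred = pred / (pred.sum() + eps)
    gt = gt / (gt.sum() + eps)
    return float(np.sum(np.minimum(pred, gt)))

def nss(pred, gt_binary):
    """Normalized Scanpath Saliency (Eq. 8); gt_binary is assumed to be a {0,1} map."""
    pred_norm = (pred - pred.mean()) / (pred.std() + EPS)  # standardized prediction
    n = gt_binary.sum()                                    # N = sum of ground-truth values
    return float((pred_norm * gt_binary).sum() / (n + EPS))
```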
|
| 403 |
+
|
| 404 |
+
# B. Details of the Data Splits
|
| 405 |
+
|
| 406 |
+
In the easy split, we follow the object split of the original AGD20K Unseen setting [45]. The easy split has 33 object classes for training and 14 for testing. We have 1135/540 images for train and test with dense annotations for the fully supervised setting, or 13,323/540 images for the weakly supervised setting.
|
| 407 |
+
|
| 408 |
+
- Train classes: scissors, badminton racket, surfboard, frisbee, hot dog, tennis racket, hammer, microwave, oven, punching bag, carrot, snowboard, book, suitcase, skateboard, wine glass, keyboard, javelin, motorcycle, discus, bench, toothbrush, bottle, cell phone, chair, orange, rugby ball, couch, baseball, fork, bowl, apple, baseball bat.
|
| 409 |
+
- Test classes: camera, bed, bicycle, golf clubs, soccer ball, cup, laptop, banana, skis, knife, axe, broccoli, basketball, refrigerator.
|
| 410 |
+
|
| 411 |
+
In the hard split, we randomly assign around $50\%$ of the AGD20K object classes to the training set and the remaining classes to the test set to simulate in-the-wild generalization. The hard split has 28 object classes for training and 22 for testing. We have 868/807 images for train and test with dense annotations for the fully supervised setting, and 11,889/807 images for the weakly supervised setting.
|
| 412 |
+
|
| 413 |
+
- Training objects include carrot, cup, bowl, discus, book, camera, golf clubs, bottle, broccoli, binoculars, drum, baseball, apple, frisbee, cell phone, baseball bat, couch, hammer, bicycle, bench, fork, badminton racket, banana, hot dog, axe, bed, chair, basketball.
|
| 414 |
+
- Test objects include soccer ball, laptop, punching bag, oven, suitcase, javelin, wine glass, motorcycle, scissors, snowboard, keyboard, rugby ball, orange, surfboard, knife, skateboard, pen, microwave, skis, tennis racket, refrigerator, toothbrush.
|
2401.06xxx/2401.06341/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3954951bb08744d1059a5a384d11411cc6e9b2220a1910f44b1f36a4fb86f775
|
| 3 |
+
size 641817
|
2401.06xxx/2401.06341/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06373/764bf675-6eef-430c-9ccc-5757c8742bf5_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:01a995a66f9dff23af54faa8059ed3d511a5ffa4f5de9e948e8ac2b8024034f0
|
| 3 |
+
size 7411268
|
2401.06xxx/2401.06373/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06373/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ce73dbbde7ac982b1b604968507daa114a63ebccf2783c07fb09f3cd4daaeece
|
| 3 |
+
size 2626958
|
2401.06xxx/2401.06373/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06385/9b242d37-74d1-4099-99ae-b9f44be38186_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3a7a492f0233f4ee8c488c453c61998db209ebfea571ab8d639cc3406eed8367
|
| 3 |
+
size 4883599
|
2401.06xxx/2401.06385/full.md
ADDED
|
@@ -0,0 +1,582 @@
| 1 |
+
# SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM optimization
|
| 2 |
+
|
| 3 |
+
Zhenlong Yuan $^{1}$ , Jiakai Cao $^{1}$ , Zhaoxin Li $^{2,3*}$ , Hao Jiang $^{1}$ , Zhaoqi Wang $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Institute of Computing Technology, Chinese Academy of Sciences
|
| 6 |
+
|
| 7 |
+
$^{2}$ Agricultural Information Institute, Chinese Academy of Agricultural Sciences
|
| 8 |
+
|
| 9 |
+
$^{3}$ Key Laboratory of Agricultural Big Data, Ministry of Agriculture and Rural Affairs
|
| 10 |
+
|
| 11 |
+
yuanzhenlong21b@ict.ac.cn, caojiakai21@mails.ucas.ac.cn,
|
| 12 |
+
|
| 13 |
+
czli@hotmail.com, {jianghao, zqwang} @ict.ac.cn
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
In this paper, we introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS), a method that can effectively tackle challenges in 3D reconstruction of textureless areas. We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes and further leverage these constraints for pixelwise patch deformation on both matching cost and propagation. Concurrently, we propose a unique refinement strategy that combines spherical coordinates and gradient descent on normals and pixelwise search interval on depths, significantly improving the completeness of reconstructed 3D model. Furthermore, we adopt the Expectation-Maximization (EM) algorithm to alternately optimize the aggregate matching cost and hyperparameters, effectively mitigating the problem of parameters being excessively dependent on empirical tuning. Evaluations on the ETH3D high-resolution multi-view stereo benchmark and the Tanks and Temples dataset demonstrate that our method can achieve state-of-the-art results with less time consumption.
|
| 18 |
+
|
| 19 |
+
# Introduction
|
| 20 |
+
|
| 21 |
+
Multi-view stereo (MVS) is a technique that employs images to reconstruct 3D objects or scenes. Its application spans various fields, including autonomous driving (Orsingher et al. 2022), augmented reality (Cao et al. 2021), and robotics (Li, Gogia, and Kaess 2019).
|
| 22 |
+
|
| 23 |
+
Recently, PatchMatch-based methods (Schönberger et al. 2016; Xu and Tao 2019; Lee et al. 2021) exhibits remarkable capabilities in sub-pixel reconstruction for large-scale imagery while being reliable for unstructured image set. These methods typically initiate by computing the matching cost of fixed patches between images, then proceeding with propagation and refinement for accurate depth estimation. Nonetheless, they typically encounter difficulties in textureless areas where the absence of texture results in unreliable depth estimations. To address this issue, several techniques have been introduced, including plane prior (Xu and Tao 2020), superpixel-wise planarization (Romanoni and Matteucci 2019), epipolar geometry (Xu et al. 2020) and confidence-based interpolation (Li et al. 2020). Yet when
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
(a) APD-MVS
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
(b) SD-MVS
|
| 30 |
+
Figure 1: Comparative analysis of patch deformation strategies between APD-MVS and our approach. APD-MVS (a) selects green anchor pixels from pixels with similar colors but possibly inconsistent depths to help reconstruct the central red pixel, leading to potential inaccuracy. Conversely, our method (b) utilizes neighboring pixels inside the segmentation boundary for reconstruction.
|
| 31 |
+
|
| 32 |
+
facing large textureless areas, these methods perform unsatisfactorily and leave room for further improvement.
|
| 33 |
+
|
| 34 |
+
Differently, learning-based methods leverage networks to build learnable 3D cost volumes, thereby ameliorating the reconstruction quality. Several methods (Yao et al. 2019; Yan et al. 2020) attempt to employ the gated recurrent unit (GRU) to provide a more rational interpretation in reconstruction, while this often leads to unaffordable time and memory cost. Others (Su and Tao 2023) try to utilize a residual learning module to refine depth estimates by rectifying the upsampling errors. Yet, such networks typically lack generalization when facing scenes different from the training datasets, posing challenges for their practical application.
|
| 35 |
+
|
| 36 |
+
Edges in the color image are usually consistent with depth boundaries. Thus, edge information plays a pivotal role in both the computation of PatchMatch and the construction of 3D cost volumes. Nonetheless, problems like shadows and occlusions in complicated scenes tend to weaken the linkage between edges and depth boundaries. Consequently, several methods (Yuesong Wang et al. 2023) struggle to harness edge information effectively, often skipping edges and thus calculating regions with inconsistent depth, leading to detail distortion, as shown in Fig. 1. Additionally, certain superpixel segmentation approaches (Kuhn, Lin, and Erdler 2019) face challenges in precisely segmenting edges
|
| 37 |
+
|
| 38 |
+
and lack semantic information to broaden receptive field. Differently, as an instance segmentation model, the Segment Anything Model (SAM) (Kirillov et al. 2023) can subtly mitigate the aforementioned disturbances, thereby segmenting instances with different depths across diverse scenes.
|
| 39 |
+
|
| 40 |
+
Therefore, we introduce SD-MVS, a PatchMatch-based method that integrates SAM-based instance segmentation to better exploit edge information for patch deformation. Specifically, we first employ the instance segmentation results derived from SAM to adaptively deform the patches for matching cost and propagation, thereby accommodating the distinct characteristics of different pixels. Moreover, we employ a multi-scale matching cost and propagation scheme to extract diverse information, addressing the challenges posed by textureless areas. To optimize memory consumption, we introduce an architecture promoting multi-scale consistency in parallel, consequently reducing the program's runtime.
|
| 41 |
+
|
| 42 |
+
Moreover, we propose the spherical gradient refinement to optimize previous refinement strategies. Concerning normal refinement, we randomly select two orthogonal unit vectors perpendicular to the current normal for perturbation and incorporate gradient descent to further refine perturbation directions in subsequent rounds, thereby improving the accuracy of each hypothesis. Regarding depth refinement, we adopt a pixelwise search interval derived from the deformed patch for local perturbations.
|
| 43 |
+
|
| 44 |
+
Furthermore, we introduce an EM-based hyperparameter optimization to address the issue of empirical determination of hyperparameters in existing methods. By alternately optimizing the aggregated cost and the hyperparameters, we implement an excellent strategy for automatic parameter tuning, thereby facilitating a balanced consideration against diverse information. Evaluation results on the ETH3D and the Tanks and Temples benchmarks illustrate that our method surpasses the existing state-of-the-art (SOTA) methods.
|
| 45 |
+
|
| 46 |
+
In summary, our contributions are as follows:
|
| 47 |
+
|
| 48 |
+
- Based on SAM segmentation, we propose an adaptive patch deformation with multi-scale consistency on both matching cost and propagation to better utilize image edge information and reduce memory cost.
|
| 49 |
+
- We introduce the spherical gradient refinement, which leverages spherical coordinates and gradient descent on normals and employs pixelwise search interval to constrain depths, thereby enhancing search precision.
|
| 50 |
+
- We propose the EM-based hyperparameter optimization by adopting the EM algorithm to alternately optimize the aggregate cost and the hyperparameters.
|
| 51 |
+
|
| 52 |
+
# Related Work
|
| 53 |
+
|
| 54 |
+
Traditional MVS Methods Traditional Multi-View Stereo (MVS) algorithms can primarily be classified into four categories (Seitz et al. 2006): voxel-based methods (Vogiatzis et al. 2007), surface evolution-based methods (Cremers and Kolev 2011), patch-based methods (Bleyer, Rhemann, and Rother 2011), and depth-map based methods (Yao et al. 2019). Our methodology aligns with the last category, where depth maps are generated from images and
|
| 55 |
+
|
| 56 |
+
their corresponding camera parameters, further leading to point cloud construction via fusion. Within this category, PatchMatch-based methods are the most well-known subclass. Numerous innovative PatchMatch-based methods have been proposed and have achieved great enhancements in both accuracy and completeness. ACMM (Xu and Tao 2019) uses multi-view consistency and a cascading structure to tackle the reconstruction of textureless areas, while subsequent works such as ACMMP (Xu et al. 2022) further introduce a plane-prior probabilistic graph model and thus provide plane hypotheses for textureless areas. In contrast, TAPA-MVS (Romanoni and Matteucci 2019) and PCF-MVS (Kuhn, Lin, and Erdler 2019) employ superpixels for image segmentation and planarization of textureless areas. However, the reconstruction performance in textureless areas is contingent upon the actual segmentation and fitting of the superpixels. CLD-MVS (Li et al. 2020) incorporates a confidence estimator to interpolate unreliable pixels, but its definition of confidence makes the result susceptible to occlusion and highlights. MAR-MVS (Xu et al. 2020) leverages epipolar geometry to determine the optimal neighborhood images and scale for pixels, yet its fixed patch size limits its adaptability across various application scenarios. APD-MVS (Yuesong Wang et al. 2023) employs patches with an adaptive deformation strategy and a pyramid architecture, but the time consumption of its iterative process poses a challenge in large-scale datasets.
|
| 57 |
+
|
| 58 |
+
Learning-based MVS Methods Unlike traditional MVS methods that suffer from hand-crafted image features, learning-based MVS methods typically leverage convolutional neural networks to extract high-dimensional image features, thereby enabling a more rational 3D reconstruction. MVSNET (Yao et al. 2018) pioneered this direction by introducing differentiable 3D cost volumes built with deep neural networks, enabling numerous methods for further research. Certain classic multi-stage methods, including Cas-MVSNet (Gu et al. 2020), utilize a coarse-to-fine strategy to refine and upscale depth from low resolution, thereby reducing the cost volumes while expanding the receptive field. In terms of memory reduction, several methods like Iter-MVS (Wang et al. 2022a) leverage a GRU to regulate the 3D cost volumes along the depth direction. Concerning feature extraction, AA-RMVSNET (Wei et al. 2021) aggregates multi-scale variable convolutions for adaptive feature extraction. Additionally, MVSTER (Wang et al. 2022b) integrates the transformer architecture into MVS tasks to capture multi-dimensional attention feature information. Despite these advancements, it is worth noting that numerous learning-based MVS methods risk severe degradation when applied to target domains that deviate from the training set.
|
| 59 |
+
|
| 60 |
+
# Method
|
| 61 |
+
|
| 62 |
+
Given a series of input images $I = \{I_i | i = 1, \dots, N\}$, each with specific camera parameters $P_i = \{K_i, R_i, C_i\}$, our goal is to estimate the depth map $D_i$ for each image and subsequently merge them into a 3D point cloud. Fig. 2 illustrates our overall pipeline; the specific design of each component will be detailed in subsequent sections.
|
| 63 |
+
|
| 64 |
+

|
| 65 |
+
Figure 2: An illustrated pipeline of our proposed method. Multi-view images are initially downsampled and further allocated into our multi-scale architecture. Through leveraging the SAM-based segmentation, we carry out patch deformation on the matching cost to gain multi-scale matching costs $C_{ms}$. By integrating $C_{ms}$ with the projection color error $C_{pc}$ and the reprojection error $C_{rp}$, the aggregated cost is acquired. Then we again employ the SAM-based segmentation for patch deformation in propagation, succeeded by load-balancing within each search domain. Subsequently, we alternately iterate spherical gradient refinement on normals and the pixelwise search interval on depths for enhanced accuracy. Finally, we employ EM-based optimization for the hyperparameter tuning of $w_{ms}$, $w_{rp}$, $w_{pc}$ and reassign them for the next iteration procedure.
|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
Figure 3: Comparative analysis of patch deformation strategies between the SAM-based instance segmentation and the Canny edge detection on partial scenes of the ETH3D dataset (office and kicker). From top to bottom, (a), (b) and (c) respectively show the original images, the SAM-based segmentation results and the Canny edge detection results. Representative areas in red boxes illustrate the advantages of SAM-based segmentation over Canny edge detection.
|
| 69 |
+
|
| 70 |
+
# Why Using Segment Anything Model?
|
| 71 |
+
|
| 72 |
+
The Segment Anything Model (SAM) can effectively discriminate between different instances, extracting subtle edges while neglecting strong illumination disturbances. To validate its effectiveness, we compare SAM-based instance segmentation with Canny edge detection for patch deformation on partial scenes of the ETH3D dataset.
|
| 73 |
+
|
| 74 |
+
As shown in Fig. 3, when confronted with scenarios characterized by extensive similar colors and occlusion like office, SAM can effectively separate edges whose two sides exhibit similar colors but inconsistent depths, whereas Canny edge detection simply ignores them. Additionally, textureless areas like floors and walls in kicker can be effectively separated into different instances through SAM segmentation without illumination interference. In contrast, Canny edge detection incorrectly detects these illumination areas as edges, adversely affecting patch deformation.
|
| 75 |
+
|
| 76 |
+
# Segmentation-Driven Patch Deformation
|
| 77 |
+
|
| 78 |
+
Patch Deformation on Matching Cost Some recent methods (Wang et al. 2021; Yuesong Wang et al. 2023) attempt to leverage patch deformation to improve the matching cost or propagation scheme. As shown in Fig. 1, due to their insufficiency in exploiting edge information, they often cross boundaries and reference areas with discontinuous depths, thereby yielding unsatisfactory results, especially when confronted with scenarios characterized by extensive similar colors and occlusions like forests and farmlands. Simultaneously, superpixel-based segmentation approaches (Romanoni and Matteucci 2019) also struggle to precisely
|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
(a)
|
| 82 |
+
|
| 83 |
+

|
| 84 |
+
(b)
|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
(c)
|
| 88 |
+
Figure 4: Patch deformation on matching cost. (a) is the matching cost scheme from ACMMP, (b) shows the distance of each directions and (c) illustrates the deformed patch.
|
| 89 |
+
|
| 90 |
+
recognize certain critical edges within these scenarios. They also lack instance-level semantic information to broaden the receptive field and thus fail to adapt to pixelwise characteristics.
|
| 91 |
+
|
| 92 |
+
SAM segmentation can mitigate this issue as it separates different instances to extract subtle edge information while neglecting strong illumination disturbances. Consequently, we can leverage instance segmentation to better exploit edge information and introduce it into patch deformation. Specifically, we apply SAM-based instance segmentation, denoted as $\mathcal{F}$, to input image $I_{i}$ to generate masks for its instances. Hence we have $M = \mathcal{F}(I_i)$, where $M$ is an image mask whose size is consistent with $I_{i}$.
|
| 93 |
+
|
| 94 |
+
For each pixel $p$ , we compute the bilateral weighted adaption of normalized cross correlation score (NCC) (Schönberger et al. 2016) between reference images $I_{i}$ and source image $I_{j}$ , which can be calculated as follows:
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
\rho \left(p, \mathrm {W} _ {p} ^ {i}\right) = \frac {\operatorname {c o v} \left(\mathrm {W} _ {p} ^ {i} , \mathrm {W} _ {p} ^ {j}\right)}{\sqrt {\operatorname {c o v} \left(\mathrm {W} _ {p} ^ {i} , \mathrm {W} _ {p} ^ {i}\right) \operatorname {c o v} \left(\mathrm {W} _ {p} ^ {j} , \mathrm {W} _ {p} ^ {j}\right)}} \tag {1}
|
| 98 |
+
$$
|
| 99 |
+
|
| 100 |
+
where $cov$ is the weighted covariance, and $W_{p}^{i}$ and $W_{p}^{j}$ are respectively the corresponding image patches on images $I_{i}$ and $I_{j}$.
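As a point of reference, a plain (unweighted) NCC between two corresponding patches can be sketched as below; the actual method uses the bilaterally weighted covariance of Eq. 1, which this simplified version omits, and the function name is our own.

```python
import numpy as np

def ncc(patch_ref, patch_src, eps=1e-8):
    """Zero-mean normalized cross correlation between two equally sized patches."""
    a = patch_ref.astype(float) - patch_ref.mean()
    b = patch_src.astype(float) - patch_src.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```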
|
| 101 |
+
|
| 102 |
+
The goal of minimizing the matching cost is to obtain the optimal matching depths via the computation of color differences. However, when objects with varying depths exhibit similar colors, they are susceptible to generating matching inaccuracies, as shown in Fig. 4(a). Therefore, we introduce patch deformation to compute the matching cost when the sample patch W intersects different instances.
|
| 103 |
+
|
| 104 |
+
Specifically, we first measure the distances from the corresponding central pixel $p$ to the left, right, lower and upper boundaries of $M$ , denoted respectively as $d_{l}$ , $d_{r}$ , $d_{d}$ , and $d_{u}$ . Then we can deform the shape of $W$ to match these boundaries. The new shape of deformed patch can be defined as:
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\left[ \frac {d _ {l} + d _ {r}}{d _ {l} + d _ {r} + d _ {d} + d _ {u}} L, \frac {d _ {d} + d _ {u}}{d _ {l} + d _ {r} + d _ {d} + d _ {u}} L \right] \tag {2}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
where $L$ denotes the side length of the square patch before patch deformation. Additionally, we reposition the patch's center by adding an offset:
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\Delta o (p) = \left(\frac {d _ {l} - d _ {r}}{d _ {l} + d _ {r}} L _ {h}, \frac {d _ {d} - d _ {u}}{d _ {u} + d _ {d}} L _ {v}\right) \tag {3}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
where $L_{h}$ and $L_{v}$ are respectively the horizontal and vertical length of deformed patch. The new center of the sample patch now becomes $p + \Delta o(p)$ .
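A minimal sketch of Eq. 2 and Eq. 3 is given below, assuming the four boundary distances have already been measured on the SAM mask and that the pixel lies strictly inside its instance; the helper name and return format are illustrative only.

```python
def deform_patch(d_l, d_r, d_d, d_u, L):
    """Deformed patch extent (Eq. 2) and center offset (Eq. 3) for one pixel."""
    total = d_l + d_r + d_d + d_u
    L_h = (d_l + d_r) / total * L            # horizontal length of the deformed patch
    L_v = (d_d + d_u) / total * L            # vertical length of the deformed patch
    off_x = (d_l - d_r) / (d_l + d_r) * L_h  # horizontal component of delta o(p)
    off_y = (d_d - d_u) / (d_d + d_u) * L_v  # vertical component of delta o(p)
    return (L_h, L_v), (off_x, off_y)
```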
|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
(a)
|
| 120 |
+
Figure 5: Patch deformation on propagation. (a) is the propagation pattern of ACMMP, (b) depicts the length of each propagation branch, and (c) illustrates different search domains with different colors.
|
| 121 |
+
|
| 122 |
+

|
| 123 |
+
(b)
|
| 124 |
+
|
| 125 |
+

|
| 126 |
+
|
| 127 |
+

|
| 128 |
+
(c)
|
| 129 |
+
|
| 130 |
+

|
| 131 |
+
|
| 132 |
+
Both patch deformation and center offset allow pixels positioned at boundary regions to orient their patches more intensively towards the center of their own instance. Enhancing the receptive field over homogeneous pixels in this way can yield more robust results, consequently reducing potential estimation errors. Note that, considering the runtime, we restrict the number of calculations for each window such that the count after deformation never surpasses the initial total $(L / 2)^{2}$.
|
| 133 |
+
|
| 134 |
+
Patch Deformation on Propagation After SAM-based instance segmentation, pixels within the same instance typically exhibit similar depths, whereas noticeable depth discontinuities frequently arise at the boundaries between instances. Considering that propagation involves updating potential depths and normals within the surrounding area for each pixel, depth discontinuities will inevitably impact propagation. Consequently, we leverage patch deformation to adaptively alter the propagation scheme.
|
| 135 |
+
|
| 136 |
+
The adaptive checkerboard propagation scheme (Xu and Tao 2019) is conducted by introducing the optimal hypotheses from four near and four far search domains, as illustrated in Fig. 5 (a). However, its search domain between two adjacent diagonal directions is too dense, which leads to an imbalanced search-space density and a risk of selecting redundant values. Hence we modify its oblique directions into straight lines extending to the corners of each patch.
|
| 137 |
+
|
| 138 |
+
Subsequently, we propose patch deformation on propagation via SAM, which adjusts the propagation patch shape and direction for each pixel. As illustrated in Fig. 5 (b), we adapt the propagation directions according to the shape of the surrounding mask. Specifically, denoting $l_{l}, l_{r}, l_{d}$ , and $l_{u}$ as the length from the central pixel $p$ to the left, right, lower and upper edges of the patch, respectively, we obtain:
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
l _ {u} = \frac {d _ {u}}{d _ {u} + d _ {d}} L _ {v}, l _ {l} = \frac {d _ {l}}{d _ {l} + d _ {r}} L _ {h} \tag {4}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
Both $l_{r}$ and $l_{d}$ can be obtained similarly. Therefore, the length and direction of the slanted branch $l_{ul}$ are given by:
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
l _ {u l} = \sqrt {l _ {u} ^ {2} + l _ {l} ^ {2}}, \alpha_ {u r} = \arctan \left(\frac {l _ {u}}{l _ {l}}\right) \tag {5}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+
where $l_{ul}$ refers to the length of the up-right branch, and $\alpha_{ur}$ represents the angle between the upward branch and the up-right branch. The corresponding lengths and directions of the other three slanted branches can be obtained similarly.
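The branch geometry of Eq. 4 and Eq. 5 can be sketched as follows; the function is hypothetical, assumes the deformed patch extents $L_h$, $L_v$ and the mask distances are already available, and measures the angle relative to the horizontal branch following Eq. 5.

```python
import math

def slanted_branch(d_l, d_r, d_d, d_u, L_h, L_v):
    """Upward/leftward branch lengths (Eq. 4) and the adjacent slanted branch (Eq. 5)."""
    l_u = d_u / (d_u + d_d) * L_v
    l_l = d_l / (d_l + d_r) * L_h
    l_ul = math.hypot(l_u, l_l)        # length of the slanted branch
    alpha = math.atan2(l_u, l_l)       # angle between the slanted and horizontal branches
    return l_ul, alpha
```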
|
| 151 |
+
|
| 152 |
+

|
| 153 |
+
Figure 6: Different design architectures between ACMMP and our method. (a) illustrates the cascading network architectures employed in ACMMP, whereas (b) depicts our method with multi-scale architecture.
|
| 154 |
+
|
| 155 |
+
Having adjusted all directions and lengths, we encounter another challenge: the search domain of each branch is unbalanced. Since the process of selecting a pixel with the minimal cost is essentially a spatial neighborhood search, an imbalance will emerge due to the different lengths of the branches. The search along a shorter branch suffers from unreliable results due to its smaller search domain.
|
| 156 |
+
|
| 157 |
+
To address this, we accordingly modify the searching strategy in the propagation scheme, as shown in Fig. 5 (c). Specifically, we employ eight different colors to depict separate search domains on the eight directions centered on $p$. Instead of taking the central pixel $p$ as the dividing point, we use the midpoint of the sum of the lengths of two opposite branches to divide the search domain. In experiments, pixels with the same color are grouped into the same domain, with CUDA operators balancing the load of searching for minima within each color-specific region. Therefore, our proposed strategy ensures load balancing across all directions and allows for faster convergence.
|
| 158 |
+
|
| 159 |
+
Multi-scale Consistency Many conventional methods adopt cascading architectures by sequentially loading different scales of images into GPU, as shown in Fig. 6 (a). This may result in a time-consuming performance due to the limited transfer speed between CPU and GPU. Therefore, we draw inspiration from mipmap (Williams 1983) in computer graphics, a technique to load different scales of images in parallel at once, to replace the previous cascading architecture into our proposed parallel architecture.
|
| 160 |
+
|
| 161 |
+
Specifically, we first perform image downsampling in the CPU. Subsequently, multi-scale images are assembled and loaded together into the GPU, as depicted in Fig. 6 (b). Then
|
| 162 |
+
|
| 163 |
+
multi-scale images are processed together through matching cost, propagation and refinement in the GPU. Finally, all predicted depth images are transferred back into the CPU. Denoting the maximum memory consumption of ACMMP cascading architectures as $\sigma$ , and the number of memory read operations as $k$ , this technique enables us to load all scales of images in the GPU memory at a reasonable cost of $\frac{4}{3}\sigma$ instead of sequentially loading images, thereby eliminating the need for $k - 1$ additional memory read operations.
|
| 164 |
+
|
| 165 |
+
Based on this architecture, we further introduce multiscale consistency on matching cost and propagation. Regarding matching cost, we first apply SAM segmentation on the $k$ -th level downsampled image. Based on segmentation results, we construct deformed patch and further compute $k$ -th level matching cost, denoted as $c_{k}$ . Therefore, the multiscale matching cost is given by:
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
C _ {m s} = \frac {\sum_ {k} c _ {k}}{k} \tag {6}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
Concerning propagation, multi-scale consistency aggregates the search domains of all scales in each direction, yielding a total of eight distinct search domains. Finally, the lowest-cost values within each domain, eight in total, are chosen as new hypotheses for further computation.
|
| 172 |
+
|
| 173 |
+
Aggregated Cost During the patch-matching phase, we consider not only the multi-scale matching cost $C_{ms}$ , but also the reprojection error $C_{rp}$ and the projection color gradient error $C_{pc}$ . $C_{rp}$ proposed in ACMMP validates depth estimation from geometric consistency. $C_{pc}$ measures color consistency between current pixel $p_i$ in reference image $I_i$ and its corresponding pixel $p_j$ in source images $I_j$ :
|
| 174 |
+
|
| 175 |
+
$$
|
| 176 |
+
C _ {p c} = \max \left\{\| \nabla I _ {j} (p _ {j}) - \nabla I _ {i} (p _ {i}) \|, \tau \right\} \tag {7}
|
| 177 |
+
$$
|
| 178 |
+
|
| 179 |
+
where $\nabla$ represents the Laplacian operator, $p_j$ denotes the pixel in image $I_{j}$ projected from pixel $p_i$ in $I_{i}$, and $\tau$ is a truncation threshold that robustifies the cost against outliers. With these terms, the aggregated cost $C_{ag}$ is given by:
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
C _ {a g} = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c} \tag {8}
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
where $w_{ms}$ , $w_{rp}$ , and $w_{pc}$ respectively represent the aggregation weights of each component.
|
| 186 |
+
|
| 187 |
+
# Spherical Gradient Refinement
|
| 188 |
+
|
| 189 |
+
Two types of refinement strategies are adopted in ACMMP: 1. Local perturbations, a local search conducted by perturbing the current depth and normal with a small value; 2. Random selection, which achieves a global search to suit potential depth discontinuities by assigning a random value. Since the edge information has already been segmented out through SAM, we only need to consider local perturbations. Given depth $d$ and normal $n = (n_x, n_y, n_z)$ in Cartesian coordinates, the new depth $d'$ and normal $n'$ after the local perturbation can be defined by:
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
\left\{ \begin{array}{l} d ^ {\prime} \leftarrow d + \delta_ {d} \\ n ^ {\prime} \leftarrow \mathbf {V N} \left(n _ {x} + \delta_ {x}, n _ {y} + \delta_ {y}, n _ {z} + \delta_ {z}\right) \end{array} \right. \tag {9}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
where $\mathbf{VN}$ is a normalization function ensuring $\| n^{\prime}\| = 1$ and $\delta$ denotes a random value chosen from a fixed interval.
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
(a)
|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
(b)
|
| 202 |
+
|
| 203 |
+

|
| 204 |
+
(c)
|
| 205 |
+
Figure 7: Spherical Gradient Refinement Procedure. (a) illustrates the rotation from $n$ to $n'$ , (b) illustrates the rotation from $n'$ to $n''$ . (c) respectively indicates two old and new orthogonal perturbation directions $e_1, e_2$ and $e_1', e_2'$ .
|
| 206 |
+
|
| 207 |
+
However, this strategy is incompatible with the definition of a normal. It introduces a higher sensitivity to axes with smaller values during the search process, resulting in an unequal ratio of change along the $xyz$ axes. Therefore, we propose the spherical gradient refinement, which utilizes a structured representation to converge to more accurate hypotheses.
|
| 208 |
+
|
| 209 |
+
Spherical Coordinate As shown in Fig. 7, given the normalized normal, we first randomly choose two orthogonal vectors, $e_1$ and $e_2$ , perpendicular to the normal $n$ as the perturbation direction. We then use the angles $\theta_1$ and $\theta_2$ as the degree of rotation for iterative refinement. The normal first undergoes a counterclockwise rotation by $\theta_1$ degrees around $e_1$ as the rotation axis. Subsequently, the normal is further rotated counterclockwise by $\theta_2$ degrees around $e_2$ as the rotation axis. According to Rodrigues' rotation formula, the ultimate updated normal $n''$ is given by:
|
| 210 |
+
|
| 211 |
+
$$
|
| 212 |
+
\left\{ \begin{array}{l} n ^ {\prime} = \cos \theta_ {1} \cdot n + \sin \theta_ {1} \left(e _ {1} \times n\right) \\ n ^ {\prime \prime} = \cos \theta_ {2} \cdot n ^ {\prime} + \sin \theta_ {2} \left(e _ {2} \times n ^ {\prime}\right) \end{array} \right. \tag {10}
|
| 213 |
+
$$
|
| 214 |
+
|
| 215 |
+
This is analogous to sliding a vertex directed by the normal on the surface of a sphere, which ensures the preservation of normalization for the normal vector both before and after rotation. By finding two orthogonal bases perpendicular to the normal for refinement, it can be ensured that perturbations in each direction are equivalent. This approach aligns more closely with the geometric essence of the normal, which is defined on a sphere rather than individual axes in the $xyz$ coordinate system. As a result, our approach boosts the robustness and stability during the refinement process.
|
| 216 |
+
|
| 217 |
+
Gradient Descent We also utilize gradient descent in our method. The primary merit of gradient descent lies in its ability to logically restrict the search space to the vicinity of probable solutions. Denoting the number of total iterations as $N_{max}$ , the rotation angle $\theta$ for the $i$ -th round is randomly selected from range $[0, 5 * 2^{N_{max} - i}]$ . After one round of refinement for depth $d$ and normal $n$ , we determine the new direction for local perturbations $e_1'$ and $e_2'$ based on the result of the previous search. As such, we get:
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
\left\{ \begin{array}{l} e _ {1} ^ {\prime} \leftarrow n ^ {\prime \prime} - n \\ e _ {2} ^ {\prime} \leftarrow e _ {1} ^ {\prime} \times n ^ {\prime \prime} \end{array} \right. \tag {11}
|
| 221 |
+
$$
|
| 222 |
+
|
| 223 |
+
Here, $e_1'$ is aligned with the vector sum of the previous round's perturbation, while $e_2'$ is a vector perpendicular to
|
| 224 |
+
|
| 225 |
+
both $n''$ and $e_1'$, as shown in Fig. 7(c). The primary merit of gradient descent lies in its ability to restrict the search domain to neighbourhood solutions. Each round of search takes place on the orthogonal plane defined by the previous search direction and the current normal direction, thereby enabling faster convergence to the optimal solution.
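A compact NumPy sketch of one refinement round, following Eq. 10 and Eq. 11 as written (angles in radians), is shown below. The function names are ours, and the cost evaluation that decides whether to accept the perturbed hypothesis is omitted.

```python
import numpy as np

def rotate(n, axis, theta):
    """Rotate unit normal n about a perturbation axis by theta (Eq. 10 as written)."""
    return np.cos(theta) * n + np.sin(theta) * np.cross(axis, n)

def refine_step(n, e1, e2, theta1, theta2):
    """One spherical refinement step; returns the new normal and the next
    perturbation directions e1', e2' from Eq. 11."""
    n1 = rotate(n, e1, theta1)
    n2 = rotate(n1, e2, theta2)
    n2 = n2 / np.linalg.norm(n2)                        # keep the normal normalized
    e1_new = n2 - n                                     # direction of the previous move
    e1_new = e1_new / (np.linalg.norm(e1_new) + 1e-12)
    e2_new = np.cross(e1_new, n2)                       # orthogonal complement
    e2_new = e2_new / (np.linalg.norm(e2_new) + 1e-12)
    return n2, e1_new, e2_new
```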
|
| 226 |
+
|
| 227 |
+
Pixelwise Depth Interval Search ACMMP employs a fixed interval for local perturbations on depth, yet a static perturbation range cannot adapt well to locally varying scene depth. To address this, we introduce a pixelwise depth search interval chosen within the deformed patch.
|
| 228 |
+
|
| 229 |
+
Specifically, for each pixel, we extract the depth values of all pixels encompassed by its deformed patch, and choose the maximal and minimal values from this set as the depth boundaries for perturbations. Additionally, considering our iterative refinement strategy, during the $i$-th iteration the pixelwise search interval is chosen within the deformed patch obtained from the $i$-th downsampled image, thereby narrowing the perturbation interval to yield more accurate hypotheses.
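A simple sketch of the pixelwise interval, assuming the deformed patch is represented as an axis-aligned window around the shifted center (argument names are illustrative):

```python
import numpy as np

def depth_interval(depth_map, cx, cy, L_h, L_v):
    """Min/max depth inside the deformed patch, used as the perturbation bounds."""
    h, w = depth_map.shape
    x0, x1 = max(0, int(cx - L_h / 2)), min(w, int(cx + L_h / 2) + 1)
    y0, y1 = max(0, int(cy - L_v / 2)), min(h, int(cy + L_v / 2) + 1)
    window = depth_map[y0:y1, x0:x1]
    return float(window.min()), float(window.max())
```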
|
| 230 |
+
|
| 231 |
+
# EM-based Hyperparameters Optimization
|
| 232 |
+
|
| 233 |
+
While computing the aggregated matching cost, the hyperparameters of each component are typically determined empirically, which may result in suboptimal outcomes for different scenes. To mitigate this, we leverage the Expectation-Maximization (EM) algorithm to alternately optimize the hyperparameters and the aggregated cost, thereby enhancing both the robustness and effectiveness of our method.
|
| 234 |
+
|
| 235 |
+
E-Step: Optimize $C_{ag}$ By fixing $w_{ms}$ , $w_{rp}$ , and $w_{pc}$ , we can optimize the aggregated cost $C_{ag}$ , formulated as:
|
| 236 |
+
|
| 237 |
+
$$
|
| 238 |
+
\min _ {C _ {m s}, C _ {r p}, C _ {p c}} C _ {a g} = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c} \tag {12}
|
| 239 |
+
$$
|
| 240 |
+
|
| 241 |
+
After optimization, we can get the optimal depth estimation under current hyperparameters.
|
| 242 |
+
|
| 243 |
+
M-Step: Optimize $w_{ms}, w_{rp}, w_{pc}$ By fixing $C_{ms}, C_{rp}$ and $C_{pc}$ , we can optimize $w_{ms}, w_{rp}$ and $w_{pc}$ , defined by:
|
| 244 |
+
|
| 245 |
+
$$
|
| 246 |
+
\min _ {w _ {m s}, w _ {r p}, w _ {p c}} C _ {a g} = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c},
|
| 247 |
+
$$
|
| 248 |
+
|
| 249 |
+
$$
|
| 250 |
+
\begin{array}{l} s. t. \quad w _ {m s} + w _ {r p} + w _ {p c} = 1, \tag {13} \\ w _ {m s}, w _ {r p}, w _ {p c} > \eta \\ \end{array}
|
| 251 |
+
$$
|
| 252 |
+
|
| 253 |
+
All hyperparameters are required to exceed a minimal value $\eta$, and we implement a normalization constraint ensuring that their sum equals 1 to mitigate significant variances. Following the E-step optimization, we can alternately optimize the hyperparameters and feed them back into the E-step for the next round of aggregated-cost optimization.
|
| 254 |
+
|
| 255 |
+
Since it may be challenging to obtain the analytical solution to the optimization problem in M-step, we will use numerical optimization methods such as Newton's method (Qi and Sun 1993) to obtain the optimal solutions for $w_{ms}$ , $w_{rp}$ , and $w_{pc}$ . A comprehensive formula derivation of the optimization can be found in supplementary material.
|
| 256 |
+
|
| 257 |
+
In practice, some pixels may carry depth estimation errors if all pixels are used. Hence, we only select pixels whose SIFT features can be matched across different images, and then calculate the aggregated cost over the pixels corresponding to these features.
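As one way to realize the M-step numerically, the sketch below solves Eq. 13 with SciPy's SLSQP solver over the averaged costs of the SIFT-matched pixels. SLSQP is a stand-in for the Newton-type solver mentioned above, the strict inequality $w > \eta$ is relaxed to a bound, and the function name is our own.

```python
import numpy as np
from scipy.optimize import minimize

def m_step(avg_costs, eta=0.1):
    """Solve min_w w.C s.t. sum(w) = 1 and w_i >= eta, for w = (w_ms, w_rp, w_pc).
    avg_costs holds the averaged (C_ms, C_rp, C_pc) over SIFT-matched pixels."""
    c = np.asarray(avg_costs, dtype=float)
    res = minimize(
        lambda w: float(w @ c),                 # linear aggregated cost in the weights
        x0=np.full(3, 1.0 / 3.0),               # feasible starting point
        method="SLSQP",
        bounds=[(eta, 1.0)] * 3,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x                                 # new (w_ms, w_rp, w_pc)
```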
|
| 258 |
+
|
| 259 |
+

|
| 260 |
+
Figure 8: An illustration of the qualitative results on partial scenes of ETH3D datasets (office, old computer, and pipes). Some challenging areas are shown in red boxes. It is obvious that our methods outperform others, especially in large textureless areas.
|
| 261 |
+
|
| 262 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">Train</td><td colspan="3">Test</td></tr><tr><td>Acc.</td><td>Comp.</td><td>F1</td><td>Acc.</td><td>Comp.</td><td>F1</td></tr><tr><td>PatchMatchNet</td><td>64.81</td><td>65.43</td><td>64.21</td><td>69.71</td><td>77.46</td><td>73.12</td></tr><tr><td>IterMVS-LS</td><td>79.79</td><td>66.08</td><td>71.69</td><td>84.73</td><td>76.49</td><td>80.06</td></tr><tr><td>MVSTER</td><td>68.08</td><td>76.92</td><td>72.06</td><td>77.09</td><td>82.47</td><td>79.01</td></tr><tr><td>EPP-MVSNet</td><td>82.76</td><td>67.58</td><td>74.00</td><td>85.47</td><td>81.79</td><td>83.40</td></tr><tr><td>EPNet</td><td>79.36</td><td>79.28</td><td>79.08</td><td>80.37</td><td>87.84</td><td>83.72</td></tr><tr><td>COLMAP</td><td>91.85</td><td>55.13</td><td>67.66</td><td>91.97</td><td>62.98</td><td>73.01</td></tr><tr><td>PCF-MVS</td><td>84.11</td><td>75.73</td><td>79.42</td><td>82.15</td><td>79.29</td><td>80.38</td></tr><tr><td>MAR-MVS</td><td>81.98</td><td>77.19</td><td>79.21</td><td>80.24</td><td>84.18</td><td>81.84</td></tr><tr><td>ACMP</td><td>90.12</td><td>72.15</td><td>79.79</td><td>90.54</td><td>75.58</td><td>81.51</td></tr><tr><td>ACMMP</td><td>90.63</td><td>77.61</td><td>83.42</td><td>91.91</td><td>81.49</td><td>85.89</td></tr><tr><td>APD-MVS</td><td>89.14</td><td>84.83</td><td>86.84</td><td>89.54</td><td>85.93</td><td>87.44</td></tr><tr><td>SD-MVS (ours)</td><td>89.63</td><td>84.52</td><td>86.94</td><td>88.96</td><td>87.49</td><td>88.06</td></tr></table>
|
| 263 |
+
|
| 264 |
+
# Experiments
|
| 265 |
+
|
| 266 |
+
# Datasets and Implementation Details
|
| 267 |
+
|
| 268 |
+
We evaluate our work on both ETH3D high-resolution benchmark (Schöps et al. 2017) and Tanks and Temples benchmark (TNT) (Knapitsch et al. 2017). We compare our work against state-of-the-art learning-based methods including PatchMatchNet (Wang et al. 2021), IterMVS-LS (Wang et al. 2022a), MVSTER (Wang et al. 2022b), EPP-MVSNet (Ma et al. 2021), EPNet (Su and Tao 2023) and traditional MVS methods including COLMAP (Schönberger et al. 2016), PCF-MVS (Kuhn, Lin, and Erdler 2019), MARMVS (Xu et al. 2020), ACMP (Xu and Tao 2020), ACMMP (Xu et al. 2022) and APD-MVS (Yuesong Wang et al. 2023).
|
| 269 |
+
|
| 270 |
+
Note that experiments are carried out on images downsampled to half of the original resolution for ETH3D, and on original images for TNT. Concerning parameter settings, $\{w_{ms}, w_{rp}, w_{pc}, L, k, \tau, N_{max}, \eta\} = \{1, 0.2, 0.2, 11, 3, 2, 3, 0.1\}$. In cost calculation, we adopt the matching strategy of sampling every other row and column.
|
| 271 |
+
|
| 272 |
+
Our method is implemented on a system equipped with an Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz and an NVIDIA GeForce RTX 3080 graphics card. We take ACMP
|
| 273 |
+
|
| 274 |
+
Table 1: Quantitative results on ETH3D benchmark at threshold $2cm$ . Our method accomplishes the best $\mathbf{F}_1$ score.
|
| 275 |
+
|
| 276 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">Intermediate</td><td colspan="3">Advanced</td></tr><tr><td>Pre.</td><td>Rec.</td><td>F1</td><td>Pre.</td><td>Rec.</td><td>F1</td></tr><tr><td>PatchMatchNet</td><td>43.64</td><td>69.37</td><td>53.15</td><td>27.27</td><td>41.66</td><td>32.31</td></tr><tr><td>CasMVSNet</td><td>47.62</td><td>74.01</td><td>56.84</td><td>29.68</td><td>35.24</td><td>31.12</td></tr><tr><td>IterMVS-LS</td><td>47.53</td><td>74.69</td><td>56.94</td><td>28.70</td><td>44.19</td><td>34.17</td></tr><tr><td>MVSTER</td><td>50.17</td><td>77.50</td><td>60.92</td><td>33.23</td><td>45.90</td><td>37.53</td></tr><tr><td>EPP-MVSNet</td><td>53.09</td><td>75.58</td><td>61.68</td><td>40.09</td><td>34.63</td><td>35.72</td></tr><tr><td>EPNet</td><td>57.01</td><td>72.57</td><td>63.68</td><td>34.26</td><td>50.54</td><td>40.52</td></tr><tr><td>COLMAP</td><td>43.16</td><td>44.48</td><td>42.14</td><td>31.57</td><td>23.96</td><td>27.24</td></tr><tr><td>PCF-MVS</td><td>49.82</td><td>65.68</td><td>55.88</td><td>34.52</td><td>35.36</td><td>35.69</td></tr><tr><td>ACMP</td><td>49.06</td><td>73.58</td><td>58.41</td><td>34.57</td><td>42.48</td><td>37.44</td></tr><tr><td>ACMMP</td><td>53.28</td><td>68.50</td><td>59.38</td><td>33.79</td><td>44.64</td><td>37.84</td></tr><tr><td>APD-MVS</td><td>55.58</td><td>75.06</td><td>63.64</td><td>33.77</td><td>49.41</td><td>39.91</td></tr><tr><td>SD-MVS (ours)</td><td>53.78</td><td>77.63</td><td>63.31</td><td>35.53</td><td>47.37</td><td>40.18</td></tr></table>
|
| 277 |
+
|
| 278 |
+
Table 2: Quantitative results on TNT dataset. Our method accomplishes competitive $\mathrm{F}_{1}$ score with SOTA methods.
|
| 279 |
+
|
| 280 |
+
(Xu and Tao 2020) as the backbone of our method.
|
| 281 |
+
|
| 282 |
+
# Results on ETH3D and TNT
|
| 283 |
+
|
| 284 |
+
Qualitative results on ETH3D are illustrated in Fig. 8. It is obvious that our method reconstructs the most comprehensive results, especially in large textureless areas like floors, walls and doors, without introducing conspicuous detail distortion. More qualitative results on the ETH3D and TNT benchmarks can be found in the supplementary material.
|
| 285 |
+
|
| 286 |
+
Tab. 1 and Tab. 2 respectively present quantitative results on the ETH3D and the TNT benchmark. Note that the first group is learning-based methods and the second is traditional methods. Meanwhile, the best results are marked in bold while the second-best results are underlined. Our method achieves the highest $\mathrm{F}_1$ score on the ETH3D dataset, giving rise to state-of-the-art performance. Meanwhile, our method achieves results competitive with SOTA methods such as EPNet and APD-MVS on the TNT dataset, falling short by less than $0.5\%$ in $\mathrm{F}_1$ score. Especially, our method shows significant improvement in completeness on both datasets, demonstrating its robustness in recovering textureless areas.
|
| 287 |
+
|
| 288 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">2cm</td><td colspan="3">10cm</td></tr><tr><td>Acc.</td><td>Comp.</td><td>F1</td><td>Acc.</td><td>Comp.</td><td>F1</td></tr><tr><td>w/. ACM. Cost</td><td>90.16</td><td>74.61</td><td>81.27</td><td>98.01</td><td>89.04</td><td>93.16</td></tr><tr><td>w/o. Adp. Cost</td><td>89.92</td><td>78.01</td><td>83.42</td><td>97.92</td><td>91.87</td><td>94.71</td></tr><tr><td>w/o. Mul. Cost</td><td>89.84</td><td>79.94</td><td>84.55</td><td>97.9</td><td>93.36</td><td>95.53</td></tr><tr><td>w/. ACM. Pro.</td><td>89.83</td><td>79.96</td><td>84.52</td><td>97.91</td><td>93.58</td><td>95.54</td></tr><tr><td>w/o. Adp. Pro.</td><td>89.57</td><td>81.74</td><td>85.38</td><td>97.81</td><td>94.96</td><td>96.29</td></tr><tr><td>w/o. Mul. Pro.</td><td>89.69</td><td>81.97</td><td>85.54</td><td>97.87</td><td>95.17</td><td>96.44</td></tr><tr><td>w/o. Ref.</td><td>86.75</td><td>70.45</td><td>77.6</td><td>97.04</td><td>85.37</td><td>90.72</td></tr><tr><td>w/. Gip. Ref.</td><td>89.3</td><td>78.51</td><td>83.43</td><td>97.74</td><td>91.56</td><td>94.48</td></tr><tr><td>w/. ACM. Ref.</td><td>89.42</td><td>79.83</td><td>84.25</td><td>97.79</td><td>92.64</td><td>95.11</td></tr><tr><td>w/o. EM A</td><td>89.74</td><td>78.16</td><td>83.45</td><td>97.89</td><td>91.78</td><td>94.57</td></tr><tr><td>w/o. EM B</td><td>89.45</td><td>79.87</td><td>84.27</td><td>97.81</td><td>93.05</td><td>95.3</td></tr><tr><td>SD-MVS</td><td>89.63</td><td>84.52</td><td>86.94</td><td>97.85</td><td>96.74</td><td>97.28</td></tr></table>
|
| 289 |
+
|
| 290 |
+
Table 3: Quantitative results of the ablation studies on ETH3D benchmark to validate each proposed component.
|
| 291 |
+
|
| 292 |
+
# Memory and Runtime Comparison
|
| 293 |
+
|
| 294 |
+
To demonstrate the efficiency of our method, we compare both GPU memory usage and runtime among various methods on the ETH3D training datasets, as depicted in Fig. 9. Note that all experiments are executed on original images whose number has been standardized to 10 across all scenes. Moreover, to exclude the impact of unrelated variables, all methods are run on the same system, whose hardware configuration was specified in the previous section.
|
| 295 |
+
|
| 296 |
+
Concerning learning-based methods, while IterMVS-LS exhibits the shortest runtime, its memory overhead exceeds the maximum capacity of mainstream GPUs. Other state-of-the-art (SOTA) learning-based methods also suffer from excessive memory consumption, making them impractical for the reconstruction of large-scale outdoor scenarios.
|
| 297 |
+
|
| 298 |
+
Although SD-MVS consumes approximately one-third more memory than traditional SOTA methods like APD-MVS and ACMMP, our runtime is only half of theirs, thanks to our multi-scale consistency architecture. Therefore, our method strikes the optimal balance between time and memory usage without sacrificing performance, demonstrating its effectiveness and practicality.
|
| 299 |
+
|
| 300 |
+
# Ablation Studies
|
| 301 |
+
|
| 302 |
+
We validate the rationale behind the design of each part of our method through ablation studies, as shown in Tab. 3.
|
| 303 |
+
|
| 304 |
+
Matching Cost with Adaptive Patch In terms of matching cost, we respectively remove patch deformation (w/o. Adp. Cost), multi-scale consistency (w/o. Mul. Cost) and both of them (w/. ACM. Cost). Since w/. ACM. Cost has neither patch deformation nor multi-scale consistency, it produces the worst results. w/o. Mul. Cost slightly outperforms w/o. Adp. Cost, yet both are inferior to SD-MVS, implying that patch deformation contributes more than multi-scale consistency.
|
| 305 |
+
|
| 306 |
+
Adaptive Propagation with Load-balancing In terms of propagation, we respectively remove patch deformation (w/o. Adp. Pro.), multi-scale consistency (w/o. Mul. Pro.) and apply the propagation scheme from ACMMP (w/. ACM. Pro.). Given that propagation patches in ACMMP do not deform, its performance fell short of expectations. Both w/o. Adp. Pro. and w/o. Mul. Pro. delivered
|
| 307 |
+
|
| 308 |
+

|
| 309 |
+
Figure 9: GPU memory usage (GB) and runtime (second) between different methods on ETH3D training datasets.
|
| 310 |
+
|
| 311 |
+
similar results, yet fell short in comparison to SD-MVS, indicating that both patch deformation and multi-scale consistency on propagation are equally crucial.
|
| 312 |
+
|
| 313 |
+
Spherical Gradient Refinement In terms of refinement, we respectively remove refinement (w/o. Ref.), exchange the refinement module into Gipuma (Galliani, Lasinger, and Schindler 2015) (w/. Gip. Ref.) and switch the refinement module into ACMMP (w/. ACM. Ref.). As observed, the absence of refinement significantly diminishes the results. However, introducing Gipuma refinement brings about noticeable progress, with further advancements achieved after adopting ACMMP refinement. Nonetheless, both refinement methods are worse than SD-MVS, proving the necessity of spherical gradient refinement.
|
| 314 |
+
|
| 315 |
+
EM-based Hyperparameters Optimization We conduct two experiments (w/o. EM A and w/o. EM B) by removing EM-based Optimization and respectively setting $(w_{ms},w_{rp},w_{pc})$ to $(1,0.5,0.5)$ and $(1,0.2,0.2)$ . The results highlight the impact of hyperparameter settings on the final results. Furthermore, their inferior performances compared to SD-MVS evidences the importance of automatic parameter tuning by the proposed EM-based Optimization.
|
| 316 |
+
|
| 317 |
+
# Conclusion
|
| 318 |
+
|
| 319 |
+
In this paper, we presented SD-MVS, a novel MVS method designed to effectively address challenges posed by textureless areas. The proposed method consists of an adaptive patch deformation with multi-scale consistency, a spherical gradient refinement and an EM-based hyperparameter optimization. Our method has achieved state-of-the-art performance on the ETH3D high-resolution benchmark, while being memory-friendly and less time-consuming. In the future, we will tackle the difficulty of highlighted areas in matching cost and the view selection strategy in pursuit of superior performance.
|
| 320 |
+
|
| 321 |
+
# Acknowledgements
|
| 322 |
+
|
| 323 |
+
This work was supported by the National Natural Science Foundation of China under Grant 62172392, the Central Public-interest Scientific Institution Basal Research Funds(No. Y2022QC17) and the Innovation Research Program of ICT CAS (E261070).
|
| 324 |
+
|
| 325 |
+
# Supplementary Material
|
| 326 |
+
|
| 327 |
+
# EM-based Hyperparameters Optimization
|
| 328 |
+
|
| 329 |
+
The optimization process contains two parts: 1.E-step: Optimize $C_{ag}$ ; 2.M-Step: Optimize $w_{ms}$ , $w_{rp}$ , and $w_{pc}$ . Here, we present a comprehensive derivation for the optimization problem proposed in the M-step.
|
| 330 |
+
|
| 331 |
+
The optimization problem in the M-step is defined by:
|
| 332 |
+
|
| 333 |
+
$$
|
| 334 |
+
\begin{array}{l} \min _ {w _ {m s}, w _ {r p}, w _ {p c}} C _ {a g} = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c}, \\ s. t. \quad w _ {m s} + w _ {r p} + w _ {p c} = 1, \tag {14} \\ w _ {m s}, w _ {r p}, w _ {p c} > \eta \\ \end{array}
|
| 335 |
+
$$
|
| 336 |
+
|
| 337 |
+
Since this optimization problem contains both equality and inequality constraints, it can be solved by utilizing the Karush-Kuhn-Tucker conditions (KKT conditions) (Karush 2014). Specifically, we first reshape the problem so that it aligns with the KKT conditions:
|
| 338 |
+
|
| 339 |
+
$$
|
| 340 |
+
\begin{array}{l} \min_{w_{ms}, w_{rp}, w_{pc}} C_{ag} = w_{ms} C_{ms} + w_{rp} C_{rp} + w_{pc} C_{pc}, \\ s.t. \quad w_{ms} + w_{rp} + w_{pc} = 1, \tag{15} \\ \qquad\; -w_{ms}, -w_{rp}, -w_{pc} < -\eta \\ \end{array}
|
| 341 |
+
$$
|
| 342 |
+
|
| 343 |
+
Since it is not a convex optimization problem, we subsequently construct the Lagrange dual function to derive the following convex problem:
|
| 344 |
+
|
| 345 |
+
$$
|
| 346 |
+
\begin{array}{l} L (W, \mu , \{\lambda_ {i} \}) = C _ {a g} (W) + \mu h (W) + \sum_ {i = 1, 2, 3} \lambda_ {i} g _ {i} (W) \\ = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c} + \mu \left(w _ {m s} + w _ {r p} + w _ {p c} - 1\right) \\ + \lambda_ {1} (- w _ {m s} + \eta) + \lambda_ {2} (- w _ {r p} + \eta) + \lambda_ {3} (- w _ {p c} + \eta) \tag {16} \\ \end{array}
|
| 347 |
+
$$
|
| 348 |
+
|
| 349 |
+
where $W = \{w_{ms}, w_{rp}, w_{pc}\}$ . Therefore, our objective becomes acquiring the infimum of this Lagrange dual function. Then we decompose the above equation as follows:
|
| 350 |
+
|
| 351 |
+
$$
|
| 352 |
+
C _ {a g} (W) = w _ {m s} C _ {m s} + w _ {r p} C _ {r p} + w _ {p c} C _ {p c}
|
| 353 |
+
$$
|
| 354 |
+
|
| 355 |
+
$$
|
| 356 |
+
\mu h (W) = \mu \left(w _ {m s} + w _ {r p} + w _ {p c} - 1\right)
|
| 357 |
+
$$
|
| 358 |
+
|
| 359 |
+
$$
|
| 360 |
+
\lambda_ {1} g _ {1} (W) = \lambda_ {1} (- w _ {m s} + \eta) \tag {17}
|
| 361 |
+
$$
|
| 362 |
+
|
| 363 |
+
$$
|
| 364 |
+
\lambda_ {2} g _ {2} (W) = \lambda_ {2} (- w _ {r p} + \eta)
|
| 365 |
+
$$
|
| 366 |
+
|
| 367 |
+
$$
|
| 368 |
+
\lambda_ {3} g _ {3} (W) = \lambda_ {3} (- w _ {p c} + \eta)
|
| 369 |
+
$$
|
| 370 |
+
|
| 371 |
+
These terms can be solved by computing the partial derivative of each equation with respect to $W$ and setting it equal to zero:
|
| 372 |
+
|
| 373 |
+
$$
|
| 374 |
+
\nabla C _ {a g} \left(W ^ {*}\right) + \mu \nabla h \left(W ^ {*}\right) + \sum_ {i = 1, 2, 3} \lambda_ {i} \nabla g _ {i} (W) = 0 \tag {18}
|
| 375 |
+
$$
|
| 376 |
+
|
| 377 |
+
$$
|
| 378 |
+
\lambda_ {i} g _ {i} \left(W ^ {*}\right) = 0, i = 1, 2, 3 \tag {19}
|
| 379 |
+
$$
|
| 380 |
+
|
| 381 |
+
$$
|
| 382 |
+
h \left(W ^ {*}\right) = 0 \tag {20}
|
| 383 |
+
$$
|
| 384 |
+
|
| 385 |
+
$$
|
| 386 |
+
\lambda_ {i} \geq 0, i = 1, 2, 3 \tag {21}
|
| 387 |
+
$$
|
| 388 |
+
|
| 389 |
+
$$
|
| 390 |
+
g _ {i} \left(W ^ {*}\right) \leq 0, i = 1, 2, 3 \tag {22}
|
| 391 |
+
$$
|
| 392 |
+
|
| 393 |
+
In the above equations, $W^{*}$ is the optimal solution of this problem. Since the problem satisfies Slater's constraint qualification (Jeyakumar and Wolkowicz 1992), we can utilize Eq. 18, Eq. 19, and Eq. 20 to obtain $W^{*}$, and then check whether
|
| 394 |
+
|
| 395 |
+
the solution satisfies Eq. 21 and Eq. 22. The derived $W^{*}$ attains the infimum of the dual problem and yields the optimal solutions for $w_{ms}$, $w_{rp}$, and $w_{pc}$.
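For intuition, the sketch below works through the KKT reasoning above for the case where the per-term costs are treated as fixed constants, so the objective is linear in the weights: stationarity (Eq. 18) and complementary slackness (Eq. 19) then push every weight except the one attached to the smallest cost down to the lower bound $\eta$. The function name, the example values, and the non-strict treatment of the bound $w_i \geq \eta$ are illustrative assumptions, not prescriptions from the paper.

```python
# Hedged sketch: closed-form W* when C_ms, C_rp, C_pc are fixed constants,
# so C_ag is linear in the weights. Names and values are illustrative only.
import numpy as np

def kkt_weights(costs, eta=0.1):
    costs = np.asarray(costs, dtype=float)
    # Complementary slackness (Eq. 19) keeps every weight at the lower bound
    # eta except the one paired with the smallest cost, which absorbs the
    # remaining mass so that the equality constraint (Eq. 20) holds.
    w = np.full(costs.shape, eta)
    w[np.argmin(costs)] = 1.0 - eta * (len(costs) - 1)
    # Primal feasibility checks corresponding to Eqs. 20 and 22.
    assert np.isclose(w.sum(), 1.0) and np.all(w >= eta)
    return w

print(kkt_weights([0.8, 0.3, 0.5], eta=0.1))  # -> [0.1  0.8  0.1]
```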
|
| 396 |
+
|
| 397 |
+
In practical experiments, since it may be challenging to obtain an analytical solution to the optimization problem in the M-step, we instead adopt numerical optimization methods (quasi-Newton) to obtain the optimal hyperparameters.
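A minimal numerical sketch of such an M-step solver is shown below. The paper does not name a specific library or routine; here SciPy's SLSQP method is used as a stand-in constrained optimizer (the quasi-Newton method actually used may differ), and `aggregate_cost`, `eta`, and the example costs are placeholders.

```python
# Minimal numerical sketch of the M-step under the constraints of Eq. 14,
# assuming SciPy's SLSQP solver as a stand-in; placeholder names and values.
import numpy as np
from scipy.optimize import minimize

def optimize_weights(aggregate_cost, eta=0.1, w0=(1/3, 1/3, 1/3)):
    """aggregate_cost maps (w_ms, w_rp, w_pc) to the scalar C_ag."""
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # Eq. 20
    bounds = [(eta, None)] * 3                                        # w_i >= eta
    res = minimize(aggregate_cost, x0=np.asarray(w0, dtype=float),
                   method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x

# Example with fixed per-term costs (illustrative values only):
w_opt = optimize_weights(lambda w: w @ np.array([0.8, 0.3, 0.5]))
```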
|
| 398 |
+
|
| 399 |
+
# Results on ETH3D and TnT dataset
|
| 400 |
+
|
| 401 |
+
Fig. 10 presents qualitative comparisons between different methods on selected scenes of the ETH3D datasets. It is evident that our method achieves superior performance over the competing methods, especially when dealing with large textureless areas. Moreover, our method can effectively restore poorly illuminated areas, as depicted in the red boxes of meadow and terrace.
|
| 402 |
+
|
| 403 |
+
# Expanded Broader Research Context
|
| 404 |
+
|
| 405 |
+
This section provides an expanded review of the literature to situate our work, SD-MVS, within the broader landscape of 3D computer vision, fundamental artificial intelligence methodologies, and their diverse real-world applications. While the main paper focuses on Multi-View Stereo (MVS), our research is informed by and contributes to a wider ecosystem of technological advancements, spanning from novel 3D representations and foundational models to critical applications in autonomous driving, medical imaging, and beyond.
|
| 406 |
+
|
| 407 |
+
# Advances in 3D Vision and Scene Reconstruction
|
| 408 |
+
|
| 409 |
+
The core of our work lies in 3D reconstruction. This field has witnessed rapid progress, moving beyond traditional MVS to embrace new neural representations and robust estimation techniques.
|
| 410 |
+
|
| 411 |
+
Frontiers in Multi-View Stereo (MVS) Our work builds upon the PatchMatch-based MVS paradigm. We have continuously explored this direction by enhancing segmentation-driven techniques with edge alignment and occlusion handling, as demonstrated in SED-MVS (Yuan et al. 2025c), and by designing textureless-aware segmentation and refinement strategies in TSAR-MVS. Other researchers have also pushed the boundaries of MVS. For instance, Chen et al. (Chen et al. 2025a) proposed guiding MVS with dual-level precision edges for accurate planarization. The community has also investigated incorporating diverse priors, such as monocular guidance in MonoMVSNet (Jiang et al. 2025b), and adopting modern network architectures. These include leveraging Transformers for recurrent regularization in RRT-MVS (Jiang et al. 2025c) and exploring State Space Models like Mamba for improved efficiency and context modeling in MVSMamba (Jiang et al. 2025a), signaling a trend towards more powerful sequence models in 3D reconstruction.
|
| 412 |
+
|
| 413 |
+
Emerging 3D Neural Representations Beyond MVS, neural rendering has revolutionized 3D scene representation. Neural Radiance Fields (NeRF) have set a high bar for novel view synthesis, and the NeRFBK dataset provides
|
| 414 |
+
|
| 415 |
+

|
| 416 |
+
Figure 10: An illustration of the qualitative results on partial scenes of the ETH3D datasets (kicker, lecture room, lounge, meadow, terrace and terrains). It is obvious that our method outperforms others, especially in large textureless areas.
|
| 417 |
+
|
| 418 |
+
a holistic benchmark for evaluating such NeRF-based reconstruction methods (Yan et al. 2023). More recently, 3D Gaussian Splatting (GS) has emerged as a highly efficient and high-quality alternative. Research in this area is flourishing, with efforts focused on creating lightweight models for dynamic 4D scenes (Light4GS) (Liu et al. 2025c) and deformable 2D Gaussians for real-time video representation (Liu et al. 2025d). Further innovations include leveraging persistent homology to ensure topological integrity in Topology-Aware 3D Gaussian Splatting (Shen et al. 2025), unifying appearance codes for complex driving scenes (Wang et al. 2025d), and enabling controllable scene editing with 3DSceneEditor (Yan et al. 2024b). Directly related to our use of segmentation, GradiSeg enhances 3D boundary precision in Gaussian segmentation through gradient guidance (Li et al. 2024). The pursuit of real-time dynamic scene rendering is also advanced by spatio-temporal decoupling techniques like STDR (Li et al. 2025d).
|
| 419 |
+
|
| 420 |
+
Robust Depth and Stereo Estimation Accurate depth estimation is the cornerstone of MVS. Recent works have focused on improving robustness under challenging conditions. For example, Wang et al. have explored curriculum contrastive learning for self-supervised depth estimation in adverse weather (WeatherDepth) (Wang et al. 2024a) and have dug into using diffusion models to enhance contrastive learning for robust depth estimation (Wang et al. 2024b). The power of diffusion priors is further harnessed for self-supervised depth estimation in Jasmine (Wang et al. 2025b). Another line of work aims to develop a unified model that can transition from an image editor to a dense geometry estimator (Wang et al. 2025c). In parallel, advancements in stereo matching, such as the efficient hybrid-supervised network EHSS (Zhang et al. 2023), continue to improve the foundational blocks of 3D perception.
|
| 421 |
+
|
| 422 |
+
# Foundational Methods in Artificial Intelligence
|
| 423 |
+
|
| 424 |
+
The progress in specific domains like 3D vision is heavily dependent on advancements in core AI methodologies, including foundation models, advanced segmentation, and graph learning.
|
| 425 |
+
|
| 426 |
+
Vision and Language Foundation Models The advent of foundation models, such as the Segment Anything Model (SAM) used in our work, has transformed AI research. The field is rapidly evolving, with a focus on enhancing the capabilities of Large Language and Vision-Language Models (LLMs and VLMs). This includes promoting multi-domain reasoning through rubric-based rewards (Bi et al. 2025b), improving model confidence on edited facts via contrastive knowledge decoding (Bi et al. 2024b), and aligning models for better context-faithfulness using Context-DPO (Bi et al. 2024a). Researchers are also investigating the fine-grained control of knowledge reliance by balancing parameters versus context (Bi et al. 2025c). On the data front, RefineX shows a path to learning how to refine pre-training data at scale (Bi et al. 2025a). Efficiently transitioning and scaling LLMs is addressed by methods like WISCA, which uses weight scaling (Li et al. 2025a). For VLMs, alignment is key, as explored in Re-Align, which uses retrieval-augmented
|
| 427 |
+
|
| 428 |
+
DPO (Xing et al. 2025), and DecAlign, which proposes hierarchical cross-modal alignment (Qian et al. 2025c). These models are being applied to complex tasks like explainable visual question answering through a diffusion chain-of-thought (Lu, Lu, and Luo 2024) and versatile advertising poster generation in AnyLayout (Anonymous 2025). Their spatial intelligence is also being rigorously tested on complex reasoning benchmarks like SIRI-Bench (Song et al. 2025d), and multimodal diffusion mamba models are unifying end-to-end generation (Lu et al. 2025).
|
| 429 |
+
|
| 430 |
+
Advanced Segmentation and Recognition Our "segmentation-driven" approach highlights the critical role of precise segmentation. This is a vibrant research area in its own right. For instance, in the biomedical domain, TokenUnify scales up autoregressive pretraining for neuron segmentation (Chen et al. 2025e), while multi-agent reinforcement learning is used for self-supervised neuron segmentation (Chen et al. 2023). In general computer vision, MaskTwins introduces dual-form complementary masking for domain-adaptive segmentation (Wang et al. 2025a). Novel perspectives, such as using the frequency domain, are also being explored to unlock new capabilities in medical image segmentation (Han et al. 2025b). For industrial applications, SSDC-Net provides an effective method for classifying steel surface defects based on salient local features (Hao et al. 2024).
|
| 431 |
+
|
| 432 |
+
Graph Learning and Data Mining Modeling relationships and structure is crucial for many AI tasks. Graph neural networks offer a powerful framework for this. Recent research has focused on improving graph autoencoders by revisiting masking strategies from a robustness perspective (Song et al. 2025b) and through self-purified designs like SPMGAE (Song et al. 2025c). To defend against adversarial attacks, GPromptShield elevates the resilience of graph prompt tuning (Song et al. 2025a). Furthermore, Fan et al. (Fan et al. 2025) have demonstrated the effectiveness of multi-scale graph learning for challenging tasks like antisparse downscaling.
|
| 433 |
+
|
| 434 |
+
Zero-Shot Learning and Prompt Engineering Reducing the dependency on extensive labeled data is a major goal in AI. Zero-shot learning, often powered by rich semantic information, is a promising direction. For instance, MADS leverages multi-attribute document supervision for zero-shot image classification (Qu et al. 2025b), while EmDepart proposes visual-semantic decomposition and partial alignment for the same task (Qu et al. 2024). Concurrently, automating the creation of effective prompts is crucial for harnessing the power of foundation models, as explored in ProAPO for progressively automatic prompt optimization (Qu et al. 2025a).
|
| 435 |
+
|
| 436 |
+
# Applications in Diverse Domains
|
| 437 |
+
|
| 438 |
+
The ultimate test of these technologies is their successful application in solving real-world problems. Our team and collaborators are actively engaged in deploying AI across various critical domains.
|
| 439 |
+
|
| 440 |
+
Autonomous Driving and Robotics Autonomous driving is a primary driver for 3D vision research. Vision-
|
| 441 |
+
|
| 442 |
+
Language-Action (VLA) models are becoming central, with research focusing on incentivizing reasoning and self-reflection (AutoDrive-R2) (Yuan et al. 2025b) and providing comprehensive surveys of pure vision-based VLA models (Zhang et al. 2025b). End-to-end models are being simplified with distinct experts (ADDI) (Zhang et al. 2025d) and made more robust via adversarial transfer (AT-Drive) (Zhang et al. 2025c). Key sub-tasks are also being addressed, such as online HD map construction with MapExpert (Zhang et al. 2025a) and cross-view trajectory prediction using shared 3D queries (Song et al. 2023). World models with self-supervised 3D labels are being developed to enhance scene understanding (Yan et al. 2025). In robotics, physical autoregressive models show promise for manipulation without action pretraining (Song et al. 2025e). In industry, agents like MR-IntelliAssist enable adaptive human-AI symbiosis (Liu et al. 2025a), and cloud frameworks like A3Framework support autonomous driving path planning (Yang et al. 2024a).
|
| 443 |
+
|
| 444 |
+
Intelligent Medical Image Analysis AI is revolutionizing healthcare. In medical image segmentation, a major challenge is learning from imperfect data. Significant work is being done on handling noisy labels, for instance through region uncertainty estimation (Han et al. 2025c) and adaptive label correction techniques that improve robustness (Qian et al. 2025a). For imbalanced data, curriculum learning frameworks like ClimD (Han et al. 2025a) and DynCIM (Qian et al. 2025b) are being developed to manage multimodal learning. Foundation models are also being heavily applied to pathology, with methods for fusing multi-scale heterogeneous models for whole slide image analysis (Yang et al. 2025) and using sparse transformers for survival analysis (Yan et al. 2024a). In the specialized area of Cryo-Electron Tomography, self-supervised methods are used for volumetric image restoration (Yang, Zhang, and Han 2021) and denoising, guided by noise modeling and sparsity constraints (Yang et al. 2024c). Approaches like Noise-Transfer2Clean (Li et al. 2022) and simulation-aware pretraining (Yang et al. 2024b) further improve denoising performance. Moreover, large models are being leveraged for generative text-guided 3D pretraining to aid segmentation, as shown in GTGM (Chen et al. 2025d), and for optimizing medical prompts through evolutionary algorithms in EMPOWER (Chen et al. 2025b,c).
|
| 445 |
+
|
| 446 |
+
Remote Sensing Image Interpretation Remote sensing provides a unique multi-view perspective of our world. Research in this area includes developing advanced multi-view graph clustering methods with dual relation optimization (MDRO) (Guan et al. 2025a), structure-adaptive mechanisms (SAMVGC) (Guan et al. 2025c), and long-short range information mining (SEC-LSRM) (Guan et al. 2025b). Interactive agents like Change-Agent are being designed for comprehensive change interpretation (Liu et al. 2024), and comprehensive surveys on spatiotemporal vision-language models for remote sensing are helping to structure the field (Liu et al. 2025b). Diffusion models are also being used for controllable remote sensing image generation (CRS-Diff) (Tang et al. 2024) and to drive data generation for enhanced object detection in AeroGen (Tang et al. 2025).
|
| 447 |
+
|
| 448 |
+
Multimodal Content Analysis and Security Beyond the above domains, our research extends to other areas of multimodal understanding. In composed image and video retrieval, we have explored entity mining and relation binding (ENCODER) (Li et al. 2025b), explicit parsing of fine-grained modification semantics (FineCIR) (Li et al. 2025c), segmentation-based focus shift revision (OFFSET) (Chen et al. 2025f), hierarchical uncertainty-aware disambiguation (HUD) (Chen et al. 2025g), and complementarity-guided disentanglement (PAIR) (Fu et al. 2025). In human-object interaction (HOI) detection, we are discovering syntactic interaction clues (Luo et al. 2024), using context-aware instructions for multi-modal reasoning (InstructHOI) (Luo et al. 2025b), and developing synergistic prompting learning frameworks (Luo et al. 2025a). For video action recognition, we are exploring how to reinforce models with external tools in Video-STAR (Yuan et al. 2025a). Finally, in the domain of digital security, robust watermarking frameworks are being developed to resist extreme cropping and scaling (Sun et al. 2025b) and non-differentiable distortions (END2) (Sun et al. 2025a).
|
| 449 |
+
|
| 450 |
+
# References
|
| 451 |
+
|
| 452 |
+
Anonymous. 2025. AnyLayout: Versatile Advertising Poster Layout Generation with MLLMs. In Submitted to The Fourteenth International Conference on Learning Representations. Under review.
|
| 453 |
+
Bi, B.; Huang, S.; Wang, Y.; Yang, T.; Zhang, Z.; Huang, H.; Mei, L.; Fang, J.; Li, Z.; Wei, F.; et al. 2024a. Context-DPO: Aligning Language Models for Context-Faithfulness. ACL 2025.
|
| 454 |
+
Bi, B.; Liu, S.; Mei, L.; Wang, Y.; Ji, P.; and Cheng, X. 2024b. Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts. ACL 2025.
|
| 455 |
+
Bi, B.; Liu, S.; Ren, X.; Liu, D.; Lin, J.; Wang, Y.; Mei, L.; Fang, J.; Guo, J.; and Cheng, X. 2025a. RefineX: Learning to Refine Pre-training Data at Scale from Expert-Guided Programs. arXiv preprint arXiv:2507.03253.
|
| 456 |
+
Bi, B.; Liu, S.; Wang, Y.; Tong, S.; Mei, L.; Ge, Y.; Xu, Y.; Guo, J.; and Cheng, X. 2025b. Reward and Guidance through Rubrics: Promoting Exploration to Improve Multi-Domain Reasoning. arXiv preprint arXiv:2511.12344.
|
| 457 |
+
Bi, B.; Liu, S.; Wang, Y.; Xu, Y.; Fang, J.; Mei, L.; and Cheng, X. 2025c. Parameters vs. context: Fine-grained control of knowledge reliance in language models. arXiv preprint arXiv:2503.15888.
|
| 458 |
+
Bleyer, M.; Rhemann, C.; and Rother, C. 2011. PatchMatch Stereo - Stereo Matching with Slanted Support Windows. In Proc. Brit. Mach. Vis. Conf. (BMVC), 14.1-14.11.
|
| 459 |
+
Cao, M.; Zheng, L.; Jia, W.; Lu, H.; and Liu, X. 2021. Accurate 3-D Reconstruction Under IoT Environments and Its Applications to Augmented Reality. IEEE Trans. Ind. Inf., 17(3): 2090-2100.
|
| 460 |
+
Chen, K.; Yuan, Z.; Mao, T.; and Wang, Z. 2025a. Dual-level precision edges guided multi-view stereo with accurate planarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 2105-2113.
|
| 461 |
+
|
| 462 |
+
Chen, Y.; He, Y.; Yang, J.; Zhang, D.; Yuan, Z.; Khan, M. A.; Baili, J.; and Yee, L. 2025b. EMPOWER: Evolutionary Medical Prompt Optimization With Reinforcement Learning. IEEE J. Biomed. Health Inform.
|
| 463 |
+
Chen, Y.; He, Y.; Yang, J.; Zhang, D.; Yuan, Z.; Khan, M. A.; Baili, J.; and Yee, P. L. 2025c. EMPOWER: Evolutionary Medical Prompt Optimization With Reinforcement Learning. IEEE Journal of Biomedical and Health Informatics, 1-10.
|
| 464 |
+
Chen, Y.; Huang, W.; Zhou, S.; Chen, Q.; and Xiong, Z. 2023. Self-supervised neuron segmentation with multiagent reinforcement learning. In International Joint Conference on Artificial Intelligence (IJCAI).
|
| 465 |
+
Chen, Y.; Liu, C.; Huang, W.; Liu, X.; Shi, H.; Cheng, S.; Arcucci, R.; and Xiong, Z. 2025d. GTGM: Generative Text-Guided 3D Vision-Language Pretraining for Medical Image Segmentation. In IEEE International Conference on Computer Vision Workshop on Vision Language Models for 3D Understanding (ICCV Workshop VLM3D), 6715-6724.
|
| 466 |
+
Chen, Y.; Shi, H.; Liu, X.; Shi, T.; Zhang, R.; Liu, D.; Xiong, Z.; and Wu, F. 2025e. TokenUnify: Scaling Up Autoregressive Pretraining for Neuron Segmentation. In IEEE International Conference on Computer Vision (ICCV), 13604-13613.
|
| 467 |
+
Chen, Z.; Hu, Y.; Li, Z.; Fu, Z.; Song, X.; and Nie, L. 2025f. OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval. In Proceedings of the ACM International Conference on Multimedia, 6113-6122.
|
| 468 |
+
Chen, Z.; Hu, Y.; Li, Z.; Fu, Z.; Wen, H.; and Guan, W. 2025g. HUD: Hierarchical Uncertainty-Aware Disambiguation Network for Composed Video Retrieval. In Proceedings of the ACM International Conference on Multimedia, 6143-6152.
|
| 469 |
+
Cremers, D.; and Kolev, K. 2011. Multiview Stereo and Silhouette Consistency via Convex Functionals over Convex Domains. IEEE Trans. Pattern Anal. Mach. Intell., 33(6): 1161-1174.
|
| 470 |
+
Fan, Y.; Yu, R.; Barclay, J. R.; Appling, A. P.; Sun, Y.; Xie, Y.; and Jia, X. 2025. Multi-Scale Graph Learning for Anti-Sparse Downscaling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 27969-27977.
|
| 471 |
+
Fu, Z.; Li, Z.; Chen, Z.; Wang, C.; Song, X.; Hu, Y.; and Nie, L. 2025. PAIR: Complementarity-guided Disentanglement for Composed Image Retrieval. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1-5. IEEE.
|
| 472 |
+
Galliani, S.; Lasinger, K.; and Schindler, K. 2015. Massively Parallel Multiview Stereopsis by Surface Normal Diffusion. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 873-881.
|
| 473 |
+
Gu, X.; Fan, Z.; Zhu, S.; Dai, Z.; Tan, F.; and Tan, P. 2020. Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2492–2501.
|
| 474 |
+
Guan, R.; Li, J.; Wang, S.; Tu, W.; Li, M.; Zhu, E.; Liu, X.; and Chen, P. 2025a. Multi-view Graph Clustering with
|
| 475 |
+
|
| 476 |
+
Dual Relation Optimization for Remote Sensing Data. In Proceedings of the 33rd ACM International Conference on Multimedia, 7346-7355.
|
| 477 |
+
Guan, R.; Liu, T.; Tu, W.; Tang, C.; Luo, W.; and Liu, X. 2025b. Sampling Enhanced Contrastive Multi-View Remote Sensing Data Clustering with Long-Short Range Information Mining. IEEE Transactions on Knowledge and Data Engineering, 1-15.
|
| 478 |
+
Guan, R.; Tu, W.; Wang, S.; Liu, J.; Hu, D.; Tang, C.; Feng, Y.; Li, J.; Xiao, B.; and Liu, X. 2025c. Structure-Adaptive Multi-View Graph Clustering for Remote Sensing Data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 16933–16941.
|
| 479 |
+
Han, K.; Lyu, C.; Ma, L.; Qian, C.; Ma, S.; Pang, Z.; Chen, J.; and Liu, Z. 2025a. Climbd: A curriculum learning framework for imbalanced multimodal diagnosis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 65-74. Springer.
|
| 480 |
+
Han, K.; Ma, S.; Qian, C.; Chen, J.; Lyu, C.; Song, Y.; and Liu, Z. 2025b. Frequency Domain Unlocks New Perspectives for Abdominal Medical Image Segmentation. arXiv preprint arXiv:2510.11005.
|
| 481 |
+
Han, K.; Wang, S.; Chen, J.; Qian, C.; Lyu, C.; Ma, S.; Qiu, C.; Sheng, V. S.; Huang, Q.; and Liu, Z. 2025c. Region uncertainty estimation for medical image segmentation with noisy labels. IEEE Transactions on Medical Imaging.
|
| 482 |
+
Hao, Q.; Gan, Q.; Liu, Z.; Chen, J.; Shen, Q.; Qian, C.; and Liu, Y. 2024. SSDC-Net: An Effective Classification Method of Steel Surface Defects Based on Salient Local Features. In International Conference on Intelligent Computing, 490–503. Springer.
|
| 483 |
+
Jeyakumar, V.; and Wolkowicz, H. 1992. Generalizations of Slater's Constraint Qualification for Infinite Convex Programs. Mathematical Programming, 57(1-3): 85-101.
|
| 484 |
+
Jiang, J.; Liu, Q.; Liu, H.; Yu, H.; Wang, L.; Chen, J.; and Ma, H. 2025a. MVSMamba: Multi-View Stereo with State Space Model. arXiv preprint arXiv:2511.01315.
|
| 485 |
+
Jiang, J.; Liu, Q.; Yu, H.; Liu, H.; Wang, L.; Chen, J.; and Ma, H. 2025b. MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 27806-27816.
|
| 486 |
+
Jiang, J.; Wang, L.; Yu, H.; Hu, T.; Chen, J.; and Ma, H. 2025c. RRT-MVS: Recurrent Regularization Transformer for Multi-View Stereo. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 3994-4002.
|
| 487 |
+
Karush, W. 2014. Minima of Functions of Several Variables with Inequalities as Side Conditions. In Giorgi, G.; and Kjeldsen, T. H., eds., Traces and Emergence of Nonlinear Programming, 217-245.
|
| 488 |
+
Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; Dollar, P.; and Girshick, R. 2023. Segment Anything. arXiv:2304.02643.
|
| 489 |
+
Knapitsch, A.; Park, J.; Zhou, Q.-Y.; and Koltun, V. 2017. Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction. ACM Trans. Graph., 36(4).
|
| 490 |
+
|
| 491 |
+
Kuhn, A.; Lin, S.; and Erdler, O. 2019. Plane Completion and Filtering for Multi-View Stereo Reconstruction. In Proc. DAGM German Conf. (GcPR), volume 11824, 18-32.
|
| 492 |
+
Lee, J. Y.; DeGol, J.; Zou, C.; and Hoiem, D. 2021. PatchMatch-RL: Deep MVS with Pixelwise Depth, Normal, and Visibility. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 6138-6147.
|
| 493 |
+
Li, H.; Zhang, H.; Wan, X.; Yang, Z.; Li, C.; Li, J.; Han, R.; Zhu, P.; and Zhang, F. 2022. Noise-Transfer2Clean: denoising cryo-EM images based on noise modeling and transfer. Bioinformatics, 38(7): 2022-2029.
|
| 494 |
+
Li, J.; Tan, J.; Yang, Z.; Sun, P.; Huo, F.; Qin, J.; Sun, Y.; Xie, Y.; Cai, X.; Zhang, X.; He, M.; Tan, G.; Jia, W.; and Zhao, T. 2025a. WISCA: A Lightweight Model Transition Method to Improve LLM Training via Weight Scaling.
|
| 495 |
+
Li, Z.; Chen, Z.; Wen, H.; Fu, Z.; Hu, Y.; and Guan, W. 2025b. Encoder: Entity mining and modification relation binding for composed image retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 5101-5109.
|
| 496 |
+
Li, Z.; Fu, Z.; Hu, Y.; Chen, Z.; Wen, H.; and Nie, L. 2025c. FineCIR: Explicit Parsing of Fine-Grained Modification Semantics for Composed Image Retrieval. https://arxiv.org/abs/2503.21309.
|
| 497 |
+
Li, Z.; Gogia, P. C.; and Kaess, M. 2019. Dense Surface Reconstruction from Monocular Vision and LiDAR. In Proc. IEEE Conf. Robot. Automat. (ICRA), 6905-6911.
|
| 498 |
+
Li, Z.; Han, W.; Cai, Y.; Jiang, H.; Bi, B.; Gao, S.; Zhao, H.; and Wang, Z. 2024. Gradiseg: Gradient-guided gaussian segmentation with enhanced 3d boundary precision. arXiv preprint arXiv:2412.00392.
|
| 499 |
+
Li, Z.; Jiang, H.; Cai, Y.; Chen, J.; Bi, B.; Gao, S.; Zhao, H.; Wang, Y.; Mao, T.; and Wang, Z. 2025d. STDR: Spatio-Temporal Decoupling for Real-Time Dynamic Scene Rendering. arXiv preprint arXiv:2505.22400.
|
| 500 |
+
Li, Z.; Zuo, W.; Wang, Z.; and Zhang, L. 2020. Confidence-Based Large-Scale Dense Multi-View Stereo. IEEE Trans. on Image Process., 29: 7176-7191.
|
| 501 |
+
Liu, C.; Chen, K.; Zhang, H.; Qi, Z.; Zou, Z.; and Shi, Z. 2024. Change-Agent: Toward Interactive Comprehensive Remote Sensing Change Interpretation and Analysis. IEEE Transactions on Geoscience and Remote Sensing, 62: 1-16.
|
| 502 |
+
Liu, C.; Yuan, Z.; Wang, Y.; Yin, Y.; Luo, W.; He, Z.; and Liang, X. 2025a. MR-IntelliAssist: A World Cognition Agent Enabling Adaptive Human-AI Symbiosis in Industry 4.0. In International Conference on Human-Computer Interaction, 163–177. Springer Nature Switzerland Cham.
|
| 503 |
+
Liu, C.; Zhang, J.; Chen, K.; Wang, M.; Zou, Z.; and Shi, Z. 2025b. Remote Sensing Spatiotemporal Vision-Language Models: A comprehensive survey. IEEE Geoscience and Remote Sensing Magazine, 2-42.
|
| 504 |
+
Liu, M.; Yang, Q.; Huang, H.; Huang, W.; Yuan, Z.; Li, Z.; and Xu, Y. 2025c. Light4gs: Lightweight compact 4d gaussian splatting generation via context model. arXiv preprint arXiv:2503.13948.
|
| 505 |
+
|
| 506 |
+
Liu, M.; Yang, Q.; Zhao, M.; Huang, H.; Yang, L.; Li, Z.; and Xu, Y. 2025d. D2gv: Deformable 2d gaussian splatting for video representation in 400fps. arXiv preprint arXiv:2503.05600.
|
| 507 |
+
Lu, C.; Lu, Q.; Dong, M.; and Luo, J. 2025. End-to-End Multi-Modal Diffusion Mamba. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 20529-20540.
|
| 508 |
+
Lu, C.; Lu, Q.; and Luo, J. 2024. An Explanable Vision Question Answer Model via Diffusion Chain-of-Thought. In European Conference on Computer Vision, 146-162. Springer.
|
| 509 |
+
Luo, J.; Ren, W.; Jiang, W.; Chen, X.; Wang, Q.; Han, Z.; and Liu, H. 2024. Discovering syntactic interaction clues for human-object interaction detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 28212-28222.
|
| 510 |
+
Luo, J.; Ren, W.; Wang, Z.; Chen, X.; Fan, H.; Han, Z.; and Liu, H. 2025a. Synergistic Prompting Learning for Human-Object Interaction Detection. IEEE Transactions on Image Processing.
|
| 511 |
+
Luo, J.; Ren, W.; Zheng, Q.; Zhang, Y.; Yuan, Z.; Wang, Z.; Lu, H.; and Liu, H. 2025b. InstructHOI: Context-Aware Instruction for Multi-Modal Reasoning in Human-Object Interaction Detection. In The Thirty-ninth Annual Conference on Neural Information Processing Systems.
|
| 512 |
+
Ma, X.; Gong, Y.; Wang, Q.; Huang, J.; Chen, L.; and Yu, F. 2021. EPP-MVSNet: Epipolar-assembling based Depth Prediction for Multi-view Stereo. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 5712-5720.
|
| 513 |
+
Orsingher, M.; Zani, P.; Medici, P.; and Bertozzi, M. 2022. Revisiting PatchMatch Multi-View Stereo for Urban 3D Reconstruction. In Proc. IEEE Intelligent Vehicles Symp. (IV), 190–196.
|
| 514 |
+
Qi, L.; and Sun, J. 1993. A nonsmooth version of Newton's method. Math. Program., 58: 353-367.
|
| 515 |
+
Qian, C.; Han, K.; Ding, J.; Lyu, C.; Yuan, Z.; Chen, J.; and Liu, Z. 2025a. Adaptive label correction for robust medical image segmentation with noisy labels. arXiv preprint arXiv:2503.12218.
|
| 516 |
+
Qian, C.; Han, K.; Wang, J.; Yuan, Z.; Lyu, C.; Chen, J.; and Liu, Z. 2025b. Dyncim: Dynamic curriculum for imbalanced multimodal learning. arXiv preprint arXiv:2503.06456.
|
| 517 |
+
Qian, C.; Xing, S.; Li, S.; Zhao, Y.; and Tu, Z. 2025c. DecAlign: Hierarchical Cross-Modal Alignment for Decoupled Multimodal Representation Learning. arXiv preprint arXiv:2503.11892.
|
| 518 |
+
Qu, X.; Gou, G.; Zhuang, J.; Yu, J.; Song, K.; Wang, Q.; Li, Y.; and Xiong, G. 2025a. ProAPO: Progressively Automatic Prompt Optimization for Visual Classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025, Nashville, TN, USA, June 11-15, 2025, 25145-25155.
|
| 519 |
+
Qu, X.; Yu, J.; Gai, K.; Zhuang, J.; Tang, Y.; Xiong, G.; Gou, G.; and Wu, Q. 2024. Visual-Semantic Decomposition and
|
| 520 |
+
|
| 521 |
+
Partial Alignment for Document-based Zero-Shot Learning. In Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October 2024 - 1 November 2024, 4581-4590. ACM.
|
| 522 |
+
Qu, X.; Yu, J.; Zhuang, J.; Gou, G.; Xiong, G.; and Wu, Q. 2025b. MADS: Multi-Attribute Document Supervision for Zero-Shot Image Classification. CoRR, abs/2503.06847.
|
| 523 |
+
Romanoni, A.; and Matteucci, M. 2019. TAPA-MVS: Textureless-Aware PAtchMatch Multi-View Stereo. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 10412-10421.
|
| 524 |
+
Schöps, T.; Schonberger, J. L.; Galliani, S.; Sattler, T.; Schindler, K.; Pollefeys, M.; and Geiger, A. 2017. A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR).
|
| 525 |
+
Schönberger, J. L.; Zheng, E.; Frahm, J.-M.; and Pollefeys, M. 2016. Pixelwise View Selection for Unstructured Multi-View Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 9907, 501-518.
|
| 526 |
+
Seitz, S.; Curless, B.; Diebel, J.; Scharstein, D.; and Szeliski, R. 2006. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), volume 1, 519-528.
|
| 527 |
+
Shen, T.; Liu, S.; Feng, J.; Ma, Z.; and An, N. 2025. Topology-Aware 3D Gaussian Splatting: Leveraging Persistent Homology for Optimized Structural Integrity. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 6823-6832.
|
| 528 |
+
Song, S.; Li, P.; Dun, M.; Huang, M.; Cao, H.; and Ye, X. 2025a. GPromptShield: Elevating Resilience in Graph Prompt Tuning Against Adversarial Attacks. In The Thirteenth International Conference on Learning Representations.
|
| 529 |
+
Song, S.; Li, P.; Dun, M.; Zhang, Y.; Cao, H.; and Ye, X. 2025b. Equipping Graph Autoencoders: Revisiting Masking Strategies from a Robustness Perspective. In Proceedings of the 2025 SIAM International Conference on Data Mining (SDM), 366-375. SIAM.
|
| 530 |
+
Song, S.; Li, P.; Dun, M.; Zhang, Y.; Cao, H.; and Ye, X. 2025c. SPMGAE: Self-purified masked graph autoencoders release robust expression power. Neurocomputing, 611: 128631.
|
| 531 |
+
Song, Z.; Bi, H.; Zhang, R.; Mao, T.; and Wang, Z. 2023. Xvtp3d: cross-view trajectory prediction using shared 3d queries for autonomous driving. arXiv preprint arXiv:2308.08764.
|
| 532 |
+
Song, Z.; Lin, X.; Huang, Q.; Wang, G.; and Lin, L. 2025d. SIRI-Bench: Challenging VLMs' Spatial Intelligence through Complex Reasoning Tasks. arXiv preprint arXiv:2506.14512.
|
| 533 |
+
Song, Z.; Qin, S.; Chen, T.; Lin, L.; and Wang, G. 2025e. Physical autoregressive model for robotic manipulation without action pretraining. arXiv preprint arXiv:2508.09822.
|
| 534 |
+
|
| 535 |
+
Su, W.; and Tao, W. 2023. Efficient Edge-Preserving Multi-View Stereo Network for Depth Estimation. In Proc. of the AAAI Conf. Artif. Intell. (AAAI), 2348-2356.
|
| 536 |
+
Sun, N.; Fang, H.; Lu, Y.; Zhao, C.; and Ling, H. 2025a. END2: Robust Dual-Decoder Watermarking Framework Against Non-Differentiable Distortions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 773-781.
|
| 537 |
+
Sun, N.; Yuan, L.; Fang, H.; Lu, Y.; Ling, H.; Xie, S.; and Zhao, C. 2025b. Ultra-high Resolution Watermarking Framework Resistant to Extreme Cropping and Scaling. In The Thirty-ninth Annual Conference on Neural Information Processing Systems.
|
| 538 |
+
Tang, D.; Cao, X.; Hou, X.; Jiang, Z.; Liu, J.; and Meng, D. 2024. Crs-diff: Controllable remote sensing image generation with diffusion model. IEEE Transactions on Geoscience and Remote Sensing.
|
| 539 |
+
Tang, D.; Cao, X.; Wu, X.; Li, J.; Yao, J.; Bai, X.; Jiang, D.; Li, Y.; and Meng, D. 2025. AeroGen: Enhancing remote sensing object detection with diffusion-driven data generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, 3614-3624.
|
| 540 |
+
Vogiatzis, G.; Hernandez Esteban, C.; Torr, P. H.; and Cipolla, R. 2007. Multiview Stereo via Volumetric Graph-Cuts and Occlusion Robust Photo-Consistency. IEEE Trans. Pattern Anal. Mach. Intell., 29(12): 2241-2246.
|
| 541 |
+
Wang, F.; Galliani, S.; Vogel, C.; and Pollefeys, M. 2022a. IterMVS: Iterative Probability Estimation for Efficient Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 8596-8605.
|
| 542 |
+
Wang, F.; Galliani, S.; Vogel, C.; Speciale, P.; and Pollefeys, M. 2021. PatchmatchNet: Learned Multi-View Patch-match Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 14189-14198.
|
| 543 |
+
Wang, J.; Chen, Y.; Liu, X.; Liu, C.; Liu, D.; Gao, J.; and Xiong, Z. 2025a. MaskTwins: Dual-form Complementary Masking for Domain-Adaptive Image Segmentation. In International Conference on Machine Learning (ICML).
|
| 544 |
+
Wang, J.; Lin, C.; Guan, C.; Nie, L.; He, J.; Li, H.; Liao, K.; and Zhao, Y. 2025b. Jasmine: Harnessing Diffusion Prior for Self-supervised Depth Estimation. arXiv preprint arXiv:2503.15905.
|
| 545 |
+
Wang, J.; Lin, C.; Nie, L.; Huang, S.; Zhao, Y.; Pan, X.; and Ai, R. 2024a. WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions. In 2024 IEEE International Conference on Robotics and Automation (ICRA), 4976-4982. IEEE.
|
| 546 |
+
Wang, J.; Lin, C.; Nie, L.; Liao, K.; Shao, S.; and Zhao, Y. 2024b. Digging into Contrastive Learning for Robust Depth Estimation with Diffusion Models. In Proceedings of the 32nd ACM International Conference on Multimedia, 4129-4137. ACM.
|
| 547 |
+
Wang, J.; Lin, C.; Sun, L.; Liu, R.; Nie, L.; Li, M.; Liao, K.; Chu, X.; and Zhao, Y. 2025c. From Editor to Dense Geometry Estimator. arXiv preprint arXiv:2509.04338.
|
| 548 |
+
|
| 549 |
+
Wang, N.; Chen, Y.; Xiao, L.; Xiao, W.; Li, B.; Chen, Z.; Ye, C.; Xu, S.; Zhang, S.; Yan, Z.; et al. 2025d. Unifying Appearance Codes and Bilateral Grids for Driving Scene Gaussian Splatting. arXiv preprint arXiv:2506.05280.
|
| 550 |
+
Wang, X.; Zhu, Z.; Huang, G.; Qin, F.; Ye, Y.; He, Y.; Chi, X.; and Wang, X. 2022b. MVSTER: Epipolar Transformer for Efficient Multi-view Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 13691, 573-591.
|
| 551 |
+
Wei, Z.; Zhu, Q.; Min, C.; Chen, Y.; and Wang, G. 2021. AA-RMVSNet: Adaptive Aggregation Recurrent Multiview Stereo Network. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 6167-6176.
|
| 552 |
+
Williams, L. 1983. Pyramidal Parametrics. In Proc. of the 10th Annu. Conf. on Comput. Graph. and Interact. Techn. (SIGGRAPH), 1-11.
|
| 553 |
+
Xing, S.; Li, P.; Wang, Y.; Bai, R.; Wang, Y.; Hu, C.-W.; Qian, C.; Yao, H.; and Tu, Z. 2025. Re-Align: Aligning vision language models via retrieval-augmented direct preference optimization. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2379-2397.
|
| 554 |
+
Xu, Q.; Kong, W.; Tao, W.; and Pollefeys, M. 2022. Multi-Scale Geometric Consistency Guided and Planar Prior Assisted Multi-View Stereo. IEEE Trans. Pattern Anal. Mach. Intell., 1-18.
|
| 555 |
+
Xu, Q.; and Tao, W. 2019. Multi-Scale Geometric Consistency Guided Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5478-5487.
|
| 556 |
+
Xu, Q.; and Tao, W. 2020. Planar Prior Assisted PatchMatch Multi-View Stereo. In Proc. of the AAAI Conf. Artif. Intell. (AAAI), volume 34, 12516-12523.
|
| 557 |
+
Xu, Z.; Liu, Y.; Shi, X.; Wang, Y.; and Zheng, Y. 2020. MARMVS: Matching Ambiguity Reduced Multiple View Stereo for Efficient Large Scale Scene Reconstruction. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5980-5989.
|
| 558 |
+
Yan, J.; Wei, Z.; Yi, H.; Ding, M.; Zhang, R.; Chen, Y.; Wang, G.; and Tai, Y.-W. 2020. Dense hybrid recurrent multi-view stereo net with dynamic consistency checking. In Proc. Eur. Conf. Comput. Vis. (ECCV), 674-689.
|
| 559 |
+
Yan, R.; Lv, Z.; Yang, Z.; Lin, S.; Zheng, C.; and Zhang, F. 2024a. Sparse and Hierarchical Transformer for Survival Analysis on Whole Slide Images. IEEE Journal of Biomedical and Health Informatics, 28(1): 7-18.
|
| 560 |
+
Yan, Z.; Dong, W.; Shao, Y.; Lu, Y.; Liu, H.; Liu, J.; Wang, H.; Wang, Z.; Wang, Y.; Remondino, F.; et al. 2025. Renderworld: World model with self-supervised 3d label. In 2025 IEEE International Conference on Robotics and Automation (ICRA), 6063-6070. IEEE.
|
| 561 |
+
Yan, Z.; Li, L.; Shao, Y.; Chen, S.; Wu, Z.; Hwang, J.-N.; Zhao, H.; and Remondino, F. 2024b. 3dsceneeditor: Controllable 3d scene editing with gaussian splatting. arXiv preprint arXiv:2412.01583.
|
| 562 |
+
Yan, Z.; Mazzacca, G.; Rigon, S.; Farella, E. M.; Trybala, P.; Remondino, F.; et al. 2023. NeRFBK: a holistic dataset
|
| 563 |
+
|
| 564 |
+
for benchmarking NeRF-based 3D reconstruction. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 48(1): 219-226.
|
| 565 |
+
Yang, Y.; Wang, Z.; Zhang, D.; Zhang, H.; Zhi, P.; Zhao, R.; Kong, X.; Zhou, R.; and Zhou, Q. 2024a. A3Framework: A Cloud Framework with Autonomous Driving Path Planning. In 2024 Twelfth International Conference on Advanced Cloud and Big Data (CBD), 136-141.
|
| 566 |
+
Yang, Z.; Li, H.; Zang, D.; Han, R.; and Zhang, F. 2024b. Improved Denoising of Cryo-Electron Microscopy Micrographs with Simulation-Aware Pretraining. Journal of Computational Biology, 31(6): 564-575.
|
| 567 |
+
Yang, Z.; Shi, X.; Ba, W.; Song, Z.; Luan, H.; Hu, T.; Lin, S.; Wang, J.; Zhou, S. K.; and Yan, R. 2025. Fusion of Multi-scale Heterogeneous Pathology Foundation Models for Whole Slide Image Analysis.
|
| 568 |
+
Yang, Z.; Zang, D.; Li, H.; Zhang, Z.; Zhang, F.; and Han, R. 2024c. Self-supervised noise modeling and sparsity guided electron tomography volumetric image denoising. Ultramicroscopy, 255: 113860.
|
| 569 |
+
Yang, Z.; Zhang, F.; and Han, R. 2021. Self-Supervised Cryo-Electron Tomography Volumetric Image Restoration From Single Noisy Volume With Sparsity Constraint. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 4056-4065.
|
| 570 |
+
Yao, Y.; Luo, Z.; Li, S.; Fang, T.; and Quan, L. 2018. MVS-Net: Depth Inference for Unstructured Multi-view Stereo. In Proc. Eur. Conf. Comput. Vis. (ECCV), volume 11212, 785–801.
|
| 571 |
+
Yao, Y.; Luo, Z.; Li, S.; Shen, T.; Fang, T.; and Quan, L. 2019. Recurrent MVSNet for High-Resolution Multi-View Stereo Depth Inference. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 5520-5529.
|
| 572 |
+
Yuan, Z.; Qu, X.; Qian, C.; Chen, R.; Tang, J.; Sun, L.; Chu, X.; Zhang, D.; Wang, Y.; Cai, Y.; et al. 2025a. Video-star: Reinforcing open-vocabulary action recognition with tools. arXiv preprint arXiv:2510.08480.
|
| 573 |
+
Yuan, Z.; Tang, J.; Luo, J.; Chen, R.; Qian, C.; Sun, L.; Chu, X.; Cai, Y.; Zhang, D.; and Li, S. 2025b. AutoDriveR2: Incentivizing Reasoning and Self-Reflection Capacity for VLA Model in Autonomous Driving. arXiv preprint arXiv:2509.01944.
|
| 574 |
+
Yuan, Z.; Yang, Z.; Cai, Y.; Wu, K.; Liu, M.; Zhang, D.; Jiang, H.; Li, Z.; and Wang, Z. 2025c. SED-MVS: Segmentation-Driven and Edge-Aligned Deformation Multi-View Stereo with Depth Restoration and Occlusion Constraint. IEEE Transactions on Circuits and Systems for Video Technology.
|
| 575 |
+
Wang, Y.; Zeng, Z.; Guan, T.; Yang, W.; Chen, Z.; Liu, W.; Xu, L.; and Luo, Y. 2023. Adaptive Patch Deformation for Textureless-Resilient Multi-View Stereo. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 1621-1630.
|
| 576 |
+
Zhang, D.; Chen, D.; Zhi, P.; Chen, Y.; Yuan, Z.; Li, C.; Zhou, R.; Zhou, Q.; et al. 2025a. Mapexpert: Online hd map construction with simple and efficient sparse map element
|
| 577 |
+
|
| 578 |
+
expert. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, 14745-14753.
|
| 579 |
+
Zhang, D.; Sun, J.; Hu, C.; Wu, X.; Yuan, Z.; Zhou, R.; Shen, F.; and Zhou, Q. 2025b. Pure Vision Language Action (VLA) Models: A Comprehensive Survey. arXiv preprint arXiv:2509.19012.
|
| 580 |
+
Zhang, D.; Yuan, Z.; Huang, K.; Yan, Y.; Li, C.; Nie, H.; Zhao, S.; Zhou, R.; and Zhou, Q. 2025c. AT-Drive: Exploiting Adversarial Transfer for End-to-end Autonomous Driving.
|
| 581 |
+
Zhang, D.; Yuan, Z.; Li, C.; Chen, Y.; Zhao, S.; Nie, H.; Zhou, R.; and Zhou, Q. 2025d. ADDI: A Simplified E2E Autonomous Driving Model with Distinct Experts and Implicit Interactions.
|
| 582 |
+
Zhang, D.; Zhi, P.; Yong, B.; Wang, J.-Q.; Hou, Y.; Guo, L.; Zhou, Q.; and Zhou, R. 2023. EHSS: An Efficient Hybrid-supervised Symmetric Stereo Matching Network. 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 1044-1051.
|
2401.06xxx/2401.06385/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:469afa9761a206514bba42fe7f74f7a9728f5b400f4b78c32f74b62893f1ea1a
|
| 3 |
+
size 1151969
|
2401.06xxx/2401.06385/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06387/cc0da463-5d03-4bf4-a760-ab9656de99d1_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7a13fb3816e502cfd0eb92cd466e2dd6a0b74b3d06d1a49ad8c51e591e8ba00c
|
| 3 |
+
size 21315126
|
2401.06xxx/2401.06387/full.md
ADDED
|
@@ -0,0 +1,430 @@
|
| 1 |
+
# Towards High-Quality and Efficient Speech Bandwidth Extension with Parallel Amplitude and Phase Prediction
|
| 2 |
+
|
| 3 |
+
Ye-Xin Lu, Yang Ai, Member, IEEE, Hui-Peng Du, Zhen-Hua Ling, Senior Member, IEEE
|
| 4 |
+
|
| 5 |
+
Abstract—Speech bandwidth extension (BWE) refers to widening the frequency bandwidth range of speech signals, making the extended speech sound brighter and fuller. This paper proposes a generative adversarial network (GAN) based BWE model with parallel prediction of Amplitude and Phase spectra, named AP-BWE, which achieves both high-quality and efficient wideband speech waveform generation. The proposed AP-BWE generator is entirely based on convolutional neural networks (CNNs). It features a dual-stream architecture with mutual interaction, where the amplitude stream and the phase stream communicate with each other and respectively extend the high-frequency components from the source narrowband amplitude and phase spectra. To improve the naturalness of the extended speech signals, we employ a multi-period discriminator at the waveform level and design a pair of multi-resolution amplitude and phase discriminators at the spectral level. Experimental results demonstrate that our proposed AP-BWE achieves state-of-the-art performance in terms of speech quality for BWE tasks targeting sampling rates of both $16\mathrm{kHz}$ and 48 kHz. In terms of generation efficiency, due to the all-convolutional architecture and all-frame-level operations, the proposed AP-BWE can generate $48\mathrm{kHz}$ waveform samples 292.3 times faster than real-time on a single RTX 4090 GPU and 18.1 times faster than real-time on a single CPU. Notably, to our knowledge, AP-BWE is the first to achieve the direct extension of the high-frequency phase spectrum, which is beneficial for improving the effectiveness of existing BWE methods.
|
| 6 |
+
|
| 7 |
+
Index Terms—Speech bandwidth extension, generative adversarial network, amplitude prediction, phase prediction.
|
| 8 |
+
|
| 9 |
+
# I. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
In practical speech signal transmission scenarios, limitations in communication devices or transmission channels may lead to the truncation of the frequency bandwidth of speech signals. The deficiency of high-frequency information can induce distortion, muffling, or a lack of clarity in speech. Speech bandwidth extension (BWE) aims to supplement the missing high-frequency bandwidth from the low-frequency components, thereby enhancing the quality and intelligibility of narrowband speech signals. In earlier years, the bandwidth of communication devices was extremely limited. For instance, the bandwidth of speech signals in the public switched telephone network (PSTN) is less than $4\mathrm{kHz}$ . So
|
| 12 |
+
|
| 13 |
+
The authors are with the National Engineering Research Center of Speech and Language Information Processing, University of Science and Technology of China, Hefei 230027, China (e-mail: yxlu0102@mail.ustc.edu.cn; yangai@ustc.edu.cn; redmist@mail.ustc.edu.cn; zhling@ustc.edu.cn). This work was funded by the National Nature Science Foundation of China under Grant 62301521, the Anhui Provincial Natural Science Foundation under Grant 2308085QF200, and the Fundamental Research Funds for the Central Universities under Grant WK210000033.
|
| 14 |
+
|
| 15 |
+
early BWE efforts were primarily focused on extending the bandwidth to a maximum target frequency of $8\mathrm{kHz}$ . With the advancement of communication technology, the signal bandwidth that communication devices can transmit has been widening. Therefore, recent speech BWE research has increasingly focused on extending the bandwidth to the perceptual frequency limits of the human ear (e.g., $22.05\mathrm{kHz}$ or $24\mathrm{kHz}$ ), enabling applications in high-quality mobile communication, audio remastering and enhancement, and more. Speech BWE can be applied to various speech signal processing areas, such as text-to-speech (TTS) synthesis [1], automatic speech recognition (ASR) [2], [3], speech enhancement (SE) [4], [5], and speech codec [6].
|
| 16 |
+
|
| 17 |
+
In the time domain, speech BWE can be viewed as a more strictly conditioned form of speech super-resolution (SR). Speech SR aims to increase the temporal resolution of low-resolution speech signals by generating high-frequency components, and the low-resolution inputs may still contain aliased high-frequency components. In contrast, in BWE, only the low-frequency components are preserved in the narrowband signals. Consequently, the BWE task poses greater challenges than SR. Nevertheless, the majority of SR methods are applicable to the BWE task.
|
| 18 |
+
|
| 19 |
+
Early research on BWE was predominantly based on signal processing techniques, encompassing approaches such as source-filter-based methods [4], [7], mapping-based methods [8]–[10], statistical methods [11]–[17], and so forth. Source-filter-based methods introduced the source-filter model to extend bandwidth by separately restoring high-frequency residual signals and spectral envelopes. The high-frequency residual signals are often derived by folding the spectrum of narrowband signals, while predicting high-frequency spectral envelopes presents more challenges. Mapping-based methods utilized codebook mapping or linear mapping to map lower-band speech representations to their corresponding upper-band envelopes. Additionally, statistical methods leveraged Gaussian mixture models (GMMs) and hidden Markov models (HMMs) to establish the mapping relationship between low-frequency spectral parameters and their corresponding high-frequency counterparts. Despite the effective performance achieved by these statistical methods in speech BWE, the limited modeling capability of GMMs and HMMs may lead to generating over-smoothed spectral parameters [18].
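As an aside, the spectral-folding operation mentioned above for deriving a high-frequency excitation can be illustrated with simple zero-insertion upsampling. The sketch below is a generic textbook version under that assumption, not the exact procedure used in the cited source-filter systems.

```python
# Generic illustration of spectral folding: inserting zeros between samples
# mirrors the narrowband spectrum into the previously empty upper band,
# yielding a crude high-frequency excitation signal. Illustrative only.
import numpy as np

def zero_insert_upsample(narrowband, factor=2):
    narrowband = np.asarray(narrowband, dtype=float)
    wideband = np.zeros(len(narrowband) * factor)
    wideband[::factor] = narrowband   # spectrum of this signal is the folded copy
    return wideband
```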
|
| 20 |
+
|
| 21 |
+
With the renaissance of deep learning, deep neural networks (DNNs) have shown strong modeling capability. DNN-based BWE methods can be broadly classified into two categories:
|
| 22 |
+
|
| 23 |
+
waveform-based methods and spectrum-based methods. In the waveform-based methods, neural networks were employed to learn the direct mapping from the narrowband waveforms to the wideband ones [19]–[23], in which both the amplitude and phase information were implicitly restored. Nevertheless, due to the all-sample-level operations, this category of methods still suffered from the bottleneck of low generation efficiency, especially in generating high-resolution waveforms, limiting the application of this category of methods in low computational power scenarios. In the spectrum-based methods, neural networks have been adopted to predict high-frequency amplitude-related spectral parameters. However, it's difficult to parameterize and predict the phase due to its wrapping characteristic and non-structured nature. The common practice was to replicate [24] or mirror-inverse [25]–[27] the low-frequency phase to obtain the high-frequency one, which constrained the quality of the extended wideband speech. Another approach was to use vocoders for phase recovery from the vocal-tract filter parameters [28]–[30] or mel-spectrogram [31]. These vocoder-based methods involved a two-step generation process, where the prediction errors accumulated and the generation efficiency was significantly constrained. Other methods chose to implicitly recover phase information by predicting the phase-contained spectra, e.g., short-time Fourier transform (STFT) complex spectrum [32] and modified discrete cosine transform (MDCT) spectrum [33], but they were still limited in the precise modeling and optimization of phase. Overall, existing BWE methods have yet to achieve a precise extension of the high-frequency phase, leaving room for improvement in both speech quality and generation efficiency.
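For illustration, one common rule-based phase-extension baseline referenced above (mirror-inversing the low-frequency phase) can be sketched per STFT frame as follows. This is only one plausible reading of those baselines; bin counts, cutoff handling, and sign conventions vary across the cited works.

```python
# One plausible per-frame reading of the "mirror-inverse" phase baseline:
# reflect the known low-frequency phase about the cutoff bin and negate it.
import numpy as np

def mirror_inverse_phase(low_phase):
    low_phase = np.asarray(low_phase, dtype=float)
    high_phase = -low_phase[::-1]             # mirrored and sign-inverted copy
    return np.concatenate([low_phase, high_phase])
```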
|
| 24 |
+
|
| 25 |
+
In our previous works [34], [35], we proposed a neural speech phase prediction method based on parallel estimation architecture and anti-wrapping losses. The proposed phase prediction method has been proven to be applicable to various speech-generation tasks, such as speech synthesis [36] and speech enhancement [37]. We also have tried to apply it to speech BWE by predicting the wideband phase spectra from the extended log-amplitude spectra, and the final extended waveforms were obtained through inverse STFT (iSTFT). However, in our preliminary experiments, we found that this method still faced the same issue of error accumulation and two-step generation as vocoder-based methods, and the low-frequency phase information was not utilized. Therefore, integrating phase prediction into end-to-end speech BWE might be a preferable option.
|
| 26 |
+
|
| 27 |
+
Hence, in this paper, we propose AP-BWE, a generative adversarial network (GAN) based end-to-end speech BWE model that achieves high-quality and efficient speech BWE with the parallel extension of amplitude and phase spectra. The generator features a dual-stream architecture, with each stream incorporating ConvNeXt [38] as its foundational backbone. With narrowband log-amplitude and phase spectra as input conditions respectively, the amplitude stream predicts the residual high-frequency log-amplitude spectrum, while the phase stream directly predicts the wrapped wideband phase spectrum. Additionally, connections are established between these two streams, which have been proven to be crucial for phase prediction [39]. To further enhance the subjective
perceptual quality of the extended speech, we first employ the multi-period discriminator (MPD) [40] at the waveform level. Subsequently, inspired by the multi-resolution discriminator proposed by Jang et al. [41] to alleviate the spectral oversmoothing, we respectively design a multi-resolution amplitude discriminator (MRAD) and a multi-resolution phase discriminator (MRPD) at the spectral level, aiming to enforce the generator to produce more realistic amplitude and phase spectra. Experimental results demonstrate that our proposed AP-BWE surpasses state-of-the-art (SOTA) BWE methods in terms of speech quality for target sampling rates of both 16 kHz and 48 kHz. It's worth noting that while ensuring high generation quality, our model exhibits significantly faster-than-real-time generation efficiency. For waveform generation at a sampling rate of 48 kHz, our model achieves a generation speed of up to 292.3 times real-time on a single RTX 4090 GPU and 18.1 times real-time on a single CPU. Compared to the SOTA speech BWE methods, we can also achieve at least a fourfold acceleration on both GPU and CPU.
The main contributions of this work are twofold. On the one hand, we propose to achieve speech BWE with parallel modeling and optimization of amplitude and phase spectra, which effectively avoids the amplitude-phase compensation issues present in previous works, significantly enhancing the quality of the extended speech. Additionally, benefiting from the parallel phase estimation architecture and anti-wrapping phase losses, we achieve the precise prediction of the wideband phase spectrum. Through the multi-resolution discrimination on the phase spectra, we further enhance the realism of the extended phase at multiple resolutions. To the best of our knowledge, we are the first to achieve the direct extension of the phase spectrum. On the other hand, with the all-convolutional architecture and all-frame-level operations, our approach achieves a win-win situation in terms of both generation quality and efficiency.
The rest of this paper is organized as follows. Section II briefly reviews previous waveform-based and spectrum-based BWE methods. In Section III, we give details of our proposed AP-BWE framework. The experimental setup is presented in Section IV, while Section V gives the results and analysis. Finally, we give conclusions in Section VI.
# II. RELATED WORK
# A. Waveform-based BWE Methods
Waveform-based BWE methods aim to directly predict wideband waveforms from narrowband ones without any frequency domain transformation. AudioUNet [19] proposed to use a U-Net [42] based architecture to reconstruct wideband waveforms without involving specialized audio processing techniques. TFiLM [21] and AFiLM [22] proposed to use recurrent neural networks (RNNs) and the self-attention mechanism [43] to capture the long-term dependencies, respectively. Wang et al. [23] proposed to use an autoencoder convolutional neural network (AECNN) based architecture and cross-domain losses to predict and optimize the wideband waveforms, respectively. However, the operations in the aforementioned methods were all performed at the sample-point level, leading

Fig. 1. The overall structure of the proposed AP-BWE. The Abs(·) and Angle(·) denote the amplitude and phase calculation functions, while $\log (\cdot)$ and $\exp (\cdot)$ denote the logarithmic and exponential functions, respectively. The Arctan2 refers to the two-argument arc-tangent function.
to relatively lower generation efficiency when compared to spectrum-based methods with frame-level operations.
Recently, diffusion probabilistic models [44], [45] have been successfully applied to audio processing tasks. They have been effectively utilized in speech BWE [46]-[48] by conditioning the network of the noise predictor with narrowband waveforms, with remarkably high perceptual quality. The diffusion-based methods decomposed the BWE process into two sub-processes: the forward process, and the reverse process. In the forward process, Gaussian noises were incrementally added to the narrowband waveforms to obtain whitened latent variables. Conversely, the wideband waveforms were gradually recovered by removing Gaussian noises step by step in the reverse process. While these diffusion-based BWE methods have demonstrated promising performance, they still required numerous time steps in the reverse process for waveform reconstruction, thereby imposing significant constraints on generation efficiency. The comparison between our proposed AP-BWE and these diffusion-based methods will be presented in Section V-B.
# B. Spectrum-based BWE Methods
Spectrum-based BWE methods aim to restore high-frequency spectral parameters for reconstructing wideband waveforms. However, as these spectral parameters were mostly amplitude-related, recovering high-frequency phase information remains the primary challenge. The most primitive method involved replicating or mirror-inversing the low-frequency phase, but such an approach introduces significant errors. Another method entailed the use of a vocoder to recover the phase from the extended amplitude-related spectrum. For instance, NVSR [31] divided the BWE process into two stages: 1) wideband mel-spectrogram prediction stage; 2) vocoder-based waveform synthesis and post-processing stage. Initially, NVSR employed ResUNet [49] to predict wideband mel-spectrograms from narrowband ones. Subsequently, these predicted mel-spectrograms were fed into a neural vocoder to reconstruct high-resolution waveforms. Finally, the low-frequency components of the high-resolution waveforms were replaced with the original low-frequency ones.
Other methods involved recovering phase information from the phase-contained spectrum. AERO [32] directly predicted
the wideband short-time complex spectrum from the narrowband one, implicitly recovering both amplitude and phase. However, the lack of an explicit optimization method for the phase can lead to the compensation effect [50] between amplitude and phase, thereby impacting the quality of generated waveforms. mdctGAN [33] utilized the MDCT to encode both amplitude and phase information to a real-valued MDCT spectrum. While successfully avoiding additional phase prediction through the prediction of the wideband MDCT spectrum, the performance of the MDCT spectrum in waveform generation tasks has been demonstrated to be significantly weaker than that of the STFT spectrum [51], which may be attributed to the advantageous impact of an over-complete Fourier basis on enhancing training stability [52].
Both waveform-based and spectrum-based methods mentioned above failed to achieve precise recovery of the high-frequency phase, thereby inevitably limiting the quality of the extended speech. Building upon our previous work on phase prediction [34], we preliminarily tried to apply it to the BWE task by predicting the wideband phase spectrum from the extended log-amplitude spectrum. However, we found that this two-stage prediction approach failed to fully leverage the low-frequency phase information in narrowband waveforms, and its prediction errors accumulate across stages. Therefore, in this study, we opted to integrate the phase prediction method into the end-to-end speech BWE.
# III. METHODOLOGY
The overview of the proposed AP-BWE is illustrated in Fig. 1. Given the narrowband waveform $\pmb{x} \in \mathbb{R}^L$ as input, AP-BWE aims to extend its bandwidth in the spectral domain as well as increase its resolution in the time domain to predict the wideband waveform $\pmb{y} \in \mathbb{R}^{nL}$ . Here, $n$ refers to the sampling rate ratio between wideband and narrowband waveforms (i.e., extension factor), while $nL$ and $L$ represent the length of the wideband and narrowband waveforms, respectively. Specifically, the narrowband waveform $\pmb{x}$ is first interpolated $n$ times using the sinc filter to match the temporal resolution of $\pmb{y}$ . Subsequently, the narrowband amplitude spectrum $X_{a} \in \mathbb{R}^{T \times F}$ and wrapped phase spectrum $X_{p} \in \mathbb{R}^{T \times F}$ are extracted from the interpolated narrowband waveform through STFT, where $T$ and $F$ denote the number of temporal frames and frequency bins, respectively. Through the mutual coupling of
the amplitude stream and the phase stream, AP-BWE predicts the wideband log-amplitude spectrum $\log (\hat{\mathbf{Y}}_a)\in \mathbb{R}^{T\times F}$ and the wideband wrapped phase spectrum $\hat{\mathbf{Y}}_p\in \mathbb{R}^{T\times F}$ separately from $\log (\mathbf{X}_a)$ and $\mathbf{X}_{p}$. Eventually, the wideband waveform $\hat{\pmb{y}}\in \mathbb{R}^{nL}$ is reconstructed through iSTFT. The details of the model structure and training criteria are described as follows.
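To make this front end concrete, below is a minimal sketch (assuming PyTorch and torchaudio; the function name and the 8 kHz to 16 kHz setting are illustrative, not the authors' code) of how a narrowband waveform can be sinc-interpolated and converted into the log-amplitude and wrapped phase spectra that condition the two streams.

```python
import torch
import torchaudio.functional as AF

def prepare_inputs(x_nb, nb_sr=8000, wb_sr=16000, n_fft=1024, hop=80, win=320):
    """Sinc-interpolate a narrowband waveform and extract log-amplitude / wrapped phase spectra."""
    # Windowed-sinc interpolation up to the target sampling rate.
    x_up = AF.resample(x_nb, orig_freq=nb_sr, new_freq=wb_sr)
    spec = torch.stft(x_up, n_fft=n_fft, hop_length=hop, win_length=win,
                      window=torch.hann_window(win), return_complex=True)  # (F, T) complex
    log_amp = torch.log(spec.abs().clamp(min=1e-5))   # log(X_a)
    phase = torch.angle(spec)                         # wrapped X_p in (-pi, pi]
    return log_amp, phase

log_amp, phase = prepare_inputs(torch.randn(4000))    # 0.5 s at 8 kHz -> spectra of shape (513, 101)
```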
# A. Model Structure
1) Generator: We denote the generator of our proposed AP-BWE as $G$, and $\hat{\pmb{y}} = G(\pmb{x})$. As depicted in Fig. 1, the generator $G$ comprises a dual-stream architecture, which is entirely based on convolutional neural networks. Both the amplitude and phase streams utilize ConvNeXt [38] as the foundational backbone due to its strong modeling capability. The original two-dimensional convolution-based ConvNeXt is modified into a one-dimensional convolution-based version and integrated into our model. As depicted in Fig. 2, the ConvNeXt block is a cascade of a large-kernel-sized depth-wise convolutional layer and a pair of point-wise convolutional layers that respectively expand and restore feature dimensions. Layer normalization [53] and Gaussian error linear unit (GELU) activation [54] are interleaved between the layers. Finally, a residual connection is added before the output to prevent the gradient from vanishing.
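A minimal PyTorch sketch of such a one-dimensional ConvNeXt block (kernel size 7 and expansion factor 3, following Fig. 2; class and variable names are ours) is given below.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock1d(nn.Module):
    """Depth-wise conv -> LayerNorm -> point-wise expansion -> GELU -> point-wise restoration -> residual."""
    def __init__(self, dim, expansion=3):
        super().__init__()
        self.dwconv = nn.Conv1d(dim, dim, kernel_size=7, padding=3, groups=dim)  # large-kernel depth-wise conv
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, expansion * dim)   # 1x1 point-wise expansion
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)   # 1x1 point-wise restoration

    def forward(self, x):                   # x: (batch, channels, frames)
        residual = x
        x = self.dwconv(x).transpose(1, 2)  # (batch, frames, channels) for LayerNorm / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        return residual + x.transpose(1, 2)

out = ConvNeXtBlock1d(dim=512)(torch.randn(2, 512, 101))  # shape preserved: (2, 512, 101)
```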
The amplitude stream comprises a convolutional layer, $N$ ConvNeXt blocks, and another convolutional layer, with the aim of predicting the residual high-frequency log-amplitude spectrum and adding it to the narrowband $\log (\mathbf{X}_a)$ to obtain the wideband log-amplitude spectrum $\log (\hat{\mathbf{Y}}_a)$. Differing slightly from the amplitude stream, the phase stream incorporates two output convolutional layers to respectively predict the pseudo-real part component $\hat{\mathbf{Y}}_p^{(r)}$ and the pseudo-imaginary part component $\hat{\mathbf{Y}}_p^{(i)}$, and further calculates the wrapped phase spectrum $\hat{\mathbf{Y}}_p$ from them with the two-argument arc-tangent (Arctan2) function:
$$
\hat{\mathbf{Y}}_{p} = \arctan\left(\frac{\hat{\mathbf{Y}}_{p}^{(i)}}{\hat{\mathbf{Y}}_{p}^{(r)}}\right) - \frac{\pi}{2} \cdot \mathrm{Sgn}^{*}(\hat{\mathbf{Y}}_{p}^{(i)}) \cdot \left[\mathrm{Sgn}^{*}(\hat{\mathbf{Y}}_{p}^{(r)}) - 1\right], \tag{1}
$$
where $\arctan (\cdot)$ denotes the arc-tangent function, and $\mathrm{Sgn}^*(x)$ is a redefined symbolic function: $\mathrm{Sgn}^*(x) = 1$ when $x\geq 0$, and $\mathrm{Sgn}^*(x) = -1$ otherwise. Additionally, connections are established between the two streams for information exchange, which is crucial for phase prediction [39]. Finally, the predicted wideband waveform $\hat{\pmb{y}}\in \mathbb{R}^{nL}$ is reconstructed from $\hat{\mathbf{Y}}_a$ and $\hat{\mathbf{Y}}_p$ using iSTFT:
$$
\begin{aligned} \hat{\pmb{y}} &= \mathrm{iSTFT}\left(\hat{\mathbf{Y}}_{a} \cdot e^{j\hat{\mathbf{Y}}_{p}}\right) \\ &= \mathrm{iSTFT}\left(\hat{\mathbf{Y}}_{r} + j\hat{\mathbf{Y}}_{i}\right), \end{aligned} \tag{2}
$$
where $\hat{\mathbf{Y}}_r = \hat{\mathbf{Y}}_a\cdot \cos (\hat{\mathbf{Y}}_p)\in \mathbb{R}^{T\times F}$ and $\hat{\mathbf{Y}}_i = \hat{\mathbf{Y}}_a\cdot \sin (\hat{\mathbf{Y}}_p)\in$ $\mathbb{R}^{T\times F}$ denote the real and imaginary parts of the extended short-time complex spectrum, respectively.
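A compact sketch of Eqs. (1) and (2) is shown below (PyTorch; `torch.atan2` is numerically equivalent to the Arctan2 formulation of Eq. (1), and the STFT settings are the ones assumed above).

```python
import torch

def wrapped_phase(p_real, p_imag):
    # Eq. (1): two-argument arc-tangent of the pseudo-imaginary and pseudo-real components,
    # yielding a phase spectrum wrapped to (-pi, pi].
    return torch.atan2(p_imag, p_real)

def reconstruct(log_amp, phase, n_fft=1024, hop=80, win=320, length=None):
    # Eq. (2): combine the extended amplitude and wrapped phase into a complex spectrum, then iSTFT.
    amp = torch.exp(log_amp)
    spec = torch.complex(amp * torch.cos(phase), amp * torch.sin(phase))   # Y_r + j * Y_i
    return torch.istft(spec, n_fft=n_fft, hop_length=hop, win_length=win,
                       window=torch.hann_window(win), length=length)
```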
2) Discriminator: Directly predicting amplitude and phase and then reconstructing the speech waveform through iSTFT can result in over-smoothed spectral parameters, manifesting as a robotic or muffled quality in the reconstructed waveforms. To this end, we utilize discriminators defined both in the

Fig. 2. Details of the ConvNeXt block [38]: Each ConvNeXt block consists of a $7 \times 1$ depth-wise convolution, followed by layer normalization, a $1 \times 1$ point-wise convolution for dimensionality projection with an expansion factor of 3, a GELU activation layer, and another $1 \times 1$ point-wise convolution for dimensionality restoration followed by residual connection.

Fig. 3. Details of the discriminators. The parameters inside the parentheses for each convolutional layer respectively represent the number of channels, kernel size, and stride.
spectral domain and time domain to guide generator $G$ in generating spectra and waveforms that closely resemble real ones. Firstly, the speech signal is composed of sinusoidal components at various frequencies, some of whose frequency bands are generated through BWE. Since the statistical characteristics of speech signals vary across frequency bands, we employ an MPD [40] to capture periodic patterns, with the aim of matching the natural wideband speech across multiple frequency bands. Moreover, since the statistical characteristics of amplitude and phase also differ across frequency bands, and the sole utilization of MPD cannot cover all frequency bands, we consequently define discriminators on both amplitude and phase spectra. Drawing inspiration from the multi-resolution discriminator [41], we respectively introduce MRAD and MRPD, with the aim of capturing full-band amplitude and phase patterns at various resolutions. The details of MPD, MRAD, and MRPD are described as follows.
- Multi-Period Discriminator: As depicted in Fig. 3, the MPD contains multiple sub-discriminators, each of which comprises a waveform two-dimensional reshaping module, multiple convolutional layers with an increasing number of channels, and an output convolutional layer. Firstly, the reshaping module reshapes the one-dimensional raw waveform into a two-dimensional format by sampling with a period $p$ , which is set to prime numbers to prevent overlaps. Subsequently, the reshaped waveform undergoes multiple
convolutional layers with leaky rectified linear unit (ReLU) activation [55] before finally producing the discriminative score, which indicates the likelihood that the input data is real.
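The period-based reshaping that precedes these convolutional stacks can be sketched as follows (a simplified PyTorch illustration; the padding choice and the convolutional stack itself are omitted).

```python
import torch.nn.functional as F

def reshape_by_period(wav, period):
    """Fold a waveform (batch, 1, T) into (batch, 1, T // period, period) for one MPD sub-discriminator."""
    b, c, t = wav.shape
    if t % period:
        wav = F.pad(wav, (0, period - t % period), mode="reflect")  # right-pad so T divides the period
        t = wav.shape[-1]
    return wav.view(b, c, t // period, period)  # 2-D convolutions then scan across periodic samples
```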
- Multi-Resolution Discriminators: As depicted in Fig. 3, both MRAD and MRPD share a unified structure. They both consist of multiple sub-discriminators, each comprising a spectrum extraction module and multiple convolutional layers interleaved with leaky ReLU activation to capture features along both temporal and frequency axes. The raw waveform first undergoes an initial transformation into amplitude or phase spectra using STFT with diverse parameter sets, encompassing FFT point number, window size, and hop size. Subsequently, the multi-resolution amplitude or phase spectra are processed through multiple convolutional layers to yield the discriminative score.
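As a sketch of this shared front end (assuming the parameter triples are ordered as FFT size, hop size, and window size; see Section IV-B for the exact configuration):

```python
import torch

RESOLUTIONS = [(512, 128, 512), (1024, 256, 1024), (2048, 512, 2048)]  # illustrative (n_fft, hop, win)

def multi_resolution_spectra(wav):
    """Amplitude and wrapped phase spectra at several STFT resolutions, as consumed by MRAD / MRPD."""
    outputs = []
    for n_fft, hop, win in RESOLUTIONS:
        spec = torch.stft(wav, n_fft, hop_length=hop, win_length=win,
                          window=torch.ones(win), return_complex=True)  # rectangular window
        outputs.append((spec.abs(), torch.angle(spec)))
    return outputs
```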
# B. Training Criteria
1) Spectrum-based Losses: We first define loss functions in the spectral domain to capture time-frequency distributions and generate realistic spectra.
- Amplitude Spectrum Loss: The amplitude spectrum loss is the mean square error (MSE) of the wideband log-amplitude spectrum $\log (\mathbf{Y}_a)\in \mathbb{R}^{T\times F}$ and the extended log-amplitude spectrum $\log (\hat{\mathbf{Y}}_a)$ , which is defined as:
$$
\mathcal{L}_{A} = \frac{1}{TF}\, \mathbb{E}_{\left(\mathbf{Y}_{a}, \hat{\mathbf{Y}}_{a}\right)} \left[ \left\| \log\left(\frac{\mathbf{Y}_{a}}{\hat{\mathbf{Y}}_{a}}\right) \right\|_{\mathrm{F}}^{2} \right]. \tag{3}
$$
- Phase Spectrum Loss: Considering the phase wrapping issue, we follow our previous work [34] and use three anti-wrapping losses to explicitly optimize the wrapped phase spectrum, which are respectively defined as the mean absolute error (MAE) between the anti-wrapped wideband and extended instantaneous phase (IP) spectra $\mathbf{Y}_p$ and $\hat{\mathbf{Y}}_p$ , group delay (GD) spectra $\mathbf{Y}_{GD}$ and $\hat{\mathbf{Y}}_{GD}$ , and instantaneous angular frequency (IAF) spectra $\mathbf{Y}_{IAF}$ and $\hat{\mathbf{Y}}_{IAF}$ :
$$
\mathcal{L}_{IP} = \frac{1}{TF}\, \mathbb{E}_{\left(\mathbf{Y}_{p}, \hat{\mathbf{Y}}_{p}\right)} \left[ \left\| f_{AW}\left(\mathbf{Y}_{p} - \hat{\mathbf{Y}}_{p}\right) \right\|_{1} \right], \tag{4}
$$

$$
\mathcal{L}_{GD} = \frac{1}{TF}\, \mathbb{E}_{\left(\mathbf{Y}_{GD}, \hat{\mathbf{Y}}_{GD}\right)} \left[ \left\| f_{AW}\left(\mathbf{Y}_{GD} - \hat{\mathbf{Y}}_{GD}\right) \right\|_{1} \right], \tag{5}
$$

$$
\mathcal{L}_{IAF} = \frac{1}{TF}\, \mathbb{E}_{\left(\mathbf{Y}_{IAF}, \hat{\mathbf{Y}}_{IAF}\right)} \left[ \left\| f_{AW}\left(\mathbf{Y}_{IAF} - \hat{\mathbf{Y}}_{IAF}\right) \right\|_{1} \right], \tag{6}
$$
where $(\mathbf{Y}_{GD},\hat{\mathbf{Y}}_{GD}) = (\Delta_{DF}\mathbf{Y}_p,\Delta_{DF}\hat{\mathbf{Y}}_p)$ and $(\mathbf{Y}_{IAF},\hat{\mathbf{Y}}_{IAF}) = (\Delta_{DT}\mathbf{Y}_p,\Delta_{DT}\hat{\mathbf{Y}}_p)$ . The $\Delta_{DF}$ and $\Delta_{DT}$ represent the differential operator along the frequency and temporal axes, respectively. The $f_{AW}(x)$ denotes the anti-wrapping function, which is defined as $f_{AW}(x) = |x - 2\pi \cdot \mathrm{round}\left(\frac{x}{2\pi}\right)|, x \in \mathbb{R}$ . The final phase spectrum loss is the sum of these three anti-wrapping losses:
$$
\mathcal{L}_{P} = \mathcal{L}_{IP} + \mathcal{L}_{GD} + \mathcal{L}_{IAF}. \tag{7}
$$
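A minimal sketch of the anti-wrapping function and the phase loss of Eqs. (4)-(7) (PyTorch; spectra are assumed to be laid out as (T, F), and the expectation is replaced by a mean over the given spectra):

```python
import torch

def f_aw(x):
    # f_AW(x) = | x - 2*pi*round(x / (2*pi)) |: maps any phase error to [0, pi] to avoid wrapping ambiguity
    return torch.abs(x - 2 * torch.pi * torch.round(x / (2 * torch.pi)))

def phase_spectrum_loss(Y_p, Y_p_hat):
    """L_P = L_IP + L_GD + L_IAF for (T, F) wrapped phase spectra."""
    ip  = f_aw(Y_p - Y_p_hat).mean()                                            # Eq. (4)
    gd  = f_aw(torch.diff(Y_p, dim=1) - torch.diff(Y_p_hat, dim=1)).mean()      # Eq. (5), frequency-axis difference
    iaf = f_aw(torch.diff(Y_p, dim=0) - torch.diff(Y_p_hat, dim=0)).mean()      # Eq. (6), time-axis difference
    return ip + gd + iaf                                                        # Eq. (7)
```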
- Complex Spectrum Loss: To further optimize the amplitude and phase within the complex spectrum and enhance the spectral consistency of iSTFT, we define the
MSE loss between the wideband short-time complex spectrum $(\mathbf{Y}_r,\mathbf{Y}_i)\in \mathbb{R}^{T\times F\times 2}$ and the extended short-time complex spectrum $(\hat{\mathbf{Y}}_r,\hat{\mathbf{Y}}_i)\in \mathbb{R}^{T\times F\times 2}$ , as well as the MSE loss between $(\hat{\mathbf{Y}}_r,\hat{\mathbf{Y}}_i)$ and the short-time complex spectrum $(\hat{\mathbf{Y}}_r',\hat{\mathbf{Y}}_i')\in \mathbb{R}^{T\times F\times 2}$ re-extracted from the extended waveform $\hat{\pmb{y}}$ . So the complex spectrum loss is defined as:
$$
\begin{aligned} \mathcal{L}_{C} &= \frac{1}{TF}\, \mathbb{E}_{\left(\mathbf{Y}_{r}, \mathbf{Y}_{i}\right), \left(\hat{\mathbf{Y}}_{r}, \hat{\mathbf{Y}}_{i}\right)} \left[ \left\| \left(\mathbf{Y}_{r}, \mathbf{Y}_{i}\right) - \left(\hat{\mathbf{Y}}_{r}, \hat{\mathbf{Y}}_{i}\right) \right\|_{\mathrm{F}}^{2} \right] \\ &\quad + \frac{1}{TF}\, \mathbb{E}_{\left(\hat{\mathbf{Y}}_{r}, \hat{\mathbf{Y}}_{i}\right), \left(\hat{\mathbf{Y}}_{r}^{\prime}, \hat{\mathbf{Y}}_{i}^{\prime}\right)} \left[ \left\| \left(\hat{\mathbf{Y}}_{r}, \hat{\mathbf{Y}}_{i}\right) - \left(\hat{\mathbf{Y}}_{r}^{\prime}, \hat{\mathbf{Y}}_{i}^{\prime}\right) \right\|_{\mathrm{F}}^{2} \right]. \end{aligned} \tag{8}
$$
- Final Spectral Loss: The final spectral loss is the linear combination of the spectrum-based losses mentioned above:
$$
\mathcal{L}_{S} = \lambda_{A}\mathcal{L}_{A} + \lambda_{P}\mathcal{L}_{P} + \lambda_{C}\mathcal{L}_{C}, \tag{9}
$$
where $\lambda_{A},\lambda_{P}$ , and $\lambda_C$ are hyper-parameters and we set them to 45, 100, and 45, respectively.
2) GAN-based Losses:
- GAN Loss: For brevity, we represent MPD, MRAD, and MRPD collectively as $D$ . The discriminator $D$ and generator $G$ are trained alternately. The discriminator is trained to classify wideband samples as 1 and samples extended by the generator as 0; conversely, the generator is trained to generate samples that approach being classified as 1 by the discriminator as closely as possible. We use the hinge GAN loss [56] which is defined as:
$$
\begin{aligned} \mathcal{L}_{adv}(D; G) &= \mathbb{E}_{\boldsymbol{x}}\left[ \max(0, 1 + D(G(\boldsymbol{x}))) \right] \\ &\quad + \mathbb{E}_{\boldsymbol{y}}\left[ \max(0, 1 - D(\boldsymbol{y})) \right], \end{aligned} \tag{10}
$$

$$
\mathcal{L}_{adv}(G; D) = \mathbb{E}_{\boldsymbol{x}}\left[ \max(0, 1 - D(G(\boldsymbol{x}))) \right]. \tag{11}
$$
- Feature Matching Loss: To encourage the generator to produce samples that not only fool the discriminator but also match the features of real samples at multiple levels of abstraction, we define the feature matching loss [57] between the features extracted from the natural wideband waveforms and those from the extended waveforms at certain intermediate layers of the discriminator as follows:
$$
\mathcal{L}_{FM}(G; D) = \mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y})}\left[ \sum_{i=1}^{M} \frac{1}{N_{i}} \left\| D^{i}(\boldsymbol{y}) - D^{i}(G(\boldsymbol{x})) \right\|_{1} \right], \tag{12}
$$
where $M$ denotes the number of layers in the discriminator, and $D^{i}$ and $N_{i}$ denote the features and the number of features in the $i$-th layer of the discriminator, respectively.
3) Final Loss: Since the discriminator $D$ is a set of sub-discriminators of MPD, MRAD, and MRPD, the final losses of the generator and discriminator are defined as:
$$
\mathcal{L}_{G} = \sum_{k=1}^{K} \left[ \lambda_{adv}\mathcal{L}_{adv}(G; D_{k}) + \lambda_{FM}\mathcal{L}_{FM}(G; D_{k}) \right] + \lambda_{S}\mathcal{L}_{S}, \tag{13}
$$

$$
\mathcal{L}_{D} = \sum_{k=1}^{K} \mathcal{L}_{adv}\left(D_{k}; G\right), \tag{14}
$$
where $K$ denotes the number of sub-discriminators, and $D_{k}$ denotes the $k$-th sub-discriminator in MPD, MRAD, and MRPD. $\lambda_{adv}$ , $\lambda_{FM}$ , and $\lambda_{S}$ are hyper-parameters, and in all our experiments we set $\lambda_{S} = 1$ . For MPD, we set $\lambda_{adv} = 1$ and $\lambda_{FM} = 1$ , while for MRAD and MRPD, we set $\lambda_{adv} = 0.1$ and $\lambda_{FM} = 0.1$ .
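To illustrate how these terms combine, the following sketch (PyTorch; the dictionary layout and variable names are illustrative of Eqs. (10)-(14), not the authors' training code) computes the hinge adversarial, feature matching, and final generator losses.

```python
import torch

def disc_adv_loss(d_real, d_fake):
    # Eq. (10): hinge loss for one sub-discriminator
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

def gen_adv_loss(d_fake):
    # Eq. (11)
    return torch.relu(1.0 - d_fake).mean()

def feature_matching_loss(feats_real, feats_fake):
    # Eq. (12): L1 distance between intermediate discriminator features
    return sum(torch.mean(torch.abs(r - f)) for r, f in zip(feats_real, feats_fake))

def generator_loss(sub_outputs, spectral_loss, lambda_s=1.0):
    # Eq. (13): sub_outputs holds, per sub-discriminator, its fake score, features, and weights
    # (w_adv = w_fm = 1 for MPD sub-discriminators, 0.1 for MRAD / MRPD ones).
    total = lambda_s * spectral_loss
    for o in sub_outputs:
        total = total + o["w_adv"] * gen_adv_loss(o["d_fake"]) \
                      + o["w_fm"] * feature_matching_loss(o["feats_real"], o["feats_fake"])
    return total
```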
# IV. EXPERIMENTAL SETUP
# A. Data Configuration
We trained all models on the VCTK-0.92 dataset [58], which contains approximately 44 hours of speech recordings from 110 speakers with diverse accents. Adhering to the data preparation approach adopted in previous speech BWE studies [31], [32], [46]–[48], we exclusively utilized the mic1-microphone data and excluded speakers $p280$ and $p315$ due to technical issues. Among the remaining 108 speakers, the last 8 were allocated for testing, while the remaining 100 were used for training. Given the historical focus of early speech BWE methods on a sampling rate of $16\mathrm{kHz}$ and the contemporary emphasis on higher target sampling rates (e.g., $44.1\mathrm{kHz}$ and $48\mathrm{kHz}$ ) in recent methods, we employed the original VCTK-0.92 dataset with a $48\mathrm{kHz}$ sampling rate for high-sampling-rate BWE experiments. Subsequently, we downsampled the VCTK-0.92 dataset to $16\mathrm{kHz}$ for low-sampling-rate BWE experiments.
To generate pairs of wideband and narrowband speech signals, we employed a sinc filter to eliminate high-frequency components in the speech signals above a specified bandwidth. This process retained only the low-frequency components, ensuring no aliasing occurred. For experiments targeting a 16 kHz sampling rate, we configured the downsampling rate $n$ to 2, 4, and 8, corresponding to the extension from 8 kHz, 4 kHz, and 2 kHz to 16 kHz, respectively. In experiments aiming for a 48 kHz sampling rate, we set the downsampling rate $n$ to 2, 3, 4, and 6, denoting the extension from 24 kHz, 16 kHz, 12 kHz, and 8 kHz to 48 kHz, respectively.
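A sketch of this pairing step (assuming torchaudio, whose `resample` applies a windowed-sinc anti-aliasing low-pass before decimation; the path and the extension factor are placeholders):

```python
import torchaudio
import torchaudio.functional as AF

def make_training_pair(path, target_sr=16000, factor=4):
    """Return (narrowband, wideband) waveforms, e.g. 4 kHz / 16 kHz for an extension factor n = 4."""
    wav, sr = torchaudio.load(path)
    if sr != target_sr:
        wav = AF.resample(wav, sr, target_sr)                  # wideband reference
    narrow = AF.resample(wav, target_sr, target_sr // factor)  # sinc low-pass + decimation, no aliasing
    return narrow, wav
```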
# B. Model Details
We used the same configuration for experiments with target sampling rates of $16\mathrm{kHz}$ and $48\mathrm{kHz}$ . For training our proposed AP-BWE model, all the audio clips underwent silence trimming with the VCTK silence labels ${}^{1}$ and were sliced into 8000-sample-point segments. To extract the amplitude and phase spectra from raw waveforms, we used STFT with an FFT size of 1024, a Hanning window of 320 sample points, and a hop size of 80 sample points. Thus, for the training segments, the number of frequency bins $F$ is 513 and the number of temporal frames $T$ is 101.
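These shapes can be verified with a quick check (a sketch assuming PyTorch's default centered STFT):

```python
import torch

seg = torch.randn(8000)                                   # one 8000-sample training segment
spec = torch.stft(seg, n_fft=1024, hop_length=80, win_length=320,
                  window=torch.hann_window(320), return_complex=True)
print(spec.shape)  # torch.Size([513, 101]): F = 1024/2 + 1, T = 8000/80 + 1
```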
For the generator, the number of ConvNeXt blocks $N$ was set to 8. The period $p$ for each sub-discriminator in the MPD was configured as 2, 3, 5, 7, and 11. In the case of MRAD and MRPD, the FFT point numbers, rectangular window sizes, and hop sizes of the STFT parameter sets were set to [512, 128, 512], [1024, 256, 1024], and [2048, 512, 2048] for the three sub-discriminators, respectively. Both the generator and
discriminator were trained until 500k steps using the AdamW optimizer [59], with $\beta_{1} = 0.8$ , $\beta_{2} = 0.99$ , and weight decay $\lambda = 0.01$ . The learning rate was set initially to $2 \times 10^{-4}$ and scheduled to decay with a factor of 0.999 at every epoch.
# C. Evaluation Metrics
1) Metrics on Speech Quality: We comprehensively evaluated the quality of the extended speech signals using metrics defined on the amplitude spectra, phase spectra, and reconstructed speech waveforms, including:
- Log-Spectral Distance (LSD): LSD is a commonly used objective metric in the BWE task. Given the wideband and extended speech waveform $\pmb{y}$ and $\hat{\pmb{y}}$ , their corresponding amplitude spectra $\mathbf{Y}_a \in \mathbb{R}^{T \times F}$ and $\hat{\mathbf{Y}}_a \in \mathbb{R}^{T \times F}$ were first extracted using STFT with the FFT point number of 2048, Hanning window size of 2048, and hop size of 512. Then the LSD is defined as:
$$
\mathrm{LSD} = \frac{1}{T} \sum_{t=1}^{T} \sqrt{\frac{1}{F} \sum_{f=1}^{F} \left( \log_{10}\left( \frac{\mathbf{Y}_{a}[t,f]}{\hat{\mathbf{Y}}_{a}[t,f]} \right) \right)^{2}}. \tag{15}
$$
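A direct implementation of Eq. (15) can look like the following sketch (PyTorch; a small epsilon is added for numerical safety, which the original definition does not specify):

```python
import torch

def lsd(Y_a, Y_a_hat, eps=1e-8):
    """Log-spectral distance over (T, F) amplitude spectra, Eq. (15)."""
    sq = torch.log10((Y_a + eps) / (Y_a_hat + eps)) ** 2
    return torch.sqrt(sq.mean(dim=1)).mean()   # RMS over frequency bins, averaged over frames
```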
- Anti-Wrapping Phase Distance (AWPD): To assess the model's capability of recovering the high-frequency phase, we defined three anti-wrapping phase metrics on the basis of the anti-wrapping losses defined in Eqs. (4)-(6), evaluating the instantaneous error of the extended phase as well as its continuity in both the temporal and frequency domains:
$$
\mathrm{AWPD}_{IP} = \frac{1}{T} \sum_{t=1}^{T} \sqrt{\frac{1}{F} \sum_{f=1}^{F} f_{AW}^{2}\left( \mathbf{Y}_{p}[t,f] - \hat{\mathbf{Y}}_{p}[t,f] \right)}, \tag{16}
$$

$$
\mathrm{AWPD}_{GD} = \frac{1}{T} \sum_{t=1}^{T} \sqrt{\frac{1}{F} \sum_{f=1}^{F} f_{AW}^{2}\left( \mathbf{Y}_{GD}[t,f] - \hat{\mathbf{Y}}_{GD}[t,f] \right)}, \tag{17}
$$

$$
\mathrm{AWPD}_{IAF} = \frac{1}{T} \sum_{t=1}^{T} \sqrt{\frac{1}{F} \sum_{f=1}^{F} f_{AW}^{2}\left( \mathbf{Y}_{IAF}[t,f] - \hat{\mathbf{Y}}_{IAF}[t,f] \right)}, \tag{18}
$$
where all the spectra are extracted using the same STFT parameters as those used in LSD.
- Virtual Speech Quality Objective Listener (ViSQOL): To assess the overall perceived audio quality of the extended speech signals in an objective manner, we employed ViSQOL [60], which uses a spectral-temporal measure of similarity between a reference and a test speech signal to produce a mean opinion score - listening quality objective (MOS-LQO) score. For the audio mode of ViSQOL at a required sampling rate of $48\mathrm{kHz}$ , the MOS-LQO score ranges from 1 to 4.75, the higher the better. For the speech mode of ViSQOL at a required sampling rate of $16\mathrm{kHz}$ , the MOS-LQO score ranges from 1 to 5.
- Mean Opinion Score (MOS): To further subjectively assess the overall audio quality, MOS tests were conducted to
TABLE I EXPERIMENTAL RESULTS IN SPEECH QUALITY (LSD AND VISQOL) AND GENERATION EFFICIENCY (RTF AND FLOPS) FOR BWE METHODS EVALUATED ON THE VCTK DATASET WITH TARGET SAMPLING RATE OF 16 KHZ, WHERE IN RTF, $(a\times)$ REPRESENTS $a$ TIMES REAL-TIME
<table><tr><td rowspan="2">Method</td><td colspan="2">8kHz → 16kHz</td><td colspan="2">4kHz → 16kHz</td><td colspan="2">2kHz → 16kHz</td><td rowspan="2">RTF(CPU)</td><td rowspan="2">RTF(GPU)</td><td rowspan="2">FLOPs</td></tr><tr><td>LSD</td><td>ViSQOL</td><td>LSD</td><td>ViSQOL</td><td>LSD</td><td>ViSQOL</td></tr><tr><td>sinc</td><td>1.80</td><td>4.34</td><td>2.68</td><td>3.52</td><td>3.15</td><td>2.73</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TFiLM [21]</td><td>1.31</td><td>4.46</td><td>1.65</td><td>3.84</td><td>1.97</td><td>3.10</td><td>0.3287 (3.04×)</td><td>0.0244 (41.01×)</td><td>232.85G</td></tr><tr><td>AFiLM [22]</td><td>1.24</td><td>4.39</td><td>1.63</td><td>3.83</td><td>1.79</td><td>2.75</td><td>0.5029 (1.99×)</td><td>0.0477 (20.96×)</td><td>260.76G</td></tr><tr><td>NVSR [31]</td><td>0.79</td><td>4.52</td><td>0.95</td><td>4.11</td><td>1.10</td><td>3.41</td><td>0.7577 (1.32×)</td><td>0.0512 (19.54×)</td><td>34.28G</td></tr><tr><td>AERO [32]</td><td>0.87</td><td>4.57</td><td>1.00</td><td>4.19</td><td>-</td><td>-</td><td>0.4395 (2.28×)</td><td>0.0217 (46.01×)</td><td>141.77G</td></tr><tr><td>AP-BWE*</td><td>0.71</td><td>4.66</td><td>0.88</td><td>4.28</td><td>0.99</td><td>3.77</td><td rowspan="2">0.0338 (29.61×)</td><td rowspan="2">0.0026 (382.56×)</td><td rowspan="2">5.97G</td></tr><tr><td>AP-BWE</td><td>0.69</td><td>4.71</td><td>0.87</td><td>4.30</td><td>0.99</td><td>3.76</td></tr></table>
evaluate the naturalness of the wideband speech and speech waveforms extended by the speech BWE models. Defining the extension ratio as the ratio between the target sampling rate and the source sampling rate, we selected configurations with the highest extension ratios for subjective evaluations. In each MOS test, twenty utterances from the test set were evaluated by at least 30 native English listeners on the crowd-sourcing platform Amazon Mechanical Turk. For each utterance, listeners were asked to rate a naturalness score between 1 and 5 with an interval of 0.5. All the MOS results were reported with $95\%$ confidence intervals (CI). We also conducted paired $t$ -tests to assess the significance of differences between our proposed AP-BWE and the baseline models, reporting $p$ -values to indicate the statistical significance of these comparisons.
2) Metrics on Generation Efficiency: We first used the real-time factor (RTF) to evaluate the inference speed of the model. The RTF is defined as the ratio of the total inference time for processing narrowband source signals into wideband output signals, to the total duration of the wideband signals. In our implementation, RTF was calculated using the complete test set on an RTX 4090 GPU and an Intel(R) Xeon(R) Silver 4310 CPU (2.10 GHz). Additionally, we used floating point operations (FLOPs) to assess the computational complexity of the model. All the FLOPs were calculated using 1-second speech signals as inputs to the models.
3) Metrics on Speech Intelligibility: The main frequency components of human speech are concentrated within the range of approximately $300\mathrm{Hz}$ to $3400\mathrm{Hz}$ . This frequency range encompasses crucial information for vowels and consonants, significantly impacting speech intelligibility. Consequently, we analyzed the intelligibility of waveforms extended by speech BWE methods with the target sampling rate of 16 kHz. Firstly, we employed an advanced ASR model, Whisper [61] to transcribe the extended $16\mathrm{kHz}$ speech signals into corresponding texts. Subsequently, we calculated the word error rate (WER) and character error rate (CER) based on the transcription results. Additionally, short-time objective intelligibility (STOI) was also included as an objective metric to indicate the percentage of speech signals that are correctly understood.
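A sketch of this evaluation pipeline (assuming the openai-whisper and jiwer packages; file names, the checkpoint size, and the reference text are placeholders, and this is not the authors' exact script):

```python
import whisper
from jiwer import wer, cer

model = whisper.load_model("base")                        # any Whisper checkpoint
hyp = model.transcribe("extended_16k.wav")["text"]
ref = "reference transcription of the same utterance"
print("WER:", wer(ref, hyp), "CER:", cer(ref, hyp))

# STOI compares the extended waveform against the natural wideband one at 16 kHz, e.g. with pystoi:
# from pystoi import stoi; score = stoi(wideband_np, extended_np, 16000, extended=False)
```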
# V. RESULTS AND ANALYSIS
# A. BWE Experiments Targeting 16 kHz
1) Baseline Methods: For BWE targeting a $16\mathrm{kHz}$ sampling rate, we first used the sinc filter interpolation as the lower-bound method, and further compared our proposed AP-BWE with two waveform-based methods (TFiLM [21] and AFiLM [22]), a vocoder-based method (NVSR [31]), and a complex-spectrum-based method (AERO [32]). For TFiLM and AFiLM, we used their official implementations. However, their original papers used the old-version VCTK dataset [62] and employed subsampling to obtain the narrowband waveforms, which aliased high-frequency components. Thus, they performed not a strict BWE task but a super-resolution (SR) task. For a fair comparison, we re-trained the TFiLM and AFiLM models with our data-preprocessing manner on the VCTK-0.92 dataset for 50 epochs. For AERO and NVSR, we used their official implementations. Notably, AERO did not conduct the experiment at a $2\mathrm{kHz}$ source sampling rate, and thus this result was excluded from our analysis.
Additionally, considering that some recent BWE methods [31], [47], [48] demonstrated the ability to handle various source sampling rates with a single model, we also trained our AP-BWE with the source sampling rate uniformly sampled from $2\mathrm{kHz}$ to $8\mathrm{kHz}$ , denoted as AP-BWE*, to extend speech signals at all three of these sampling rates to $16\mathrm{kHz}$ with a single unified model.
2) Evaluation on Speech Quality:
- Objective Evaluation: As depicted in Table I, our proposed AP-BWE achieved the best performance in speech quality with all kinds of source sampling rates. Compared to sinc filter interpolation, our proposed AP-BWE exhibited significant improvements of $61.7\%$ , $67.5\%$ , and $68.6\%$ in terms of LSD, as well as $8.5\%$ , $22.2\%$ , and $37.7\%$ in terms of ViSQOL, for source sampling rates of $8\mathrm{kHz}$ , $4\mathrm{kHz}$ , and $2\mathrm{kHz}$ , respectively. With the narrowing of the source speech bandwidth, the performance advantage of our proposed AP-BWE became more pronounced, indicating the powerful BWE capability of our model. In general, waveform-based methods (TFiLM and AFiLM) performed less effectively
$^{4}$https://github.com/ncarraz/AFILM.

$^{5}$https://github.com/haoheliu/sss_eval.

$^{6}$https://github.com/slp-r1/aero.
TABLE II PHASE-RELATED EVALUATION RESULTS FOR BWE METHODS EVALUATED ON THE VCTK DATASET WITH TARGET SAMPLING RATE OF 16 KHZ
<table><tr><td rowspan="2">Method</td><td colspan="3">8kHz → 16kHz</td><td colspan="3">4kHz → 16kHz</td><td colspan="3">2kHz → 16kHz</td></tr><tr><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td></tr><tr><td>sinc</td><td>1.27</td><td>0.87</td><td>1.06</td><td>1.57</td><td>1.18</td><td>1.28</td><td>1.69</td><td>1.34</td><td>1.38</td></tr><tr><td>TFiLM [21]</td><td>1.28</td><td>0.91</td><td>1.07</td><td>1.54</td><td>1.18</td><td>1.27</td><td>1.68</td><td>1.35</td><td>1.37</td></tr><tr><td>AFiLM [22]</td><td>1.32</td><td>0.98</td><td>1.11</td><td>1.54</td><td>1.19</td><td>1.27</td><td>1.70</td><td>1.38</td><td>1.39</td></tr><tr><td>NVSR [31]</td><td>1.38</td><td>0.89</td><td>1.11</td><td>1.61</td><td>1.14</td><td>1.29</td><td>1.72</td><td>1.29</td><td>1.38</td></tr><tr><td>AERO [32]</td><td>1.31</td><td>0.93</td><td>1.08</td><td>1.56</td><td>1.15</td><td>1.27</td><td>-</td><td>-</td><td>-</td></tr><tr><td>AP-BWE*</td><td>1.27</td><td>0.86</td><td>1.05</td><td>1.53</td><td>1.12</td><td>1.25</td><td>1.67</td><td>1.27</td><td>1.35</td></tr><tr><td>AP-BWE</td><td>1.26</td><td>0.84</td><td>1.04</td><td>1.53</td><td>1.12</td><td>1.25</td><td>1.67</td><td>1.27</td><td>1.35</td></tr></table>
than spectrum-based methods (NVSR, AERO, and our proposed AP-BWE), indicating the importance of capturing time-frequency domain characteristics for the BWE task. Within spectrum-based methods, NVSR, relying on high-frequency mel-spectrogram prediction and vocoder-based waveform reconstruction, demonstrated advantages in the LSD metric assessing the extended amplitude. However, the vocoder-based phase recovery was not as effective as the complex spectrum-based approach, so it lagged behind AERO in the ViSQOL metric assessing overall speech quality. Compared to AERO, our AP-BWE, benefiting from explicit amplitude and phase optimizations, successfully avoided the compensation effects between amplitude and phase and consequently achieved better performance in both spectral and waveform-based metrics. Worth noting is that the unified AP-BWE* model exhibited only a slight decrease in performance compared to AP-BWE, and even achieved the highest ViSQOL score at the source sampling rate of $2\mathrm{kHz}$ . This indicated that our model exhibited strong adaptability to the source sampling rate.
The key distinction between our approach and others lay in our implementation of explicit high-frequency phase extension. As illustrated in Table II, our proposed AP-BWE consistently outperformed other baselines across various source sampling rates, demonstrating superior performance in terms of instantaneous phase error and phase continuity along both time and frequency axes. For the AP-BWE*, only slight decreases in the AWPD metrics were observed when the source sampling rate was $8\mathrm{kHz}$ . Under other source sampling rate conditions, the metrics were the same as AP-BWE, indicating the robustness of our unified model on phase. Remarkably, for other baseline methods, some of their AWPD metrics exhibited degradation compared to those of the source sinc-interpolated waveforms. This suggests a limitation in the effective utilization of low-frequency phase information during the speech BWE process by these baseline methods. Moreover, all methods here directly generated waveforms without substituting the original low-frequency components, so their low-frequency phase might be partially compromised, leading to a significant impact on the quality of the extended speech. This observation underscored the critical importance of precise phase prediction and optimization in the context of BWE tasks, further emphasizing the advantage of our approach.
TABLE III MOS TEST RESULTS FOR BWE METHODS WITH SOURCE SAMPLING RATE OF 2 KHZ AND TARGET SAMPLING RATE OF 16 KHZ
<table><tr><td>Methods</td><td>MOS (CI)</td></tr><tr><td>sinc</td><td>3.34 (± 0.09)</td></tr><tr><td>TFiLM [21]</td><td>3.41 (± 0.09)</td></tr><tr><td>AFiLM [22]</td><td>3.50 (± 0.08)</td></tr><tr><td>NVSR [31]</td><td>3.68 (± 0.08)</td></tr><tr><td>AP-BWE</td><td>3.93 (± 0.07)</td></tr><tr><td>Ground Truth</td><td>4.01 (± 0.06)</td></tr></table>
- Subjective Evaluation: To compare the BWE capabilities of our proposed AP-BWE with those of other baseline models, we conducted MOS tests on natural wideband 16 kHz speech waveforms, as well as on speech waveforms extended by AP-BWE and other baseline methods at a source sampling rate of $2\mathrm{kHz}$ . The subjective experimental results are presented in Table III. For a more intuitive comparison, we visualized the spectrograms of these speech waveforms, as illustrated in Fig. 4. According to the MOS results, our proposed AP-BWE outperformed other baseline models very significantly in terms of subjective quality $(p < 0.01)$ . The MOS of TFiLM and AFiLM showed only a slight improvement over that of the sinc filter interpolation, demonstrating their insufficient modeling capability for high-frequency components, particularly in the case of high-frequency unvoiced segments, as shown in Fig. 4. NVSR achieved a decent MOS score compared to TFiLM and AFiLM but still lagged behind our proposed AP-BWE. We can observe that, compared to the spectrograms of natural wideband speech and AP-BWE-extended speech, the NVSR-extended speech spectrogram exhibited relatively low energy in both high-frequency unvoiced segments (e.g., $0.2 \sim 0.3\mathrm{s}$ ) and low-frequency harmonics (e.g., $1.1 \sim 1.5\mathrm{s}$ ). As a result, the speech signals extended by NVSR would sound duller, negatively impacting its perceived speech quality. In contrast, our proposed AP-BWE effectively extended more robust harmonic structures, demonstrating its strong modeling capabilities and highlighting the effectiveness of explicit predictions of amplitude and phase spectra.
3) Evaluation on Generation Efficiency: We evaluated the generation efficiency of our proposed AP-BWE as well as that of the other baseline methods, as outlined in Table I.

Fig. 4. Spectrogram visualization of the original wideband $16\mathrm{kHz}$ speech waveform and speech waveforms extended by baseline methods and our proposed AP-BWE from the source sampling rate of $2\mathrm{kHz}$ .
TABLE IV EXPERIMENTAL RESULTS IN INTELLIGIBILITY FOR BWE METHODS EVALUATED ON THE VCTK DATASET WITH TARGET SAMPLING RATE OF 16 KHZ
<table><tr><td rowspan="2">Method</td><td colspan="3">8kHz → 16kHz</td><td colspan="3">4kHz → 16kHz</td><td colspan="3">2kHz → 16kHz</td></tr><tr><td>WER (%)</td><td>CER (%)</td><td>STOI (%)</td><td>WER (%)</td><td>CER (%)</td><td>STOI (%)</td><td>WER (%)</td><td>CER (%)</td><td>STOI (%)</td></tr><tr><td>sinc</td><td>3.67</td><td>1.67</td><td>99.76</td><td>11.45</td><td>7.08</td><td>89.91</td><td>47.43</td><td>33.56</td><td>79.04</td></tr><tr><td>TFiLM [21]</td><td>3.69</td><td>1.69</td><td>99.24</td><td>11.32</td><td>7.24</td><td>91.27</td><td>45.95</td><td>33.58</td><td>80.23</td></tr><tr><td>AFiLM [22]</td><td>3.67</td><td>1.67</td><td>98.54</td><td>9.28</td><td>5.53</td><td>90.51</td><td>45.16</td><td>33.01</td><td>76.83</td></tr><tr><td>NVSR [31]</td><td>4.38</td><td>2.02</td><td>98.84</td><td>13.56</td><td>8.51</td><td>92.04</td><td>59.53</td><td>44.43</td><td>82.38</td></tr><tr><td>AERO [32]</td><td>3.97</td><td>1.84</td><td>99.38</td><td>9.78</td><td>5.51</td><td>93.74</td><td>-</td><td>-</td><td>-</td></tr><tr><td>AP-BWE</td><td>3.72</td><td>1.67</td><td>99.77</td><td>6.69</td><td>3.54</td><td>94.75</td><td>36.69</td><td>25.61</td><td>87.00</td></tr><tr><td>Ground Truth</td><td>3.07</td><td>1.26</td><td>100.00</td><td>3.07</td><td>1.26</td><td>100.00</td><td>3.07</td><td>1.26</td><td>100.00</td></tr></table>
Considering the inference speed, since NVSR divided the BWE process into the mel-spectrogram extension stage and vocoder synthesis stage, it lagged far behind other end-to-end methods. For TFiLM and AFiLM, since they both operated on the waveform level and utilized RNNs or self-attention to capture long-term dependencies, their inference speeds were consequently constrained. For AERO, although it and our proposed AP-BWE both operated on the spectral level, the utilization of transformer [43] blocks in multiple layers severely slowed down its inference speed. Nevertheless, our AP-BWE model, based on fully convolutional networks and all-frame-level operations, has achieved an astonishingly high-speed waveform generation (29.61 times real-time speed on CPU and 382.56 times on GPU), far surpassing other baseline methods. Considering the models' computational complexity, the FLOPs of AP-BWE were at least five times smaller than those of the baseline models, further demonstrating the advantage of our proposed model in generation efficiency.
4) Evaluation on Speech Intelligibility: As shown in Table IV, it is obvious that our proposed AP-BWE exhibited a remarkable improvement in terms of intelligibility metrics compared to baseline models. Under the condition of extending from $8\mathrm{kHz}$ to $16\mathrm{kHz}$ , the performance of the sinc filter interpolation was already very close to the Ground Truth. Our proposed AP-BWE and other baseline models struggled to further improve WER and CER on top of the waveform interpolated by the sinc filter, suggesting that the ASR model focused on information from frequencies below $4\mathrm{kHz}$ for transcription. When the source sampling rate was further reduced to $4\mathrm{kHz}$ and $2\mathrm{kHz}$ , all the baseline models showed slight improvements in WER and CER compared to sinc filter interpolation, except for NVSR. The decline in NVSR's performance in WER and CER was due to its use
of a vocoder to restore the waveform, which made the low-frequency components unnatural, but its STOI metric was still improved. Overall, these baseline models demonstrated limited extension capabilities under the extremely high extension ratio. However, our proposed AP-BWE significantly improved WER, CER, and STOI by $41.57\%$ , $50.00\%$ , $5.38\%$ at the $4\mathrm{kHz}$ source sampling rate, and by $22.64\%$ , $23.69\%$ , and $10.07\%$ at the $2\mathrm{kHz}$ source sampling rate, compared to sinc filter interpolation. This indicated that benefiting from our precise phase prediction, our model possessed strong harmonic restoration capabilities, reconstructing the key information of vowels and consonants as well as significantly enhancing the intelligibility of the extended speech.
# B. BWE Experiments Targeting 48 kHz
1) Baseline Methods: For BWE targeting a $48\mathrm{kHz}$ sampling rate, the sinc filter interpolation was still used as the low-bound method. We subsequently compared our proposed AP-BWE with three diffusion-based methods (NU-Wave [46], NU-Wave 2 [47], and UDM+ [48]) and an MDCT-spectrum-based method (mdctGAN [33]). For NU-Wave, we used the community-contributed checkpoints from their official implementation. Notably, NU-Wave did not conduct the experiment at source sampling rates of $8\mathrm{kHz}$ and $12\mathrm{kHz}$ , so we excluded these results from our analysis. For NU-Wave 2 and UDM+, we used the reproduced NU-Wave 2 checkpoint and official UDM+ checkpoint. It is worth noting that in the original paper of mdctGAN [33], the mdctGAN model was trained on the combination of the VCTK training set and the HiFi-TTS dataset, and tested on the VCTK test set. Here, for a
$^{7}$https://github.com/maum-ai/nuwave.

$^{8}$https://github.com/yoyololicon/diffwave-sr.
TABLE V EXPERIMENTAL RESULTS IN SPEECH QUALITY (LSD AND VISQOL) AND GENERATION EFFICIENCY (RTF AND FLOPS) FOR BWE METHODS EVALUATED ON THE VCTK DATASET WITH TARGET SAMPLING RATE OF 48 KHZ, WHERE IN RTF, $(a\times)$ REPRESENTS $a$ TIMES REAL-TIME
<table><tr><td rowspan="2">Method</td><td colspan="2">24kHz → 48kHz</td><td colspan="2">16kHz → 48kHz</td><td colspan="2">12kHz → 48kHz</td><td colspan="2">8kHz → 48kHz</td><td rowspan="2">RTF(CPU)</td><td rowspan="2">RTF(GPU)</td><td rowspan="2">FLOPs</td></tr><tr><td>LSD</td><td>ViSQOL</td><td>LSD</td><td>ViSQOL</td><td>LSD</td><td>ViSQOL</td><td>LSD</td><td>ViSQOL</td></tr><tr><td>sinc</td><td>2.17</td><td>2.99</td><td>2.57</td><td>2.26</td><td>2.75</td><td>2.09</td><td>2.94</td><td>2.07</td><td>-</td><td>-</td><td>-</td></tr><tr><td>NU-Wave [46]</td><td>0.85</td><td>3.18</td><td>0.99</td><td>2.36</td><td>-</td><td>-</td><td>-</td><td>-</td><td>95.57 (0.01×)</td><td>0.5018 (1.99×)</td><td>4039.13G</td></tr><tr><td>NU-Wave2 [47]</td><td>0.72</td><td>3.74</td><td>0.86</td><td>3.00</td><td>0.94</td><td>2.75</td><td>1.09</td><td>2.48</td><td>92.58 (0.01×)</td><td>0.5195 (1.92×)</td><td>1385.27G</td></tr><tr><td>UDM+ [48]</td><td>0.64</td><td>4.02</td><td>0.79</td><td>3.35</td><td>0.88</td><td>3.08</td><td>1.03</td><td>2.81</td><td>74.03 (0.01×)</td><td>0.8335 (1.20×)</td><td>2369.50G</td></tr><tr><td>mdctGAN [33]</td><td>0.71</td><td>3.69</td><td>0.83</td><td>3.27</td><td>0.85</td><td>3.12</td><td>0.93</td><td>3.03</td><td>0.2461 (4.06×)</td><td>0.0129 (77.80×)</td><td>103.38G</td></tr><tr><td>AP-BWE*</td><td>0.62</td><td>4.17</td><td>0.72</td><td>3.63</td><td>0.79</td><td>3.46</td><td>0.85</td><td>3.32</td><td rowspan="2">0.0551 (18.14×)</td><td rowspan="2">0.0034 (292.28×)</td><td rowspan="2">17.87G</td></tr><tr><td>AP-BWE</td><td>0.61</td><td>4.25</td><td>0.72</td><td>3.70</td><td>0.78</td><td>3.46</td><td>0.84</td><td>3.35</td></tr></table>
TABLE VI EXPERIMENTAL RESULTS FOR THE BAND-WISE ANALYSIS WITH SOURCE SAMPLING RATE OF 8 KHZ AND TARGET SAMPLING RATE OF 48 KHZ
<table><tr><td rowspan="2">Method</td><td colspan="4">4kHz ~ 8kHz</td><td colspan="4">8kHz ~ 12kHz</td><td colspan="4">12kHz ~ 24kHz</td></tr><tr><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td></tr><tr><td>NU-Wave2 [47]</td><td>1.35</td><td>1.81</td><td>1.48</td><td>1.47</td><td>1.24</td><td>1.81</td><td>1.48</td><td>1.47</td><td>1.09</td><td>1.82</td><td>1.47</td><td>1.46</td></tr><tr><td>UDM+ [48]</td><td>1.21</td><td>1.80</td><td>1.45</td><td>1.46</td><td>1.26</td><td>1.81</td><td>1.46</td><td>1.46</td><td>1.03</td><td>1.82</td><td>1.46</td><td>1.46</td></tr><tr><td>mdctGAN [33]</td><td>1.11</td><td>1.80</td><td>1.45</td><td>1.46</td><td>1.07</td><td>1.81</td><td>1.46</td><td>1.47</td><td>0.93</td><td>1.82</td><td>1.46</td><td>1.46</td></tr><tr><td>AP-BWE</td><td>0.98</td><td>1.75</td><td>1.41</td><td>1.44</td><td>0.98</td><td>1.81</td><td>1.44</td><td>1.46</td><td>0.86</td><td>1.82</td><td>1.45</td><td>1.46</td></tr></table>
fair comparison, we re-trained all the mdctGAN models solely on the VCTK training set following its official implementation $^{9}$ . In addition, the AP-BWE* was trained using randomly selected sampling rates from 8 kHz, 12 kHz, 16 kHz, and 24 kHz to handle inputs of various resolutions.
2) Evaluation on Speech Quality:
- Objective Evaluation:
As depicted in Table V, for the high-sampling-rate waveform generation at $48\mathrm{kHz}$ , our proposed AP-BWE still achieved the SOTA performance in objective metrics, irrespective of the source sampling rates. In general, compared to the baseline models, our approach exhibited a notably significant improvement in ViSQOL, particularly under lower extension ratios, which underscored the substantial impact of precise phase prediction on speech quality. For diffusion-based methods, since both NU-Wave2 and UDM+ implemented a single model for extension across different source sampling rates, we compared our unified AP-BWE* model with them. Compared to NU-Wave 2 and UDM+, our AP-BWE* model exhibited growing superiority in LSD as the source sampling rate decreased. This suggested that diffusion-based methods, operating at the waveform level, struggled to effectively recover spectral information in scenarios with restricted bandwidth. Although both mdctGAN and our proposed AP-BWE were spectrum-based methods, AP-BWE significantly outperformed it across all source sampling rates. Especially in terms of the overall speech quality, our proposed AP-BWE surpassed mdctGAN by $15.2\%$ , $13.1\%$ , $10.9\%$ , and $10.6\%$ in ViSQOL at source sampling rates of $24\mathrm{kHz}$ , $16\mathrm{kHz}$ , $12\mathrm{kHz}$ , and $8\mathrm{kHz}$ respectively. This suggested that STFT spectra were more suitable for waveform generation tasks compared to MDCT spectra. Additionally, similar to the results at $16\mathrm{kHz}$ , our
unified AP-BWE* model demonstrated competence across different sampling rate inputs, with only a slight decline in quality compared to AP-BWE, reaffirming the adaptability of our approach to source sampling rates.
Unlike the strong harmonic structure observed in the high-frequency components of $16\mathrm{kHz}$ waveforms, the high-frequency portion of $48\mathrm{kHz}$ waveforms exhibited more randomness. In our preliminary experiments, we observed that the phase metrics of extended $48\mathrm{kHz}$ waveforms showed minimal variation between systems, especially in scenarios with relatively higher source sampling rates. Therefore, with the source sampling rate of $8\mathrm{kHz}$ , we calculated LSD and AWPD separately for different frequency bands of the extended $48\mathrm{kHz}$ waveform to assess the performance of these models within different frequency ranges, and the evaluation results are depicted in Table VI. For the LSD metric, our AP-BWE outperformed other baseline models within each frequency band. Regarding the AWPD metrics, our model only exhibited an advantage in the $4\mathrm{kHz} \sim 8\mathrm{kHz}$ frequency band, while the differences between systems were minimal in the $8\mathrm{kHz} \sim 12\mathrm{kHz}$ and $12\mathrm{kHz} \sim 24\mathrm{kHz}$ frequency bands. This indicated that our proposed AP-BWE, benefiting from explicit phase prediction, was capable of effectively recovering the harmonic structure in the waveform, thereby significantly improving the speech quality in the mid-to-low-frequency range. For high-frequency phases, due to their strong randomness, the current methods exhibited comparable predictive capabilities.
- Subjective Evaluation: As shown in the right half of Table VII, we conducted MOS tests on the wideband $48\mathrm{kHz}$ speech waveform, along with speech waveforms extended by AP-BWE and other baseline methods at a source sampling rate of $8\mathrm{kHz}$ . We also visualized the corresponding spectrograms, as illustrated in Fig. 5. In this configuration, the initial $4\mathrm{kHz}$ bandwidth already contained the

Fig. 5. Spectrogram visualization of the original wideband $48\mathrm{kHz}$ speech waveform and speech waveforms extended by baseline methods and our proposed AP-BWE from the source sampling rate of $8\mathrm{kHz}$ .
TABLE VII MOS TEST RESULTS FOR BWE METHODS WITH SOURCE SAMPLING RATE OF 8 KHZ AND TARGET SAMPLING RATE OF 48 KHZ
<table><tr><td>Methods</td><td>MOS (CI)</td></tr><tr><td>sinc</td><td>3.69 (± 0.07)</td></tr><tr><td>NU-Wave2 [47]</td><td>3.75 (± 0.08)</td></tr><tr><td>UDM+ [48]</td><td>3.98 (± 0.06)</td></tr><tr><td>mdctGAN [33]</td><td>4.00 (± 0.07)</td></tr><tr><td>AP-BWE</td><td>4.11 (± 0.06)</td></tr><tr><td>Ground Truth</td><td>4.17 (± 0.06)</td></tr></table>
full fundamental frequency and most harmonic structures, resulting in less pronounced subjective listening differences between the speech extended by different models. Nevertheless, the MOS results demonstrated that our proposed AP-BWE still showed substantial advantages in subjective quality over other baseline models $(p < 0.05)$ . Firstly, NU-Wave2 scored very significantly lower in MOS compared to our proposed AP-BWE $(p < 0.01)$ , showing only a slight improvement over sinc filter interpolation, with spectrogram analysis revealing poor recovery of mid-to-high frequency components. UDM+ performed well in recovering mid-frequency components of speech, but it seemed to struggle with restoring higher-frequency components, particularly with low energy in the unvoiced segments, resulting in extended speech that sounded less bright. Consequently, the subjective quality of UDM+ remained significantly lower than that of our proposed AP-BWE $(p = 0.020)$ . This finding aligned with the results obtained at the target sampling rate of $16\mathrm{kHz}$ , suggesting a potential limitation in modeling high-frequency unvoiced segments for waveform-based methods. The mdctGAN achieved the optimal MOS among the baseline methods, with its corresponding spectrogram displaying brighter and more complete structures. However, the high-frequency components of the spectrogram exhibited higher randomness and poorer continuity, resulting in a less stable auditory perception. In contrast, our proposed AP-BWE demonstrated a more robust restoration capability for the high-frequency components, especially in the unvoiced segments, giving it a substantial advantage in subjective speech quality over mdctGAN $(p = 0.026)$ . While there were still differences in the high-frequency components of the voiced segments compared to natural wideband speech, these distinctions had minimal impact on the perceptual
|
| 315 |
+
|
| 316 |
+
quality of the speech. Therefore, AP-BWE achieved a MOS close to that of natural wideband speech.
|
| 317 |
+
|
| 318 |
+
3) Evaluation on Generation Efficiency: Considering the inference speed, as depicted in Table V, our proposed AP-BWE remained capable of efficiently producing $48\mathrm{kHz}$ speech waveforms at 18.14 times real-time on CPU and 292.28 times real-time on GPU. For the diffusion-based methods (i.e., NU-Wave, NU-Wave2, and UDM+), generation efficiency took a significant hit because they required multiple time steps in the reverse process to progressively denoise and recover the extended waveform from latent variables. Remarkably, our AP-BWE achieved speedups over them of approximately 1000 times on CPU and 100 times on GPU. Although both mdctGAN and our proposed AP-BWE operate at the spectral level, the generation speed of mdctGAN was still constrained by its two-dimensional convolution and transformer-based structure. Consequently, our AP-BWE, which is fully based on one-dimensional convolutions, achieved an approximately fourfold acceleration compared to mdctGAN. Compared to running on a GPU, our model exhibited a more significant efficiency improvement on the CPU. This indicated that our model could efficiently generate high-sampling-rate samples even without the parallel acceleration support of GPUs, making it more suitable for scenarios with limited computational resources. Considering the models' computational complexity, the FLOPs of the diffusion models were heavily constrained by their reverse steps ($82.78\mathrm{G} \times 50$ steps for NU-Wave, $27.71\mathrm{G} \times 50$ steps for NU-Wave2, and $47.39\mathrm{G} \times 50$ steps for UDM+). However, even with single-step generation, the FLOPs of our proposed AP-BWE were still smaller than theirs, further demonstrating its superiority in terms of generation efficiency. Comparing Table I and Table V, it can be observed that for generating speech waveforms of the same duration, the inference speed of our model at the $48\mathrm{kHz}$ sampling rate was relatively lower, and its computational complexity higher, compared to the $16\mathrm{kHz}$ sampling rate. This was because our model used the same STFT settings under both sampling-rate configurations, so the $48\mathrm{kHz}$ waveforms yielded a larger number of frames for the model to process.
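The "times real-time" figures reported above follow directly from wall-clock timing; a minimal harness is sketched below, where `generate_fn` is a hypothetical stand-in for any of the compared models.

```python
import time

def times_realtime(generate_fn, narrowband, target_sr=48000):
    """Return how many times faster than real-time a BWE model generates audio."""
    start = time.perf_counter()
    extended = generate_fn(narrowband)        # 48 kHz waveform as a 1-D array
    elapsed = time.perf_counter() - start
    audio_seconds = len(extended) / target_sr
    return audio_seconds / elapsed            # e.g., 292.28 means 292.28x real-time
```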
|
| 319 |
+
|
| 320 |
+
# C. Analysis and Discussion
|
| 321 |
+
|
| 322 |
+
1) Ablation Studies: We implemented ablation studies on discriminators and between-stream connections to investigate the roles of each discriminator and the effects of interactions
|
| 323 |
+
|
| 324 |
+

|
| 325 |
+
Fig. 6. Spectrogram visualization of the original wideband speech waveform and speech waveforms extended by the ablation models of our proposed AP-BWE with a source sampling rate of $8\mathrm{kHz}$ and target sampling rate of $48\mathrm{kHz}$ . "AP-BWE w/o MRDs" represents the ablation of both MRAD and MRPD, while "AP-BWE w/o Disc." denotes the ablation of all discriminators.
|
| 326 |
+
|
| 327 |
+

|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
|
| 331 |
+

|
| 332 |
+
|
| 333 |
+

|
| 334 |
+
|
| 335 |
+
TABLE VIII EXPERIMENTAL RESULTS FOR THE ABLATION STUDIES WITH SOURCE SAMPLING RATE OF 8 KHZ AND TARGET SAMPLING RATE OF 48 KHZ
|
| 336 |
+
|
| 337 |
+
<table><tr><td colspan="7">Ablation on Discriminators</td></tr><tr><td>MPD</td><td>MRAD / MRPD</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>ViSQOL</td></tr><tr><td>✓</td><td>✓/✓</td><td>0.84</td><td>1.75</td><td>1.41</td><td>1.44</td><td>3.35</td></tr><tr><td>✗</td><td>✓/✓</td><td>0.85</td><td>1.76</td><td>1.42</td><td>1.44</td><td>3.29</td></tr><tr><td>✓</td><td>✗/✓</td><td>0.86</td><td>1.75</td><td>1.41</td><td>1.44</td><td>3.26</td></tr><tr><td>✓</td><td>✓/✗</td><td>0.85</td><td>1.76</td><td>1.42</td><td>1.44</td><td>3.31</td></tr><tr><td>✓</td><td>✗/✗</td><td>0.88</td><td>1.75</td><td>1.41</td><td>1.44</td><td>3.26</td></tr><tr><td>✗</td><td>✗/✗</td><td>1.50</td><td>1.74</td><td>1.42</td><td>1.50</td><td>3.26</td></tr><tr><td colspan="7">Ablation on Between-Stream Connections</td></tr><tr><td>A → P</td><td>P → A</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>ViSQOL</td></tr><tr><td>✓</td><td>✓</td><td>0.84</td><td>1.75</td><td>1.41</td><td>1.44</td><td>3.35</td></tr><tr><td>✗</td><td>✓</td><td>0.85</td><td>1.77</td><td>1.42</td><td>1.45</td><td>3.31</td></tr><tr><td>✓</td><td>✗</td><td>0.85</td><td>1.76</td><td>1.42</td><td>1.44</td><td>3.32</td></tr><tr><td>✗</td><td>✗</td><td>0.86</td><td>1.77</td><td>1.42</td><td>1.45</td><td>3.32</td></tr></table>
|
| 338 |
+
|
| 339 |
+
between amplitude stream and phase stream. All the experiments were conducted with the source sampling rate of 8 kHz and target sampling rate of 48 kHz, and the experimental results are depicted in Table VIII. Due to the minimal phase differences in the high-frequency components for the BWE targeting 48 kHz, we calculated the AWPD metric only in the frequency band of 4 kHz to 8 kHz while calculating the LSD metric on the whole frequency band. We further visualized the spectrograms of the natural wideband 48 kHz speech waveform and speech waveforms generated by the ablation models of AP-BWE on discriminator, as illustrated in Fig. 6.
|
| 340 |
+
|
| 341 |
+
As shown in the upper half of Table VIII, for the ablation of discriminators, all the discriminators contributed to the overall performance of our proposed AP-BWE. We first ablated the MPD to train the AP-BWE model solely with discriminators at the spectral level. Although there was only a slight decrease in all metrics, the spectrogram (AP-BWE w/o MPD) revealed a smearing effect along the frequency axis, as shown in Fig. 6, resulting in a perceptible harshness in the extended speech. Subsequently, in our preliminary experiments, we separately ablated MRPD and MRAD. In both cases, the metrics showed only slight decreases, and the spectrograms appeared normal. However, when we simultaneously ablated both of them (AP-BWE w/o MRDs), although the metrics still decreased only slightly, a noticeable over-smoothing could be observed in the frequency band from $12\mathrm{kHz}$ to $24\mathrm{kHz}$ of the spectrogram. This is because with the sole utilization
|
| 342 |
+
|
| 343 |
+
TABLE IX EXPERIMENTAL RESULTS FOR THE CROSS-DATASET EVALUATION ON THE LIBRI-TTS AND HIFI-TTS DATASETS
|
| 344 |
+
|
| 345 |
+
<table><tr><td colspan="6">Libri-TTS (8 kHz → 24 kHz)</td></tr><tr><td>Methods</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>ViSQOL</td></tr><tr><td>NU-Wave2</td><td>1.83</td><td>1.82</td><td>1.51</td><td>1.47</td><td>2.92</td></tr><tr><td>UDM+</td><td>1.79</td><td>1.82</td><td>1.50</td><td>1.46</td><td>2.88</td></tr><tr><td>mdctGAN</td><td>1.27</td><td>1.80</td><td>1.44</td><td>1.46</td><td>3.42</td></tr><tr><td>AP-BWE</td><td>1.22</td><td>1.79</td><td>1.40</td><td>1.44</td><td>3.44</td></tr><tr><td colspan="6">HiFi-TTS (8 kHz → 44.1 kHz)</td></tr><tr><td>Methods</td><td>LSD</td><td>AWPDIP</td><td>AWPDGD</td><td>AWPDIAF</td><td>ViSQOL</td></tr><tr><td>NU-Wave2</td><td>1.67</td><td>1.81</td><td>1.48</td><td>1.48</td><td>2.16</td></tr><tr><td>UDM+</td><td>1.56</td><td>1.80</td><td>1.47</td><td>1.47</td><td>2.17</td></tr><tr><td>mdctGAN</td><td>1.69</td><td>1.80</td><td>1.47</td><td>1.47</td><td>2.43</td></tr><tr><td>AP-BWE</td><td>1.49</td><td>1.77</td><td>1.42</td><td>1.45</td><td>2.51</td></tr></table>
|
| 346 |
+
|
| 347 |
+
of MPD, the minimum period of its sub-discriminator was 2, so the frequency range it could discriminate only spanned 0 to $12\mathrm{kHz}$ (a period-2 sub-discriminator sees only every other sample of the $48\mathrm{kHz}$ waveform, which corresponds to an effective Nyquist frequency of $12\mathrm{kHz}$). As this over-smoothing phenomenon was confined to the high-frequency range, it did not have a substantial impact on the perceived quality of the extended speech. When we further ablated all discriminators (AP-BWE w/o Disc.), the LSD metric experienced a significant decline, and the extended portions across the entire spectrogram exhibited severe over-smoothing, greatly compromising the quality of the extended speech. This indicated that the GAN training strategy was indispensable for the current AP-BWE model.
|
| 348 |
+
|
| 349 |
+
Moreover, we ablated the between-stream connections. As shown in the last row of Table VIII, the information interaction between the amplitude stream and the phase stream did contribute to the quality of the extended speech. To investigate the influence of one stream on another, we selectively ablated each of the connections. Our observations revealed that when ablating the connection from the amplitude stream to the phase stream $(\mathrm{A} \rightarrow \mathrm{P})$ , the AWPD metrics exhibited a deterioration compared to ablating the connection from the phase stream to the amplitude stream $(\mathrm{P} \rightarrow \mathrm{A})$ , and there was also a decrease in ViSQOL, indicating that amplitude information played a role in the modeling of phase. This conclusion aligns with our previous work [34], where the phase spectrum can be predicted from the amplitude spectrum.
|
| 350 |
+
|
| 351 |
+
2) Cross-Dataset Validation: Since the speech data in a corpus is recorded in a fixed environment, models trained exclusively on a single corpus may adapt to the specific
|
| 352 |
+
|
| 353 |
+
characteristics of the recording environment. To evaluate the models' generalization abilities across different corpora, we conducted cross-dataset experiments on models trained with source and target sampling rates of $8\mathrm{kHz}$ and $48\mathrm{kHz}$ , respectively. We selected two high-quality datasets, namely Libri-TTS [63] and HiFi-TTS [64]. The Libri-TTS dataset consists of 585 hours of speech data at a $24\mathrm{kHz}$ sampling rate; for evaluation, we exclusively utilized its "test-clean" set, containing 4,837 audio clips. The HiFi-TTS dataset contains about 292 hours of speech from 10 speakers, with at least 17 hours per speaker, sampled at $44.1\mathrm{kHz}$ ; we likewise evaluated the models only on its test set, which contains 1,000 audio clips. The experimental results are depicted in Table IX, where the LSD scores were computed by downsampling all the extended speech waveforms from $48\mathrm{kHz}$ to the original sampling rates of the datasets, the AWPD metrics were calculated only in the $4\mathrm{kHz} \sim 8\mathrm{kHz}$ frequency band for a more intuitive comparison, and the ViSQOL scores were computed by upsampling all the speech waveforms to $48\mathrm{kHz}$ .
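The sampling-rate handling for these metrics can be reproduced with standard polyphase resampling, as in the hedged sketch below; the choice of resampler and its filter settings are assumptions, since the paper does not specify them.

```python
from math import gcd
from scipy.signal import resample_poly

def to_rate(waveform, orig_sr, target_sr):
    """Resample a 1-D waveform between two integer sampling rates."""
    g = gcd(orig_sr, target_sr)
    return resample_poly(waveform, target_sr // g, orig_sr // g)

# LSD on Libri-TTS: compare against the corpus rate (24 kHz)
# lsd_input = to_rate(extended_48k, 48000, 24000)
# ViSQOL: bring all waveforms to 48 kHz before scoring
# visqol_ref = to_rate(reference_44k, 44100, 48000)
```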
|
| 354 |
+
|
| 355 |
+
For the evaluation on the Libri-TTS dataset, as depicted in the upper half of Table IX, our proposed AP-BWE still achieved the best performance on all the metrics. For NU-Wave2 and UDM+, performance on the Libri-TTS dataset was noticeably degraded compared to VCTK. This indicated a strong dependency of waveform-based methods on the training corpus, whereas spectrum-based approaches, by capturing temporal and spectral characteristics from the waveforms, exhibited adaptability to various data recording environments. The evaluation results on the HiFi-TTS dataset are depicted in the lower half of Table IX. Compared to the waveform-based methods, spectrum-based methods still outperformed in terms of overall speech quality, as indicated by the ViSQOL metric. Compared to NU-Wave2 and UDM+, the advantage of mdctGAN on the HiFi-TTS test set was far less pronounced than on the Libri-TTS test set, especially in terms of the LSD metric. This suggested that different models exhibit varying generalization abilities across different datasets. However, our proposed AP-BWE still showed a significant advantage over the other baseline models across all metrics, further demonstrating its superior generalization ability.
|
| 356 |
+
|
| 357 |
+
# VI. CONCLUSION
|
| 358 |
+
|
| 359 |
+
In this paper, we introduced AP-BWE, a GAN-based BWE model that can efficiently achieve high-quality wideband waveform generation. The generator of AP-BWE directly recovers high-frequency amplitude and phase information from the narrowband amplitude and phase spectra through an all-convolutional structure and all-frame-level operations, significantly enhancing generation efficiency. Moreover, multiple discriminators applied to the time-domain waveform, amplitude spectrum, and phase spectrum noticeably elevated the overall generation quality. The major contribution of AP-BWE lay in the direct extension of the phase spectrum. This allowed both the amplitude and phase spectra to be precisely modeled and optimized simultaneously, significantly enhancing the quality of the extended speech without being hindered by the trade-off between the two. Experimental
|
| 360 |
+
|
| 361 |
+
results on the VCTK-0.92 dataset showcased that our proposed AP-BWE achieved SOTA performance for tasks with target sampling rates of both $16\mathrm{kHz}$ and $48\mathrm{kHz}$ . Spectrogram visualizations underscored the robust capability of our model in recovering high-frequency harmonic structures, effectively enhancing the intelligibility of speech signals, even in scenarios with extremely low source speech bandwidth. In future work, our AP-BWE model can be further applied to assist generative models trained on low-sampling-rate datasets in improving their synthesized speech quality.
|
| 362 |
+
|
| 363 |
+
# REFERENCES
|
| 364 |
+
|
| 365 |
+
[1] K. Nakamura, K. Hashimoto, K. Oura, Y. Nankaku, and K. Tokuda, "A mel-cepstral analysis technique restoring high frequency components from low-sampling-rate speech," in Proc. Interspeech, 2014, pp. 2494-2498.
|
| 366 |
+
[2] M. M. Goodarzi, F. Almasganj, J. Kabudian, Y. Shekofteh, and I. S. Rezaei, “Feature bandwidth extension for persian conversational telephone speech recognition,” in Proc. ICEE, 2012, pp. 1220–1223.
|
| 367 |
+
[3] A. Albahri, C. S. Rodriguez, and M. Lech, "Artificial bandwidth extension to improve automatic emotion recognition from narrow-band coded speech," in Proc. ICSPCS, 2016, pp. 1-7.
|
| 368 |
+
[4] S. Chennoukh, A. Gerrits, G. Miet, and R. Sluijter, "Speech enhancement via frequency bandwidth extension using line spectral frequencies," in Proc. ICASSP, vol. 1, 2001, pp. 665-668.
|
| 369 |
+
[5] F. Mustière, M. Bouchard, and M. Bolić, "Bandwidth extension for speech enhancement," in Proc. CCECE, 2010, pp. 1-4.
|
| 370 |
+
[6] W. Xiao, W. Liu, M. Wang, S. Yang, Y. Shi, Y. Kang, D. Su, S. Shang, and D. Yu, "Multi-mode neural speech coding based on deep generative networks," in Proc. Interspeech, 2023, pp. 819-823.
|
| 371 |
+
[7] J. Makhoul and M. Berouti, "High-frequency regeneration in speech coding systems," in Proc. ICASSP, vol. 4, 1979, pp. 428-431.
|
| 372 |
+
[8] H. Carl, "Bandwidth enhancement of narrowband speech signals," in Proc. EUSIPCO, vol. 2, 1994, pp. 1178-1181.
|
| 373 |
+
[9] J. Sadasivan, S. Mukherjee, and C. S. Seelamantula, "Joint dictionary training for bandwidth extension of speech signals," in Proc. ICASSP, 2016, pp. 5925-5929.
|
| 374 |
+
[10] T. Unno and A. McCree, “A robust narrowband to wideband extension system featuring enhanced codebook mapping,” in Proc. ICASSP, vol. 1, 2005, pp. I-805.
|
| 375 |
+
[11] H. Pulakka, U. Remes, K. Palomäki, M. Kurimo, and P. Alku, "Speech bandwidth extension using Gaussian mixture model-based estimation of the highband mel spectrum," in Proc. ICASSP, 2011, pp. 5100-5103.
|
| 376 |
+
[12] Y. Ohtani, M. Tamura, M. Morita, and M. Akamine, “GMM-based bandwidth extension using sub-band basis spectrum model,” in Proc. Interspeech, 2014, pp. 2489–2493.
|
| 377 |
+
[13] Y. Wang, S. Zhao, Y. Yu, and J. Kuang, "Speech bandwidth extension based on GMM and clustering method," in Proc. CSNT, 2015, pp. 437-441.
|
| 378 |
+
[14] G. Chen and V. Parsa, “HMM-based frequency bandwidth extension for speech enhancement using line spectral frequencies,” in Proc. ICASSP, vol. 1, 2004, pp. I-709.
|
| 379 |
+
[15] P. Bauer and T. Fingscheidt, “An HMM-based artificial bandwidth extension evaluated by cross-language training and test,” in Proc. ICASSP, 2008, pp. 4589–4592.
|
| 380 |
+
[16] G.-B. Song and P. Martynovich, “A study of HMM-based bandwidth extension of speech signals,” Signal Processing, vol. 89, no. 10, pp. 2036–2044, 2009.
|
| 381 |
+
[17] Z. Yong and L. Yi, "Bandwidth extension of narrowband speech based on hidden markov model," in Proc. ICALIP, 2014, pp. 372-376.
|
| 382 |
+
[18] Z.-H. Ling, S.-Y. Kang, H. Zen, A. Senior, M. Schuster, X.-J. Qian, H. M. Meng, and L. Deng, "Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends," IEEE Signal Processing Magazine, vol. 32, no. 3, pp. 35-52, 2015.
|
| 383 |
+
[19] V. Kuleshov, S. Z. Enam, and S. Ermon, "Audio super-resolution using neural nets," in Proc. ICLR (Workshop Track), 2017.
|
| 384 |
+
[20] Z.-H. Ling, Y. Ai, Y. Gu, and L.-R. Dai, "Waveform modeling and generation using hierarchical recurrent neural networks for speech bandwidth extension," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 5, pp. 883-894, 2018.
|
| 385 |
+
|
| 386 |
+
[21] S. Birnbaum, V. Kuleshov, Z. Enam, P. W. W. Koh, and S. Ermon, "Temporal FiLM: Capturing long-range sequence dependencies with feature-wise modulations." Proc. NeurIPS, vol. 32, 2019.
|
| 387 |
+
[22] N. C. Rakotonirina, "Self-attention for audio super-resolution," in Proc. MLSP, 2021, pp. 1-6.
|
| 388 |
+
[23] H. Wang and D. Wang, “Towards robust speech super-resolution,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2058–2066, 2021.
|
| 389 |
+
[24] J. Abel, M. Strake, and T. Fingscheidt, “A simple cepstral domain DNN approach to artificial speech bandwidth extension,” in Proc. ICASSP, 2018, pp. 5469–5473.
|
| 390 |
+
[25] K. Li and C.-H. Lee, “A deep neural network approach to speech bandwidth expansion,” in Proc. ICASSP, 2015, pp. 4395–4399.
|
| 391 |
+
[26] B. Liu, J. Tao, Z. Wen, Y. Li, and D. Bukhari, “A novel method of artificial bandwidth extension using deep architecture.” in Proc. Interspeech, 2015, pp. 2598–2602.
|
| 392 |
+
[27] Y. Gu, Z.-H. Ling, and L.-R. Dai, "Speech bandwidth extension using bottleneck features and deep recurrent neural networks," in Proc. Interspeech, 2016, pp. 297-301.
|
| 393 |
+
[28] C. V. Botinhao, B. S. Carlos, L. P. Caloba, and M. R. Petraglia, "Frequency extension of telephone narrowband speech signal using neural networks," in Proc. CESA, vol. 2, 2006, pp. 1576-1579.
|
| 394 |
+
[29] J. Kontio, L. Laaksonen, and P. Alku, “Neural network-based artificial bandwidth expansion of speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 873–881, 2007.
|
| 395 |
+
[30] H. Pulakka and P. Alku, "Bandwidth extension of telephone speech using a neural network and a filter bank implementation for highband mel spectrum," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2170-2183, 2011.
|
| 396 |
+
[31] H. Liu, W. Choi, X. Liu, Q. Kong, Q. Tian, and D. Wang, "Neural vocoder is all you need for speech super-resolution," in Proc. Interspeech, 2022, pp. 4227-4231.
|
| 397 |
+
[32] M. Mandel, O. Tal, and Y. Adi, "AERO: Audio super resolution in the spectral domain," in Proc. ICASSP, 2023, pp. 1-5.
|
| 398 |
+
[33] C. Shuai, C. Shi, L. Gan, and H. Liu, "mdctGAN: Taming transformer-based GAN for speech super-resolution with modified DCT spectra," in Proc. Interspeech, 2023, pp. 5112-5116.
|
| 399 |
+
[34] Y. Ai and Z.-H. Ling, “Neural speech phase prediction based on parallel estimation architecture and anti-wrapping losses,” in Proc. ICASSP, 2023, pp. 1-5.
|
| 400 |
+
[35] Y. Ai, Y.-X. Lu, and Z.-H. Ling, "Long-frame-shift neural speech phase prediction with spectral continuity enhancement and interpolation error compensation," IEEE Signal Processing Letters, vol. 30, pp. 1097-1101, 2023.
|
| 401 |
+
[36] Y. Ai and Z.-H. Ling, "APNet: An all-frame-level neural vocoder incorporating direct prediction of amplitude and phase spectra," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 2145-2157, 2023.
|
| 402 |
+
[37] Y.-X. Lu, Y. Ai, and Z.-H. Ling, "MP-SENet: A speech enhancement model with parallel denoising of magnitude and phase spectra," in Proc. Interspeech, 2023, pp. 3834-3838.
|
| 403 |
+
[38] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, “A convnet for the 2020s,” in Proc. CVPR, 2022, pp. 11976–11986.
|
| 404 |
+
[39] D. Yin, C. Luo, Z. Xiong, and W. Zeng, “PHASEN: A phase-and-harmonics-aware speech enhancement network,” in Proc. AAAI, vol. 34, no. 05, 2020, pp. 9458–9465.
|
| 405 |
+
[40] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," Advances in neural information processing systems, vol. 33, pp. 17022-17033, 2020.
|
| 406 |
+
[41] W. Jang, D. Lim, J. Yoon, B. Kim, and J. Kim, "UnivNet: A neural vocoder with multi-resolution spectrogram discriminators for high-fidelity waveform generation," in Proc. Interspeech, 2021, pp. 2207-2211.
|
| 407 |
+
[42] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. MICCAI, 2015, pp. 234–241.
|
| 408 |
+
[43] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. NeurIPS, vol. 30, 2017.
|
| 409 |
+
[44] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in Proc. ICML, 2015, pp. 2256–2265.
|
| 410 |
+
[45] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Proc. NeurIPS, vol. 33, pp. 6840–6851, 2020.
|
| 411 |
+
[46] J. Lee and S. Han, "NU-Wave: A diffusion probabilistic model for neural audio upsampling," Proc. Interspeech, pp. 1634-1638, 2021.
|
| 412 |
+
|
| 413 |
+
[47] S. Han and J. Lee, "NU-Wave 2: A general neural audio upsampling model for various sampling rates," in Proc. Interspeech, 2022, pp. 4401-4405.
|
| 414 |
+
[48] C.-Y. Yu, S.-L. Yeh, G. Fazekas, and H. Tang, “Conditioning and sampling in variational diffusion models for speech super-resolution,” in Proc. ICASSP, 2023, pp. 1-5.
|
| 415 |
+
[49] F. I. Diakogiannis, F. Waldner, P. Caccetta, and C. Wu, "ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 162, pp. 94-114, 2020.
|
| 416 |
+
[50] Z.-Q. Wang, G. Wichern, and J. Le Roux, “On the compensation between magnitude and phase in speech separation,” IEEE Signal Processing Letters, vol. 28, pp. 2018-2022, 2021.
|
| 417 |
+
[51] H. Siuzdak, "Vocos: Closing the gap between time-domain and fourier-based neural vocoders for high-quality audio synthesis," arXiv preprint arXiv:2306.00814, 2023.
|
| 418 |
+
[52] A. Gritsenko, T. Salimans, R. van den Berg, J. Snoek, and N. Kalchbrenner, “A spectral energy distance for parallel speech synthesis,” Proc. NeurIPS, vol. 33, pp. 13062-13072, 2020.
|
| 419 |
+
[53] J. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," stat, vol. 1050, p. 21, 2016.
|
| 420 |
+
[54] D. Hendrycks and K. Gimpel, "Gaussian error linear units (GELUs)," in Proc. ICML, vol. 70, 2017, pp. 3441-3450.
|
| 421 |
+
[55] A. L. Maas, A. Y. Hannun, A. Y. Ng et al., "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, vol. 30, no. 1, 2013, p. 3.
|
| 422 |
+
[56] N. Zeghidour, A. Luebs, A. Omran, J. Skoglund, and M. Tagliasacchi, "Soundstream: An end-to-end neural audio codec," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 495-507, 2021.
|
| 423 |
+
[57] K. Kumar, R. Kumar, T. De Boissiere, L. Gestin, W. Z. Teoh, J. Sotelo, A. De Brebisson, Y. Bengio, and A. C. Courville, "MelGAN: Generative adversarial networks for conditional waveform synthesis," in Proc. NeurIPS, vol. 32, 2019.
|
| 424 |
+
[58] J. Yamagishi, C. Veaux, K. MacDonald et al., "CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92)," University of Edinburgh. The Centre for Speech Technology Research (CSTR), 2019.
|
| 425 |
+
[59] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.
|
| 426 |
+
[60] M. Chinen, F. S. Lim, J. Skoglund, N. Gureev, F. O'Gorman, and A. Hines, "ViSQOL v3: An open source production ready objective speech and audio metric," in Proc. QoMEX, 2020, pp. 1-6.
|
| 427 |
+
[61] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, “Robust speech recognition via large-scale weak supervision,” in Proc. ICML, 2023, pp. 28492-28518.
|
| 428 |
+
[62] C. Veaux, J. Yamagishi, K. MacDonald et al., "CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit," University of Edinburgh. The Centre for Speech Technology Research (CSTR), vol. 6, p. 15, 2017.
|
| 429 |
+
[63] H. Zen, V. Dang, R. Clark, Y. Zhang, R. J. Weiss, Y. Jia, Z. Chen, and Y. Wu, "LibriTTS: A corpus derived from LibriSpeech for text-to-speech," in Proc. Interspeech, 2019, pp. 1526-1530.
|
| 430 |
+
[64] E. Bakhturina, V. Lavrukhin, B. Ginsburg, and Y. Zhang, "Hi-Fi multispeaker English TTS dataset," in Proc. Interspeech, 2021, pp. 2776-2780.
|
2401.06xxx/2401.06387/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:02bf3dcd95d3c73abd45747cdabe297d7dfc34257015c9974cca4c3d30d546de
|
| 3 |
+
size 876213
|
2401.06xxx/2401.06387/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06391/b5d13a93-69f7-4b4c-9ca9-f920626768d9_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0b700f870a992814164317c858fbb0a4118afab807456f74202391a74771a70f
|
| 3 |
+
size 1728579
|
2401.06xxx/2401.06391/full.md
ADDED
|
@@ -0,0 +1,547 @@
| 1 |
+
# Teaching Code LLMs to Use Autocompletion Tools in Repository-Level Code Generation
|
| 2 |
+
|
| 3 |
+
Chong Wang*, Jian Zhang*, Yebo Feng*, Tianlin Li*, Weisong Sun*, Yang Liu*, and Xin Peng†
|
| 4 |
+
*School of Computer Science and Engineering, Nanyang Technological University, Singapore
|
| 5 |
+
{chong.wang, jian_zhang, yebo.feng}@ntu.edu.sg, tianlin001@e.ntu.edu.sg, {weisong.sun, yangliu}@ntu.edu.sg
|
| 6 |
+
†School of Computer Science and Shanghai Key Laboratory of Data Science, Fudan University, China
|
| 7 |
+
pengxin@fudan.edu.cn
|
| 8 |
+
|
| 9 |
+
Abstract—Recent code large language models (LLMs) have shown promising performance in generating standalone functions. However, they face limitations in repository-level code generation due to their lack of awareness of repository-level dependencies (e.g., user-defined attributes), resulting in dependency errors such as undefined-variable and no-member errors. In this work, we introduce TOOLGEN, an approach that integrates autocompletion tools into the code LLM generation process to address these dependencies. TOOLGEN comprises two main phases: Trigger Insertion and Model Fine-tuning (Offline), and Tool-integrated Code Generation (Online). During the offline phase, TOOLGEN augments functions within a given code corpus with a special mark token, indicating positions to trigger autocompletion tools. These augmented functions, along with their corresponding descriptions, are then used to fine-tune a selected code LLM. In the online phase, TOOLGEN iteratively generates functions by predicting tokens step-by-step using the fine-tuned LLM. Whenever a mark token is encountered, TOOLGEN invokes the autocompletion tool to suggest code completions and selects the most appropriate one through constrained greedy search.
|
| 10 |
+
|
| 11 |
+
We conduct comprehensive experiments to evaluate TOOLGEN's effectiveness in repository-level code generation across three distinct code LLMs: CodeGPT, CodeT5, and CodeLlama. To facilitate this evaluation, we create a benchmark comprising 671 real-world code repositories and introduce two new dependency-based metrics: Dependency Coverage and Static Validity Rate. The results demonstrate that TOOLGEN significantly improves Dependency Coverage by $31.4\%$ to $39.1\%$ and Static Validity Rate by $44.9\%$ to $57.7\%$ across the three LLMs, while maintaining competitive or improved performance in widely recognized similarity metrics such as BLEU-4, CodeBLEU, Edit Similarity, and Exact Match. On the CoderEval dataset, TOOLGEN achieves improvements of $40.0\%$ and $25.0\%$ in test pass rate (Pass@1) for CodeT5 and CodeLlama, respectively, while maintaining the same pass rate for CodeGPT. TOOLGEN also demonstrates high efficiency in repository-level code generation, with latency ranging from 0.63 to 2.34 seconds for generating each function. Furthermore, our generalizability evaluation confirms TOOLGEN's consistent performance when applied to diverse code LLMs, encompassing various model architectures and scales.
|
| 12 |
+
|
| 13 |
+
# I. INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Code generation has been a longstanding focal point in the field of software engineering. Recent advancements have introduced a variety of code large language models (LLMs) [1]–[13] constructed upon the Transformer model architecture [14], achieving promising performance in code-related applications [15]–[24]. These models are either pretrained or fine-tuned on extensive code corpora, enabling them
|
| 16 |
+
|
| 17 |
+
to automatically generate code based on provided natural language descriptions. These code LLMs have demonstrated notable effectiveness in the generation of code blocks or functions. For instance, CodeLlama [12], built upon the foundational Llama2 model [25], has achieved state-of-the-art results among open code LLMs (e.g., CodeGen [8] and StarCoder [11]), on benchmarks like HumanEval [3] and MBPP [26] that focus on standalone functions.
|
| 18 |
+
|
| 19 |
+
However, it is crucial to emphasize that in real-world code repositories, more than $70\%$ of functions are not standalone [27]. Code LLMs encounter significant challenges when generating such real-world functions, primarily because they cannot be aware of repository-level dependencies, such as user-defined functions and attributes, during the code generation process [27]. This limitation often leads to the generation of code with dependency errors, including undefined-variable and no-member errors. These errors impede the usability and effectiveness of the code LLMs [28]. For example, consider the scenario depicted in Figure 1. A code LLM (e.g., CodeLlama) might incorrectly predict "updates" after generating "... self.", resulting in a no-member error because the object "self" does not possess an attribute named "updates".
|
| 20 |
+
|
| 21 |
+
Meanwhile, modern Integrated Development Environments (IDEs) take a different approach, typically incorporating code autocompletion tools based on program analysis. These tools, like Jedi [29], leverage their ability to analyze the current incomplete function's state and project context to provide valid completion recommendations. This includes suggestions for accessible variables, attributes, and functions. For instance, when encountering "self." in Figure 1, Jedi can infer and recommend 68 accessible attributes defined within "self", including the target suggestion "_registeredUpdates". Therefore, if we can seamlessly switch between code LLMs and the use of autocompletion tools, we have the potential to significantly reduce the occurrence of dependency errors in repository-level code generation.
|
| 22 |
+
|
| 23 |
+
In fact, recent research has delved into the integration of external tools into the generation process of LLMs to mitigate their limitations in constrained generation scenarios. One noteworthy example is ToolFormer [30], which creates an augmented dataset to instruct LLMs on invoking existing arithmetic calculators. This integration effectively reduces errors in generated text involving arithmetic calculations.
|
| 24 |
+
|
| 25 |
+
[Figure 1 content] Docstring: "Register updates that will be executed in each iteration." Incomplete function: `def registerUpdates(self, *updates):` / `for update in updates: self.` — CodeLlama prediction: `updates` (no-member error). Jedi completions (68 suggestions): 1. activation, 2. belongs_to, ..., 46. _registeredUpdates (target).
|
| 47 |
+
|
| 48 |
+
Fig. 1: Illustrative Example of LLM Prediction and Tool Completion
|
| 49 |
+
|
| 50 |
+
Building upon ToolFormer's inspiration, Zhang et al. [31] introduce ToolCoder, an approach designed to teach LLMs how to utilize information-retrieval-based (IR-based) API search tools during the code generation process. While ToolCoder targets the generation of functionally correct standalone functions and demonstrates promising results, the integrated IR-based API search tools do not consider repository-level dependencies, limiting their potential in resolving dependency errors. Additionally, ToolFormer and ToolCoder are unable to handle scenarios where the tools return multiple candidates. Another relevant example of harnessing external tools is Repilot [32], which leverages code completion tools to filter out impractical suggestions made by LLMs in the context of automatic program repairing (APR). Unlike repository-level code generation, Repilot's primary focus is on generating valid single-hunk bug-fix patches rather than entire functions. When applying Repilot to function-level code generation, the autocompletion tools are frequently triggered unnecessarily, resulting in significant overhead and impracticality.
|
| 51 |
+
|
| 52 |
+
In this work, we aim at integrating program-analysis-based code autocompletion tools into the generation process of code LLMs. Achieving the incorporation presents two key challenges. (i) Determining when to trigger the invocation of autocompletion tools during the generation process: The generation process of LLMs is a step-by-step decoding process where each subsequent token is predicted based on previous tokens. In general, a function consists of dozens or even hundreds of tokens, making it impractical to invoke code autocompletion tools at every decoding step. In the case of tools like ToolFormer and ToolCoder, ChatGPT is employed to augment the training corpus by introducing special tokens into the text or code to mark positions where tool invocation is needed. After training on this augmented corpus, LLMs can predict the special token at the appropriate step, thereby triggering tool invocation. However, this ChatGPT-based augmentation method is less effective for repository-level code generation due to the presence of repository-level dependencies. The special token must be precisely inserted at positions involving such dependencies, such as when accessing
|
| 53 |
+
|
| 54 |
+
user-defined variables. (ii) Selecting the target suggestion from the recommended completions of autocompletion tools: Different from tools like arithmetic calculation or API search integrated into ToolFormer and ToolCoder, which return a single result for each invocation, autocompletion tools often provide multiple completion suggestions sorted alphabetically. For instance, as depicted in Figure 1, Jedi returns a list of 68 completion suggestions (excluding builtin attributes), with the target suggestion being the 46th one in the list. Consequently, after invoking autocompletion tools, it is essential to assess the suggestions based on the generated code and select the most appropriate one. Furthermore, this selection process needs to be seamlessly integrated into the code generation process to ensure efficiency and coherence.
|
| 55 |
+
|
| 56 |
+
To tackle the challenges, we propose TOOLGEN, an approach to integrate autocompletion tools into the generation process of code LLMs to support repository-level code generation. TOOLGEN has two main phases: Trigger Insertion and Model Fine-tuning (Offline), and Tool-integrated Code Generation (Online). In the offline phase, TOOLGEN analyzes source files within a corpus of code repositories, creating abstract syntax trees (ASTs) and extracting function definitions. It augments these functions by inserting a special token, $<\text{COMP}>$ , signifying the positions to trigger autocompletion tools. The insertion positions are established by navigating through the functions and identifying the identifiers that can be recommended by autocomplete tools. These augmented functions, paired with their respective descriptions, are then employed to fine-tune a selected code LLM. In the online phase, TOOLGEN iteratively constructs a function based on a provided description by predicting tokens step-by-step through the fine-tuned LLM. Whenever a $<\text{COMP}>$ token is encountered, TOOLGEN invokes the autocompletion tool to suggest code completions, drawing from the current repository context. Subsequently, it identifies the most appropriate suggestion through a constraint greedy search algorithm, appending this selected suggestion to the current tokens. This process continues as it predicts tokens until a specified termination condition is satisfied.
|
| 57 |
+
|
| 58 |
+
We conduct extensive experiments to evaluate the effectiveness of TOOLGEN in repository-level code generation across three distinct code LLMs, namely, CodeGPT [33], CodeT5 [4], and CodeLlama [12]. To facilitate this evaluation, we first construct a benchmark, which includes 12,406 Python functions from 671 real-world code repositories and 176 coding tasks from CoderEval dataset [27]. We define two new repository-level metrics, namely Dependency Coverage and Static Validity Rate. Dependency Coverage quantifies the proportion of repository-level dependencies present in ground-truth functions and successfully covered by the generated functions, while Static Validity Rate measures the percentage of generated functions that pass a dependency error check. The evaluation results on the 12,406 functions demonstrate that TOOLGEN exhibits comparable or improved performance in widely-recognized similarity metrics such as BLEU-4, CodeBLEU, Edit Similarity, and Exact Match. Importantly, TOOLGEN achieves significant improvements in Dependency
|
| 59 |
+
|
| 60 |
+
Coverage, ranging from $31.4\%$ to $39.1\%$ , and Static Validity Rate, spanning from $44.9\%$ to $57.7\%$ , across the three code LLMs. On the 176 tasks derived from CoderEval, TOOLGEN achieves improvements of $40.0\%$ and $25.0\%$ in test pass rate (Pass@1) for CodeT5 and CodeLlama, respectively, while maintaining the same pass rate for CodeGPT. TOOLGEN also demonstrates high efficiency in repository-level code generation, with average latency ranging from 0.63 to 2.34 seconds, attributed to offline fine-tuning with trigger insertion. Moreover, the results from our generalizability evaluation confirm that TOOLGEN consistently performs well across a variety of code LLMs, with different model architectures and scales.
|
| 61 |
+
|
| 62 |
+
In summary, this paper presents the following key contributions:
|
| 63 |
+
|
| 64 |
+
- TOOLGEN, a novel approach that seamlessly integrates autocompletion tools into the generation process of code LLMs to enhance repository-level code generation, consisting of Trigger Insertion and Model Fine-tuning (Offline) and Tool-integrated Code Generation (Online). The offline phase results in an Augmented Dataset, which comprises 249,298 Python functions sourced from a diverse selection of 12,231 code repositories. Each function is augmented with a special token, <COMP>, which signifies positions suitable for invoking autocompletion tools.
|
| 65 |
+
- An Evaluation Benchmark, which encompasses 12,406 Python functions drawn from 671 real-world code repositories and 176 coding tasks with test cases derived from CoderEval, along with the introduction of two novel repository-level metrics: Dependency Coverage and Static Validity Rate.
|
| 66 |
+
- Extensive Experimental Results, which affirm the efficacy of TOOLGEN in repository-level code generation. TOOLGEN demonstrates substantial improvements in Dependency Coverage, ranging from $31.4\%$ to $39.1\%$ , and Static Validity Rate, spanning from $44.9\%$ to $57.7\%$ , across three distinct code LLMs. Additionally, TOOLGEN achieves $40\%$ and $25\%$ improvements in test pass rate for CodeT5 and CodeLlama, respectively, with high generation efficiency.
|
| 67 |
+
|
| 68 |
+
# II. PRELIMINARIES
|
| 69 |
+
|
| 70 |
+
# A. Code LLMs
|
| 71 |
+
|
| 72 |
+
Typically, there are two main categories of code LLMs that can be employed for code generation. These categories include decoder-only models and encoder-decoder models, each of which conducts the code generation process based on a given description as outlined below:
|
| 73 |
+
|
| 74 |
+
- Decoder-only Models: Illustrated in Figure 2a, decoder-only code LLMs, such as CodeGPT [33] and CodeLlama [12], consist solely of a decoder component derived from the Transformer architecture [14]. An employed decoder-only model first tokenizes the input description into a sequence of tokens. Subsequently, it feeds this token
|
| 75 |
+
|
| 76 |
+

|
| 77 |
+
(a) Decoder-only Model
|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
(b) Encoder-Decoder Model
|
| 81 |
+
Fig. 2: Decoder-only Model and Encoder-Decoder Model
|
| 82 |
+
|
| 83 |
+
sequence into the model's decoder and proceeds to predict a function token-by-token, based on the context provided by the description and previously predicted tokens.
|
| 84 |
+
|
| 85 |
+
- Encoder-Decoder Models: As depicted in Figure 2b, encoder-decoder code LLMs, such as CodeT5 [4] and CodeT5+ [5], encompass both the encoder and decoder components of the Transformer architecture. In this case, the employed model also tokenizes the description into a token sequence, but the sequence is first processed by the model's encoder. The model's decoder is then tasked with predicting a function token-by-token, relying on the representation produced by the encoder and the context provided by the preceding tokens.
|
| 86 |
+
|
| 87 |
+
On top of the standard generation process, to ensure that the employed code LLM can recognize and predict the special token $<\mathrm{COMP}>$ , we initially incorporate this token into the LLM's vocabulary, denoted as $\mathbb{V}_{llm}$ . Formally, this addition results in an expanded vocabulary represented as:
|
| 88 |
+
|
| 89 |
+
$$
\mathbb{V} \leftarrow \mathbb{V}_{llm} \cup \{<\mathrm{COMP}>\} \tag{1}
$$
|
| 92 |
+
|
| 93 |
+
For the employed code LLM, within the generation process, we define its tokenization process as a procedure:
|
| 94 |
+
|
| 95 |
+
$$
\text{LLM-TOKENIZE}: \Sigma_{\text{char}}^{*} \rightarrow \mathbb{V}^{*} \tag{2}
$$
|
| 98 |
+
|
| 99 |
+
Here, $\Sigma_{char}^{*}$ represents a character sequence of either a description or a code snippet, and $\mathbb{V}^*$ corresponds to the resulting sequence of tokens drawn from $\mathbb{V}$ .
|
| 100 |
+
|
| 101 |
+
The next token prediction involved in each step is defined as a procedure:
|
| 102 |
+
|
| 103 |
+
$$
\text{LLM-PREDICT}: (\mathbb{V}^{*}, \mathbb{V}^{*}) \rightarrow [0, 1]^{|\mathbb{V}|} \tag{3}
$$
|
| 106 |
+
|
| 107 |
+
In this context, the two input token sequences $(\mathbb{V}^{*})$ represent a description and an incomplete function, respectively, while $[0,1]^{|\mathbb{V}|}$ signifies a probability distribution comprising $|\mathbb{V}|$ probabilities in $[0, 1]$ . Here, $|\mathbb{V}|$ is the size (number of tokens) of the vocabulary $\mathbb{V}$ .
|
| 108 |
+
|
| 109 |
+
Example: In Figure 1, CodeLlama takes the description "Register updates..." and the incomplete function "... self." as inputs. It then performs a prediction, generating a probability distribution of size $|\mathbb{V}|$ , wherein the token "_updates" exhibits the highest probability among all tokens.
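As a concrete illustration of Equations 1-3, the sketch below extends a decoder-only code LLM with the $<\text{COMP}>$ token and queries it for the next-token distribution using the Hugging Face transformers API; the checkpoint name and the exact prompt formatting are illustrative assumptions rather than TOOLGEN's released setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-7b-hf"   # assumed checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Equation 1: expand the vocabulary V_llm with the trigger token <COMP>
tokenizer.add_special_tokens({"additional_special_tokens": ["<COMP>"]})
model.resize_token_embeddings(len(tokenizer))

# Equation 2: LLM-TOKENIZE over the description and the incomplete function
description = "Register updates that will be executed in each iteration."
incomplete = "def registerUpdates(self, *updates):\n    for update in updates:\n        self."
input_ids = tokenizer(description + "\n" + incomplete, return_tensors="pt").input_ids

# Equation 3: LLM-PREDICT, a |V|-dimensional distribution over the next token
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)                 # values in [0, 1]
next_token = tokenizer.decode([int(probs.argmax())])  # greedy next-token choice
```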
|
| 110 |
+
|
| 111 |
+
# B. Autocompletion Tools
|
| 112 |
+
|
| 113 |
+
An autocompletion tool takes a code repository and a caret position (defined as a tuple containing source file, line number,
|
| 114 |
+
|
| 115 |
+
and column number) as input and provides a list of completion suggestions. We define this completion process as a procedure:
|
| 116 |
+
|
| 117 |
+
$$
\text{TOOL-COMPLETE}: (\Sigma_{\text{repo}}, \Sigma_{\text{pos}}) \rightarrow \Sigma_{\text{iden}}^{*} \tag{4}
$$
|
| 120 |
+
|
| 121 |
+
Here, $\Sigma_{repo}$ and $\Sigma_{pos}$ respectively represent the domains of code repositories and caret positions, $\Sigma_{iden}$ encompasses all possible identifiers such that $\Sigma_{iden}^{*}$ is a list of identifiers. It's worth noting that autocompletion tools often provide a wide range of completion suggestions, including keywords and partial identifiers. In our context, we focus solely on identifier-level completions, as keywords are relatively straightforward for code LLMs to predict, and partial identifiers are encompassed by identifier-level completions.
|
| 122 |
+
|
| 123 |
+
Example: In Figure 1, when provided with the code repository and caret position, Jedi is capable of generating 68 completion suggestions for the incomplete function "... self."
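To make the TOOL-COMPLETE procedure concrete, the sketch below queries Jedi for identifier-level completions at a caret position; the repository path, file path, and caret coordinates are placeholders, and filtering out keyword completions is a simplification of how suggestions are restricted to identifiers.

```python
import jedi

repo_path = "/path/to/repo"                      # hypothetical repository root
source = (
    "def registerUpdates(self, *updates):\n"
    "    for update in updates:\n"
    "        self."
)

project = jedi.Project(path=repo_path)
script = jedi.Script(code=source, path=repo_path + "/pkg/module.py", project=project)

# Jedi uses 1-based lines and 0-based columns; the caret sits right after "self."
completions = script.complete(line=3, column=len("        self."))
identifiers = [c.name for c in completions if c.type != "keyword"]
print(len(identifiers), identifiers[:3])
```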
|
| 124 |
+
|
| 125 |
+
# III. APPROACH
|
| 126 |
+
|
| 127 |
+
In this section, we elaborate on our approach, TOOLGEN, which integrates autocompletion tools into the generation process of code LLMs to support repository-level code generation.
|
| 128 |
+
|
| 129 |
+
# A. Overview
|
| 130 |
+
|
| 131 |
+
Figure 3 presents an overview of TOOLGEN, which consists of two main phases, namely (i) Trigger Insertion and Model Fine-tuning (Offline) and (ii) Tool-integrated Code Generation (Online).
|
| 132 |
+
|
| 133 |
+
In trigger insertion and model fine-tuning, TOOLGEN parses each source file in the given code repositories into an abstract syntax tree (AST) and then extracts function definitions from the AST. For each extracted function definition, TOOLGEN utilizes an autocompletion tool to augment it with the special token $<\text{COMP}>$ marking the positions at which to invoke the tool, and then assembles a pair of description and augmented function. After processing all code repositories, TOOLGEN employs the resulting pairs of descriptions and augmented functions to fine-tune a code LLM, resulting in a fine-tuned code LLM that can predict $<\text{COMP}>$ at suitable positions to trigger the autocompletion tool.
|
| 134 |
+
|
| 135 |
+
In tool-integrated code generation, TOOLGEN generates a token sequence to form a function through an iterative process in which, at each step, one or more tokens are yielded by the fine-tuned code LLM and the employed autocompletion tool. At a given step, ① the fine-tuned code LLM takes the given description and the incomplete function as inputs and predicts the next token, which is appended to the incomplete function; ② if the predicted token equals $<\text{COMP}>$ , the autocompletion tool is triggered and returns a list of completion suggestions based on the current repository context; ③ TOOLGEN then selects the most suitable suggestion using the fine-tuned code LLM and appends it to the incomplete function.
|
| 136 |
+
|
| 137 |
+
# B. Trigger Insertion and Model Fine-tuning
|
| 138 |
+
|
| 139 |
+
1) Trigger Insertion: We employ a trigger insertion method to facilitate the learning process of code LLMs in determining when to utilize autocompletion tools during code generation. In this method, the special token $<\mathrm{COMP}>$ is inserted at specific locations within code functions, indicating when autocompletion tools should be triggered.
|
| 140 |
+
|
| 141 |
+
Given a code repository $\mathcal{R}$ , we traverse each source file (denoted $file$ ) within it based on its suffix (e.g., .py for Python) and then proceed to analyze the functions defined in the source file. To achieve this, we parse the source file into an abstract syntax tree (AST), where the functions are represented as function-definition nodes. Each function-definition node contains multiple AST-tokens, which are the smallest individual units, such as keywords, identifiers, literals, operators, and punctuation, within programming language syntax. Note that these AST-tokens differ from the tokens in the LLM's vocabulary $\mathbb{V}_{llm}$ . Typically, an AST-token comprises one or more tokens from $\mathbb{V}_{llm}$ . For example, the AST-token "_registeredUpdates" consists of six tokens in the vocabulary of CodeLlama, i.e., "_", "register", "ed", "_", "up", "dates".
|
| 142 |
+
|
| 143 |
+
For each function within the source file, we identify its corresponding function-definition node, denoted as node, and apply Algorithm 1 to it. The purpose of this algorithm is to traverse the function body and identify specific identifiers that are eligible for suggestions by autocomplete tools. Subsequently, the special token $\langle \text{COMP} \rangle$ is inserted in front of these chosen identifiers. More specifically, as the algorithm iterates through each AST-token $t$ within the function body node.body (line 2), it performs two crucial checks. First, it employs the ISIDENTIFIER procedure to determine whether $t$ is an identifier. Second, it verifies that $t$ is not a built-in attribute, such as "dict" in Python, using the ISBUILTIN procedure. These conditions are essential because dependency errors often arise from user-defined attributes categorized as identifiers rather than other AST-tokens like language keywords. Additionally, these checks prevent the insertion of $\langle \text{COMP} \rangle$ at positions where the code LLM can confidently predict the following tokens, thus minimizing unnecessary tool invocations. When both conditions are met, the algorithm updates the caret position $\mathcal{P}$ to the start position of $t$ (line 4) and invokes the autocompletion tool to obtain a list of completion suggestions, denoted as $\mathbb{C}$ (line 5). If $\mathbb{C}$ contains $t$ , indicating that the tool can propose the desired identifier, the special token $\langle \text{COMP} \rangle$ is inserted before $t$ to mark the position for triggering the autocompletion tool (lines 6-7). Upon executing the algorithm, we obtain the augmented function code $\mathcal{F}_{\text{aug}}$ .
|
| 144 |
+
|
| 145 |
+
Next, we assemble a tuple $(\mathcal{D},\mathcal{F}_{aug})$ , in which $\mathcal{D}$ corresponds to the concatenation of the signature and docstring of the parsed function. Note that functions lacking corresponding docstrings are omitted from our process as our repository-level code generation relies on textual descriptions as input. Once we complete the processing of all code repositories, we accumulate an augmented dataset that contains a substantial number of these data tuples.
|
| 146 |
+
|
| 147 |
+

|
| 148 |
+
Fig. 3: Approach Overview of TOOLGEN
|
| 149 |
+
|
| 150 |
+

|
| 151 |
+
Fig. 4: Augmented Function
|
| 152 |
+
|
| 153 |
+
Algorithm 1: Trigger Insertion
|
| 154 |
+
Input: repository $\mathcal{R}$ , source file $file$ , function-definition node $node$
Output: augmented function $\mathcal{F}_{\mathrm{aug}}$
1: $\mathcal{F}_{\mathrm{aug}} \gets$ GETSIGNATURE( $node$ )  // signature
2: for $t$ in $node$ .body do
3:  if ISIDENTIFIER( $t$ ) and not ISBUILTIN( $t$ ) then
4:   $\mathcal{P} \gets$ ( $file$ , $t$ .start_line, $t$ .start_column)
5:   $\mathbb{C} \gets$ TOOL-COMPLETE( $\mathcal{R}$ , $\mathcal{P}$ )
6:   if $t \in \mathbb{C}$ then
7:    $\mathcal{F}_{\mathrm{aug}}$ .append( $<\text{COMP}>$ )
8:  $\mathcal{F}_{\mathrm{aug}}$ .append( $t$ )
|
| 155 |
+
|
| 156 |
+
```python
def registerUpdates(self, *updates):
    for update in <COMP>updates:
        self.<COMP>_registeredUpdates.<COMP>add(<COMP>update)
```
|
| 161 |
+
|
| 162 |
+
Note that our trigger insertion method can be applied to arbitrary code and is not limited to function bodies alone. Currently, we focus exclusively on function bodies, as our primary application scenario involves generating code based on the given natural language descriptions. Extracting descriptions for code blocks outside functions for model training and evaluation is challenging, due to the difficulty in determining the scope of line comments [34], [35]. Therefore, we solely consider function bodies, where corresponding descriptions can be readily obtained from function docstrings.
|
| 163 |
+
|
| 164 |
+
Example: In Figure 4, we showcase an augmented function that contains four instances of the special token $<\text{COMP}>$ . These tokens have been inserted at positions where the desired identifiers, namely "updates", "_registeredUpdates", "add", and "update", are found within the suggestion lists of the autocompletion tool.
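A simplified Python re-implementation of Algorithm 1 for a single function is sketched below. It walks the AST of the function, treats `Name` and `Attribute` nodes as the identifiers of interest, and splices $<\text{COMP}>$ into the source text; `complete_fn` is a hypothetical callback standing in for TOOL-COMPLETE (e.g., backed by Jedi), and the position handling is simplified compared to the paper's AST-token traversal.

```python
import ast
import builtins

BUILTIN_NAMES = set(dir(builtins))

def insert_comp_tokens(source: str, complete_fn) -> str:
    """Insert <COMP> before identifiers that the completion tool can suggest."""
    func = next(n for n in ast.walk(ast.parse(source)) if isinstance(n, ast.FunctionDef))
    lines = source.splitlines(keepends=True)

    insertions = []                                   # (0-based line, column) pairs
    for node in ast.walk(func):
        if isinstance(node, ast.Name) and node.id not in BUILTIN_NAMES:
            name, line, col = node.id, node.lineno, node.col_offset
        elif isinstance(node, ast.Attribute) and node.attr not in BUILTIN_NAMES:
            # the attribute name starts len(attr) characters before the node's end
            name, line, col = node.attr, node.end_lineno, node.end_col_offset - len(node.attr)
        else:
            continue
        if name in complete_fn(line, col):            # tool can suggest this identifier
            insertions.append((line - 1, col))

    # splice right-to-left so earlier offsets stay valid
    for line_idx, col in sorted(insertions, reverse=True):
        lines[line_idx] = lines[line_idx][:col] + "<COMP>" + lines[line_idx][col:]
    return "".join(lines)
```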
2) Model Fine-tuning: During the fine-tuning process, we supply the collected descriptions and augmented functions to optimize the parameters of the employed code LLM (base model), adhering to established practices in code generation tasks. Specifically, for each pair consisting of a description $\mathcal{D}$ and an augmented function $\mathcal{F}_{\text{aug}}$ , both are tokenized into sequences of tokens and subsequently fed into the base model to undergo the token-by-token generation process described in Section II-A. At each step, a cross-entropy loss is computed between the predicted probability distribution of the next token and the ground-truth next token present in $\mathcal{F}_{\text{aug}}$ .
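As an illustration, the next-token cross-entropy objective can be computed with a standard causal-LM training step, as sketched below; the checkpoint, description, and function are placeholders rather than the actual training data, and the trigger token is registered as an additional special token of the expanded vocabulary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint for illustration; TOOLGEN fine-tunes CodeGPT, CodeT5, or CodeLlama.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register <COMP> so that it is a single token of the expanded vocabulary.
tokenizer.add_special_tokens({"additional_special_tokens": ["<COMP>"]})
model.resize_token_embeddings(len(tokenizer))

description = "Register update callbacks on this object."  # D (hypothetical)
augmented_fn = (                                            # F_aug (hypothetical)
    "def registerUpdates(self, *updates):\n"
    "    for update in <COMP>updates:\n"
    "        self.<COMP>_registeredUpdates.<COMP>add(<COMP>update)"
)

enc = tokenizer(description + "\n" + augmented_fn, return_tensors="pt")
# Passing labels=input_ids makes the library compute the shifted next-token
# cross-entropy loss between the predicted distributions and the ground truth.
out = model(**enc, labels=enc["input_ids"])
out.loss.backward()
```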
In the case of code LLMs with an extensive number of parameters, such as CodeLlama-7B with 7 billion parameters, fine-tuning all parameters becomes computationally challenging due to resource limitations. To address this, we employ Low-Rank Adaptation (LoRA) [36] as a parameter-efficient fine-tuning technique. LoRA relies on low-dimensional representations and a freeze-and-inject strategy, where the majority of the model parameters remain fixed, and trainable low-rank matrices are introduced into specific transformer layers, particularly the projection matrices within the attention module, to approximate weight updates.
# C. Tool-integrated Code Generation
Based on the fine-tuned code LLM and the employed autocompletion tool, we perform a tool-integrated code generation process that is aware of the repository-level dependencies.
1) Overall Process: Algorithm 2 outlines the overall tool-integrated generation process, comprising three crucial parts based on the fine-tuned code LLM and the employed autocompletion tool: $①$ Next Token Prediction, $②$ Code Autocompletion, and $③$ Suggestion Selection. This algorithm takes a code repository $\mathcal{R}$ , an insertion position $\mathcal{P}$ , and a description $\mathcal{D}$ as inputs and follows an iterative process to generate a token sequence, ultimately constructing a function denoted as $\mathcal{F}$ . Here, the tokens are drawn from the expanded vocabulary $\mathbb{V}$ defined in Equation 1.
The iterative process commences with the $\langle \mathsf{BOS} \rangle$ token (representing the beginning of the sequence), i.e., $\mathcal{F} \gets [ \langle \mathsf{BOS} \rangle ]$
in line 2, and proceeds by iteratively updating $\mathcal{F}$ until it reaches the $<\mathrm{EOS}>$ token (representing the end of the sequence). During each iteration step, the algorithm utilizes the description $\mathcal{D}$ and the current incomplete function $\mathcal{F}$ as inputs for the fine-tuned code LLM to execute the LLM-PREDICT procedure. This procedure predicts a $|\mathbb{V}|$ -dimension probability distribution $\pmb{p}^{|\mathbb{V}|}$ for the tokens in the vocabulary $\mathbb{V}$ (line 4). Subsequently, the token $tok$ with the highest probability is selected using the commonly used ARGMAX function [37] (line 5). The selected token $tok$ is then appended to $\mathcal{F}$ (line 6). If $tok$ corresponds to the $<\mathrm{EOS}>$ token, the iterative process concludes, yielding the final generated function (lines 7-8).
If $tok$ corresponds to the special token $\langle \mathrm{COMP}\rangle$ , the autocompletion tool is triggered to provide a list of completion suggestions denoted as $\mathbb{C}$ . These suggestions are produced based on the code repository $\mathcal{R}$ and the caret position $\mathcal{P}'$ after inserting $\mathcal{F}$ at $\mathcal{P}$ (lines 9-11). Notably, when $\mathcal{F}$ is inserted using the INSERT procedure, any $\langle \mathrm{COMP}\rangle$ tokens within it are removed to prevent syntax errors. The fine-tuned code LLM is then employed to assess the completion suggestions and select the most suitable one for $\mathcal{F}$ by the LLM-SELECT procedure (line 12). The tokens from the selected suggestion are concatenated to $\mathcal{F}$ .
Example: In the case of the incomplete code snippet shown in Figure 1, Algorithm 2 predicts the next token as $<\text{COMP}>$ through the fine-tuned code LLM. This prediction triggers the autocompletion tool. Subsequently, the resulting completion suggestions are fed into the LLM-SELECT procedure, which determines the most appropriate suggestion.
Algorithm 2: Tool-integrated Code Generation
Input: Repository $\mathcal{R}$, Description $\mathcal{D}$, Insertion Position $\mathcal{P}$
Output: Function $\mathcal{F}$
1 $\mathcal{D} \gets$ LLM-TOKENIZE($\mathcal{D}$)
2 $\mathcal{F} \gets [\langle \mathrm{BOS} \rangle]$
3 while true do
  /* $①$ Next Token Prediction */
4   $\pmb{p}^{|\mathbb{V}|} \gets$ LLM-PREDICT($\mathcal{D}$, $\mathcal{F}$)
5   $tok \gets$ ARGMAX($\mathbb{V}$, $\pmb{p}^{|\mathbb{V}|}$)
6   $\mathcal{F} \gets \mathcal{F} \oplus tok$
7   if $tok = \langle \mathrm{EOS} \rangle$ then
8     break
9   if $tok = \langle \mathrm{COMP} \rangle$ then
    /* $②$ Code Autocompletion */
10    $\mathcal{P}' \gets$ INSERT($\mathcal{P}$, $\mathcal{F}$)
11    $\mathbb{C} \gets$ TOOL-COMPLETE($\mathcal{R}$, $\mathcal{P}'$)
    /* $③$ Suggestion Selection */
12    LLM-SELECT($\mathcal{D}$, $\mathcal{F}$, $\mathbb{C}$)
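The loop can be sketched in Python as follows, assuming helper callables llm_predict, tool_complete, llm_select, and insert_code that mirror LLM-PREDICT, TOOL-COMPLETE, LLM-SELECT, and INSERT; batching, caching, and device handling are omitted.

```python
def generate_function(llm_predict, tool_complete, llm_select, insert_code,
                      repo, description, position, vocab,
                      bos="<BOS>", eos="<EOS>", comp="<COMP>", max_steps=512):
    """Sketch of Algorithm 2: greedy token-by-token generation with tool triggers."""
    tokens = [bos]
    for _ in range(max_steps):
        # (1) Next Token Prediction over the expanded vocabulary V.
        probs = llm_predict(description, tokens)
        idx = max(range(len(vocab)), key=lambda i: probs[i])  # ARGMAX
        tok = vocab[idx]
        tokens.append(tok)
        if tok == eos:
            break
        if tok == comp:
            # (2) Code Autocompletion: insert the partial function (with <COMP>
            # tokens stripped to avoid syntax errors) and query the tool.
            caret = insert_code(position, [t for t in tokens if t != comp])
            suggestions = tool_complete(repo, caret)
            # (3) Suggestion Selection: the LLM picks one suggestion and its
            # tokens are appended to the partial function.
            tokens = llm_select(description, tokens, suggestions)
    return tokens
```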
2) Completion Suggestion Selection: Algorithm 3 provides a description of the LLM-SELECT procedure, which is called within Algorithm 2. To begin, it tokenizes each completion in $\mathbb{C}$ into a sequence of tokens from $\mathbb{V}$ using the code LLM's tokenizer (via the LLM-TOKENIZE procedure) and

Fig. 5: Example Prefix Tree
inserts this token sequence into a prefix tree [38], denoted as trie (lines 1-5). Each node in the tree possesses four properties: node.node_token, node.tok_idx, node.children, and node.is_final, indicating the token stored in the node, the index of the stored token in $\mathbb{V}$ , the child nodes of the current node, and whether the node corresponds to the terminal of a token sequence. The root node, trie.root, is a unique node that stores $\epsilon$ , signifying an empty string. Every path from trie.root to a terminal node corresponds to a token sequence from $\mathbb{C}$ . As an illustration, Figure 5 presents the prefix tree corresponding to the 68 completion suggestions shown in Figure 1. In this example, nodes enclosed in blue boxes indicate the terminals of token sequences.
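A minimal prefix-tree implementation with these four node properties could look as follows; it is illustrative only, and the tok_idx values are assumed to come from a token-to-index mapping of the LLM's vocabulary.

```python
class TrieNode:
    def __init__(self, token="", tok_idx=-1):
        self.node_token = token   # token stored in the node (ε for the root)
        self.tok_idx = tok_idx    # index of the stored token in the vocabulary V
        self.children = {}        # child nodes, keyed by their token
        self.is_final = False     # terminal of a completion's token sequence

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, token_seq, vocab_index):
        """Insert one tokenized completion; vocab_index maps token -> index in V."""
        node = self.root
        for tok in token_seq:
            if tok not in node.children:
                node.children[tok] = TrieNode(tok, vocab_index[tok])
            node = node.children[tok]
        node.is_final = True
```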
Algorithm 3: Suggestion Selection based on Constrained Greedy Search

1 Procedure LLM-SELECT($\mathcal{D}$, $\mathcal{F}$, $\mathbb{C}$):
2   trie $\gets$ Trie() // prefix tree
3   for comp in $\mathbb{C}$ do
4     seq $\gets$ LLM-TOKENIZE(comp)
5     trie.insert(seq)
6   node $\gets$ trie.root
7   while not node.is_final do
8     $m^{|\mathbb{V}|} \gets \mathbf{0}$
9     for child in node.children do
10      $m^{|\mathbb{V}|}$[child.tok_idx] $\gets$ 1
11    $\pmb{p}^{|\mathbb{V}|} \gets$ LLM-PREDICT($\mathcal{D}$, $\mathcal{F}$)
12    $\pmb{p}^{|\mathbb{V}|} \gets \pmb{p}^{|\mathbb{V}|} \odot m^{|\mathbb{V}|}$
13    $tok \gets$ ARGMAX($\mathbb{V}$, $\pmb{p}^{|\mathbb{V}|}$)
14    $\mathcal{F} \gets \mathcal{F} \oplus tok$
15    for child in node.children do
16      if child.node_token $=$ tok then
17        node $\gets$ child
18        break
Subsequently, the algorithm proceeds to select a path in trie in a greedy fashion, based on predictions made by the fine-tuned code LLM, and appends the token sequence associated with the chosen path to the incomplete function $\mathcal{F}$ (lines 6-13). Specifically, the algorithm initiates a node pointer, denoted as node, with the root node trie.root (line 6). A loop continues until the pointer node reaches a terminal node (line 7). Within this loop, a $|\mathbb{V}|$ -dimensional mask vector, denoted as $m^{|\mathbb{V}|}$ , is generated based on the children of the current node (lines 8-10).
In $m^{|\mathbb{V}|}$ , only positions corresponding to the $tok\_idx$ property of the children of node are assigned a value of 1, while all other positions are set to 0. Subsequently, the fine-tuned code LLM is employed to predict a probability distribution, $p^{|\mathbb{V}|}$ (line 11). This predicted distribution is then element-wise multiplied by the mask vector $m^{|\mathbb{V}|}$ , effectively setting the probability of tokens not in the children of node to 0. The next token, $tok$ , is selected from $\mathbb{V}$ based on the highest probability in $p^{|\mathbb{V}|}$ using the ARGMAX function and is appended to the current incomplete function $\mathcal{F}$ (lines 13-14). Finally, the node pointer is updated to point to the child of node whose stored token matches the selected token $tok$ (lines 15-18).
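In NumPy terms, a single constrained selection step can be sketched as below, assuming probs is the LLM's distribution over $\mathbb{V}$, node is the current prefix-tree node (as in the Trie sketch above), and vocab maps indices back to tokens.

```python
import numpy as np

def constrained_step(probs, node, vocab):
    """One step of the constrained greedy search: mask the distribution to the
    children of the current trie node, then take the argmax."""
    mask = np.zeros_like(probs)
    for child in node.children.values():
        mask[child.tok_idx] = 1.0      # keep only tokens stored in the children
    masked = probs * mask              # element-wise product p ⊙ m
    tok_idx = int(np.argmax(masked))   # highest remaining probability
    tok = vocab[tok_idx]
    return tok, node.children[tok]     # append tok to F and descend in the trie
```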
Example: For the prefix tree illustrated in Figure 5, the LLM-SELECT procedure iteratively selects the next tokens within the tree, guided by the LLM's predictions. As a result, the tokens corresponding to the suggestion "_registeredUpdates", found along the green path, are appended to the incomplete function.
# IV. EVALUATION SETUP
To evaluate the effectiveness and efficiency of TOOLGEN in repository-level code generation, we conduct a comprehensive set of experiments.
# A. Research Questions
We formulate the following research questions to guide our evaluation:
- RQ1 - Similarity-based Effectiveness: How closely does the code generated by TOOLGEN align with the ground truth when assessed using common similarity metrics?
- RQ2 - Dependency-based Effectiveness: To what degree can TOOLGEN cover repository-level dependencies and reduce dependency errors, including those related to user-defined functions and attributes?
- RQ3 - Execution-based Effectiveness: How effectively can TOOLGEN generate functionally correct functions that pass test cases?
- RQ4 - Efficiency: What is the average time TOOLGEN takes to generate functions?
- RQ5 - Generalizability: Is TOOLGEN effective in code generation when applied to different code LLMs?
# B. Implementation
Although TOOLGEN is designed to be language-agnostic, our current focus is on developing a Python-specific prototype of TOOLGEN.
Base Model. In TOOLGEN, we explore the utilization of three distinct code LLMs to encompass diverse model architectures and parameter scales. These code LLMs demonstrate impressive performance in code generation and have found extensive utilization in prior studies [4], [5], [12], [33], [39] for fine-tuning and evaluation.
- CodeGPT: CodeGPT [33] falls into the category of decoder-only models. It undergoes pre-training on a Python corpus sourced from the CodeSearchNet
dataset [40], comprising 1.1 million Python functions. For our purposes, we adopt the pre-trained CodeGPT-small version<sup>1</sup>, which encompasses 124 million model parameters.
- CodeT5: CodeT5 [4] belongs to the encoder-decoder model category and is similarly pre-trained on the Python corpus from the CodeSearchNet dataset. We select the pre-trained CodeT5-base version<sup>2</sup>, which comprises 220 million model parameters.
- CodeLlama: CodeLlama [12] represents another decoder-only model, specialized for code-related tasks and based on Llama2 [25]. It is pre-trained on an even larger Python corpus, encompassing a staggering 100 billion tokens sourced from a Python-centric dataset [12]. For our purposes, we adopt the pre-trained CodeLlama-7b version<sup>3</sup>, featuring a substantial 7 billion model parameters.
We refer to the variants of TOOLGEN, namely TOOLGEN-gpt, TOOLGEN-t5, and TOOLGEN-llama, corresponding to the underlying base models CodeGPT, CodeT5, and CodeLlama, respectively.
Autocompletion Tool. We employ Jedi [29] as our autocompletion tool. Jedi is a static analysis tool designed for Python, commonly utilized within integrated development environments (IDEs) and editor plugins. Utilizing Jedi, TOOLGEN can trigger autocompletion, generating a list of suggestions that encompasses repository-level dependencies, including user-defined attributes and functions.
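For illustration, querying Jedi for completions at a caret position can be done as follows; this is a minimal example of the public jedi API rather than TOOLGEN's exact wrapper, and the code snippet itself is hypothetical.

```python
import jedi

source = (
    "class Updater:\n"
    "    def __init__(self):\n"
    "        self._registeredUpdates = set()\n"
    "    def registerUpdates(self, *updates):\n"
    "        for update in updates:\n"
    "            self."
)
# Request completions right after "self." (line 6, column 17; columns are 0-based).
script = jedi.Script(code=source, path="updater.py")
for completion in script.complete(6, 17):
    print(completion.name)  # e.g., _registeredUpdates, registerUpdates, __init__, ...
```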
Trigger Insertion. To create the augmented dataset for fine-tuning the employed base model, we begin with the Python corpus from the training set of CodeSearchNet dataset. Since the CodeSearchNet dataset does not provide complete code repositories from which to extract Python functions, we initiate the process by crawling the code repositories listed in the dataset. Subsequently, we follow the procedure outlined in Section III-B1 to extract and augment functions within these code repositories, ultimately generating the augmented dataset. It's important to note that the CodeSearchNet dataset includes a partitioning into training, validation, and test sets. For our trigger insertion process, we exclusively utilize the code repositories associated with the training set. The resulting augmented dataset comprises a total of 249,298 pairs of descriptions and augmented functions, which are sourced from 12,231 distinct Python code repositories. Regarding dataset statistics, the average token count for descriptions is 10.98, and for augmented functions, it is 55.31. Additionally, the special token $<\text{COMP}>$ appears an average of 5.54 times within these functions.
Model Fine-tuning. In the fine-tuning process, we adopt different strategies for CodeGPT, CodeT5, and CodeLlama: For CodeGPT and CodeT5, we perform full-parameter fine-tuning, optimizing all model parameters during this phase. In the case of CodeLlama, we employ LoRA with a reduction factor $(r)$ of 8 and a scaling factor (alpha) of 16 to achieve parameter-efficient fine-tuning. This approach allows us to optimize only $3.86\%$ of the trainable parameters in comparison to the original CodeLlama model. The fine-tuning settings for learning rate and batch size are consistent across all three models, with a learning rate of 5E-6 and a batch size of 32. However, the number of epochs differs: 10 epochs for CodeGPT and CodeT5, while CodeLlama undergoes fine-tuning for 3 epochs. To ensure reproducibility, we set the seed for random functions to 42 consistently across all packages and libraries used.

<sup>1</sup>https://huggingface.co/microsoft/CodeGPT-small-py
<sup>2</sup>https://huggingface.co/Salesforce/codet5-base
<sup>3</sup>https://huggingface.co/codellama/CodeLlama-7b-Python-hf
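A configuration in the spirit of this setup, using the Hugging Face peft library, might look as follows; it is a sketch rather than the exact training script, and the target module names and dropout value are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Python-hf")

lora_config = LoraConfig(
    r=8,                      # reduction factor (rank)
    lora_alpha=16,            # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,        # illustrative value
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the weights is trainable
```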
# C. Evaluation Benchmark
1) Datasets: To evaluate TOOLGEN, we curate two datasets: (i) a large dataset derived from the CodeSearchNet [40] to assess similarity-based and dependency-based effectiveness (RQ1 and RQ2); (ii) a dataset derived from CoderEval [27] containing test cases to evaluate execution-based effectiveness (RQ3).
CodeSearchNet. To assess similarity-based and dependency-based effectiveness, we follow this process to construct the dataset: We start by crawling the code repositories listed in the test set of the CodeSearchNet dataset, ensuring no overlap with the training set used for model fine-tuning. We then extract pairs of descriptions and functions from these repositories by parsing and traversing Abstract Syntax Trees (ASTs), similar to the method described in Section III-B1. This process yields an evaluation dataset comprising 12,406 Python functions sourced from 671 code repositories. On average, the descriptions contain 10.66 tokens, while the functions consist of an average of 54.54 tokens.
CoderEval. To evaluate execution-based effectiveness, we initially gather all 230 Python code generation tasks from the CoderEval benchmark, extracted from 43 real-world Python repositories. Each task consists of a natural language description, a ground-truth code snippet, and a set of test cases, along with the project environment context associated with the task (e.g., project source code, dependent libraries, and test scripts). The tasks are categorized into six runnable levels: self-contained, slib-runnable, plib-runnable, class-runnable, file-runnable, and project-runnable [27]. Each runnable level relies on the dependencies defined at that level and does not depend on those defined at subsequent levels. For example, plib-runnable indicates that the task requires public third-party libraries, while file-runnable requires dependencies defined in the current file (e.g., user-defined classes and functions). We remove the tasks overlapping with the training dataset of TOOLGEN, resulting in a final dataset containing 176 tasks.
2) Baselines: The different variants of TOOLGEN and the baselines are presented in Table I, along with the base models they employ.
Vanilla Baselines. We develop three vanilla baseline approaches by fine-tuning these same base models but performing straightforward code generation without tool integration. Specifically, the fine-tuning process for the baselines involves using the 249,298 pairs of descriptions and functions from the augmented dataset. Notably, the fine-tuning is conducted
on the original functions, prior to the introduction of <COMP> tokens. The training configurations, including learning rates and training epochs, mirror those employed in the implementation of TOOLGEN. After fine-tuning, these models are utilized to perform straightforward code generation, as outlined in Section II-A. We label the three baseline approaches as follows:
- VANILLA-gpt: Represents straightforward code generation using the CodeGPT model fine-tuned on original functions.
- VANILLA-t5: Signifies straightforward code generation using the CodeT5 model fine-tuned on original functions.
- VANILLA-llama: Designates straightforward code generation with the CodeLlama model fine-tuned on original functions.
Retrieval-Augmented-Generation (RAG) Baselines. We also include REPOCODER [41], a state-of-the-art approach that addresses repository-level code generation by integrating a similarity-based retriever and a pre-trained code language model in an iterative retrieval-augmented-generation pipeline. Similarly, we create three variants of REPOCODER based on the three fine-tuned models VANILLA-gpt, VANILLA-t5, and VANILLA-llama. We directly apply the prompt template defined in the original implementation of REPOCODER. The three variants of REPOCODER are listed as follows:
- REPOCODER-gpt: Represents the variant of REPOCODER with the CodeGPT model fine-tuned on original functions.
- REPOCODER-t5: Signifies the variant of REPOCODER with the CodeT5 model fine-tuned on original functions.
- REPOCODER-llama: Designates the variant of REPOCODER with the CodeLlama model fine-tuned on original functions.
The hyperparameters of REPOCODER used in our experiments follow its default implementation: the retrieval-generation iteration is set to 2, the window size is 20, and the slice size is 2.
RAG-based Variants of TOOLGEN. In fact, the RAG method is orthogonal to our tool-integrated approach. To ensure a fair comparison and further explore the potential of TOOLGEN, we also develop three RAG-based variants: ragTOOLGEN-gpt, ragTOOLGEN-t5, and ragTOOLGEN-llama. In these variants, the employed retrieval process is the same as in the REPOCODER baselines.
3) Metrics: In our evaluation, we employ commonly used similarity-based metrics, two novel dependency-based metrics, and an execution-based metric to evaluate the effectiveness of TOOLGEN in repository-level code generation.
Similarity-based Metrics: We utilize the following well-established similarity metrics to measure the correspondence between generated functions and their ground-truth counterparts:
- BLEU-4 [42]: This metric assesses the quality of generated code by comparing n-grams (sequences of n consecutive tokens) in the generated functions with those in the ground-truth functions.
- CodeBLEU [33]: Specifically designed for code generation tasks, CodeBLEU evaluates the accuracy of code generation models by considering code-specific vocabulary and structure.

TABLE I: Variants of TOOLGEN and baselines.

<table><tr><td>Approach</td><td>Base Model</td><td>Architecture</td><td># Parameters</td></tr><tr><td>VANILLA-gpt</td><td></td><td></td><td></td></tr><tr><td>REPOCODER-gpt</td><td>CodeGPT</td><td>Decoder-Only</td><td>124 Million</td></tr><tr><td>TOOLGEN-gpt (ours)</td><td></td><td></td><td></td></tr><tr><td>ragTOOLGEN-gpt (ours)</td><td></td><td></td><td></td></tr><tr><td>VANILLA-t5</td><td></td><td></td><td></td></tr><tr><td>REPOCODER-t5</td><td>CodeT5</td><td>Encoder-Decoder</td><td>220 Million</td></tr><tr><td>TOOLGEN-t5 (ours)</td><td></td><td></td><td></td></tr><tr><td>ragTOOLGEN-t5 (ours)</td><td></td><td></td><td></td></tr><tr><td>VANILLA-llama</td><td></td><td></td><td></td></tr><tr><td>REPOCODER-llama</td><td>CodeLlama</td><td>Decoder-Only</td><td>7 Billion</td></tr><tr><td>TOOLGEN-llama (ours)</td><td></td><td></td><td></td></tr><tr><td>ragTOOLGEN-llama (ours)</td><td></td><td></td><td></td></tr></table>
- Edit Similarity (EditSim) [43]: This metric measures the similarity between two functions by analyzing the character-level edit operations required to transform one into the other.
- Exact Match: This metric measures the ratio of generated functions that exactly match the ground truth.
The calculation of the similarity-based metrics follows the implementation in CodeXGLUE<sup>4</sup>.
Dependency-based Metrics: To assess the effectiveness of both TOOLGEN and the baselines in repository-level code generation, we introduce two dependency-based metrics, namely Dependency Coverage (DepCov) and Static Validity Rate (ValRate).
- Dependency Coverage (DepCov): This metric calculates the ratio of repository-level dependencies, including user-defined functions and attributes, that appear in ground-truth functions and are covered by the generated functions. Given the $i$ -th ground-truth function $gt_{i}$ , we identify dependencies by performing the Trigger Insertion procedure (Algorithm 1) and extracting expressions (such as function calls and attribute accesses like "self._registeredUpdates") that are marked with a trigger. These expressions are considered dependencies as their definitions can be traced in the current repository using static analysis tools like Jedi. Next, for the generated function $pred_{i}$ corresponding to $gt_{i}$ , we extract all expressions by traversing its corresponding AST. We denote the identified dependencies in $gt_{i}$ and extracted expressions in $pred_{i}$ as two sets, $DEP(gt_{i})$ and $EXP(pred_{i})$ , respectively. The Dependency Coverage can then be calculated as follows (a small computation sketch is given after these metric definitions), where $N$ is the size of the
test dataset:
$$
\mathrm{DepCov} = \frac{\sum_{i}^{N} \left| \mathrm{EXP}(pred_{i}) \cap \mathrm{DEP}(gt_{i}) \right|}{\sum_{i}^{N} \left| \mathrm{DEP}(gt_{i}) \right|}
$$
- Static Validity Rate (ValRate): As repository-level dependencies can potentially introduce dependency errors in generated functions, we introduce the Static Validity Rate metric (ValRate) to evaluate the effectiveness of TOOLGEN in reducing dependency errors. This metric evaluates the proportion of generated functions that successfully pass a static check for dependency errors, specifically no-member and undefined-variable. To perform this evaluation, we incorporate the generated functions into their respective code repositories and conduct static lint analysis using pylint [44]. Functions that do not exhibit syntax errors, no-member, or undefined-variable errors are deemed statically valid. The Static Validity Rate can be calculated as follows:
$$
\mathrm{ValRate} = \frac{\left| \left\{ pred_{i \leq N} : pred_{i} \text{ passes the lint check} \right\} \right|}{N}
$$
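The sketch below illustrates how the two metrics can be computed, assuming the dependency and expression sets have already been extracted per function and that each generated function has been written back into its repository before linting; the pylint message selection follows the errors named above.

```python
import subprocess

def dependency_coverage(dep_sets, exp_sets):
    """DepCov: covered repository-level dependencies over all ground-truth dependencies."""
    covered = sum(len(exp & dep) for exp, dep in zip(exp_sets, dep_sets))
    total = sum(len(dep) for dep in dep_sets)
    return covered / total if total else 0.0

def passes_lint(file_path):
    """ValRate helper: statically valid if pylint reports no syntax-error,
    no-member, or undefined-variable messages for the file."""
    result = subprocess.run(
        ["pylint", "--disable=all",
         "--enable=syntax-error,no-member,undefined-variable", file_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0  # pylint exits non-zero when any enabled message fires
```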
Execution-based Metric: To further assess the functional correctness of the generated functions, we also employ a widely used execution-based metric involving running test cases.
- Test Pass Rate (Pass@1): This metric calculates the ratio of generated functionally-correct functions that pass all corresponding test cases. It is evaluated specifically on the CoderEval dataset, where test cases and test scripts are provided.
# V. RESULTS AND ANALYSES
# A. RQ1: Similarity-based Effectiveness
The evaluation results of similarity-based metrics are presented in Table II. When comparing the performance of TOOLGEN's variants and the three different base models, namely CodeGPT, CodeT5, and CodeLlama, we observe that TOOLGEN achieves similarity scores comparable to the baselines.
To provide a detailed breakdown, when utilizing CodeGPT as the base model, TOOLGEN-gpt demonstrates a $2.7\%$ improvement in BLEU-4 and an $11.9\%$ improvement in Exact Match compared to VANILLA-gpt. However, it exhibits a $1.0\%$ decrease in CodeBLEU and a $1.1\%$ decrease in Edit Similarity. With the base model CodeT5, TOOLGEN-t5 exhibits $6.2\%$ , $1.4\%$ , and $27.9\%$ enhancements in BLEU-4, CodeBLEU, and Exact Match relative to VANILLA-t5 but experiences a $3.1\%$ decrease in Edit Similarity. In the case of the larger base model CodeLlama, TOOLGEN-llama shows improvements of $4.2\%$ and $21.1\%$ in BLEU-4 and Exact Match compared to VANILLA-llama but encounters a $0.6\%$ decrease in CodeBLEU and a $2.4\%$ decrease in Edit Similarity.
TABLE II: Evaluation results of similarity-based effectiveness. $\Delta$ indicates the metric improvement or reduction of TOOLGEN's variants compared to the baselines.

<table><tr><td rowspan="2">Approach</td><td colspan="4">Similarity-based Metrics</td></tr><tr><td>BLEU-4</td><td>CodeBLEU</td><td>EditSim</td><td>ExactMatch</td></tr><tr><td>VANILLA-gpt</td><td>0.331</td><td>0.313</td><td>65.4%</td><td>4.2%</td></tr><tr><td>TOOLGEN-gpt</td><td>0.340(Δ = +2.7%)</td><td>0.310(Δ = -1.0%)</td><td>64.7%(Δ = -1.1%)</td><td>4.7%(Δ = +11.9%)</td></tr><tr><td>VANILLA-t5</td><td>0.341</td><td>0.289</td><td>63.9%</td><td>4.3%</td></tr><tr><td>TOOLGEN-t5</td><td>0.362(Δ = +6.2%)</td><td>0.293(Δ = +1.4%)</td><td>61.9%(Δ = -3.1%)</td><td>5.5%(Δ = +27.9%)</td></tr><tr><td>VANILLA-llama</td><td>0.408</td><td>0.360</td><td>67.9%</td><td>5.7%</td></tr><tr><td>TOOLGEN-llama</td><td>0.425(Δ = +4.2%)</td><td>0.358(Δ = -0.6%)</td><td>66.3%(Δ = -2.4%)</td><td>6.9%(Δ = +21.1%)</td></tr></table>

Although the absolute improvements in Exact Match rate are not large (from $0.5\%$ to $1.2\%$ ), considering the size of the test set (e.g., 12,406 samples), the additional exactly matched functions range from 62 to 149. The variability in TOOLGEN's performance across BLEU-4 and CodeBLEU can be attributed to the tokenization methods used in the calculations. For BLEU-4, before using the widely used utility script $^{5}$ , we first tokenize the generated function and its corresponding ground-truth function using the tokenizer of the base code LLMs. In contrast, CodeBLEU is calculated on the original generated and ground-truth code using a utility script that employs a simpler method, splitting functions into strings (e.g., "func(args)") based on spaces. This splitting method may introduce inaccuracies in the statistics of matched n-grams, consequently affecting the CodeBLEU calculation. Edit Similarity is calculated at the character level, making it overly sensitive to semantics-insensitive elements like temporary variables: when two variables have different names, their similarity is much lower at the character level than at the token level.
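To make this concrete, the BLEU-4 setup can be sketched as follows, tokenizing both functions with the base model's tokenizer before counting n-gram overlap; NLTK's sentence_bleu is used here as a stand-in for the CodeXGLUE utility script, and the checkpoint and snippets are placeholders.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder for the base code LLM's tokenizer

reference = "def add(self, update):\n    self._registeredUpdates.add(update)"
candidate = "def add(self, update):\n    self._registeredUpdates.add(update)"

ref_tokens = tokenizer.tokenize(reference)
cand_tokens = tokenizer.tokenize(candidate)

bleu4 = sentence_bleu([ref_tokens], cand_tokens,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(round(bleu4, 3))
```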
SUMMARY: TOOLGEN demonstrates competitive performance in similarity metrics compared to the baselines across various base models. It achieves improvements in BLEU-4 and Exact Match while exhibiting comparable performance in CodeBLEU and Edit Similarity.
# B. RQ2: Dependency-based Effectiveness
1) Dependency Coverage: Table III displays the evaluation results for repository-level Dependency Coverage (DepCov). Notably, our approach TOOLGEN demonstrates significant superiority over the baselines across all three base models. Specifically, when employing the base models CodeGPT, CodeT5, and CodeLlama, TOOLGEN surpasses the corresponding baselines in Dependency Coverage by $39.1\%$ , $36.4\%$ , and $31.4\%$ , respectively.
These results underscore the effectiveness of the tool-integrated generation process in enhancing awareness of repository-level dependencies, a challenge often unaddressed by the conventional code LLMs. For instance, consider the incomplete function in Figure 1: in a straightforward CodeLlama generation, it fails to recognize the valid attributes of "self".
TABLE III: Evaluation results of dependency-based effectiveness. DepCov and ValRate represent Dependency Coverage and Static Validity Rate, respectively. ValRate-dep represents the Static Validity Rate calculated only for functions containing dependencies. $\Delta$ indicates the metric improvement or reduction of TOOLGEN's variants compared to the baselines.
<table><tr><td rowspan="2">Approach</td><td colspan="3">Dependency-based Metrics</td></tr><tr><td>DepCov</td><td>ValRate</td><td>ValRate-dep</td></tr><tr><td>VANILLA-gpt</td><td>8.7%</td><td>50.4%</td><td>46.5%</td></tr><tr><td rowspan="2">TOOLGEN-gpt</td><td>12.1%</td><td>79.5%</td><td>78.0%</td></tr><tr><td>(Δ = +39.1%)</td><td>(Δ = +57.7%)</td><td>(Δ = +67.7%)</td></tr><tr><td>VANILLA-t5</td><td>11.0%</td><td>47.3%</td><td>42.5%</td></tr><tr><td rowspan="2">TOOLGEN-t5</td><td>15.0%</td><td>70.6%</td><td>68.0%</td></tr><tr><td>(Δ = +36.4%)</td><td>(Δ = +49.3%)</td><td>(Δ = +60.0%)</td></tr><tr><td>VANILLA-llama</td><td>14.0%</td><td>49.7%</td><td>44.4%</td></tr><tr><td rowspan="2">TOOLGEN-llama</td><td>18.4%</td><td>72.0%</td><td>69.6%</td></tr><tr><td>(Δ = +31.4%)</td><td>(Δ = +44.9%)</td><td>(Δ = +56.8%)</td></tr></table>
However, through tool-integrated generation, TOOLGEN leverages Jedi to deduce a list of completion suggestions, enabling it to select the most suitable one and cover target repository-level dependencies, including the usage of user-defined attributes.
Despite the considerable improvement in repository-level Dependency Coverage facilitated by our approach, it is essential to acknowledge that the overall coverage remains limited. This limitation arises from the fact that code LLMs generate function tokens sequentially from left to right. Consequently, errors tend to accumulate as the token count increases due to the exposure bias problem [45]–[47]. This means that code LLMs often make incorrect token predictions at certain generation steps and may not produce $<\mathrm{COMP}>$ tokens to trigger autocompletion tools, especially for long functions.
SUMMARY: Our approach, TOOLGEN, consistently outperforms the baselines in repository-level Dependency Coverage across all three base models, with improvements ranging from $31.4\%$ to $39.1\%$ . These results highlight the effectiveness of our tool-integrated generation process in addressing the crucial issue of enhancing awareness of repository-level dependencies, which is often a challenge for conventional code LLMs in repository-level code generation.
2) Static Validity Rate: Table III also presents the evaluation results for Static Validity Rate (ValRate and ValRate-dep) in repository-level lint analysis, with a particular focus on no-member and undefined-variable errors. Remarkably, our approach, TOOLGEN, consistently exhibits significantly higher Static Validity Rate compared to the baselines across all three base models. Specifically, when employing the base models CodeGPT, CodeT5, and CodeLlama, TOOLGEN increases the Static Validity Rate (ValRate) by $57.7\%$ , $49.3\%$ , and $44.9\%$ , respectively. When considering only the functions containing dependencies, TOOLGEN improves the Static Validity Rate (ValRate-dep) by $67.7\%$ , $60.0\%$ , and $56.8\%$ , respectively.
These results underscore the effectiveness of our tool-integrated generation process in mitigating the production of invalid identifiers during code generation within a specific repository context. For instance, let's revisit the incomplete function in Figure 1: in a straightforward CodeLlama generation, it may predict a non-existent attribute, such as "updates", for "self". In contrast, through our tool-integrated approach, only valid completion suggestions inferred by Jedi are considered as candidates, thereby preventing numerous no-member and undefined-variable errors.
SUMMARY: Our approach, TOOLGEN, consistently achieves significantly higher Static Validity Rate in repository-level lint analysis compared to the baselines, with improvements ranging from $44.9\%$ to $57.7\%$ . These results underscore the effectiveness of our tool-integrated generation process in mitigating the generation of invalid identifiers, a common challenge faced by conventional code LLMs in the context of repository-level code generation.
# C. RQ3: Execution-based Effectiveness
Table IV presents the detailed evaluation results for test case execution (Pass@1) on the 176 CoderEval coding tasks.
Comparison to VANILLA baselines. Compared to the three VANILLA baselines, our approach TOOLGEN generates 0, 2, and 3 additional functionally-correct functions, resulting in $0\%$ , $40.0\%$ , and $25.0\%$ improvements in Pass@1, respectively. Specifically, TOOLGEN-gpt improves the pass rate for file-runnable tasks, while reducing the pass rate for self-contained tasks; TOOLGEN-t5 improves the pass rate for self-contained and slib-runnable tasks; TOOLGEN-llama improves the pass rate for self-contained, plib-runnable, and class-runnable tasks. These tasks require different runnable-level dependencies (such as local variables and user-defined functions) to achieve correct functionality in the code. The enhancements by TOOLGEN underscore the effectiveness of integrating autocompletion tools to handle these dependencies.
Comparison to REPOCODER baselines. Overall, the REPOCODER baselines show unstable performance across different base models. Specifically, compared to their respective VANILLA baselines, REPOCODER-gpt and REPOCODER-t5 exhibit reductions or no improvement in test pass rates, while REPOCODER-llama shows significant improvement. When compared to REPOCODER-gpt and REPOCODER-t5,
our TOOLGEN-gpt and TOOLGEN-t5 show improvements in pass rates with 1 and 2 more functions passing the test cases, respectively. However, TOOLGEN-llama exhibits a lower pass rate than REPOCODER-llama (15 vs. 19). These variations can be attributed to several factors: CodeGPT and CodeT5 have fewer parameters (124 million and 220 million) and stricter token number limitations (1,024 and 512), limiting their ability to process and understand retrieval-augmented prompts. In contrast, CodeLlama, with more model parameters (7 billion) and support for longer token sequences (16,384 tokens), allows REPOCODER-llama to achieve a higher pass rate than VANILLA-llama and TOOLGEN-llama due to the benefits of RAG.
Integration with RAG. When integrating TOOLGEN with RAG, ragTOOLGEN-gpt and ragTOOLGEN-t5 do not show improvement, while ragTOOLGEN-llama exhibits breakthroughs for project-runnable tasks, with 4 more generated functions passing the test cases. However, the overall pass rate remains unchanged after integrating RAG, showing different advantages and disadvantages of RAG integration for various runnable-level dependencies.
SUMMARY: Our approach, TOOLGEN, outperforms or matches the three VANILLA baselines (VANILLA-gpt, VANILLA-t5, and VANILLA-llama) by generating 0, 2, and 3 more functionally correct functions, achieving $0\%$ , $40.0\%$ , and $25.0\%$ improvements in Pass@1, respectively. Compared to RAG-based REPOCODER baselines, both TOOLGEN and REPOCODER have their own advantages and disadvantages for different base models and runnable-level dependencies. Additionally, our decoding-stage tool integration approach shows potential when combined with prompt-level RAG techniques for addressing certain types of dependencies.
# D. RQ4: Efficiency
Figure 6 illustrates the efficiency evaluation conducted on a single NVIDIA H100 Tensor Core GPU (80GB GPU memory). On the 176 tasks in the CoderEval dataset, our approach takes approximately twice the average generation time of the corresponding baselines, ranging from 0.63 to 2.34 seconds per function across the different base models.
The high efficiency of our tool integration is attributed to the offline trigger insertion and fine-tuning. The autocompletion tool is triggered only when the fine-tuned models predict the trigger token $<\text{COMP}>$ , significantly reducing unnecessary tool invocations. Specifically, the fine-tuned CodeGPT, CodeT5, and CodeLlama predict an average of 5.02, 6.24, and 7.05 $<\text{COMP}>$ tokens per task, respectively, which is much fewer than the average function length. Additionally, during the generation of a function, autocompletion is often triggered multiple times for the same objects (e.g., "self"); we maintain a cache to recall completion suggestions for previously visited objects, thereby avoiding repeated tool invocations for the same objects.
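The caching behavior can be sketched as a simple memo keyed by the caret file and the completed object, as below; the helper and attribute names are hypothetical, and the actual implementation may key the cache differently.

```python
class CompletionCache:
    """Memoize autocompletion results for objects that are queried repeatedly
    (e.g., repeated completions on "self." while generating one function)."""

    def __init__(self, tool_complete):
        self._tool_complete = tool_complete  # wraps TOOL-COMPLETE
        self._cache = {}

    def complete(self, repo, caret, receiver):
        key = (caret.file, receiver)         # e.g., ("updater.py", "self")
        if key not in self._cache:
            self._cache[key] = self._tool_complete(repo, caret)
        return self._cache[key]
```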
TABLE IV: Evaluation results of execution-based effectiveness. self, slib, plib, class, file, and project represent self-contained, slib-runnable, plib-runnable, class-runnable, file-runnable, and project-runnable, respectively. The numbers in brackets after each runnable level indicate the corresponding number of tasks, while the numbers in brackets after the rates indicate the number of generated functions that pass the test cases. In each base model group, the best results are highlighted in gray, except when all results are the same.
<table><tr><td rowspan="2">Approach</td><td colspan="7">Execution-based Metric (Pass@1)</td></tr><tr><td>total (176)</td><td>self (26)</td><td>slib (23)</td><td>plib (15)</td><td>class (49)</td><td>file (51)</td><td>project (12)</td></tr><tr><td>VANILLA-gpt</td><td>3.4% (6)</td><td>7.7% (2)</td><td>4.3% (1)</td><td>6.7% (1)</td><td>2.0% (1)</td><td>2.0% (1)</td><td>0.0% (0)</td></tr><tr><td>TOOLGEN-gpt (ours)</td><td>3.4% (6)</td><td>3.8% (1)</td><td>4.3% (1)</td><td>6.7% (1)</td><td>2.0% (1)</td><td>3.9% (2)</td><td>0.0% (0)</td></tr><tr><td>REPOCODER-gpt</td><td>2.8% (5)</td><td>0.0% (0)</td><td>4.3% (1)</td><td>6.7% (1)</td><td>4.1% (2)</td><td>2.0% (1)</td><td>0.0% (0)</td></tr><tr><td>ragTOOLGEN-gpt (ours)</td><td>2.8% (5)</td><td>0.0% (0)</td><td>4.3% (1)</td><td>6.7% (1)</td><td>4.1% (2)</td><td>2.0% (1)</td><td>0.0% (0)</td></tr><tr><td>VANILLA-t5</td><td>4.0% (7)</td><td>7.7% (2)</td><td>4.3% (1)</td><td>13.3% (2)</td><td>4.1% (2)</td><td>0.0% (0)</td><td>0.0% (0)</td></tr><tr><td>TOOLGEN-t5 (ours)</td><td>5.1% (9)</td><td>15.4% (4)</td><td>8.7% (2)</td><td>6.7% (1)</td><td>4.1% (2)</td><td>0.0% (0)</td><td>0.0% (0)</td></tr><tr><td>REPOCODER-t5</td><td>4.0% (7)</td><td>7.7% (2)</td><td>4.3% (1)</td><td>13.3% (2)</td><td>4.1% (2)</td><td>0.0% (0)</td><td>0.0% (0)</td></tr><tr><td>ragTOOLGEN-t5 (ours)</td><td>5.1% (9)</td><td>15.4% (4)</td><td>8.7% (2)</td><td>6.7% (1)</td><td>4.1% (2)</td><td>0.0% (0)</td><td>0.0% (0)</td></tr><tr><td>VANILLA-llama</td><td>6.8% (12)</td><td>19.2% (5)</td><td>13.0% (3)</td><td>13.3% (2)</td><td>4.1% (2)</td><td>0.0% (0)</td><td>0.0% (0)</td></tr><tr><td>TOOLGEN-llama (ours)</td><td>8.5% (15)</td><td>23.1% (6)</td><td>13.0% (3)</td><td>20.0% (3)</td><td>4.1% (2)</td><td>2.0% (1)</td><td>0.0% (0)</td></tr><tr><td>REPOCODER-llama</td><td>10.8% (19)</td><td>26.9% (7)</td><td>17.4% (4)</td><td>0.0% (0)</td><td>10.2% (5)</td><td>2.0% (1)</td><td>16.7% (2)</td></tr><tr><td>ragTOOLGEN-llama (ours)</td><td>10.8% (19)</td><td>34.6% (9)</td><td>8.7% (2)</td><td>6.7% (1)</td><td>6.1% (3)</td><td>0.0% (0)</td><td>33.3% (4)</td></tr></table>

Fig. 6: Results of efficiency evaluation.
SUMMARY: Our tool-integrated generation approach, TOOLGEN, demonstrates high efficiency in repository-level code generation, with latency ranging from 0.63 to 2.34 seconds for generating each function. This efficiency is attributed to predicting the trigger token $<\mathrm{COMP}>$ and implementing a caching mechanism for completion suggestions.
# E. RQ5: Generalizability
Based on the results presented in Table II and Table III, our tool-integrated generation approach consistently enhances performance in dependency-based metrics while maintaining comparable similarity-based metrics across various model architectures (decoder-only and encoder-decoder) and parameter scales (ranging from 124 million to 7 billion). According to Table IV and Figure 6, our approach improves or maintains
execution-based metrics across the base models, with a consistent and acceptable additional latency overhead.
SUMMARY: Our tool-integrated generation approach consistently improves or maintains dependency-based and execution-based metrics while achieving competitive similarity-based metrics across various model architectures and parameter scales. This suggests that our approach is versatile and has the potential for broader applicability with other base models in repository-level code generation.
# VI. DISCUSSION
# A. Case Study
Figure 7 depicts three specific examples using VANILLA-llama and TOOLGEN-llama. Each row corresponds to an example, presenting the description, ground truth, code generated by VANILLA-llama, and code generated by TOOLGEN-llama.
Example 1: The code generated by TOOLGEN-llama successfully predicts the member "_value" in the class "Counter", while VANILLA-llama incorrectly predicts an undefined member "value", resulting in a no-member error. This difference can be attributed to TOOLGEN's integration of the autocompletion tool, which helps the code LLM recognize necessary dependencies like user-defined attributes/members.
Example 2: Both VANILLA-llama and TOOLGEN-llama generate incorrect code that fails some test cases. After reviewing the description and the ground truth, we find that the description is incomplete in expressing the desired functionality. As noted, the description only mentions changing "w:st=" to "w-st=", but the actual desired functionality in the ground truth is to handle all strings matching the pattern "\bw: [a-z] {1,} = \)". Both VANILLA-llama and

Fig. 7: Case study of three specific examples. Additional explanatory notes are marked with gray boxes. In the notes, "dep: xxx" denotes a dependency necessary in the generated code.
TOOLGEN-llama follow the description and generate code that satisfies this incomplete functionality. This finding highlights the challenges posed by low-quality descriptions in real-world generation scenarios and reveals quality issues in existing benchmarks.
Example 3: There are two crucial dependencies, namely "cls._get_service()" and "ServiceName.PLUGINS_MANAGER", necessary to realize the required functionality. VANILLA-llama fails to predict both dependencies and instead outputs nonexistent dependencies like "cls._pluginsmanager" and "PluginManager", causing the generated code to fail lint checks and test cases. For TOOLGEN-llama, although it successfully predicts the dependency "cls._get_service()", it fails to predict "ServiceName.PLUGINS_MANAGER" because the model chooses "cls" instead of "ServiceName" when it starts predicting the argument for "cls._get_service()". This misleads the generation in an incorrect direction, resulting in the failure of the final code. This example also highlights the challenges of applying code LLMs in practical code generation, even when integrating autocompletion tools to avoid certain dependency issues. Introducing an incorrect token at any critical step in the generation process can result in the production of erroneous code.
# B. Limitations
Static Autocompletion Tools for Dynamically Typed Programming Languages: Currently, our implementation of TOOLGEN is specific to Python, a dynamically typed programming language. However, the autocompletion tools used in TOOLGEN rely on static analysis, which can sometimes fail to trigger for certain repository-level dependencies. For instance, when the type of a function parameter cannot be explicitly inferred through static analysis, autocompletion tools may struggle to
deduce attributes defined within the argument type. In the future, we plan to explore the integration of comprehensive type inference tools, such as learning-based tools, into the code generation process alongside autocompletion tools to enhance Python code generation.
Greedy Next Token Prediction in Generation Process: During the generation process, we employ a greedy strategy for next token prediction, where the token with the highest probability is selected using the ARGMAX function. This greedy prediction strategy can occasionally lead the model to choose sub-optimal tokens for subsequent steps, resulting in code that may not be of high quality. To address this issue, we intend to investigate the incorporation of techniques such as beam search and other advanced decoding methods into our tool-integrated generation process to mitigate the challenges posed by greedy prediction.
Dependency-based Evaluation Metrics: In the computation of the two repository-level evaluation metrics, namely Dependency Coverage and Static Validity Rate, we employ static analysis to identify target expressions and perform lint examinations. Similar to the autocompletion tools, these static tools may introduce a degree of inaccuracy into the calculated metrics. However, it is essential to note that this does not significantly impact the demonstrated effectiveness of TOOLGEN, as the baseline metrics are also determined using the same static analysis.
Integration and Comparison with SOTA Closed-source LLMs: Our approach can be applied to any encoder-decoder or decoder-only models. However, for the most state-of-the-art (SOTA) LLMs like GPT-3.5 and GPT-4, integrating the trigger insertion and tool-integrated decoding process faces challenges due to their closed-source nature and impracticality for fine-tuning. In the future, we may explore the possibility of integrating autocompletion tools into such closed-source LLMs in a non-tuning manner. In our evaluation, we do not compare
TOOLGEN with these SOTA closed-source LLMs, as our goal is to assess the effectiveness of integrating autocompletion tools for repository-level code generation. Therefore, we focus on comparing the performance of TOOLGEN, VANILLA, and REPOCODER under the same base models.
# C. Threats to Validity
Internal Threats: The first internal threat pertains to potential data quality issues common in learning-based approaches. To mitigate this threat, we construct our augmented dataset and evaluation benchmark dataset using the widely adopted CodeSearchNet dataset, which serves as a reliable source for pretraining and evaluating various code models. Another internal threat pertains to the potential data leakage for CodeLlama, as the code repositories in the benchmark dataset may have been encountered by CodeLlama during its pretraining phase. However, our generalizability evaluation (RQ5) provides evidence of consistent performance across the three TOOLGEN variants, suggesting that the improvements achieved by TOOLGEN-llama in repository-level code generation are not attributed to data leakage.
External Threats: Our implementation and evaluation of TOOLGEN are specific to the Python programming language. As a result, the findings may not be generalizable to other programming languages. Exploring the tool-integrated generation process for different languages is a valuable direction for future research.
# VII. RELATED WORK
# A. Code Generation
Code generation has long been a central focus in the field of software engineering. Recent developments have introduced a range of large language models for code (code LLMs), including Codex [3], CodeT5 [4], CodeT5+ [5], InCoder [6], AlphaCode [7], CodeGen [8], and CodeLlama [12], built upon the Transformer model architecture [14]. These models, either pretrained or fine-tuned on extensive code corpora, have the capability to automatically generate code based on provided natural language descriptions.
While these code LLMs have demonstrated significant effectiveness in generating standalone functions on existing benchmarks like HumanEval [3] and MBPP [26], they face substantial challenges when tasked with generating real-world functions within code repositories. The primary challenge stems from their lack of awareness of repository-level dependencies, such as user-defined functions and attributes, during the code generation process [27]. To address these challenges, researchers have proposed prompt engineering approaches to make code LLMs aware of repository-level dependencies. Shrivastava et al. [48] introduced the repository-level prompt generator, a framework for generating context-aware prompts without requiring access to the weights of the LLM. Bairi et al. [49] presented CodePlan, a task-agnostic framework that treats repository-level coding as a planning problem, using innovative techniques to generate multi-step code edits while
considering context from the entire codebase, previous changes, and specific instructions.
In this study, we tackle the challenges associated with repository-level code generation by seamlessly integrating autocompletion tools into the generation process of code LLMs.
# B. Incorporating External Tools into LLMs
Recent research [30], [31], [50]–[58] has explored the integration of external tools (e.g., search engines, web browsers, calculators, and python interpreters) into the LLM generation process, aiming to address their limitations in certain generation scenarios. For instance, Schick et al. propose ToolFormer [30], which augments datasets to instruct LLMs on invoking existing arithmetic calculators, effectively reducing errors in generated text related to arithmetic calculations. Building upon this idea, Zhang et al. introduce ToolCoder [31], designed to teach LLMs how to utilize information-retrieval-based (IR-based) API search tools during code generation. While ToolCoder is effective in generating functionally correct standalone functions, it falls short in addressing repository-level dependencies, limiting its ability to resolve dependency errors. More relevant examples are Repilot [41], STALL+ [59], and MGD [60], which utilize code completion tools to filter out impractical suggestions made by LLMs, focusing on generating API/line-level code completions and valid bug-fix patches rather than entire functions.
In this paper, we integrate program-analysis-based autocompletion tools into the code LLM generation process to facilitate repository-level code generation.
# VIII. CONCLUSION
We present TOOLGEN, a novel approach that seamlessly integrates autocompletion tools into the code LLM generation process to effectively address repository-level dependencies. TOOLGEN encompasses two crucial phases: Data Augmentation and Model Fine-tuning, and Tool-integrated Code Generation. Our comprehensive evaluation showcases TOOLGEN's improvements in the two introduced dependency-level metrics and a widely used execution-based metric across three distinct code LLMs, while also demonstrating its competitiveness in widely-recognized similarity metrics. TOOLGEN also demonstrates high efficiency in repository-level code generation, due to the offline fine-tuning with trigger insertion. Moreover, our generalizability evaluation reaffirms TOOLGEN's consistent performance when applied to diverse code LLMs, including various model architectures and scales.
# REFERENCES
[1] S. Black, L. Gao, P. Wang, C. Leahy, and S. Biderman, "GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow," Mar. 2021, If you use this software, please cite it using these metadata. [Online]. Available: https://doi.org/10.5281/zenodo.5297715
[2] B. Wang and A. Komatsuzaki, “GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model,” https://github.com/kingoflolz/mesh-transformer-jax, May 2021.
[3] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba, “Evaluating large language models trained on code,” CoRR, vol. abs/2107.03374, 2021. [Online]. Available: https://arxiv.org/abs/2107.03374
[4] Y. Wang, W. Wang, S. R. Joty, and S. C. H. Hoi, "Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, M. Moens, X. Huang, L. Specia, and S. W. Yih, Eds. Association for Computational Linguistics, 2021, pp. 8696-8708. [Online]. Available: https://doi.org/10.18653/v1/2021.emnlp-main.685
[5] Y. Wang, H. Le, A. Gotmare, N. D. Q. Bui, J. Li, and S. C. H. Hoi, "Codet5+: Open code large language models for code understanding and generation," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 1069-1088. [Online]. Available: https://aclanthology.org/2023.emnlp-main.68
[6] D. Fried, A. Aghajanyan, J. Lin, S. Wang, E. Wallace, F. Shi, R. Zhong, S. Yih, L. Zettlemoyer, and M. Lewis, "Incoder: A generative model for code infilling and synthesis," in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [Online]. Available: https://openreview.net/pdf?id=hQwb-lbM6EL
|
| 484 |
+
[7] Y. Li, D. H. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de Masson d'Autume, I. Babuschkin, X. Chen, P. Huang, J. Welbl, S. Gowal, A. Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals, "Competition-level code generation with alphacode," CoRR, vol. abs/2203.07814, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2203.07814
|
| 485 |
+
[8] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong, "Codegen: An open large language model for code with multi-turn program synthesis," in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [Online]. Available: https://openreview.net/pdf?id=iaYcJKpY2B_
|
| 486 |
+
[9] S. Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, M. Pieler, U. S. Prashanth, S. Purohit, L. Reynolds, J. Tow, B. Wang, and S. Weinbach, "Gpt-neox-20b: An open-source autoregressive language model," CoRR, vol. abs/2204.06745, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2204.06745
|
| 487 |
+
[10] L. B. Allal, R. Li, D. Kocetkov, C. Mou, C. Akiki, C. M. Ferrandis, N. Muennighoff, M. Mishra, A. Gu, M. Dey, L. K. Umapathi, C. J. Anderson, Y. Zi, J. Lamy-Poirier, H. Schoelkopf, S. Troshin, D. Abulkhanov, M. Romero, M. Lappert, F. D. Toni, B. G. del Río, Q. Liu, S. Bose, U. Bhattacharyya, T. Y. Zhuo, I. Yu, P. Villegas, M. Zocca, S. Mangrulkar, D. Lansky, H. Nguyen, D. Contractor, L. Villa, J. Li, D. Bahdanau, Y. Jernite, S. Hughes, D. Fried, A. Guha, H. de Vries, and L. von Werra, “Santacoder: don’t reach for the stars!” CoRR, vol. abs/2301.03988, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2301.03988
|
| 488 |
+
[11] R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy-Poirier, J. Monteiro, O. Shliazhko, N. Gontier, N. Meade, A. Zebaze, M. Yee, L. K. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. M. V., J. Stillerman, S. S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Moustafa-Fahmy, U. Bhattacharyya, W. Yu, S. Singh, S. Lucioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding, C. Schlesinger, H. Schoelkopf, J. Ebert, T. Dao, M. Mishra, A. Gu, J. Robinson, C. J. Anderson, B. Dolan-Gavitt, D. Contractor, S. Reddy, D. Fried, D. Bahdanau, Y. Jernite, C. M. Ferrandis, S. Hughes, T. Wolf, A. Guha, L. von Werra, and H. de Vries, "Starcoder: may
|
| 489 |
+
|
| 490 |
+
the source be with you!" CoRR, vol. abs/2305.06161, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2305.06161
|
| 491 |
+
[12] B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, A. Kozhevnikov, I. Evtimov, J. Bitton, M. Bhatt, C. Canton-Ferrer, A. Grattafori, W. Xiong, A. Defossez, J. Copet, F. Azhar, H. Touvron, L. Martin, N. Usunier, T. Scialom, and G. Synnaeve, "Code llama: Open foundation models for code," CoRR, vol. abs/2308.12950, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2308.12950
|
| 492 |
+
[13] F. Christopoulou, G. Lampouras, M. Gritta, G. Zhang, Y. Guo, Z. Li, Q. Zhang, M. Xiao, B. Shen, L. Li, H. Yu, L. Yan, P. Zhou, X. Wang, Y. Ma, I. Iacobacci, Y. Wang, G. Liang, J. Wei, X. Jiang, Q. Wang, and Q. Liu, "Pangu-coder: Program synthesis with function-level language modeling," CoRR, vol. abs/2207.11280, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2207.11280
|
| 493 |
+
[14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998-6008. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
|
| 494 |
+
[15] C. Wang, J. Zhang, Y. Lou, M. Liu, W. Sun, Y. Liu, and X. Peng, “Tiger: A generating-then-ranking framework for practical python type inference,” arXiv preprint arXiv:2407.02095, 2024.
|
| 495 |
+
[16] C. Wang, J. Liu, X. Peng, Y. Liu, and Y. Lou, “Boosting static resource leak detection via llm-based resource-oriented intention inference,” arXiv preprint arXiv:2311.04448, 2023.
|
| 496 |
+
[17] C. Wang, Y. Lou, J. Liu, and X. Peng, "Generating variable explanations via zero-shot prompt learning," in 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2023, pp. 748-760.
|
| 497 |
+
[18] C. Wang, K. Huang, J. Zhang, Y. Feng, L. Zhang, Y. Liu, and X. Peng, "How and why llms use deprecated apis in code completion? an empirical study," arXiv preprint arXiv:2406.09834, 2024.
|
| 498 |
+
[19] X. Du, M. Liu, K. Wang, H. Wang, J. Liu, Y. Chen, J. Feng, C. Sha, X. Peng, and Y. Lou, "Evaluating large language models in class-level code generation," in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14 - 20, 2024. ACM, 2024, pp. 1496-1508. [Online]. Available: https://doi.org/10.1145/3597503.3639219
|
| 499 |
+
[20] J. Zhang, C. Wang, A. Li, W. Sun, C. Zhang, W. Ma, and Y. Liu, "An empirical study of automated vulnerability localization with large language models," arXiv preprint arXiv:2404.00287, 2024.
|
| 500 |
+
[21] J. Zhang, K. Huang, B. Chen, C. Wang, Z. Tian, and X. Peng, "Malicious package detection in npm and pypi using a single model of malicious behavior sequence," arXiv preprint arXiv:2309.02637, 2023.
|
| 501 |
+
[22] X. Du, G. Zheng, K. Wang, J. Feng, W. Deng, M. Liu, B. Chen, X. Peng, T. Ma, and Y. Lou, “Vul-rag: Enhancing llm-based vulnerability detection via knowledge-level rag,” arXiv preprint arXiv:2406.11147, 2024.
|
| 502 |
+
[23] Z. Yuan, Y. Lou, M. Liu, S. Ding, K. Wang, Y. Chen, and X. Peng, "No more manual tests? evaluating and improving chatgpt for unit test generation," CoRR, vol. abs/2305.04207, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2305.04207
|
| 503 |
+
[24] Y. Wang, T. Jiang, M. Liu, J. Chen, and Z. Zheng, “Beyond functional correctness: Investigating coding style inconsistencies in large language models,” arXiv preprint arXiv:2407.00456, 2024.
|
| 504 |
+
[25] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Canton-Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom, "Llama 2: Open foundation and fine-tuned chat models," CoRR, vol. abs/2307.09288, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2307.09288
|
| 505 |
+
[26] J. Austin, A. Odena, M. I. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. J. Cai, M. Terry, Q. V. Le, and C. Sutton, "Program
|
| 506 |
+
|
| 507 |
+
synthesis with large language models," CoRR, vol. abs/2108.07732, 2021. [Online]. Available: https://arxiv.org/abs/2108.07732
|
| 508 |
+
[27] H. Yu, B. Shen, D. Ran, J. Zhang, Q. Zhang, Y. Ma, G. Liang, Y. Li, T. Xie, and Q. Wang, "Codereval: A benchmark of pragmatic code generation with generative pre-trained models," arXiv preprint arXiv:2302.00288, 2023.
|
| 509 |
+
[28] P. Vaithilingam, T. Zhang, and E. L. Glassman, "Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models," in CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, Extended Abstracts, S. D. J. Barbosa, C. Lampe, C. Appert, and D. A. Shamma, Eds. ACM, 2022, pp. 332:1-332:7. [Online]. Available: https://doi.org/10.1145/3491101.3519665
|
| 510 |
+
[29] Jedi - an awesome autocompletion, static analysis and refactoring library for python. [Online]. Available: https://jedi.readthedocs.io/
|
| 511 |
+
[30] T. Schick, J. Dwivedi-Yu, R. Dessi, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom, "Toolformer: Language models can teach themselves to use tools," CoRR, vol. abs/2302.04761, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2302.04761
|
| 512 |
+
[31] K. Zhang, G. Li, J. Li, Z. Li, and Z. Jin, "Toolcoder: Teach code generation models to use apis with search tools," arXiv preprint arXiv:2305.04032, 2023.
|
| 513 |
+
[32] Y. Wei, C. S. Xia, and L. Zhang, "Copiloting the copilots: Fusing large language models with completion engines for automated program repair," in Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, CA, USA, December 3-9, 2023, S. Chandra, K. Blincoe, and P. Tonella, Eds. ACM, 2023, pp. 172-184. [Online]. Available: https://doi.org/10.1145/3611643.3616271
|
| 514 |
+
[33] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu, "Codexglue: A machine learning benchmark dataset for code understanding and generation," in Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, J. Vanschoren and S. Yeung, Eds., 2021. [Online]. Available: https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c16a5320fa475530d9583c34fd356ef5-Abstract-round1.html
|
| 515 |
+
[34] H. Chen, Y. Huang, Z. Liu, X. Chen, F. Zhou, and X. Luo, "Automatically detecting the scopes of source code comments," J. Syst. Softw., vol. 153, pp. 45-63, 2019. [Online]. Available: https://doi.org/10.1016/j.jss.2019.03.010
|
| 516 |
+
[35] Y. Huang, H. Guo, X. Ding, J. Shu, X. Chen, X. Luo, Z. Zheng, and X. Zhou, "A comparative study on method comment and inline comment," ACM Trans. Softw. Eng. Methodol., vol. 32, no. 5, pp. 126:1-126:26, 2023. [Online]. Available: https://doi.org/10.1145/3582570
|
| 517 |
+
[36] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "Lora: Low-rank adaptation of large language models," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [Online]. Available: https://openreview.net/forum?id=nZeVKeeFYf9
|
| 518 |
+
[37] Argmax function. [Online]. Available: https://en.wikipedia.org/wiki/Arg_max
|
| 519 |
+
[38] Trie structure. [Online]. Available: https://en.wikipedia.org/wiki/Trie
|
| 520 |
+
[39] X. Wang, Y. Wang, Y. Wan, F. Mi, Y. Li, P. Zhou, J. Liu, H. Wu, X. Jiang, and Q. Liu, "Compilable neural code generation with compiler feedback," in Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, S. Muresan, P. Nakov, and A. Villavicencio, Eds. Association for Computational Linguistics, 2022, pp. 9-19. [Online]. Available: https://doi.org/10.18653/v1/2022.findings-acl.2
|
| 521 |
+
[40] H. Husain, H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt, "Codesearchnet challenge: Evaluating the state of semantic code search," CoRR, vol. abs/1909.09436, 2019. [Online]. Available: http://arxiv.org/abs/1909.09436
|
| 522 |
+
[41] F. Zhang, B. Chen, Y. Zhang, J. Keung, J. Liu, D. Zan, Y. Mao, J. Lou, and W. Chen, "RepoCoder: Repository-level code completion through iterative retrieval and generation," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 2471-2484. [Online]. Available: https://doi.org/10.18653/v1/2023.emnlp-main.151
|
| 523 |
+
[42] K. Papineni, S. Roukos, T. Ward, and W. Zhu, "Bleu: a method for automatic evaluation of machine translation," in Proceedings of the 40th
|
| 524 |
+
|
| 525 |
+
Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA. ACL, 2002, pp. 311-318. [Online]. Available: https://aclanthology.org/P02-1040/
|
| 526 |
+
[43] A. Svyatkovskiy, S. K. Deng, S. Fu, and N. Sundaresan, "Intellicode compose: code generation using transformer," in ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 8-13, 2020, P. Devanbu, M. B. Cohen, and T. Zimmermann, Eds. ACM, 2020, pp. 1433-1443. [Online]. Available: https://doi.org/10.1145/3368089.3417058
|
| 527 |
+
[44] Pylint. [Online]. Available: https://github.com/pylint-dev/pylint
|
| 528 |
+
[45] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, "Scheduled sampling for sequence prediction with recurrent neural networks," in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds., 2015, pp. 1171-1179. [Online]. Available: https://proceedings.neurips.cc/paper/2015/hash/e995f98d56967d946471af29d7bf99f1-Abstract.html
|
| 529 |
+
[46] R. Paulus, C. Xiong, and R. Socher, "A deep reinforced model for abstractive summarization," in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. [Online]. Available: https://openreview.net/forum?id=HkACIQqGa-
|
| 530 |
+
[47] K. Arora, L. E. Asri, H. Bahuleyan, and J. C. K. Cheung, "Why exposure bias matters: An imitation learning perspective of error accumulation in language generation," in Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, S. Muresan, P. Nakov, and A. Villavicencio, Eds. Association for Computational Linguistics, 2022, pp. 700-710. [Online]. Available: https://doi.org/10.18653/v1/2022.findings-acl.58
|
| 531 |
+
[48] D. Shrivastava, H. Larochelle, and D. Tarlow, "Repository-level prompt generation for large language models of code," in International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, ser. Proceedings of Machine Learning Research, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, Eds., vol. 202. PMLR, 2023, pp. 31693-31715. [Online]. Available: https://proceedings.mlr.press/v202/shrivastava23a.html
|
| 532 |
+
[49] R. Bairi, A. Sonwane, A. Kanade, V. D. C, A. Iyer, S. Parthasarathy, S. K. Rajamani, B. Ashok, and S. Shet, "Codeplan: Repository-level coding using llms and planning," CoRR, vol. abs/2309.12499, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2309.12499
|
| 533 |
+
[50] B. Paranjape, S. M. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro, “ART: automatic multi-step reasoning and tool-use for large language models,” CoRR, vol. abs/2303.09014, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2303.09014
|
| 534 |
+
[51] M. Komeili, K. Shuster, and J. Weston, "Internet-augmented dialogue generation," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, S. Muresan, P. Nakov, and A. Villavicencio, Eds. Association for Computational Linguistics, 2022, pp. 8460-8478. [Online]. Available: https://doi.org/10.18653/v1/2022.acl-long.579
|
| 535 |
+
[52] A. Lazaridou, E. Gribovskaya, W. Stokowiec, and N. Grigorev, "Internet-augmented language models through few-shot prompting for open-domain question answering," CoRR, vol. abs/2203.05115, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2203.05115
|
| 536 |
+
[53] R. Thoppilan, D. D. Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, Y. Zhou, C. Chang, I. Krivokon, W. Rusch, M. Pickett, K. S. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. A. y Arcas, C. Cui, M. Croak, E. H. Chi, and Q. Le, "Lamda: Language models for dialog applications," CoRR, vol. abs/2201.08239, 2022. [Online]. Available: https://arxiv.org/abs/2201.08239
|
| 537 |
+
[54] K. Shuster, M. Komeili, L. Adolphs, S. Roller, A. Szlam, and J. Weston, "Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion," in Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, Y. Goldberg, Z. Kozareva, and Y. Zhang,
|
| 538 |
+
|
| 539 |
+
Eds. Association for Computational Linguistics, 2022, pp. 373-393. [Online]. Available: https://doi.org/10.18653/v1/2022.findings-emnlp.27
|
| 540 |
+
[55] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, X. Jiang, K. Cobbe, T. Eloundou, G. Krueger, K. Button, M. Knight, B. Chess, and J. Schulman, "Webgpt: Browser-assisted question-answering with human feedback," CoRR, vol. abs/2112.09332, 2021. [Online]. Available: https://arxiv.org/abs/2112.09332
|
| 541 |
+
[56] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman, "Training verifiers to solve math word problems," CoRR, vol. abs/2110.14168, 2021. [Online]. Available: https://arxiv.org/abs/2110.14168
|
| 542 |
+
[57] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, “PAL: program-aided language models,” in International
|
| 543 |
+
|
| 544 |
+
Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, ser. Proceedings of Machine Learning Research, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, Eds., vol. 202. PMLR, 2023, pp. 10764-10799. [Online]. Available: https://proceedings.mlr.press/v202/gao23f.html
|
| 545 |
+
[58] W. Chen, X. Ma, X. Wang, and W. W. Cohen, “Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks,” CoRR, vol. abs/2211.12588, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2211.12588
|
| 546 |
+
[59] J. Liu, Y. Chen, M. Liu, X. Peng, and Y. Lou, “Stall+: Boosting llm-based repository-level code completion with static analysis,” arXiv preprint arXiv:2406.10018, 2024.
|
| 547 |
+
[60] L. A. Agrawal, A. Kanade, N. Goyal, S. K. Lahiri, and S. K. Rajamani, "Guiding language models of code with global context using monitors," arXiv preprint arXiv:2306.10763, 2023.
|
2401.06xxx/2401.06391/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:68680640a6110f4d0deb4d4668c771aea2bbf2ec81b2affd627b0d498dbb16fd
|
| 3 |
+
size 516703
|
2401.06xxx/2401.06391/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06416/fdba11e4-4684-45ad-8a0e-5c593d527e47_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ba356211d35cb6b9a17d428c016a22cb2466b86b236f153ff101d859b44161e7
|
| 3 |
+
size 1460719
|
2401.06xxx/2401.06416/full.md
ADDED
|
@@ -0,0 +1,541 @@
|
|
| 1 |
+
# Mission: Impossible Language Models
|
| 2 |
+
|
| 3 |
+
# Julie Kallini<sup>1</sup>, Isabel Papadimitriou<sup>1</sup>, Richard Futrell<sup>2</sup>, Kyle Mahowald<sup>3</sup>, Christopher Potts<sup>1</sup>
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Stanford University; <sup>2</sup>University of California, Irvine; <sup>3</sup>University of Texas, Austin kallini@stanford.edu
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Chomsky and others have very directly claimed that large language models (LLMs) are equally capable of learning languages that are possible and impossible for humans to learn. However, there is very little published experimental evidence to support such a claim. Here, we develop a set of synthetic impossible languages of differing complexity, each designed by systematically altering English data with unnatural word orders and grammar rules. These languages lie on an impossibility continuum: at one end are languages that are inherently impossible, such as random and irreversible shuffles of English words, and on the other, languages that may not be intuitively impossible but are often considered so in linguistics, particularly those with rules based on counting word positions. We report on a wide range of evaluations to assess the capacity of GPT-2 small models to learn these uncontroversially impossible languages, and crucially, we perform these assessments at various stages throughout training to compare the learning process for each language. Our core finding is that GPT-2 struggles to learn impossible languages when compared to English as a control, challenging the core claim. More importantly, we hope our approach opens up a productive line of inquiry in which different LLM architectures are tested on a variety of impossible languages in an effort to learn more about how LLMs can be used as tools for these cognitive and typological investigations.
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
Chomsky (2023), Chomsky et al. (2023), Moro et al. (2023), and Bolhuis et al. (2024) make very broad claims to the effect that large language models (LLMs) are equally capable of learning possible and impossible human languages. For these authors, it follows from this claim that LLMs cannot teach us anything about language, and so the claim (if true) would have significant consequences for linguistic methodology and potentially also for the
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: Partial impossibility continuum of languages based on complexity. We assess the learnability of languages at different points in the continuum and push the (currently unclear) boundary between possible and impossible.
|
| 17 |
+
|
| 18 |
+
viability of LLMs as the basis for robust language capabilities.
|
| 19 |
+
|
| 20 |
+
These authors state this claim in absolute terms. For example, Chomsky et al. (2023) flatly assert that LLMs "are incapable of distinguishing the possible from the impossible," Chomsky (2023) says this property "can't be modified," and Moro et al. (2023) write that "the distinction between possible versus impossible languages cannot be formulated by definition for LLM." Bolhuis et al. (2024) go so far as to claim that "LLMs can produce 'impossible' languages [...] just as well as (if not better than) natural language output." One might expect such strong claims to be supported by extensive formal analysis and/or experimental evidence. However, as far as we are aware, this is not the case. The sole experimental paper cited by the above authors is Mitchell and Bowers 2020—an important and
|
| 21 |
+
|
| 22 |
+
inspiring paper but not one that can resolve these questions on its own. In addition, linguists themselves do not even have an agreed upon notion of what defines the possible or the impossible languages, to say nothing of having formal results with respect to LLMs.
|
| 23 |
+
|
| 24 |
+
Here we provide extensive new experimental evidence to inform the claim that LLMs are equally capable of learning possible and impossible languages in the human sense. Arguably, the central challenge for such work is the fact that there is no agreed-upon way of distinguishing these two groups. We do not feel positioned ourselves to assert such a definition, so we instead offer some examples of impossible languages on a continuum of intuitive complexity (Figure 1). Some of these examples seem intuitively impossible, such as random sentence-level shuffling of English words. Others operationalize less obvious but common claims in the linguistics literature about rules that are impossible, like those that depend on counting words.
|
| 25 |
+
|
| 26 |
+
All of our examples are, we take it, uncontroversial instances of impossible languages. Thus, our experiments can inform the core hypotheses as follows: if LLMs learn these languages as well as they learn natural languages, then the claims of Chomsky and others are supported (for the specific class of LLMs tested). Conversely, if LLMs do not learn these languages as well as the possible ones, it would call into question those assertions. In that case, proponents of those claims ought to provide examples of impossible languages that they find more informative, which we can then evaluate using our approach to further advance the discussion.
|
| 27 |
+
|
| 28 |
+
Our experiments use GPT-2 small models (Radford et al., 2018, 2019), and our base training corpus is the BabyLM dataset (Warstadt et al., 2023), which we modify in various ways to implement our impossible languages. What we find is that these models indeed struggle to learn impossible languages, shown through three core experiments:
|
| 29 |
+
|
| 30 |
+
- In Experiment 1, we train GPT-2 models on our set of defined possible and impossible languages, measuring their learning efficiency through test set perplexities. We find that models trained on possible languages learn more efficiently, evident from lower perplexities achieved in fewer training steps.
|
| 31 |
+
- In Experiment 2, we more closely examine a set of languages that exhibit count-based verb
|
| 32 |
+
|
| 33 |
+
marking rules, using surprisal comparisons to target the relevant patterns. We find that GPT-2s trained on possible languages are more surprised by ungrammatical constructions, indicating that models disprefer agreement rules involving counting.
|
| 34 |
+
|
| 35 |
+
- In Experiment 3, we dive deeper into the internal mechanisms that models may develop to learn such count-based grammar rules using causal abstraction analysis. We find that models develop natural, modular solutions to unnatural grammatical patterns.
|
| 36 |
+
|
| 37 |
+
Overall, our experimental results strongly challenge the claims of Chomsky and others given above, and we believe they pave the way for even deeper discussions of LLMs as models of language learning. At the same time, we recognize that models and humans exhibit fundamental differences, but the extent to which models favor or disfavor natural languages can be influenced by specific architectural decisions (as demonstrated by our findings on tokenization and positional encodings). We hope this paper initiates a new line of work that explores how different model architectures can distinguish between the possible and impossible languages.<sup>1</sup>
|
| 38 |
+
|
| 39 |
+
# 2 Background and Related Work
|
| 40 |
+
|
| 41 |
+
# 2.1 Impossible Human Languages and Language Universals
|
| 42 |
+
|
| 43 |
+
The notion of an impossible human language is elusive and difficult to define, in part due to a lack of consensus on which properties are universal in human language and which properties are “impossible” (Comrie, 1989; Evans and Levinson, 2009; Nefdt, 2024). For instance, recursion, or the principle that all languages produce hierarchical syntactic structures via recursive procedures, has been claimed to be a universal property of human language (Chomsky, 1957, 1965, 2002; Hauser et al., 2002). However, the motivations for recursion have been questioned, with empirical limits on the maximum depth of nested phrases (Karlsson, 2007; Jin et al., 2018) and counterevidence from at least one natural language that seems to lack embedded structures (Everett, 2012). Still, if we grant that possible languages are defined by hierarchical, recursive rules, what defines the impossible
|
| 44 |
+
|
| 45 |
+
languages? Moro et al. (2023) claim that the class of impossible languages would use the "opposite" type of rules: those based on the linear order of words. Musso et al. (2003) provide a few concrete examples that involve counting word positions to mark features like negation and agreement, and we include languages with similar rules in our set of tested impossible languages.
|
| 46 |
+
|
| 47 |
+
It is important to also distinguish what is impossible from what is merely typologically marked, such as the word order patterns listed in Greenberg's (1963) language universals. Previous work has shown that such word order universals can arise through a language's optimization of communication efficiency, achieved by balancing complexity and ambiguity (Hahn et al., 2020; Futrell and Hahn, 2022). While our current exploration does not encompass attested languages, various impossible languages can similarly differ in their information-theoretic complexity, informing the patterns that lie at the boundary between possible and impossible.
|
| 48 |
+
|
| 49 |
+
# 2.2 Training Language Models with Unnatural Word Orders
|
| 50 |
+
|
| 51 |
+
The only work cited by Chomsky that investigates neural language models' ability to learn impossible languages is Mitchell and Bowers 2020, which finds that recurrent neural networks (RNNs; Elman, 1990) trained on various unnatural language constructs, such as reversed sentences and randomized vocabularies, achieve high accuracy on a subject-verb number agreement task. Other work turns to more recent Transformer-based language models (Vaswani et al., 2017), observing their sensitivity to word order and phrase structure (Alleman et al., 2021; Galke et al., 2023) as well as their surprising ability to learn from syntactic information alone (Huang et al., 2023). Studies by Sinha et al. (2021) and Abdou et al. (2022) debate the impact of tokenization, pretraining adjustments, and positional encodings in recovering word order information from shuffled languages. Further investigations into BERT's (Devlin et al., 2019) reliance on word order for grammatical role classification suggest that lexical cues alone may not always be sufficient for good performance (Papadimitriou et al., 2022; see also Hessel and Schofield, 2021; Pham et al., 2021).
|
| 52 |
+
|
| 53 |
+
# 2.3 Language Models and Formal Languages
|
| 54 |
+
|
| 55 |
+
A related line of research examines the abilities of neural language models to express formal lan
|
| 56 |
+
|
| 57 |
+
guages, as defined by the Chomsky hierarchy (Chomsky, 1956, 1959). Human language is considered to be slightly more expressive than context-free languages due to certain syntactic phenomena that interleave constituents (Shieber, 1985; Joshi, 1985). Previous work has shown that RNNs or related models can represent variants of counter and DYCK languages, which are context-free (Weiss et al., 2018; Merrill, 2019; Merrill et al., 2020; Hewitt et al., 2020). Similar work on Transformer architectures has shown that, while they are theoretically Turing-complete provided arbitrary precision and decoder steps (Pérez et al., 2021), they cannot empirically model many regular and non-regular languages (Hahn, 2020; Ebrahimi et al., 2020; Deletang et al., 2023).
|
| 58 |
+
|
| 59 |
+
The inability of Transformer-based language models to learn more complex languages in the Chomsky hierarchy seems surprising, given their impressive performance on natural language. This could be interpreted as evidence that theoretically weak computational models are sufficient for expressing human language. Alternatively, Transformer-based models can be augmented to have inductive biases for nested, hierarchical structures through architecture changes, like the addition of a stack component (Hao et al., 2018; Murty et al., 2023), or data-centered approaches, like structural pretraining (Papadimitriou and Jurafsky, 2023).
|
| 60 |
+
|
| 61 |
+
# 3 Impossible Languages
|
| 62 |
+
|
| 63 |
+
Core to our experiments are the set of impossible languages we synthesize. In constructing these artificial counterfactual languages, we consider their information-theoretic attributes relevant to machine learning, such as entropy rate, as well as their formal linguistic characteristics, such as adherence to hierarchical grammatical structures. We believe that our choice of languages broadly spans the impossibility continuum hypothesized in Figure 1.
|
| 64 |
+
|
| 65 |
+
Concretely, we specify impossible languages by defining perturbation functions of English sentences. These perturbation functions map English input sentences to sequences of tokens. We categorize our languages into three classes: *SHUFFLE, *REVERSE, and *HOP, defined in the next subsections. Each class has one control language that represents unaltered English, or a pattern that is very similar to English. Table 1 provides examples
|
| 66 |
+
|
| 67 |
+
<table><tr><td>Class</td><td>Language</td><td>Example 1</td><td>Example 2</td></tr><tr><td rowspan="9">*SHUFFLE</td><td>NOSHUFFLE</td><td>He cleans his very messy bookshelf.</td><td>They clean his very messy bookshelf.</td></tr><tr><td>NONDETERMINISTIC SHUFFLE</td><td>messy books his he very. If He cleans</td><td>his . very he They messylf books clean</td></tr><tr><td>DETERMINISTIC SHUFFLE(s = 21)</td><td>cleans He messy bookshelf very . his</td><td>clean They messy bookshelf very . his</td></tr><tr><td>DETERMINISTIC SHUFFLE(s = 57)</td><td>cleans his He messy . he very If books</td><td>clean his They messy . he very If books</td></tr><tr><td>DETERMINISTIC SHUFFLE(s = 84)</td><td>He messy . If his very books cleanshe</td><td>They messy . If his very books clean he</td></tr><tr><td>LOCALSHUFFLE(w = 3)</td><td>his He cleans books very messy . helf</td><td>hisThey clean books very messy . helf</td></tr><tr><td>LOCALSHUFFLE(w = 5)</td><td>his messy veryHe cleanslf bookshe.</td><td>his messy veryThey cleanlf bookshe.</td></tr><tr><td>LOCALSHUFFLE(w = 10)</td><td>messy books his he very . If He cleans</td><td>messy books his he very . If They clean</td></tr><tr><td>EVENODD SHUFFLE</td><td>He his messy he . cleans very bookslf</td><td>They his messy he . clean very bookslf</td></tr><tr><td rowspan="3">*REVERSE</td><td>NOREVERSE</td><td>He cleans his very messy books R half.</td><td>They clean his R very messy bookshelf.</td></tr><tr><td>PARTIAL REVERSE</td><td>He cleans his very messy books R . If he</td><td>They clean his R . If he books messy very</td></tr><tr><td>FULL REVERSE</td><td>. If he R books messy very his cleans He</td><td>. If he books messy very R his clean They</td></tr><tr><td rowspan="3">*HOP</td><td>NOHOP</td><td>He clean S his very messy bookshelf.</td><td>They clean P his very messy bookshelf.</td></tr><tr><td>TOKENHOP</td><td>He clean his very messy books S half.</td><td>They clean his very messy books P half.</td></tr><tr><td>WORDHOP</td><td>He clean his very messy bookshelf S.</td><td>They clean his very messy bookshelf P.</td></tr></table>
|
| 68 |
+
|
| 69 |
+
Table 1: List of impossible languages with examples. Control (NO*) languages have patterns that resemble English. Differently colored blocks represent different GPT-2 tokens.
|
| 70 |
+
|
| 71 |
+
of perturbed sentences in each language.
|
| 72 |
+
|
| 73 |
+
# 3.1 *SHUFFLE Languages.
|
| 74 |
+
|
| 75 |
+
The first set of impossible languages, which we call the *SHUFFLE languages, involve different shuffles of tokenized English sentences.
|
| 76 |
+
|
| 77 |
+
1. NOSHUFFLE: The input sentence is tokenized, and the token sequence is unaltered. This language is simply English, used for comparison with other *SHUFFLE languages.
|
| 78 |
+
2. NONDETERMINISTICSHUFFLE: The tokenized input sentence is randomly shuffled. A different random shuffle is used for each input sentence, with no consistency across inputs.
|
| 79 |
+
3. DETERMINISTICSHUFFLE(s): The tokenized input sentence is deterministically shuffled based on the length of the token sequence. For example, all token sequences of length 5 are shuffled in the same order. We create several languages by varying the random seed $s$ that produces the shuffle.
|
| 80 |
+
4. LOCALSHUFFLE(w): The tokenized input sentence is deterministically shuffled in local windows of a fixed size $w$ . We create several languages by varying $w$ .
|
| 81 |
+
5. EVENODDSHUFFLE: The tokenized input sentence is reordered such that all even-indexed tokens appear first, followed by all odd-indexed tokens.
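To make the *SHUFFLE transformations concrete, the sketch below shows how such perturbation functions could be implemented over GPT-2 token IDs. It is a minimal illustration under our own assumptions; the function names, the seeding scheme, and the use of Python's `random` module are not taken from the released code.

```python
# Illustrative *SHUFFLE perturbation functions over token IDs.
import random
from typing import List

def nondeterministic_shuffle(tokens: List[int]) -> List[int]:
    # A fresh random permutation for every sentence; the original order is irrecoverable.
    shuffled = tokens[:]
    random.shuffle(shuffled)
    return shuffled

def deterministic_shuffle(tokens: List[int], seed: int) -> List[int]:
    # The permutation depends only on the seed and the sequence length, so all
    # sequences of the same length are reordered in exactly the same way.
    order = list(range(len(tokens)))
    random.Random(seed * 100003 + len(tokens)).shuffle(order)
    return [tokens[i] for i in order]

def local_shuffle(tokens: List[int], window: int, seed: int = 0) -> List[int]:
    # Apply the deterministic shuffle independently within fixed-size windows.
    out: List[int] = []
    for start in range(0, len(tokens), window):
        out.extend(deterministic_shuffle(tokens[start:start + window], seed))
    return out

def even_odd_shuffle(tokens: List[int]) -> List[int]:
    # Even-indexed tokens first, then odd-indexed tokens.
    return tokens[0::2] + tokens[1::2]
```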
|
| 82 |
+
|
| 83 |
+
The random shuffling function that generates the NONDETERMINISTICSHUFFLE language is irreversible, resulting in sentences that are purely bags of words--any structural information in the original linguistic signal is irretrievable. While the DETERMINISTICSHUFFLE languages are created using a reversible perturbation function, this function operates in an entirely non-linguistic manner; words are ordered based solely on the random seed and sentence length, without considerations for linguistic features or information locality--the property that, when parts of text predict each other, they are often close together (Futrell, 2019; Mansfield and Kemp, 2023). This method is arguably even less humanly feasible than NONDETERMINISTICSHUFFLE, as it relies on an arbitrarily complex yet consistent rule to determine word order. $^{3}$ The question of ranking these two families of languages in the impossibility continuum probes at the definition of impossibility and whether reversibility to an attested language like English is a relevant quantity.
|
| 84 |
+
|
| 85 |
+
The LOCALSHUFFLE languages offer a finer-grained testbed for the importance of information locality, since we can observe the effects of different window sizes. Finally, EVENODDSHUFFLE also manipulates locality, but interestingly preserves part of the linear word order of English while
|
| 86 |
+
|
| 87 |
+
introducing new long-distance dependencies.
|
| 88 |
+
|
| 89 |
+
# 3.2 *REVERSE Languages.
|
| 90 |
+
|
| 91 |
+
The *REVERSE impossible languages involve reversals of all or part of input sentences.
|
| 92 |
+
|
| 93 |
+
1. NOREVERSE: The input sentence is tokenized, and a special marker token $\boxed{\mathbb{R}}$ is inserted at a random position in the token list. Like NOSHUFFLE, this language is most similar to English. We use it for comparison with other *REVERSE languages.
|
| 94 |
+
2. PARTIALREVERSE: The input sentence is tokenized, a special marker token $\boxed{\mathbb{R}}$ is inserted at a random position in the list of tokens, and the following tokens are reversed.
|
| 95 |
+
3. FULLREVERSE: The input sentence is tokenized, a special marker token $\mathbb{R}$ is inserted at a random position in the token list, and all tokens are reversed.
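A minimal sketch of the *REVERSE perturbations follows; the marker token ID and the random position sampling are illustrative assumptions rather than the exact implementation.

```python
# Illustrative *REVERSE perturbations over token IDs. `marker_id` stands for the
# special R token; `rng` controls where it is inserted.
import random
from typing import List

def partial_reverse(tokens: List[int], marker_id: int, rng: random.Random) -> List[int]:
    pos = rng.randrange(len(tokens) + 1)                       # insert R at a random position
    return tokens[:pos] + [marker_id] + tokens[pos:][::-1]     # reverse what follows R

def full_reverse(tokens: List[int], marker_id: int, rng: random.Random) -> List[int]:
    pos = rng.randrange(len(tokens) + 1)
    return (tokens[:pos] + [marker_id] + tokens[pos:])[::-1]   # reverse the whole sequence
```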
|
| 96 |
+
|
| 97 |
+
The PARTIALREVERSE language is inspired by the experiments of Mitchell and Bowers (2020) on partially reversed English data, though our experiments are not a direct replication, since we use a different model architecture and dataset. FULLREVERSE may seem like a plausible language syntactically, but higher-level linguistic concepts like anaphora would be highly disrupted. The $\mathbb{R}$ tokens are placed at the same positions across the data in all *REVERSE languages to control for the entropy introduced by their random placement.
|
| 98 |
+
|
| 99 |
+
# 3.3 *HOP Languages.
|
| 100 |
+
|
| 101 |
+
The *HOP languages perturb verb inflection with counting rules.
|
| 102 |
+
|
| 103 |
+
1. NoHOP: All 3rd-person present tense verbs in the input sentence are lemmatized, and the sentence is tokenized. For each 3rd-person present tense verb, a special marker representing the verb's number and tense is placed right after the lemmatized verb. Singular verbs are marked with a special token $\boxed{\mathbb{S}}$ , and plural verbs are marked with $\boxed{\mathbb{P}}$ . Like the other control languages, NoHOP has a pattern that is most similar to English.
|
| 104 |
+
2. TokenHOP: Identical transformation to NoHOP, but the special number/tense markers are placed 4 tokens after the verb.
|
| 105 |
+
|
| 106 |
+
3. WORDHOP: Identical transformation to NoHOP and TOKENHOP, but the special number/tense markers are placed 4 words after the verb, skipping punctuation.
|
| 107 |
+
|
| 108 |
+
These languages specifically investigate GPT-2's ability to learn grammar rules that involve counting the positions of words or tokens.
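As a concrete illustration, the sketch below shows the marker-placement step of the TOKENHOP transformation. The marker spellings, the function name, and the helper assumptions (the verb is already lemmatized and its grammatical number is known) are ours, not the paper's code.

```python
# Illustrative TOKENHOP marker placement: the number/tense marker is inserted
# 4 tokens after the lemmatized verb. NOHOP corresponds to hop=0; WORDHOP counts
# words rather than tokens and skips punctuation.
from typing import List

SING, PLUR = "<S>", "<P>"   # placeholder spellings of the special marker tokens

def token_hop(tokens: List[str], verb_index: int, is_singular: bool, hop: int = 4) -> List[str]:
    marker = SING if is_singular else PLUR
    insert_at = min(verb_index + 1 + hop, len(tokens))
    return tokens[:insert_at] + [marker] + tokens[insert_at:]

# -> ["He", "clean", "his", "very", "messy", "books", "<S>", "helf", "."]
print(token_hop(["He", "clean", "his", "very", "messy", "books", "helf", "."],
                verb_index=1, is_singular=True))
```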
|
| 109 |
+
|
| 110 |
+
# 4 Experiments
|
| 111 |
+
|
| 112 |
+
We run several experiments to assess GPT-2's learning of our impossible languages. Our first experiment (Section 4.2) uses perplexities as a general evaluation to compare how well each impossible language model has learned its own perturbed language and see whether this reflects the hypothesized impossibility continuum. In our second and third experiments, we conduct a closer examination of the *HOP languages. Given that their count-based verb marking rules appear to be the least clearly implausible among our proposed languages, we focus on examining these rules specifically through targeted assessments using surprisal theory (Section 4.3). Finally, we dive deeper into the mechanisms each *HOP model uses to predict their respective verb marking rules using causal abstraction analysis (Section 4.4). For all evaluations, we run tests on several model checkpoints to observe the learning process over intervals of training steps.
|
| 113 |
+
|
| 114 |
+
# 4.1 Implementation Details
|
| 115 |
+
|
| 116 |
+
For each impossible language, we apply its perturbation function to each sentence of the BabyLM dataset (Warstadt et al., 2023) to create a transformed dataset. Appendix A provides details on preprocessing and formatting, and describes the language-specific filtering needed to achieve the criteria that define each language.
|
| 117 |
+
|
| 118 |
+
We train standard GPT-2 small models (Radford et al., 2018, 2019) on each impossible language. To produce confidence intervals for our experiments, we train 5 sets of models for each language using different random seeds, which affect the model parameter initialization and dataset shuffling during training. Training and model hyperparameter choices are detailed in Appendix B. The primary set of GPT-2 models we train have absolute positional encodings. We also train a set of GPT-2 small
|
| 119 |
+
|
| 120 |
+

|
| 121 |
+
Figure 2: Perplexities on a sample of 10K test sentences for each impossible language model over training steps. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 122 |
+
|
| 123 |
+

|
| 124 |
+
|
| 125 |
+

|
| 126 |
+
|
| 127 |
+
models with an architecture in which the positional encodings are removed, so that the models' only notion of word order is derived from GPT-2's causal language modeling learning objective (Kazemnejad et al., 2023). Results for these additional experiments supported our main findings on the unaltered GPT-2 architecture. These results are provided in Appendix C.
|
| 128 |
+
|
| 129 |
+
# 4.2 Experiment 1: Language Models Reflect the Impossibility Continuum
|
| 130 |
+
|
| 131 |
+
We train GPT-2 models on all of the languages described in Table 1, and evaluate each model's perplexities on a test set over the course of training. Test perplexities provide a general metric for the extent to which a model has learned a language.
|
| 132 |
+
|
| 133 |
+
Setup. We sample 10K sentences from the BabyLM test set and perturb this sample for each impossible language. For a given impossible language model, we report the geometric mean of the individual sentence perplexities in the corresponding test sample.
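A minimal sketch of this metric is shown below, assuming a Hugging Face causal LM; the pretrained "gpt2" checkpoint is only a stand-in for the models trained on each language.

```python
# Per-sentence perplexity under a causal LM, aggregated as a geometric mean.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean token-level NLL.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def geometric_mean_perplexity(sentences) -> float:
    log_ppls = [math.log(sentence_perplexity(s)) for s in sentences]
    return math.exp(sum(log_ppls) / len(log_ppls))
```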
|
| 134 |
+
|
| 135 |
+
Hypothesis. Models trained on possible languages will achieve lower average perplexities more quickly (as measured in training steps) than those trained on impossible languages.
|
| 136 |
+
|
| 137 |
+
Results. Our results are in Figure 2. There are clear distinctions between model perplexities after about 500 training steps. First considering the *SHUFFLE models, the NONDETERMINISTIC-SHUFFLE model has the highest perplexities, followed by the three DETERMINISTIC SHUFFLE models, indicating that GPT-2 is better at learning shuffling patterns when they are deterministic, invert
|
| 138 |
+
|
| 139 |
+
ible functions. $^{5}$ The prevalence of certain sentence lengths in the corpus could also limit the variety of sentence shuffles in the DETERMINISTICSHUFFLE languages, potentially resulting in similarly functioning words frequently occupying the same token positions, thus increasing their predictability.
|
| 140 |
+
|
| 141 |
+
Following the sentence-level shuffles, the next models in the order of decreasing perplexity are the three LOCALSHUFFLE models, with smaller window sizes having lower perplexities. LOCALSHUFFLE $(w = 3)$ and EVENODDSHUFFLE have perplexities closest to the NOSHUFFLE model (which represents unaltered English), but NOSHUFFLE consistently has the lowest perplexities throughout the training process.
|
| 142 |
+
|
| 143 |
+
Compared to the *SHUFFLE models, the experimental *REVERSE models have perplexities that are much closer to the NOREVERSE model, and PARTIALREVERSE is slightly better than FULLREVERSE. For the *HOP languages, their respective control model again has the lowest perplexities, although differences among the models are quite minimal. This warrants our deep-dive into the particular verb marking patterns for this set of models.
|
| 144 |
+
|
| 145 |
+
# 4.3 Experiment 2: Language Models Disprefer Counting Rules
|
| 146 |
+
|
| 147 |
+
In Experiment 1, we show that impossible languages are harder for GPT-2 to learn. However, perplexity is a coarse-grained metric of language learning, and the question remains: do language
|
| 148 |
+
|
| 149 |
+

|
| 150 |
+
(a) Test 1: mean surprisals of the verb marker token (S or P) for each *HOP model.
|
| 151 |
+
|
| 152 |
+

|
| 153 |
+
(b) Test 2: mean surprisal difference between the verb marker token (S or P) and the following token for each *HOP model.
|
| 154 |
+
Figure 3: Surprisal tests for each *HOP model over training steps. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 155 |
+
|
| 156 |
+
models learn natural grammatical structures better than impossible grammars?
|
| 157 |
+
|
| 158 |
+
The structure of the *HOP languages invites a finer-grained evaluation of their verb marking rules. We use surprisals to measure how well each *HOP model can predict the placement of its verb marker tokens, S and P. The surprisal $S(w_{i})$ of a word $w_{i}$ is the negative log probability of $w_{i}$ given the context words $w_{1},\ldots ,w_{i - 1}$ that precede it: $S(w_{i}) = -\log_{2}p(w_{i}|w_{1},\ldots ,w_{i - 1})$. Surprisals have been used as acceptability judgments from neural language models to probe for their processing of syntactic information (Wilcox et al., 2018; Futrell et al., 2019; Hu et al., 2020; Wilcox et al., 2023) and have been shown to correlate with human sentence processing difficulty (Hale, 2001; Levy, 2008).
|
| 159 |
+
|
| 160 |
+
Setup. To test the *HOP models' sensitivity to marker placement, we conduct two tests on a sample of 10K sentences extracted from the BabyLM dataset containing the verb marker tokens (S or P). As an example, consider the following pair of sentences for the NoHOP language shown in (1).
|
| 161 |
+
|
| 162 |
+
(1) a. He clean S his very messy bookshelf. b. *He clean his very messy bookshelf.
|
| 163 |
+
|
| 164 |
+
Sentence (1-a) is an example in the NOHOP language, and (1-b) is an ungrammatical counterfactual in which the marker token does not appear.
|
| 165 |
+
|
| 166 |
+
In the first test, we compare the average surprisals of the marker tokens across the three *HOP languages, using grammatical examples like (1-a). In the case of (1-a), the marker is singular, and its
|
| 167 |
+
|
| 168 |
+
surprisal $S(\mathbb{S})$ is defined as:
|
| 169 |
+
|
| 170 |
+
$$
|
| 171 |
+
S(\mathrm{S}) = -\log_{2} p(\mathrm{S} \mid \text{He clean})
|
| 172 |
+
$$
|
| 173 |
+
|
| 174 |
+
We average this surprisal value for instances of $\mathbb{S}$ or $\mathbb{P}$ in the test sample.
|
| 175 |
+
|
| 176 |
+
In the second test, we construct minimal pairs from the example sentences in which the marker token appears and does not appear, and then compare the surprisal of the marker token to the surprisal of the token that follows it, both conditioned on the same context. In example (1-b), the surprisal of the following token $S(\text{his})$ is defined as:
|
| 177 |
+
|
| 178 |
+
$$
|
| 179 |
+
S(\text{his}) = -\log_{2} p(\text{his} \mid \text{He clean})
|
| 180 |
+
$$
|
| 181 |
+
|
| 182 |
+
We expect $S(\text{his}) - S(\text{S})$ to be a large positive value. We average such surprisal differences over instances of the marker tokens in the test sample and similarly define marker surprisesals and minimal pair configurations for the other *HOP languages.
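The sketch below spells out the surprisal computation shared by both tests, again using a pretrained "gpt2" checkpoint as a stand-in for the *HOP models; the prompt strings and marker spellings are placeholders.

```python
# Next-token surprisal in bits: S(target | context) = -log2 p(target | context).
import math
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(context: str, target: str) -> float:
    # GPT-2's BPE is whitespace-sensitive, so `target` usually needs a leading space.
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    target_id = tokenizer(target, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        logits = model(ctx_ids).logits[0, -1]       # next-token distribution
    log2_probs = F.log_softmax(logits, dim=-1) / math.log(2.0)
    return -log2_probs[target_id].item()
```

Test 1 averages `surprisal(prefix, marker)` over grammatical examples, while test 2 averages `surprisal(prefix, following_token) - surprisal(prefix, marker)`, which should be large and positive when the model expects the marker at that position.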
|
| 183 |
+
|
| 184 |
+
Hypothesis. For the first surprisal test, our hypothesis is that the mean surprisal of the marker tokens across test examples will be smaller for the control language than for the impossible languages. For the second test, our hypothesis is that the mean surprisal difference across all test pairs will be larger for possible languages than for impossible ones.
|
| 185 |
+
|
| 186 |
+
Results. Our results are presented in Figure 3. The NoHOP model, which has the verb marking pattern most similar to English, consistently has the lowest mean marker surprisal across training steps in test 1 (Figure 3a). The NoHOP model also has the highest mean surprisal difference across training
|
| 187 |
+
|
| 188 |
+
steps in test 2 (Figure 3b). Both of these results indicate that GPT-2 has learned to expect the marker tokens when they follow a more natural grammatical pattern and was very surprised when they did not appear at the correct positions.
|
| 189 |
+
|
| 190 |
+
GPT-2 learns to expect marker tokens at the right locations in the other *HOP models, just not as well as the control. TOKENHOP tends to have a lower marker surprisal and a higher mean surprisal difference compared to WORDHOP across training steps, indicating that GPT-2 is better at learning the verb marking rule when the units being counted are tokens instead of words.
|
| 191 |
+
|
| 192 |
+
# 4.4 Experiment 3: Language Models Develop Natural Solutions to Unnatural Patterns
|
| 193 |
+
|
| 194 |
+
Experiment 2 demonstrates that, while GPT-2 favors natural grammar rules, it is also capable of acquiring count-based grammar rules like those seen in the verb marking patterns of our *HOP languages. But what sorts of internal mechanisms does it implement to learn such grammar rules, and how do these mechanisms compare to the more natural control? To address this, we conduct a final experiment using causal abstraction analysis, which offers an interpretability framework for identifying and examining causal mechanisms within neural models (Geiger et al., 2020, 2021; Wu et al., 2022, 2023a,b; Geiger et al., 2023). We employ the interchange intervention technique on our *HOP models. To perform a basic interchange intervention on a neural model $M$ , we create two instances of $M$ that are provided two different inputs, the base input $b$ and the source input $s$ . Then, we interchange representations created while processing $b$ with representations created while processing $s$ and observe the effect on the output of $M$ . Such interventions allow us to piece together a causal understanding of how the model processes inputs.
|
| 195 |
+
|
| 196 |
+
Setup. We use interchange interventions to identify representations in our *HOP models that have causal effects on their output behaviors on a subject-verb agreement task. In our experimental setup, $b$ is a sentence prefix with a singular subject and $s$ is an identical prefix with the plural form of the subject. These prefixes include all tokens up to but not including the markers (S and P). We interchange the GPT-2 block outputs from processing $b$ with GPT-2 block outputs from processing $s$ and observe whether the probability of plural marker $\mathbb{P}$ is higher than the probability of singular marker S
|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
Figure 4: An interchange intervention on the NoHOP model with base input $b =$ The man be and source input $s =$ The men be. The intervention is performed at the second layer and second token position, causing a change in prediction from S to P.
|
| 200 |
+
|
| 201 |
+
after the intervention. This is shown more concretely in Figure 4.
|
| 202 |
+
|
| 203 |
+
We run such interventions at each GPT-2 layer and token position to see which parts of the model cause a change in the marker prediction. We run all of these interventions over several test examples and report the interchange intervention accuracy (IIA), a metric that represents the subject-verb agreement accuracy if the counterfactual (i.e. plural) were the ground truth. The test examples for each *HOP model are extracted from their respective versions of the BabyLM test set, and minimally-different counterfactual examples are created by changing the singular subjects to plural subjects. To ensure that interventions on different examples are analogous, we use regular expressions to locate examples that follow the same structure (i.e. subjects and verbs at the same positions).
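Bookkeeping for the sweep can be kept separate from the intervention itself. The helper below computes an IIA grid over layers and positions given any predicate that runs a single intervention and reports whether the plural marker wins (for instance, one built from the sketch above); the helper name and signature are illustrative.

```python
# Minimal sketch of the IIA sweep: for each (layer, position), run the intervention
# on every matched base/counterfactual pair and record how often the model flips
# to the plural marker. `predicts_plural` stands in for a single-intervention call
# such as the one sketched above.
from typing import Callable, Iterable, Tuple
import numpy as np

def iia_grid(pairs: Iterable[Tuple[str, str]],
             predicts_plural: Callable[[str, str, int, int], bool],
             n_layers: int, n_positions: int) -> np.ndarray:
    """IIA[layer, position] = fraction of pairs whose prediction flips to plural."""
    pairs = list(pairs)
    grid = np.zeros((n_layers, n_positions))
    for layer in range(n_layers):
        for pos in range(n_positions):
            hits = sum(predicts_plural(base, source, layer, pos)
                       for base, source in pairs)
            grid[layer, pos] = hits / len(pairs)
    return grid
```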
|
| 204 |
+
|
| 205 |
+
Results. Our results are presented in Figure 5. The IIA graphs demonstrate how information about the marker tokens flows through the models. We can see that, in all three *HOP models, IIA is high at the token position of the subject up until about layer 3; then there is a transition to the position of the last token in the prefix, preceding the location where the marker should be predicted. All models develop the same modular solution to the task by tracking agreement through the representations at the relevant positions, but the NoHOP model obtains nearly $100\%$ IIA earlier during training, at about 1,500 training steps, supporting the previous surprisal results.
|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
Figure 5: Subject-verb agreement interchange intervention accuracies (IIA) for each *HOP model over training steps. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb, respectively. $t_1 \ldots t_4$ represent the four tokens/words between the verb and its marker for TOKENHOP and WORDHOP. IIA values are averaged over results from 5 models initialized on different random seeds. See Appendix F for confidence intervals.
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
|
| 214 |
+
# 5 Discussion and Conclusion
|
| 215 |
+
|
| 216 |
+
Contra claims by Chomsky and others that LLMs cannot possibly inform our understanding of human language, we argue there is great value in treating LLMs as a comparative system for human language and in understanding what systems like LLMs can and cannot learn. Prior explorations of neural language models have already been fruitful for understanding the generalization of syntactic principles from data (Wilcox et al., 2018; Marvin and Linzen, 2018; Futrell et al., 2019; Prasad et al., 2019; Hu et al., 2020). Our paper complements this line of work. We have shown that GPT-2 models do not master our set of synthetic impossible languages as well as natural ones, challenging the unfounded assertions stated previously.
|
| 217 |
+
|
| 218 |
+
Even in the absence of a clear definition of what constitutes a possible or impossible language, we believe that our investigations advance this debate regarding LLMs. The lack of a definition does not hinder inquiry into this topic; in fact, it beckons further explorations of the boundary between the possible and impossible languages, as shown in our hypothesized continuum in Figure 1. We believe that the *HOP languages we propose closely approach this boundary.
|
| 219 |
+
|
| 220 |
+
At the same time, conclusions about LLMs' linguistic competence and preferences for natural languages should be informed by an understanding of the ways that models fundamentally differ from humans. For instance, we saw that models can perform operations that involve counting tokens because LLMs rely on tokens as basic units. While humans are sensitive to morpheme boundaries and word boundaries, it is unlikely humans rely on atomic tokens in the way that LLMs do. This does
|
| 221 |
+
|
| 222 |
+
not mean that LLMs can fundamentally tell us nothing about human language. Rather, as we did here, it is valuable to consider and control for this difference before making generalizations.
|
| 223 |
+
|
| 224 |
+
Since at least the 1950s, a major line of linguistic inquiry has focused on what aspects of syntactic structure can be learned just from data, without domain-specific innate priors (e.g. a Universal Grammar). LLMs lack strong in-built linguistic priors, yet they can learn complex syntactic structures. While many LLMs are trained with vastly more data than children see, there is increasing evidence that even systems trained on smaller amounts of data can learn interesting linguistic information (Warstadt et al., 2023). The current paper raises further questions along similar lines. Since we do find that real languages are more learnable by GPT-2, this leads us to wonder what inductive bias of GPT language models matches natural language. We believe that this inductive bias is related to information locality, the tendency for statistical correlations in text to be short range. Information locality arises in GPTs due to their autoregressive training objective and has been argued to arise in humans due to the incremental nature of real-time language processing (Futrell, 2019; Hahn et al., 2021).
|
| 225 |
+
|
| 226 |
+
Since LLMs have been shown to learn the complex structures of human language and have a preference for learning such structures over unnatural counterfactuals, it follows that they are clearly relevant to investigations and claims about the necessary innate priors for language learning. Arguments that they are "by design, unlimited in what they can 'learn'" and "incapable of distinguishing the possible from the impossible" (Chomsky et al., 2023) do not offer convincing evidence otherwise.
|
| 227 |
+
|
| 228 |
+
# 6 Acknowledgments
|
| 229 |
+
|
| 230 |
+
The authors would like to thank Aryaman Arora, Christiane Fellbaum, Roger Levy, Tristan Thrush, and Diyi Yang for helpful comments on the project. We would also like to thank the members of the Stanford NLP Group, the MIT Computational Psycholinguistics Lab, and the anonymous reviewers for useful discussions. Julie Kallini is supported by a National Science Foundation Graduate Research Fellowship under grant number DGE-2146755.
|
| 231 |
+
|
| 232 |
+
# 7 Limitations
|
| 233 |
+
|
| 234 |
+
Due to resource constraints, we exclusively use the GPT-2 architecture to train models on our various synthetic impossible languages. Each of our experiments involves training a GPT-2 model from scratch on a different language dataset, and for every such language, we train multiple GPT-2 models to establish confidence intervals for our evaluation metrics. Applying this approach to several different model architectures would be quite resource-intensive, so we opted to choose a single architecture in this paper. Future work could apply our methodology to models trained with different architectures or training objectives.
|
| 235 |
+
|
| 236 |
+
Our impossible languages are derived by manipulating an English dataset. While we do not conduct experiments that use other natural languages as a starting point, our experimental choices (i.e. the synthetic languages we design) are informed by linguistic diversity and typology, distinguishing our impossible languages from those that are rare but attested. However, future work might involve deriving impossible languages from base languages other than English and include more morphological manipulations.
|
| 237 |
+
|
| 238 |
+
# 8 Ethics Statement
|
| 239 |
+
|
| 240 |
+
While this work makes the case for language models as useful tools for cognitive science and linguistics research, these models learn and generate language through processes that are fundamentally different from those employed by humans. Making direct claims about human language learning based on the results of this paper could pose potential risks and harms. This research merely aims to explore the learnability of different languages (specifically, those languages that cannot be acquired by humans and are not representative of any known human language) through the lens of neural models.
|
| 241 |
+
|
| 242 |
+
# References
|
| 243 |
+
|
| 244 |
+
Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, and Anders Søgaard. 2022. Word order does matter and shuffled language models know it. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6907-6919, Dublin, Ireland. Association for Computational Linguistics.
|
| 245 |
+
Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, and SueYeon Chung. 2021. Syntactic perturbations reveal representational correlates of hierarchical phrase structure in pretrained language models. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 263–276, Online. Association for Computational Linguistics.
|
| 246 |
+
Johan J. Bolhuis, Stephen Crain, Sandiway Fong, and Andrea Moro. 2024. Three reasons why AI doesn't model human language. Nature, 627(8004):489-489.
|
| 247 |
+
Noam Chomsky. 1956. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113-124.
|
| 248 |
+
Noam Chomsky. 1957. Syntactic Structures. De Gruyter Mouton, Berlin, Boston.
|
| 249 |
+
Noam Chomsky. 1959. On certain formal properties of grammars. Information and Control, 2(2):137-167.
|
| 250 |
+
Noam Chomsky. 1965. Aspects of the Theory of Syntax. The MIT Press.
|
| 251 |
+
Noam Chomsky. 2002. On Nature and Language. Cambridge University Press.
|
| 252 |
+
Noam Chomsky. 2023. Conversations with Tyler: Noam Chomsky. Conversations with Tyler Podcast.
|
| 253 |
+
Noam Chomsky, Ian Roberts, and Jeffrey Watumull. 2023. Noam Chomsky: The false promise of ChatGPT. The New York Times.
|
| 254 |
+
Bernard Comrie. 1989. Language universals and linguistic typology: Syntax and morphology. University of Chicago press.
|
| 255 |
+
Gregoire Deletang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, and Pedro A Ortega. 2023. Neural networks and the Chomsky hierarchy. In The Eleventh International Conference on Learning Representations.
|
| 256 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 257 |
+
|
| 258 |
+
Javid Ebrahimi, Dhruv Gelda, and Wei Zhang. 2020. How can self-attention networks recognize Dyck-n languages? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4301-4306, Online. Association for Computational Linguistics.
|
| 259 |
+
Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.
|
| 260 |
+
Nicholas Evans and Stephen C Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and brain sciences, 32(5):429-448.
|
| 261 |
+
Daniel L. Everett. 2012. What does Pirahã grammar have to teach us about human language and the mind? WIREs Cognitive Science, 3(6):555-563.
|
| 262 |
+
Richard Futrell. 2019. Information-theoretic locality properties of natural language. In Proceedings of the First Workshop on Quantitative Syntax (Quasy, SyntaxFest 2019), pages 2-15, Paris, France. Association for Computational Linguistics.
|
| 263 |
+
Richard Futrell and Michael Hahn. 2022. Information theory as a bridge between language function and language form. Frontiers in Communication, 7.
|
| 264 |
+
Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 265 |
+
Lukas Galke, Yoav Ram, and Limor Raviv. 2023. What makes a language easy to deep-learn?
|
| 266 |
+
Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In Advances in Neural Information Processing Systems, volume 34, pages 9574-9586. Curran Associates, Inc.
|
| 267 |
+
Atticus Geiger, Kyle Richardson, and Christopher Potts. 2020. Neural natural language inference models partially embed theories of lexical entailment and negation. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 163-173, Online. Association for Computational Linguistics.
|
| 268 |
+
Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah D. Goodman. 2023. Finding alignments between interpretable causal variables and distributed neural representations. In Proceedings of Causal Learning and Reasoning 2024.
|
| 269 |
+
Joseph Greenberg. 1963. Some universals of grammar with particular reference to the order of meaningful elements. *Universals of Language*, pages 73-113.
|
| 270 |
+
|
| 271 |
+
Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171.
|
| 272 |
+
Michael Hahn, Judith Degen, and Richard Futrell. 2021. Modeling word and morpheme order in natural language as an efficient trade-off of memory and surprisal. Psychological Review, 128(4):726-756.
|
| 273 |
+
Michael Hahn, Dan Jurafsky, and Richard Futrell. 2020. Universals of word order reflect optimization of grammars for efficient communication. Proceedings of the National Academy of Sciences, 117(5):2347-2353.
|
| 274 |
+
John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
|
| 275 |
+
Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon Mendelsohn. 2018. Context-free transductions with neural stacks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 306-315, Brussels, Belgium. Association for Computational Linguistics.
|
| 276 |
+
Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch. 2002. The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598):1569-1579.
|
| 277 |
+
Jack Hessel and Alexandra Schofield. 2021. How effective is BERT without word ordering? implications for language understanding and data privacy. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 204-211, Online. Association for Computational Linguistics.
|
| 278 |
+
John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, and Christopher D. Manning. 2020. RNNs can generate bounded hierarchical languages with optimal memory. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1978-2010, Online. Association for Computational Linguistics.
|
| 279 |
+
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. Association for Computational Linguistics.
|
| 280 |
+
Qian Huang, Eric Zelikman, Sarah Li Chen, Yuhuai Wu, Gregory Valiant, and Percy Liang. 2023. Lexinvariant language models.
|
| 281 |
+
Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018. Depth-bounding is effective: Improvements and evaluation of unsupervised PCFG induction. In Proceedings of the
|
| 282 |
+
|
| 283 |
+
2018 Conference on Empirical Methods in Natural Language Processing, pages 2721-2731, Brussels, Belgium. Association for Computational Linguistics.
|
| 284 |
+
Aravind K. Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions?, Studies in Natural Language Processing, page 206-250. Cambridge University Press.
|
| 285 |
+
Siddharth* Karamcheti, Laurel* Orr, Jason Bolton, Tianyi Zhang, Karan Goel, Avanika Narayan, Rishi Bommasani, Deepak Narayanan, Tatsunori Hashimoto, Dan Jurafsky, Christopher D. Manning, Christopher Potts, Christopher Ré, and Percy Liang. 2021. Mistral - a journey towards reproducible language model training.
|
| 286 |
+
Fred Karlsson. 2007. Constraints on multiple center-embedding of clauses. Journal of Linguistics, 43(2):365-392.
|
| 287 |
+
Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2023. The impact of positional encoding on length generalization in transformers. arXiv preprint arXiv:2305.19466.
|
| 288 |
+
Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.
|
| 289 |
+
John Mansfield and Charles Kemp. 2023. The emergence of grammatical structure from interpredictability.
|
| 290 |
+
Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.
|
| 291 |
+
William Merrill. 2019. Sequential neural networks as automata. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 1-13, Florence. Association for Computational Linguistics.
|
| 292 |
+
William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, and Eran Yahav. 2020. A formal hierarchy of RNN architectures. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443-459, Online. Association for Computational Linguistics.
|
| 293 |
+
Jeff Mitchell and Jeffrey Bowers. 2020. Priorless recurrent networks learn curiously. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5147-5158, Barcelona, Spain (Online). International Committee on Computational Linguistics.
|
| 294 |
+
Andrea Moro, Matteo Greco, and Stefano F. Cappa. 2023. Large languages, impossible languages and human brains. Cortex, 167:82-85.
|
| 295 |
+
|
| 296 |
+
Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher D. Manning. 2023. Pushdown layers: Encoding recursive structure in transformer language models.
|
| 297 |
+
Mariacristina Musso, Andrea Moro, Volkmar Glauche, Michel Rijntjes, Jürgen Reichenbach, Christian Büchel, and Cornelius Weiller. 2003. Broca's area and the language instinct. Nature Neuroscience, 6(7):774-781.
|
| 298 |
+
Ryan M. Nefdt. 2024. The Philosophy of Theoretical Linguistics: A Contemporary Outlook. Cambridge University Press.
|
| 299 |
+
Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald. 2022. When classifying grammatical role, BERT doesn't care about word order... except when it matters. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 636-643, Dublin, Ireland. Association for Computational Linguistics.
|
| 300 |
+
Isabel Papadimitriou and Dan Jurafsky. 2023. Injecting structural hints: Using language models to study inductive biases in language learning.
|
| 301 |
+
Thang Pham, Trung Bui, Long Mai, and Anh Nguyen. 2021. Out of order: How important is the sequential order of words in a sentence in natural language understanding tasks? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1145-1160, Online. Association for Computational Linguistics.
|
| 302 |
+
Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66-76, Hong Kong, China. Association for Computational Linguistics.
|
| 303 |
+
Jorge Pérez, Pablo Barceló, and Javier Marinkovic. 2021. Attention is Turing-complete. Journal of Machine Learning Research, 22(75):1-35.
|
| 304 |
+
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
|
| 305 |
+
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Ms, OpenAI.
|
| 306 |
+
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Ms, OpenAI.
|
| 307 |
+
Stuart M. Shieber. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8(3):333-343.
|
| 308 |
+
|
| 309 |
+
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 310 |
+
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.
|
| 311 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
|
| 312 |
+
Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, and Chengxu Zhuang. 2023. Call for papers – the BabyLM challenge: Sample-efficient pretraining on a developmentally plausible corpus.
|
| 313 |
+
Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740-745, Melbourne, Australia. Association for Computational Linguistics.
|
| 314 |
+
Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221, Brussels, Belgium. Association for Computational Linguistics.
|
| 315 |
+
Ethan Gotlieb Wilcox, Richard Futrell, and Roger Levy. 2023. Using computational models to test syntactic learnability. Linguistic Inquiry, pages 1-44.
|
| 316 |
+
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, and Christopher Potts. 2023a. Causal proxy models for concept-based model explanations. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 37313-37334. PMLR.
|
| 317 |
+
Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, and Noah Goodman. 2023b. Interpretability at scale: Identifying causal mechanisms in Alpaca. In Advances in Neural Information Processing Systems, volume 36, pages 78205-78226. Curran Associates, Inc.
|
| 318 |
+
|
| 319 |
+
Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, and Noah Goodman. 2022. Causal distillation for language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4288-4295, Seattle, United States. Association for Computational Linguistics.
|
| 320 |
+
|
| 321 |
+
# Supplementary Materials
|
| 322 |
+
|
| 323 |
+
# A Dataset Filters
|
| 324 |
+
|
| 325 |
+
The BabyLM dataset (Warstadt et al., 2023) is an English-language dataset of about 100 million words intended to approximate the amount of linguistic data available to an English-speaking child. To create a dataset for an impossible language, we first pre-process the BabyLM dataset using Stanza (Qi et al., 2020). We perform sentence segmentation on each dataset file and then extract part-of-speech (POS) and morphological feature tags for all the sentences, which are required for the *HOP transformations. We transform each tagged sentence in the original BabyLM dataset using the impossible language's rule-based perturbation function, as described in Section 3. Depending on the class of the impossible language and the specific features of the input sentence, perturbed sentences may be included or excluded from the final dataset used for model training (see below for details on this filtering). Since we apply these filters, the language classes have datasets of slightly different sizes. The *SHUFFLE and *REVERSE languages have training sets of about 9.69 million sentences, and the *HOP languages have training sets of about 8.43 million sentences.
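In outline, the tagging pass can be reproduced with Stanza's English pipeline as sketched below; the processor set and file handling are assumptions, and the 3rd-person present-tense check simply mirrors the condition that the *HOP perturbations depend on.

```python
# Minimal sketch of the tagging pass: sentence segmentation plus POS and
# morphological features, which the *HOP perturbations rely on. File handling
# and the processor set are assumptions.
import stanza

stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize,pos")

def tag_file(path):
    """Yield one (word, upos, feats) list per segmented sentence."""
    with open(path, encoding="utf-8") as f:
        doc = nlp(f.read())
    for sent in doc.sentences:
        yield [(w.text, w.upos, w.feats) for w in sent.words]

def is_target_verb(upos, feats):
    """3rd-person present-tense verbs are the targets of the *HOP rules."""
    feats = feats or ""
    return upos == "VERB" and "Person=3" in feats and "Tense=Pres" in feats
```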
|
| 326 |
+
|
| 327 |
+
*SHUFFLE FILTERS For the *SHUFFLE languages, we filter sentences from the BabyLM dataset such that the set of token sequence lengths seen in the validation and test sets are also seen in the training set. This ensures that any shuffles for the DETERMINISTIC SHUFFLE perturbation (which are determined by the token sequence length) in the test set have also occurred at least once in the training set. We apply these filters for all *SHUFFLE languages such that their datasets are comprised of the same subset of original sentences.
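A minimal sketch of this setup is given below, assuming the deterministic permutation is derived from the shuffle seed $s$ together with the token-sequence length; the exact seeding scheme is an assumption, not the one used to build the datasets.

```python
# Minimal sketch: a shuffle whose permutation is fixed by the seed s and the
# token-sequence length, plus the length-coverage filter described above.
# Deriving the permutation from hash((s, length)) is an assumption.
import random

def deterministic_shuffle(tokens, s):
    """Permute `tokens` with a permutation determined only by (s, len(tokens))."""
    order = list(range(len(tokens)))
    random.Random(hash((s, len(tokens)))).shuffle(order)
    return [tokens[i] for i in order]

def keep_example(example_tokens, train_lengths):
    """Keep a validation/test example only if its length occurred in training."""
    return len(example_tokens) in train_lengths
```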
|
| 328 |
+
|
| 329 |
+
*REVERSE FILTERS For the *REVERSE languages, we do not apply any sentence filtering, so their models are trained on the entire BabyLM dataset.
|
| 330 |
+
|
| 331 |
+
*HOP FILTERS For the *HOP languages, we filter out sentences from the BabyLM dataset that would not allow the special markers to fully complete 4 hops in the TOKENHOP or WORDHOP perturbations, i.e. sentences in which a 3rd-person present tense verb is too close to the end of the sentence. We again filter out these sentences from all perturbations, so TOKENHOP, WORDHOP, and NoHOP
|
| 332 |
+
|
| 333 |
+
are comprised of the same subset of original sentences from the BabyLM dataset.
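A minimal sketch of the filter and of marker placement is given below, assuming the sentence is a flat list of units (tokens or words), the verb index is known from the tagging pass, and the marker lands 4 units after the verb (0 for NoHOP); the helper names and the example are illustrative.

```python
# Minimal sketch of the *HOP filter and of marker placement k units after the
# verb (k = 0 for NoHOP, k = 4 for TOKENHOP and WORDHOP; whether a "unit" is a
# token or a word is the only difference between the latter two).
def has_room_for_hops(units, verb_idx, k=4):
    """True if the marker can complete all k hops before the sentence ends."""
    return verb_idx + k < len(units)

def insert_marker(units, verb_idx, marker, k=4):
    """Place the agreement marker k positions after the verb."""
    out = list(units)
    out.insert(verb_idx + 1 + k, marker)
    return out

# e.g. insert_marker(["The", "man", "run", "home", "very", "quickly", "."], 2, "S")
#      -> ["The", "man", "run", "home", "very", "quickly", ".", "S"]
```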
|
| 334 |
+
|
| 335 |
+
# B GPT-2 Training Details and Hyperparameters
|
| 336 |
+
|
| 337 |
+
We train GPT-2 small models with a standard training regime (Radford et al., 2018, 2019) using the library of Karamcheti et al. (2021). We mostly use the default GPT-2 small hyperparameters to train our models (context length of 1024, batch size of 512, etc.). We only change the total number of training steps and the number of warm-up steps. We train with a learning rate that linearly warms up from 0 to 6e-4 over 300 steps. While $10\%$ of steps for warm-up is typical for LLM training, we acknowledge that the best warm-up may be different when using a small pretraining dataset, so we also tried 1,000 warm-up steps and 4,000 warm-up steps. (4,000 steps is the GPT-2 default. Since we only train for 3,000 steps, this effectively means we have a learning rate that linearly warms up from 0 to 4.5e-4.) Using a different warm-up did not change the ranking of impossible language model perplexities.
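The warm-up itself can be written as a simple LambdaLR schedule, as sketched below; the optimizer settings and the behavior after warm-up (held constant here) are assumptions, since only the warm-up is described above.

```python
# Minimal sketch of the learning-rate warm-up: linear ramp from 0 to 6e-4 over
# the first 300 steps. The optimizer and the post-warm-up behavior (constant
# here) are assumptions.
import torch

PEAK_LR, WARMUP_STEPS = 6e-4, 300

model = torch.nn.Linear(8, 8)                        # stand-in for the GPT-2 model
optimizer = torch.optim.AdamW(model.parameters(), lr=PEAK_LR)

def warmup_factor(step: int) -> float:
    """Multiplier applied to PEAK_LR at a given training step."""
    return min(1.0, (step + 1) / WARMUP_STEPS)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_factor)
# During training, call scheduler.step() once after every optimizer.step();
# after 300 steps the factor saturates at 1.0, i.e. the full 6e-4.
```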
|
| 338 |
+
|
| 339 |
+
We train the models for 3,000 training steps, which equates to about 11.03 epochs for the *SHUFFLE languages, 10.05 epochs for the *REVERSE languages, and 12.04 epochs for the *HOP languages. The vocabulary set also varies based on the language. The *SHUFFLE languages use the standard GPT-2 vocabulary containing 50,257 tokens; the *REVERSE languages add one special token $\mathbb{R}$ , for a vocabulary size of 50,258; and the *HOP languages add two special tokens $\mathbb{S}$ and $\mathbb{P}$ for verb inflection, for a vocabulary size of 50,259. We train on NVIDIA RTX 3090 (24GB) GPUs and NVIDIA RTX A6000 (48GB) GPUs. The runtime for each pretraining experiment was $\sim 24$ hours (for one language and one random seed), for a total experiment runtime of $\sim 1800$ hours.
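The vocabulary extension amounts to adding the marker tokens and resizing the embedding matrix, as sketched below with the standard tokenizers API; a pretrained checkpoint stands in for the from-scratch model, and the literal token strings are placeholders for the actual marker spellings.

```python
# Minimal sketch: extend the GPT-2 vocabulary with the marker tokens and resize
# the embedding matrix (50,257 -> 50,258 for *REVERSE, 50,259 for *HOP). The
# strings "<S>", "<P>" (or "<R>") are placeholders for the actual markers.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

num_added = tokenizer.add_tokens(["<S>", "<P>"])     # or ["<R>"] for *REVERSE
model.resize_token_embeddings(len(tokenizer))
print(len(tokenizer))                                # 50257 + num_added
```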
|
| 340 |
+
|
| 341 |
+
# C Results for Models without Positional Encodings
|
| 342 |
+
|
| 343 |
+
Here, we present results for each of our experiments using GPT-2 models we trained without positional encodings. All other aspects of the experiments are the same, including the impossible language datasets and training hyperparameters. We again train 5 sets of models initialized using different random seeds. Figure 6 presents the perplexity results; Figure 7 presents the surprisal results; and
|
| 344 |
+
|
| 345 |
+
Figure 8 presents the causal intervention results.
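One simple way to obtain a GPT-2 without positional information is to zero out and freeze the learned position-embedding table before training, as sketched below; this is only an illustrative approach, since the exact mechanism used for these models is not described here.

```python
# Minimal sketch: one way to remove positional information from GPT-2 is to zero
# and freeze the learned position-embedding table before training from scratch.
# This is an illustrative approach, not necessarily the one used for Figures 6-8.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config()                               # GPT-2 small, trained from scratch
model = GPT2LMHeadModel(config)

model.transformer.wpe.weight.data.zero_()           # wpe = learned position embeddings
model.transformer.wpe.weight.requires_grad = False  # keep them at zero during training
```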
|
| 346 |
+
|
| 347 |
+
# D Constituency Probing Evaluation
|
| 348 |
+
|
| 349 |
+
We also test how perturbations might influence latent linguistic properties in sentences that are seemingly unaffected by the perturbations. For this, we develop a constituency probing experiment to examine whether the contextual representations generated by different models are effective in classifying a sequence of tokens with an appropriate constituent label, similar to the edge probing experiments of Tenney et al. 2019. For example, if the input sentence is "I enjoy strawberry ice cream" and the span of tokens in question represents the constituent "strawberry ice cream," the span should be labeled as a noun phrase (NP).
|
| 350 |
+
|
| 351 |
+
Setup. We conduct these experiments for *REVERSE and *HOP languages, since these languages have constituents in contiguous token sequences. For NOREVERSE and PARTIALREVERSE, we take a sample of unaltered BabyLM test sentences and omit the reversal token R. For FULLREVERSE, we use the same sample sentences, but reverse the tokens. For the *HOP languages, we use a sample of BabyLM test sentences that are unaffected by the perturbation, which are sentences that do not contain 3rd-person present tense verbs. To extract constituents for testing, we parse the sample sentences using Stanza's BERT-based constituency parser. We include noun phrases (NP), verb phrases (VP), adjective phrases (ADJP), adverb phrases (ADVP), and prepositional phrases (PP), and we stratify the samples so that there are equal numbers of example constituents for each phrasal category. We obtain a total of 10K examples for probe training and testing for each language class, where an example is comprised of a tokenized sentence, indices of the constituent span, and the constituent label.
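The span-extraction step can be sketched as a traversal of Stanza's constituency parses that records the word-index boundaries of NP, VP, ADJP, ADVP, and PP nodes; the tree attribute names used below (.label, .children, sentence.constituency) are assumptions about the Stanza tree interface.

```python
# Minimal sketch: extract labeled constituent spans with per-sentence word-index
# boundaries from Stanza's constituency parses. The tree attribute names are
# assumptions about Stanza's parse-tree API.
import stanza

KEEP = {"NP", "VP", "ADJP", "ADVP", "PP"}
nlp = stanza.Pipeline("en", processors="tokenize,pos,constituency")

def collect_spans(tree, start, out):
    """Return the leaf width of `tree`; record (label, start, end) for kept labels."""
    if not tree.children:                       # leaf node: a single word
        return 1
    width = 0
    for child in tree.children:
        width += collect_spans(child, start + width, out)
    if tree.label in KEEP:
        out.append((tree.label, start, start + width))
    return width

def constituents(sentence_text):
    out = []
    for sent in nlp(sentence_text).sentences:
        collect_spans(sent.constituency, 0, out)
    return out                                  # e.g. [("NP", 2, 5), ...]
```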
|
| 352 |
+
|
| 353 |
+
Our probes are L2-regularized logistic regression classifiers trained on the span representations of the tokens corresponding to constituents in the examples. To obtain span representations for training the probes, we mean-pool the representations of the tokens within the span. We try extracting representations from GPT-2 by averaging the last four hidden layers of the model or using different layers individually. We train each probe for a maximum of 10 iterations and hold out $20\%$ of constituent examples for testing.
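A minimal version of the probe is sketched below: span representations are built by averaging the last four GPT-2 hidden layers and mean-pooling over the span's token positions, then fed to an L2-regularized logistic regression with a 20% held-out test split. The feature-extraction details and the token-level span indices are assumptions.

```python
# Minimal sketch of the constituency probe. Feature extraction and the use of
# token-level span indices are assumptions about the probing pipeline.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
encoder = GPT2Model.from_pretrained("gpt2").eval()

def span_representation(text, start, end):
    """Average the last four hidden layers, then mean-pool over tokens [start, end)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = encoder(ids, output_hidden_states=True).hidden_states
    layer_avg = torch.stack(hs[-4:]).mean(dim=0)[0]     # (seq_len, hidden_size)
    return layer_avg[start:end].mean(dim=0).numpy()

def train_probe(examples):
    """examples: (sentence, span_start, span_end, label) with token-level indices."""
    X = np.stack([span_representation(t, s, e) for t, s, e, _ in examples])
    y = [label for _, _, _, label in examples]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    probe = LogisticRegression(penalty="l2", max_iter=10).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)
```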
|
| 354 |
+
|
| 355 |
+
Hypothesis. Constituency probes will achieve higher accuracy for possible languages than impossible ones, in virtue of the fact that the impossible languages are defined by some rules that do not respect constituency boundaries.
|
| 356 |
+
|
| 357 |
+
Results. The results of the probing experiment using the average of the last four GPT-2 layers are presented in Figure 9. Across *REVERSE and *HOP models trained with positional encodings, there are no clear trends indicating that certain models have better representations of constituents than others, as differences among probe accuracies are minimal and unstable across training steps. However, looking closely at the *REVERSE models without positional encodings, we can see that PARTIALREVERSE has significantly lower probe accuracy than the other models up until 2K training steps. We found similar results when using different layers for span representations, as shown in Figure 10. These results might indicate that the *HOP perturbations were too weak to fundamentally affect the models' representations of latent linguistic structure, but the quite unnatural reversal rule of the PARTIALREVERSE language disturbed constituency boundaries in a way that could not be recovered by GPT-2 models without positional encodings.
|
| 358 |
+
|
| 359 |
+
# E Additional DETERMINISTICSHUFFLE Results
|
| 360 |
+
|
| 361 |
+
In addition to perplexities of each impossible language model on its own test data, we also obtain perplexities for each DETERMINISTICSHUFFLE model on the NONDETERMINISTICSHUFFLE test sample and all other DETERMINISTICSHUFFLE test samples. This measures whether these models have learned to distinguish their own shuffles from other shuffles. We found that this was indeed the case, as shown in the results in Figure 11.
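Each cross-evaluation is an ordinary corpus perplexity, i.e. the exponential of the mean per-token negative log-likelihood of one model on another shuffle's test sample; a minimal sketch follows, with a placeholder checkpoint standing in for the trained DETERMINISTICSHUFFLE models.

```python
# Minimal sketch: corpus perplexity of one model on another shuffle's test
# sample, computed as exp of the mean per-token negative log-likelihood.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")     # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def corpus_perplexity(sentences):
    total_nll, total_tokens = 0.0, 0
    for text in sentences:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        n = ids.size(1) - 1                              # tokens that receive a prediction
        total_nll += out.loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```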
|
| 362 |
+
|
| 363 |
+
# F Confidence Intervals for Interchange Intervention Accuracies
|
| 364 |
+
|
| 365 |
+
We present the same results of our causal abstraction experiments from Section 4.4, but include confidence intervals for results across models initialized on different random seeds. Figure 12 presents the results for NOHOP; Figure 13 presents the results for TOKENHOP; and Figure 14 presents the results for WORDHOP. Figures 15, 16, and 17 show the same plots for each *HOP model trained without positional encodings, respectively.
|
| 366 |
+
|
| 367 |
+

|
| 368 |
+
Figure 6: Perplexities on a sample of 10K test sentences for each impossible language model trained without positional encodings. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 369 |
+
|
| 370 |
+

|
| 371 |
+
|
| 372 |
+

|
| 373 |
+
|
| 374 |
+

|
| 375 |
+
(a) Mean surprisals of the verb marker token (S or P) for each *HOP model.
|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
(b) Mean surprisal difference between the verb marker token (S or P) and the following token for each *HOP model.
|
| 379 |
+
|
| 380 |
+

|
| 381 |
+
Figure 7: Surprisal tests for each *HOP model over training steps (trained without positional encodings). Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
Figure 8: Subject-verb agreement interchange intervention accuracies (IIA) for each *HOP model trained without positional encodings. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb, respectively. $t_1 \ldots t_4$ represent the four tokens/words between the verb and its marker for TOKENHOP and WORDHOP. IIA values are averaged over results from 5 models initialized on different random seeds. See Figures 15, 16, and 17 for confidence intervals.
|
| 385 |
+
|
| 386 |
+

|
| 387 |
+
|
| 388 |
+

|
| 389 |
+
(a) Probe accuracy for \*REVERSE and \*HOP models.
|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
(b) Probe accuracy without positional encodings.
|
| 393 |
+
|
| 394 |
+

|
| 395 |
+
Figure 9: Constituency probe accuracy for *REVERSE and *HOP models over training steps. Span representations were extracted by averaging the last four hidden layers of GPT-2. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
(a) Probe accuracy for \*REVERSE models.
|
| 399 |
+
|
| 400 |
+

|
| 401 |
+
(b) Probe accuracy for *HOP models.
|
| 402 |
+
|
| 403 |
+

|
| 404 |
+
(c) Probe accuracy for *REVERSE models without positional encodings.
|
| 405 |
+
(d) Probe accuracy for \*HOP models without positional encodings.
|
| 406 |
+
Figure 10: Constituency probe accuracy for *REVERSE and *HOP models using span representations extracted from different GPT-2 layers (1, 3, 6, 9, 12) over training steps. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 407 |
+
|
| 408 |
+

|
| 409 |
+
DeterministicShuffle(s = 21) Model
|
| 410 |
+
|
| 411 |
+

|
| 412 |
+
DeterministicShuffle(s = 57) Model
|
| 413 |
+
(a) Test perplexities for models with positional encodings.
|
| 414 |
+
|
| 415 |
+

|
| 416 |
+
DeterministicShuffle(s = 84) Model
|
| 417 |
+
|
| 418 |
+

|
| 419 |
+
DeterministicShuffle(s = 21) Model
|
| 420 |
+
|
| 421 |
+

|
| 422 |
+
DeterministicShuffle(s = 57) Model
|
| 423 |
+
(b) Test perplexities for models without positional encodings.
|
| 424 |
+
Figure 11: Test perplexities for each DETERMINISTICSHUFFLE model ( $s = 21$ left, $s = 57$ middle, $s = 84$ right) on the NONDETERMINISTICSHUFFLE test sample and all other DETERMINISTICSHUFFLE test samples. Perplexities were taken on a sample of 10K test sentences from each shuffled test set. Error bars indicate $95\%$ confidence intervals across 5 training runs initialized with different random seeds and evaluated on different test samples.
|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
DeterministicShuffle(s = 84) Model
|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
|
| 431 |
+

|
| 432 |
+
|
| 433 |
+

|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
(a) 300 Training Steps.
|
| 437 |
+
(d) 1200 Training Steps.
|
| 438 |
+
Figure 12: Subject-verb agreement interchange intervention accuracies (IIA) for NoHOP, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb, respectively.
|
| 439 |
+
|
| 440 |
+

|
| 441 |
+
(b) 600 Training Steps.
|
| 442 |
+
(e) 1500 Training Steps.
|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
(c) 900 Training Steps.
|
| 446 |
+
(f) 3000 Training Steps.
|
| 447 |
+
|
| 448 |
+

|
| 449 |
+
(a) 300 Training Steps.
|
| 450 |
+
|
| 451 |
+

|
| 452 |
+
(b) 600 Training Steps.
|
| 453 |
+
|
| 454 |
+

|
| 455 |
+
(c) 900 Training Steps.
|
| 456 |
+
|
| 457 |
+

|
| 458 |
+
|
| 459 |
+

|
| 460 |
+
(e) 1500 Training Steps.
|
| 461 |
+
|
| 462 |
+

|
| 463 |
+
(d) 1200 Training Steps.
|
| 464 |
+
(f) 3000 Training Steps.
|
| 465 |
+
Figure 13: Subject-verb agreement interchange intervention accuracies (IIA) for TOKENHOP, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb. $t_1 \ldots t_4$ represent the four tokens between the verb and its marker.
|
| 466 |
+
|
| 467 |
+

|
| 468 |
+
(a) 300 Training Steps.
|
| 469 |
+
|
| 470 |
+

|
| 471 |
+
(b) 600 Training Steps.
|
| 472 |
+
|
| 473 |
+

|
| 474 |
+
(c) 900 Training Steps.
|
| 475 |
+
|
| 476 |
+

|
| 477 |
+
|
| 478 |
+

|
| 479 |
+
(e) 1500 Training Steps.
|
| 480 |
+
|
| 481 |
+

|
| 482 |
+
(d) 1200 Training Steps.
|
| 483 |
+
(f) 3000 Training Steps.
|
| 484 |
+
Figure 14: Subject-verb agreement interchange intervention accuracies (IIA) for WORDHOP, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb. $t_1 \ldots t_4$ represent the four words between the verb and its marker.
|
| 485 |
+
|
| 486 |
+

|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
|
| 490 |
+

|
| 491 |
+
|
| 492 |
+

|
| 493 |
+
(a) 300 Training Steps.
|
| 494 |
+
(d) 1200 Training Steps.
|
| 495 |
+
Figure 15: Subject-verb agreement interchange intervention accuracies (IIA) for the NoHOP model trained without positional encodings, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb, respectively.
|
| 496 |
+
|
| 497 |
+

|
| 498 |
+
(b) 600 Training Steps.
|
| 499 |
+
(e) 1500 Training Steps.
|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
(c) 900 Training Steps.
|
| 503 |
+
(f) 3000 Training Steps.
|
| 504 |
+
|
| 505 |
+

|
| 506 |
+
|
| 507 |
+

|
| 508 |
+
|
| 509 |
+

|
| 510 |
+
(a) 300 Training Steps.
|
| 511 |
+
|
| 512 |
+

|
| 513 |
+
(b) 600 Training Steps.
|
| 514 |
+
|
| 515 |
+

|
| 516 |
+
(c) 900 Training Steps.
|
| 517 |
+
(e) 1500 Training Steps.
|
| 518 |
+
Figure 16: Subject-verb agreement interchange intervention accuracies (IIA) for the TOKENHOP model trained without positional encodings, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb. $t_1 \ldots t_4$ represent the four tokens between the verb and its marker.
|
| 519 |
+
|
| 520 |
+

|
| 521 |
+
(d) 1200 Training Steps.
|
| 522 |
+
(f) 3000 Training Steps.
|
| 523 |
+
|
| 524 |
+

|
| 525 |
+
|
| 526 |
+

|
| 527 |
+
|
| 528 |
+

|
| 529 |
+
(a) 300 Training Steps.
|
| 530 |
+
|
| 531 |
+

|
| 532 |
+
(b) 600 Training Steps.
|
| 533 |
+
|
| 534 |
+

|
| 535 |
+
(c) 900 Training Steps.
|
| 536 |
+
(e) 1500 Training Steps.
|
| 537 |
+
Figure 17: Subject-verb agreement interchange intervention accuracies (IIA) for the WORDHOP model trained without positional encodings, with confidence intervals across models trained on 5 different random seeds. Vertical axes denote the GPT-2 layer of the intervention, and horizontal axes denote the token position of the intervention. $t_d$ , $t_s$ , and $t_v$ represent the tokens for the determiner, subject, and verb. $t_1 \ldots t_4$ represent the four words between the verb and its marker.
|
| 538 |
+
|
| 539 |
+

|
| 540 |
+
(d) 1200 Training Steps.
|
| 541 |
+
(f) 3000 Training Steps.
|
2401.06xxx/2401.06416/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a6ae24b849ac80be5bbf197f53a98b0ec070a2cd967238b381bb14d79f2360f4
|
| 3 |
+
size 2434470
|
2401.06xxx/2401.06416/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_content_list.json
ADDED
|
@@ -0,0 +1,1637 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Human-AI Collaborative Essay Scoring: A Dual-Process Framework with LLMs",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
290,
|
| 8 |
+
79,
|
| 9 |
+
709,
|
| 10 |
+
117
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Changrong Xiao $^{1}$ , Wenxing Ma $^{1}$ Qingping Song $^{2}$ Sean Xin Xu $^{1}$ , Kunpeng Zhang $^{3}$ , Yufang Wang $^{4}$ , Qi Fu $^{4}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
196,
|
| 19 |
+
124,
|
| 20 |
+
805,
|
| 21 |
+
158
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ School of Economics and Management, Tsinghua University",
|
| 28 |
+
"bbox": [
|
| 29 |
+
250,
|
| 30 |
+
159,
|
| 31 |
+
749,
|
| 32 |
+
175
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{2}$ Department of Information Systems, City University of Hong Kong",
|
| 39 |
+
"bbox": [
|
| 40 |
+
223,
|
| 41 |
+
175,
|
| 42 |
+
778,
|
| 43 |
+
192
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "$^{3}$ Department of Decision, Operations & Information Technologies, University of Maryland $^{4}$ Beijing Xicheng Educational Research Institute",
|
| 50 |
+
"bbox": [
|
| 51 |
+
134,
|
| 52 |
+
192,
|
| 53 |
+
868,
|
| 54 |
+
225
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "xcr21@mails.tsinghua.edu.cn, mawx21@mails.tsinghua.edu.cn, qisong@cityu.edu.hk, xuxin@sem.tsinghua.edu.cn,",
|
| 61 |
+
"bbox": [
|
| 62 |
+
105,
|
| 63 |
+
227,
|
| 64 |
+
894,
|
| 65 |
+
240
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "kpzhang@umd.edu, wangwang7587@163.com, lilyhoneypot@163.com",
|
| 72 |
+
"bbox": [
|
| 73 |
+
278,
|
| 74 |
+
244,
|
| 75 |
+
721,
|
| 76 |
+
256
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "Abstract",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
260,
|
| 86 |
+
261,
|
| 87 |
+
339,
|
| 88 |
+
275
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "Receiving timely and personalized feedback is essential for second-language learners, especially when human instructors are unavailable. This study explores the effectiveness of Large Language Models (LLMs), including both proprietary and open-source models, for Automated Essay Scoring (AES). Through extensive experiments with public and private datasets, we find that while LLMs do not surpass conventional state-of-the-art (SOTA) grading models in performance, they exhibit notable consistency, generalizability, and explainability. We propose an open-source LLM-based AES system, inspired by the dual-process theory. Our system offers accurate grading and high-quality feedback, at least comparable to that of fine-tuned proprietary LLMs, in addition to its ability to alleviate misgrading. Furthermore, we conduct human-AI co-grading experiments with both novice and expert graders. We find that our system not only automates the grading process but also enhances the performance and efficiency of human graders, particularly for essays where the model has lower confidence. These results highlight the potential of LLMs to facilitate effective human-AI collaboration in the educational context, potentially transforming learning experiences through AI-generated feedback.",
"bbox": [141, 290, 460, 702],
"page_idx": 0
},
{
"type": "text",
"text": "1 Introduction",
"text_level": 1,
"bbox": [114, 717, 258, 732],
"page_idx": 0
},
{
"type": "text",
"text": "Writing practice is an essential component of second-language learning. However, the provision of timely and reliable feedback poses a considerable challenge for educators in China due to the high student-teacher ratio. This limitation hampers students' academic progress, especially for those who are keen on self-directed learning. Automated Essay Scoring (AES) systems provide valuable assistance to students by offering immediate and consistent feedback on their work, while also simplifying the grading process for educators.",
"bbox": [112, 744, 489, 921],
"page_idx": 0
},
{
"type": "image",
"img_path": "images/ec23e240367524bb974703339665fdafa16d168ee5c65d74b2ca96fda8413a9a.jpg",
"image_caption": [
"Figure 1: Our study reveals that LLM-based essay scoring systems can not only automate the grading process, but also elevate novice evaluators to the level of experts."
],
"image_footnote": [],
"bbox": [514, 261, 894, 390],
"page_idx": 0
},
{
"type": "text",
"text": "However, implementing AES systems effectively in real-world educational scenarios presents several challenges. First, the diverse range of exercise contexts and the inherent ambiguity in scoring rubrics complicate the ability of traditional models to deliver accurate scores. Second, interviews with high school teachers indicate that despite receiving accurate score predictions, they must still review essays to mitigate potential errors from the models. Consequently, relying exclusively on this system without human supervision is impractical in real-world scenarios. Thus, there is a clear need for AES systems that not only predict scores accurately but also facilitate effective human-AI collaboration. This should be supported by natural language explanations and additional assistive features to enhance usability.",
"bbox": [507, 472, 884, 730],
"page_idx": 0
},
{
"type": "text",
"text": "To effectively tackle these challenges, it is crucial to highlight the latest advancements in the field of Natural Language Processing (NLP), particularly focusing on the development of large language models (LLMs). LLMs, such as OpenAI's ChatGPT $^{1}$ , not only showcase impressive capabilities of robust logical reasoning but also exhibit a remarkable ability to comprehend and faithfully follow human instructions (Ouyang et al., 2022). Furthermore, recent studies have highlighted the",
"bbox": [507, 732, 885, 892],
"page_idx": 0
},
{
"type": "aside_text",
"text": "arXiv:2401.06431v2 [cs.CL] 15 Jun 2024",
"bbox": [21, 310, 60, 725],
"page_idx": 0
},
{
"type": "page_footnote",
"text": "$^{1}$ https://chat.openai.com",
"bbox": [531, 906, 715, 919],
"page_idx": 0
},
{
"type": "text",
"text": "potential of leveraging LLMs in AES tasks (Mizumoto and Eguchi, 2023; Yancey et al., 2023; Naismith et al., 2023).",
"bbox": [112, 84, 489, 131],
"page_idx": 1
},
{
"type": "text",
"text": "In this study, we explore the potential of proprietary and open-source LLMs such as GPT-3.5, GPT-4, and LLaMA3 for AES tasks. We conducted extensive experiments with public essay-scoring datasets as well as a private collection of student essays to assess the zero-shot and few-shot performance of these models. Additionally, we enhanced their effectiveness through supervised fine-tuning (SFT). Drawing inspiration from the dual-process Theory, we developed an AES system based on LLaMA3 that matches the grading accuracy and feedback quality of fine-tuned LLaMA3. Our human-LLM co-grading experiment further revealed that this system significantly improves the performance and efficiency of both novice and expert graders, offering valuable insights into the educational impacts and potential for effective human-AI collaboration. Overall, our study contributes three major advancements to the field:",
"bbox": [115, 134, 489, 439],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We pioneer the exploration of LLMs' capabilities as AES systems, especially in complex scenarios featuring tailored grading criteria. Leveraging dual-process theory, our novel AES framework demonstrates remarkable accuracy, efficiency, and explainability.",
"- We introduce an extensive essay-scoring dataset, which includes 13,372 essays written by Chinese high school students. These essays are evaluated with multi-dimensional scores by expert educators. This dataset significantly enhances the resources available for AI in Education (AIEd) $^2$ .",
"- Our findings from the human-LLM co-grading task highlight the potential of LLM-generated feedback to elevate the proficiency of individuals with limited domain expertise to a level akin to that of experts. Additionally, it enhances the efficiency and robustness of human graders by integrating model confidence scores and explanations. These insights set the stage for future investigation into human-AI collaboration and AI-assisted learning within educational contexts."
],
"bbox": [136, 460, 489, 881],
"page_idx": 1
},
{
"type": "text",
"text": "2 Related Work",
"text_level": 1,
"bbox": [509, 83, 665, 98],
"page_idx": 1
},
{
"type": "text",
"text": "2.1 Automated Essay Scoring (AES)",
"text_level": 1,
"bbox": [507, 111, 808, 126],
"page_idx": 1
},
{
"type": "text",
"text": "Traditional Methods Automated Essay Scoring (AES) stands as a pivotal research area at the intersection of NLP and education. Traditional AES methods are usually regression-based or classification-based machine learning models (Sultan et al., 2016; Mathias and Bhattacharyya, 2018b,a; Salim et al., 2019) trained with textual features extracted from the target essays. With the advancement of deep learning, AES has witnessed the integration of advanced techniques such as convolutional neural networks (CNNs) (Dong and Zhang, 2016), long short-term memory networks (LSTMs) (Taghipour and Ng, 2016), and also pre-trained language models (Rodriguez et al., 2019; Lun et al., 2020). These innovations have led to more precise score predictions, and state-of-the-art methods are primarily based on Bidirectional Encoder Representations from Transformers (BERT) (Yang et al., 2020; Wang et al., 2022; Boquio and Naval, 2024).",
"bbox": [507, 133, 885, 439],
"page_idx": 1
},
{
"type": "text",
"text": "LLM Applications in AES Recent studies have explored the potential of leveraging the capabilities of modern LLMs in AES tasks. Mizumoto and Eguchi (2023) provided ChatGPT with specific IELTS scoring rubrics for essay evaluation but found limited improvements when incorporating GPT scores into the regression model. In a different approach, Yancey et al. (2023) used GPT-4's few-shot capabilities to predict Common European Framework of Reference for Languages (CEFR) levels for short essays written by second-language learners. However, the Quadratic Weighted Kappa (QWK) scores still did not surpass those achieved by the XGBoost baseline model or human annotators. Similarly, Han et al. (2023); Stahl et al. (2024) introduced prompting frameworks that did not outperform the conventional baselines.",
"bbox": [507, 451, 884, 724],
"page_idx": 1
},
{
"type": "text",
"text": "2.2 AI-Assisted Decision Making",
"text_level": 1,
"bbox": [507, 738, 784, 753],
"page_idx": 1
},
{
"type": "text",
"text": "Researchers have extensively investigated human-AI teams, in which AI supports the decision-making process by providing recommendations or suggestions, while the human remains responsible for the final decision (van den Bosch et al., 2019). The objective of such human-AI collaboration is to achieve complementary performance, where the combined team performance exceeds that of either party operating independently (Bansal et al., 2021). To realize this, it is crucial to design an",
"bbox": [507, 760, 885, 921],
"page_idx": 1
},
{
"type": "page_footnote",
"text": "$^{2}$ Codes and resources can be found in https://github.com/Xiaochr/LLM-AES",
"bbox": [114, 894, 489, 919],
"page_idx": 1
},
{
"type": "text",
"text": "AI-assisted decision-making process that allows humans to effectively monitor and counteract any unpredictable or undesirable behavior exhibited by AI models (Eigner and Handler, 2024). This design aims to leverage the strengths of both humans and AI to enhance overall performance (Holstein and Aleven, 2022). To our knowledge, no studies have yet investigated AES systems from this angle of collaborative co-grading.",
"bbox": [112, 84, 489, 230],
"page_idx": 2
},
{
"type": "text",
"text": "2.3 Dual-Process Theory",
"text_level": 1,
"bbox": [112, 242, 326, 258],
"page_idx": 2
},
{
"type": "text",
"text": "Recent studies have developed architectures that imitate human cognitive processes to enhance the capabilities of LLMs, particularly in reasoning and planning (Benfeghoul et al., 2024). According to dual-process theory in psychology (Wason and Evans, 1974; Kahneman, 2011), human cognition operates via two distinct systems: System 1 involves rapid, intuitive \"Fast Thinking\", while System 2 entails conscious and deliberate \"Slow Thinking\" processes. LLM architectures inspired by this theory have been implemented in complex interactive tasks (Lin et al., 2024; Tian et al., 2023), aiming to mitigate issues like social biases (Kamruzzaman and Kim, 2024) and hallucination (Bellini-Leite, 2023). These adaptations have demonstrated improved performances in various areas.",
"bbox": [112, 263, 489, 521],
"page_idx": 2
},
{
"type": "text",
"text": "3 Data",
"text_level": 1,
"bbox": [112, 535, 191, 550],
"page_idx": 2
},
{
"type": "text",
"text": "ASAP dataset The Automated Student Assessment Prize (ASAP $^3$ ) dataset stands as one of the most commonly used publicly accessible resources for Automated Essay Scoring (AES) tasks. This comprehensive dataset comprises a total of 12,978 essays, encompassing responses to 8 distinct prompts. Each essay has been evaluated and scored by human annotators. Essay sets are also accompanied by detailed scoring rubrics, each tailored with unique scoring guidelines and score ranges. These intricacies are essential as they cater to the multifaceted requirements and diverse scenarios of AES.",
"bbox": [112, 562, 489, 755],
"page_idx": 2
},
{
"type": "text",
"text": "Our Chinese Student English Essay (CSEE) dataset We have developed a novel English essay scoring dataset specifically designed for AES tasks. The dataset was carefully curated in collaboration with 29 high schools in China, encompassing a total of 13,372 student essays responding to two distinct prompts used in final exams. The evaluation of these essays was carried out by highly experienced",
"bbox": [112, 766, 489, 896],
"page_idx": 2
},
{
"type": "text",
"text": "English teachers following the scoring guidelines of the Chinese National College Entrance Examination (Table 8). Scoring was comprehensively assessed across three critical dimensions: Content, Language, and Structure, with an Overall Score ranging from 0 to 20. More descriptions of the two datasets are presented in Appendix A.",
"bbox": [507, 84, 884, 198],
"page_idx": 2
},
{
"type": "text",
"text": "4 Methods",
"text_level": 1,
"bbox": [507, 209, 620, 224],
"page_idx": 2
},
{
"type": "text",
"text": "In this section, we present the details of the models used in this study, including traditional AES baselines, LLM-based approaches, and our proposed Fast and Slow Thinking AES framework.",
"bbox": [507, 236, 882, 300],
"page_idx": 2
},
{
"type": "text",
"text": "4.1 Traditional Essay Scoring Baselines",
"text_level": 1,
"bbox": [507, 312, 835, 328],
"page_idx": 2
},
{
"type": "text",
"text": "BERT Classifier Similar to the model used in Yang et al. (2020); Han et al. (2023)'s work, we implemented a simple yet effective baseline model for score prediction based on BERT. This model integrated a fully connected prediction layer following the BERT output, and the BERT parameters remained unfrozen during training. Both the BERT model and the prediction layer were jointly trained on the training essay set (details in Appendix B).",
"bbox": [507, 332, 882, 479],
"page_idx": 2
},
{
"type": "text",
"text": "SOTA baselines We also incorporate models such as $R^2$ BERT (Yang et al., 2020) and Tran-BERT-MS-ML-R (Wang et al., 2022), which represent the highest levels of performance in the ASAP AES task. These models serve as the high-level benchmarks against which we evaluate the performance of our LLM-based models.",
"bbox": [507, 488, 882, 600],
"page_idx": 2
},
{
"type": "text",
"text": "4.2 Prompting LLMs",
"text_level": 1,
"bbox": [507, 612, 694, 627],
"page_idx": 2
},
{
"type": "text",
"text": "We considered various prompting strategies, including zero-shot and few-shot settings, with or without detailed rubric context. An illustrative example of a prompt and its corresponding model-generated output can be found in Table 9 in the Appendices.",
"bbox": [507, 634, 882, 714],
"page_idx": 2
},
{
"type": "text",
"text": "GPT-4, zero-shot, without rubrics In this setting, we simply provide the prompt and the target essay to GPT-4. The model then evaluates the essay and assigns a score based on its comprehension within the specified score range.",
"bbox": [507, 724, 882, 804],
"page_idx": 2
},
{
"type": "text",
"text": "GPT-4, zero-shot, with rubrics Alongside the prompt and the target essay, we also provide GPT-4 with explicit scoring rubrics, guiding its evaluation.",
"bbox": [507, 815, 882, 863],
"page_idx": 2
},
{
"type": "text",
"text": "GPT-4, few-shot, with rubrics In addition to the zero-shot settings, the few-shot prompts include sample essays and their corresponding scores. This",
"bbox": [507, 872, 882, 921],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "$^{3}$ https://www.kaggle.com/c/asap-aes.",
"bbox": [134, 906, 400, 920],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/a2851310426deeaa5b118291d04b659121216f30a8c8f79201d764cc3f340e0a.jpg",
"image_caption": [
"Figure 2: Our proposed Fast and Slow Thinking AES framework."
],
"image_footnote": [],
"bbox": [137, 83, 860, 184],
"page_idx": 3
},
{
"type": "text",
"text": "assists GPT-4 in understanding the latent scoring patterns. With the given prompt, target essay, scoring rubrics, and a set of $k$ essay examples, GPT-4 provides an appropriate score reflecting this enriched context. See Appendix C for details.",
"bbox": [112, 234, 487, 315],
"page_idx": 3
},
{
"type": "text",
"text": "In all these configurations, we adopted the Chain-of-Thought (CoT) (Wei et al., 2022) strategy. This approach instructed the LLM to analyze and explain the provided materials before making final score determinations. Studies (Lampinen et al., 2022; Zhou et al., 2023; Li et al., 2023) have shown that this structured approach significantly enhances the capabilities of the LLM, optimizing performance in tasks that require inference and reasoning.",
"bbox": [112, 319, 489, 464],
"page_idx": 3
},
{
"type": "text",
"text": "4.3 Fine-tuning LLMs",
"text_level": 1,
"bbox": [112, 492, 305, 508],
"page_idx": 3
},
{
"type": "text",
"text": "We conducted additional investigations into the effectiveness of supervised fine-tuning methods. Given that the ASAP and our CSEE dataset only include scores without expert explanations, we augmented these original datasets with explanations generated by GPT-4. To guide the explanation generation process, we provided GPT-4 with a few expert-curated explanations and a structured template. By organizing the data into an instructional format, we created fine-tuning inputs that enable the LLMs to not only generate accurate scores but also provide high-quality feedback.",
"bbox": [112, 523, 489, 715],
"page_idx": 3
},
{
"type": "text",
"text": "We first fine-tuned OpenAI's GPT-3.5-turbo, one of the best-performing LLMs. However, due to the proprietary nature of GPT-3.5 and considerations such as data privacy, training and inference costs, and flexibility in fine-tuning, we also fine-tune an LLaMA3-8B $^{4}$ model with both original and augmented datasets. This recent open-source model mitigates these concerns and has remarkable capabilities, making it a more practical choice for use in educational scenarios.",
"bbox": [112, 721, 489, 881],
"page_idx": 3
},
{
"type": "text",
"text": "4.4 Our Proposed Method",
"text_level": 1,
"bbox": [507, 234, 732, 250],
"page_idx": 3
},
{
"type": "text",
"text": "As previously mentioned, score prediction and explanation generation are distinct but interrelated tasks within the context of AES. Explanation generation, which covers the evaluation of content, language, and structure, necessitates deliberate and meticulous reasoning. On the other hand, score prediction can either be a swift process based on intuition and experiences or concluded after step-by-step inference. These features align with the idea of dual-process theory. Consequently, we have designed an integrated system that includes separate modules for each task: the Fast Module for quick score prediction and the Slow Module for detailed explanation generation. The framework of our proposed AES system is shown in Figure 2.",
"bbox": [505, 256, 884, 498],
"page_idx": 3
},
{
"type": "text",
"text": "Slow Module: Fine-tuned LLM The Slow Module forms the core of our AES system, capable of analyzing essays in depth, providing evidence based on specific rubrics, and deriving appropriate scores. This detailed process is time-intensive but yields valuable natural language reasoning that informs the final grading decision. In this study, we implemented the fine-tuned LLaMA3-8B as the Slow Module. It is worth noting that this module is interchangeable with any other qualified LLM, demonstrating the flexibility of our framework.",
"bbox": [507, 508, 884, 684],
"page_idx": 3
},
{
"type": "text",
"text": "Fast Module: Lightweight Classifier In many cases, swift score prediction is preferable to detailed reasoning. To reduce the time and computational cost associated with generating detailed explanations, we introduced a simple fully connected layer as a bypass before the initiation of text generation by the Slow Module. By using only the embeddings of the input text, the Fast Module not only conserves resources but also leverages the latent knowledge acquired during the fine-tuning of the Slow Module, aligning with the 'intuitive' facet of Fast Thinking.",
"bbox": [507, 695, 884, 888],
"page_idx": 3
},
{
"type": "text",
"text": "When to switch from the Fast to Slow Thinking module is one of the challenges in the design of",
"bbox": [507, 889, 882, 921],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "$^{4}$ https://llama.meta.com/llama3/",
"bbox": [134, 904, 371, 920],
"page_idx": 3
},
{
"type": "text",
"text": "dual-process LLM. Previous frameworks employed heuristic rules or error feedback as the triggering criteria (Lin et al., 2024; Tian et al., 2023), which might be impractical in real-world cases. Our Fast module also calculates the probabilities of each possible output score, which we standardize and treat as confidence scores. Predictions with low confidence are considered unreliable, triggering the Slow Module for self-reflection, or passing to external judges (either human or AI). This design aims to enhance essay scoring accuracy and foster effective human-AI collaboration, potentially elevating the complementary team performance.",
"bbox": [115, 84, 485, 292],
"page_idx": 4
},
{
"type": "text",
"text": "For training, we first fine-tune the Slow Module using our explanation-augmented dataset. Subsequently, we employ the Slow Module to derive input embeddings, which, paired with the rated scores, are used to train the Fast Classifier from scratch. During inference, essay inputs initially pass through the fine-tuned LLM and are transformed into the embedding format. They are then processed by the Fast Module to quickly derive scores. The Slow Module is activated only when prediction confidence is low or based on specific additional requirements.",
"bbox": [115, 294, 485, 486],
"page_idx": 4
},
{
"type": "text",
"text": "5 Experimental Results",
"text_level": 1,
"bbox": [115, 501, 332, 517],
"page_idx": 4
},
{
"type": "text",
"text": "5.1 Performance of LLM-based Methods",
"text_level": 1,
"bbox": [115, 527, 450, 542],
"page_idx": 4
},
{
"type": "text",
"text": "We conducted experiments across all eight subsets of the ASAP dataset using both the LLM-based methods and baseline approaches. We adopted Cohen's Quadratic Weighted Kappa (QWK) as our primary evaluation metric, which is the most widely recognized automatic metric in AES tasks (Ramesh and Sanampudi, 2022). A higher QWK value indicates a greater degree of agreement between the predicted score and the ground truth. For methods requiring a training dataset, we divided the data for each subset using an 80:20 split ratio between training and testing.",
"bbox": [115, 550, 485, 743],
"page_idx": 4
},
{
"type": "text",
"text": "Our extensive experiments, as detailed in Table 1, revealed that despite using carefully curated prompts and providing detailed context, the zero-shot and few-shot capabilities of GPT-4 did not yield high QWK scores on the ASAP dataset. In zero-shot scenarios, GPT-4's performance was notably low, with some subsets scoring nearly as poorly as random guessing. For instance, Set 1 recorded a QWK of 0.0423 and Set 7 a QWK of 0.0809. This underperformance may be due to the broad scoring ranges and complex rubrics in",
"bbox": [115, 745, 485, 919],
"page_idx": 4
},
{
"type": "text",
"text": "ASAP, suggesting that even advanced LLMs like GPT-4 may struggle to fully comprehend and adhere to complicated human instructions. In few-shot settings, although there was an improvement in scoring performance, particularly for Sets 4-6, GPT-4 still significantly lagged behind SOTA grading methods. This is consistent with findings from recent studies that utilize LLMs for essay scoring.",
"bbox": [512, 84, 882, 212],
"page_idx": 4
},
{
"type": "text",
"text": "When fine-tuned with the training dataset, the LLMs demonstrated significantly improved performance compared to the zero-shot and few-shot results, with QWK scores generally exceeding 0.7. However, these fine-tuned LLMs still did not surpass traditional SOTA methods. Within our framework, the performance of the fine-tuned open-source LLaMA3-8B was comparable to that of fine-tuned proprietary models. Even simple supervised fine-tuning (SFT) of LLaMA3 achieved notable results, suggesting that open-source LLMs might be a cost-effective choice for AES tasks. The findings from our CSEE dataset (see Table 2) align with those on the ASAP dataset, indicating that our framework predicts reliable scores across content, language, and structure dimensions.",
"bbox": [512, 212, 882, 469],
"page_idx": 4
},
{
"type": "text",
"text": "Although LLMs do not match traditional methods in terms of scoring accuracy, they excel at generating detailed explanations, benefiting both educators and students. Notably, when trained to produce both scores and explanations in a single output (our proposed Slow Module), LLaMA3-8B experienced a performance drop in grading accuracy. This decrease may be attributed to the model's optimization process, where numerical score values are treated similarly to textual data in the output, leading to suboptimal accuracy. In our Fast and Slow Thinking framework, however, separating numerical from textual outputs and integrating them based on a trigger condition improved the QWK scores, enhancing collaborative performance. Additionally, we evaluated the quality of explanations generated by our AES system against those produced by GPT-4. Through a comparison competition among crowdsourced workers, analyzing 20 sets of paired essay grading explanations, our system achieved a win rate of $35\\%$ , a tie rate of $40\\%$ , and a loss rate of $25\\%$ . These results demonstrate that our explanations are of high quality and comparable to those generated by GPT-4.",
"bbox": [512, 470, 882, 856],
"page_idx": 4
},
{
"type": "text",
"text": "5.2 Further Analyses",
"text_level": 1,
"bbox": [512, 868, 690, 883],
"page_idx": 4
},
{
"type": "text",
"text": "Consistency To assess the consistency of scores predicted by LLM-based methods, we conducted",
"bbox": [512, 890, 880, 919],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/4c738240096fe1a1bcf9dd2485957c4f5389a01d0edffe458dd0f9fc3463f7f1.jpg",
"table_caption": [
"Table 1: Comparison of QWK scores for LLM-based methods and the baselines under the ASAP dataset. The \"E.\" column indicates whether the model output includes natural language explanations alongside the predicted scores."
],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>E.</td><td>Set 1</td><td>Set 2</td><td>Set 3</td><td>Set 4</td><td>Set 5</td><td>Set 6</td><td>Set 7</td><td>Set 8</td><td>Avg.</td></tr><tr><td>BERT Classifier</td><td>✘</td><td>0.6486</td><td>0.6284</td><td>0.7327</td><td>0.7669</td><td>0.7432</td><td>0.6810</td><td>0.7165</td><td>0.4624</td><td>0.6725</td></tr><tr><td>Tran-BERT-MS-ML-R</td><td>✘</td><td>0.8340</td><td>0.7160</td><td>0.7140</td><td>0.8120</td><td>0.8130</td><td>0.8360</td><td>0.8390</td><td>0.7660</td><td>0.7910</td></tr><tr><td>R2BERT</td><td>✘</td><td>0.8170</td><td>0.7190</td><td>0.6980</td><td>0.8450</td><td>0.8410</td><td>0.8470</td><td>0.8390</td><td>0.7440</td><td>0.7940</td></tr><tr><td>GPT-4, zero-shot, w/o rubrics</td><td>✓</td><td>0.0423</td><td>0.4017</td><td>0.2805</td><td>0.5571</td><td>0.3659</td><td>0.5021</td><td>0.0809</td><td>0.4188</td><td>0.3312</td></tr><tr><td>GPT-4, zero-shot, with rubrics</td><td>✓</td><td>0.0715</td><td>0.3003</td><td>0.3661</td><td>0.6266</td><td>0.5227</td><td>0.3448</td><td>0.1101</td><td>0.4072</td><td>0.3437</td></tr><tr><td>GPT-4, few-shot, with rubrics</td><td>✓</td><td>0.2801</td><td>0.3376</td><td>0.3308</td><td>0.7839</td><td>0.6226</td><td>0.7284</td><td>0.2570</td><td>0.4541</td><td>0.4743</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>✘</td><td>0.7406</td><td>0.6183</td><td>0.7041</td><td>0.8593</td><td>0.7959</td><td>0.8480</td><td>0.7271</td><td>0.6135</td><td>0.7384</td></tr><tr><td>Fine-tuned LLaMA3</td><td>✘</td><td>0.7137</td><td>0.6696</td><td>0.6558</td><td>0.7712</td><td>0.7452</td><td>0.7489</td><td>0.6938</td><td>0.2952</td><td>0.6617</td></tr><tr><td>Ours</td><td>✓</td><td>0.7612</td><td>0.6517</td><td>0.7238</td><td>0.8093</td><td>0.8118</td><td>0.7764</td><td>0.7071</td><td>0.4885</td><td>0.7162</td></tr><tr><td>Fast module</td><td>✘</td><td>0.7580</td><td>0.6395</td><td>0.7228</td><td>0.7995</td><td>0.8023</td><td>0.7753</td><td>0.7157</td><td>0.5075</td><td>0.7151</td></tr><tr><td>Slow module</td><td>✓</td><td>0.6048</td><td>0.5621</td><td>0.5700</td><td>0.6992</td><td>0.6774</td><td>0.5943</td><td>0.5772</td><td>0.2677</td><td>0.5691</td></tr></table>",
"bbox": [114, 122, 892, 326],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/13aa424b22182c546e91dafcac94b241fdbf34d3e8a9f4c205ddccfb71f8280c.jpg",
"table_caption": [
"Table 2: Comparison of QWK scores for LLM-based methods and the baselines under our CSEE dataset. The \"E.\" column indicates whether the model output includes natural language explanations alongside the predicted scores."
],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>E.</td><td>Overall</td><td>Content</td><td>Language</td><td>Structure</td></tr><tr><td>BERT Classifier</td><td>✘</td><td>0.7674</td><td>0.7312</td><td>0.7203</td><td>0.6650</td></tr><tr><td>GPT-4, zero-shot, w/o rubrics</td><td>✓</td><td>0.4688</td><td>0.4412</td><td>0.3081</td><td>0.5757</td></tr><tr><td>GPT-4, zero-shot, with rubrics</td><td>✓</td><td>0.5344</td><td>0.5391</td><td>0.4660</td><td>0.4256</td></tr><tr><td>GPT-4, few-shot, with rubrics</td><td>✓</td><td>0.6729</td><td>0.6484</td><td>0.6278</td><td>0.4661</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>✘</td><td>0.7532</td><td>0.7241</td><td>0.7513</td><td>0.6576</td></tr><tr><td>Fine-tuned LLaMA3</td><td>✘</td><td>0.7544</td><td>0.7321</td><td>0.7084</td><td>0.6461</td></tr><tr><td>Ours</td><td>✓</td><td>0.7634</td><td>0.7347</td><td>0.7192</td><td>0.6656</td></tr><tr><td>Fast module</td><td>✘</td><td>0.7364</td><td>0.7272</td><td>0.7072</td><td>0.6627</td></tr><tr><td>Slow module</td><td>✓</td><td>0.7310</td><td>0.6810</td><td>0.6990</td><td>0.6412</td></tr></table>",
"bbox": [242, 379, 754, 557],
"page_idx": 5
},
{
"type": "text",
"text": "the same experiment three times, each with the temperature parameter of the LLMs set to 0. We observed that over $80\\%$ of the ratings remained unchanged across these trials, indicating a high level of consistency. We then computed the average of these three values to determine the final results.",
"bbox": [112, 581, 487, 677],
"page_idx": 5
},
{
"type": "text",
"text": "Generalizability The eight subsets of the ASAP dataset, featuring diverse scoring criteria and ranges, serve as an excellent framework for evaluating the generalization capabilities of models. For methods such as fine-tuning and traditional baselines that require training data, we first trained the models on one subset and then assessed their performance across the remaining datasets. For example, we trained on Set 1 and tested on Sets 2-8, keeping the model weights fixed. We selected fine-tuned GPT-3.5 and the BERT Classifier to represent LLM-based and traditional methods, respectively. As detailed in Table 7, our fine-tuned GPT-3.5 generally outperformed the BERT classifier, although",
"bbox": [112, 696, 489, 921],
"page_idx": 5
},
{
"type": "text",
"text": "there were instances of underperformance, notably when trained on Set 4 and tested on Sets 1 and 7. The BERT classifier showed particularly weak generalization when trained on Sets 7 and 8, performing close to random guessing.",
"bbox": [507, 581, 884, 661],
"page_idx": 5
},
{
"type": "text",
"text": "Prediction Confidence and Self-Reflection To assess the reliability of confidence scores, we segmented the test samples based on the output confidence and observed a strong correlation between these scores and model performance in Figure 3. The trigger condition for switching from the Fast to the Slow Module is set when the confidence score falls below 0.2. Although the Slow Module generally exhibits lower performance compared to the Fast Module, the overall performance of the integrated system improved. This enhancement suggests that employing detailed reasoning for cases with low confidence is an effective grading strategy.",
"bbox": [507, 670, 885, 879],
"page_idx": 5
},
{
"type": "text",
"text": "Time Efficiency Training the Slow Module for each epoch with our explanation-augmented",
"bbox": [507, 889, 882, 921],
"page_idx": 5
},
{
"type": "text",
"text": "dataset requires around 2 hours using an RTX 4090 24GB GPU, and the inference process consumes about 10 GPU hours. Meanwhile, training the Fast Module takes less than 0.5 hours, and scoring predictions are completed in just 0.2 hours. Our proposed framework, which incorporates a confidence trigger condition, offers an effective trade-off by enhancing both accuracy and efficiency.",
"bbox": [112, 84, 489, 212],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/3758104f48bda874e029580351407956c65f4627a86a02ccd78e516d3f5d5358.jpg",
"image_caption": [
"Figure 3: QWK scores of our Fast module in different confidence levels."
],
"image_footnote": [],
"bbox": [131, 233, 453, 376],
"page_idx": 6
},
{
"type": "text",
"text": "6 Human-AI Co-Grading Experiment",
"text_level": 1,
"bbox": [112, 445, 460, 464],
"page_idx": 6
},
{
"type": "text",
"text": "Given that the AES system not only provides score predictions but also functions as a teammate to educators, we further explore the effectiveness of our proposed system in assisting human grading.",
"bbox": [112, 472, 489, 536],
"page_idx": 6
},
{
"type": "text",
"text": "6.1 Experiment Design",
"text_level": 1,
"bbox": [112, 546, 312, 562],
"page_idx": 6
},
{
"type": "text",
"text": "To investigate the performance of human-only, AI-only, and human-AI team collaboration, we conducted a two-stage within-group experiment. We randomly selected 50 essays from the test set of our CSEE dataset, all on the same topic. We recruited 10 college students from a Normal University in Beijing, who are prospective high school teachers with no current grading experience, to serve as novice evaluators. Additionally, 5 experienced high school English teachers participated as expert evaluators. Initially, all evaluators graded the essays independently using standard rubrics. Subsequently, they were provided with the scores, prediction confidence levels, and explanations generated by our AES system and had the option to revise their initial scores based on this augmented information. To gather feedback on the process, we distributed questionnaires where evaluators rated their experience on a 5-point Likert scale, with higher scores indicating better-perceived performance.",
"bbox": [112, 567, 489, 888],
"page_idx": 6
},
{
"type": "text",
"text": "In short, we mainly focus on the following research questions:",
"bbox": [112, 889, 489, 921],
"page_idx": 6
},
{
"type": "text",
"text": "- Can novice and expert human evaluators achieve complementary performance in terms of accuracy and efficiency using the proposed AES system and collaborative workflow?",
"bbox": [531, 84, 882, 149],
"page_idx": 6
},
{
"type": "text",
"text": "- Does the design of prediction confidence and explanation generation contribute to performance improvements?",
"bbox": [531, 159, 884, 206],
"page_idx": 6
},
{
"type": "text",
"text": "6.2 Results",
"text_level": 1,
"bbox": [509, 219, 613, 233],
"page_idx": 6
},
{
"type": "text",
"text": "Feedback generated by LLM elevates novice evaluators to expert level. As depicted in Figure 4 and Table 3, our findings reveal that novice graders, with the assistance of LLM-generated feedback (including both scores and explanations), achieved a significant improvement in performance. Their average QWK improved from 0.5256 to 0.6609, with a p-value of less than 0.01. Furthermore, when comparing the performance of LLM-assisted novice evaluators (mean QWK of 0.6609) to that of expert graders (mean QWK of 0.7117), no statistical difference was found between the two groups $(\\mathrm{p - value} = 0.27)$ . This indicates that with LLM support, novice evaluators achieved a level of grading proficiency comparable to that of experienced experts. Similar trends were observed in the scores for content, language, and structure, with detailed results presented in Table 5.",
"bbox": [507, 239, 885, 529],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/18ac40de44c36cc55131ff771dbdefadca27172ef4833dd2573f10acda34b6b3.jpg",
"table_caption": [
"Table 3: $t$ -test of QWK scores for different experimental groups. Diff. means the difference of means between the two groups of QWK scores."
],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>Diff.</td><td>t statistic</td><td>p-value</td></tr><tr><td>Expert vs. Novice</td><td>0.1860***</td><td>3.2152</td><td>0.0068</td></tr><tr><td>Novice+LLM vs. Novice</td><td>0.1353***</td><td>2.8882</td><td>0.0098</td></tr><tr><td>Expert+LLM vs. Expert</td><td>0.0617</td><td>1.7128</td><td>0.1251</td></tr><tr><td>Novice+LLM vs. Expert</td><td>-0.0508</td><td>-1.1566</td><td>0.2682</td></tr></table>",
"bbox": [509, 594, 892, 671],
"page_idx": 6
},
{
"type": "text",
"text": "Feedback generated by LLM boosts expert efficiency and consistency. The integration of LLM-generated feedback into the expert grading process led to an increase in the average QWK from 0.7117 to 0.7734, which also surpassed the performance of AES systems (a QWK of 0.7302) for these essay samples, thereby achieving superior complementary performance (where the Human-AI team outperforms both individual human and AI). Although this improvement is not statistically significant ( $p$ -value = 0.13), the benefits of LLM augmentation for experts were evident in several other aspects. According to self-report questionnaires (refer to Table 4), experts required less time to complete",
"bbox": [507, 696, 885, 921],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/bb6c48bd4156ff6f6a85bf8e448b492ffa69215b3ac5edab0671ba3303f6c9fc.jpg",
"image_caption": [
"Figure 4: QWK of the overall score in the LLM-assisted co-grading experiment for the novice and expert evaluators. The blue bar and orange bar of the LLM column indicate the performance of our Fast module and the integrated system, respectively."
],
"image_footnote": [],
"bbox": [161, 86, 448, 244],
"page_idx": 7
},
{
"type": "text",
"text": "grading tasks when assisted by the LLM. Furthermore, a reduction in the standard deviation of expert ratings was observed, indicating a higher level of consensus among experts. This suggests that LLM-generated feedback leads to more consistent evaluations of student essays. Experienced domain experts also commended the accuracy and practicality of the LLM-generated feedback, particularly praising the prediction confidence mechanism which alerted them to scrutinize more challenging cases. These findings highlight the potential to augment the human grading process with our AES system in real-world educational environments.",
"bbox": [112, 363, 489, 573],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/683fe079d8ae50ceddadba9431ae320d7e07ec67e9923d782d7551bc0cc0258e.jpg",
"table_caption": [
"Table 4: Experts' feedback after grading student essays with the support of the LLM-based system."
],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>Score</td></tr><tr><td>Perceived accuracy of LLM overall score</td><td>4.3/5</td></tr><tr><td>Perceived accuracy of LLM content score</td><td>4.0/5</td></tr><tr><td>Perceived accuracy of LLM language score</td><td>3.9/5</td></tr><tr><td>Perceived accuracy of LLM structure score</td><td>3.8/5</td></tr><tr><td>Helpfulness of the predicted scores</td><td>4.6/5</td></tr><tr><td>Helpfulness of the confidence scores</td><td>4.8/5</td></tr><tr><td>Helpfulness of LLM explanations</td><td>4.7/5</td></tr><tr><td>Efficiency of LLM assistance</td><td>4.4/5</td></tr><tr><td>Willingness to use our AES system</td><td>4.3/5</td></tr></table>",
"bbox": [137, 624, 463, 768],
"page_idx": 7
},
{
"type": "text",
"text": "The Importance of Prediction Confidence and Explanations We previously assessed the reliability of prediction confidence from our Fast Module and noted a modest improvement in model performance after self-reflection by the Slow Module (as shown in the LLM column of Figure 4). In the context of human-AI collaboration, we focused on cases where the predicted scores presented to",
"bbox": [112, 791, 489, 921],
"page_idx": 7
},
{
"type": "text",
"text": "human evaluators were of low confidence (below 0.2). We observed that the overall QWK scores for expert and novice evaluators were 0.6809 and 0.5680. These QWK values, lower than the average human performances, suggest that these essays are inherently challenging to grade, even for humans. However, human performances exceeded that of the LLM Slow Module's 0.5478 QWK, achieving complementary team performance. These findings support a practical, intuitive LLM-assisted decision-making workflow: the model manages routine cases with high confidence and minimal human intervention, while low-confidence cases are presented to human collaborators for in-depth analysis and final decision-making.",
"bbox": [507, 84, 884, 326],
"page_idx": 7
},
{
"type": "text",
"text": "7 Conclusion",
"text_level": 1,
"bbox": [509, 343, 640, 357],
"page_idx": 7
},
{
"type": "text",
"text": "In this study, we explored the capabilities of LLMs within AES systems. With detailed contexts, clear rubrics, and high-quality examples, GPT-4 demonstrated satisfactory performance, consistency, and generalizability. Further accuracy enhancements were achieved through supervised fine-tuning using task-specific instruction datasets, bringing LLM performance close to conventional SOTA methods. To leverage the LLMs' ability to generate natural language explanations along with predicted scores, we introduced an open-source Fast and Slow Thinking AES framework. This framework not only matches the quality of proprietary models but also offers greater efficiency.",
"bbox": [507, 372, 884, 598],
"page_idx": 7
},
{
"type": "text",
"text": "Our research extended into human-AI co-grading experiments within this new framework. A notable finding was that LLMs not only automated the grading process but also augmented the grading skills of human evaluators. Novice graders, with support from our AES framework, reached accuracy levels comparable to those of experienced graders, while expert graders showed gains in efficiency and consistency. The collaboration between humans and AI particularly enhanced performance in handling low-confidence cases, demonstrating a significant synergy that approached the upper limits of team performance. These results highlight the transformative potential of AI-assisted and human-centered decision-making workflows, especially in elevating those with limited domain knowledge to expert-level proficiency. This study illuminates promising future directions for human-AI collaboration and underscores the evolving role of AI in educational contexts.",
"bbox": [507, 599, 884, 920],
"page_idx": 7
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [114, 84, 220, 98],
"page_idx": 8
},
{
"type": "text",
"text": "This study has certain limitations. Firstly, although our CSEE dataset includes a substantial number of student essays, these essays originate from only two final exams designed for high school English learners in China. This raises concerns about the robustness of our proposed AES system when applied to a broader range of topics and diverse student demographics. Secondly, our human-AI collaboration experiment, while indicative of promising directions for future human-AI co-grading tasks, is a pilot study that yields general results. Further experiments are necessary to thoroughly explore the mechanisms of complementary team performance, such as identifying circumstances under which humans are likely to recognize and correct their errors following AI feedback, or instances where unreliable AI feedback could potentially mislead them. A deeper understanding of these collaboration mechanisms will enable researchers to develop AES systems that offer more effective support to educators.",
"bbox": [115, 112, 489, 448],
"page_idx": 8
},
{
"type": "text",
"text": "Ethical Considerations",
"text_level": 1,
"bbox": [114, 464, 315, 479],
"page_idx": 8
},
{
"type": "text",
"text": "We secured Institutional Review Board (IRB) approval for both the data collection and the human-AI co-grading experiment (details provided in the online materials). In our CSEE dataset, all personal information concerning the students has been anonymized to safeguard their privacy. The dataset comprises solely of essays and the corresponding scores, omitting any additional information that might raise ethical concerns. However, details of the data annotation process remain undisclosed to us, including the number of teachers involved in the scoring and the level of inter-annotator agreement among them. We have also obtained explicit consent to use the data exclusively for research purposes from both teachers and students.",
"bbox": [115, 492, 489, 732],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [114, 763, 213, 778],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI conference on human factors in computing systems, pages 1-16.",
|
| 1171 |
+
"Samuel C Bellini-Leite. 2023. Dual process theory for large language models: An overview of using"
|
| 1172 |
+
],
|
| 1173 |
+
"bbox": [
|
| 1174 |
+
115,
|
| 1175 |
+
787,
|
| 1176 |
+
489,
|
| 1177 |
+
921
|
| 1178 |
+
],
|
| 1179 |
+
"page_idx": 8
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "list",
|
| 1183 |
+
"sub_type": "ref_text",
|
| 1184 |
+
"list_items": [
|
| 1185 |
+
"psychology to address hallucination and reliability issues. Adaptive Behavior.",
|
| 1186 |
+
"Martin Benfeghoul, Umais Zahid, Qinghai Guo, and Zafeirios Fountas. 2024. When in doubt, think slow: Iterative reasoning with latent imagination. arXiv preprint arXiv:2402.15283.",
|
| 1187 |
+
"Eujene Nikka V. Boquio and Prospero C. Naval, Jr. 2024. Beyond canonical fine-tuning: Leveraging hybrid multi-layer pooled representations of BERT for automated essay scoring. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2285-2295, Torino, Italia. ELRA and ICCL.",
|
| 1188 |
+
"Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring—an empirical study. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1072-1077.",
|
| 1189 |
+
"Eva Eigner and Thorsten Händler. 2024. Determinants of lIm-assisted decision-making. arXiv preprint arXiv:2402.17385.",
|
| 1190 |
+
"Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, Hyunseung Lim, Yoonsu Kim, Tak Yeon Lee, Hwajung Hong, Juho Kim, So-Yeon Ahn, et al. 2023. Fabric: Automated scoring and feedback generation for essays. arXiv preprint arXiv:2310.05191.",
|
| 1191 |
+
"Kenneth Holstein and Vincent Aleven. 2022. Designing for human-ai complementarity in k-12 education. AI Magazine, 43(2):239-248.",
|
| 1192 |
+
"D. Kahneman. 2011. Thinking, Fast and Slow. Harvard Library e-reader collection. Farrar, Straus and Giroux.",
|
| 1193 |
+
"Mahammed Kamruzzaman and Gene Louis Kim. 2024. Prompting techniques for reducing social bias in llms through system 1 and system 2 cognitive processes. arXiv preprint arXiv:2404.17218.",
|
| 1194 |
+
"Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations.",
|
| 1195 |
+
"Andrew Lampinen, Ishita Dasgupta, Stephanie Chan, Kory Mathewson, Mh Tessler, Antonia Creswell, James McClelland, Jane Wang, and Felix Hill. 2022. Can language models learn from explanations in context? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 537-563, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
|
| 1196 |
+
"Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315-5333, Toronto, Canada. Association for Computational Linguistics."
|
| 1197 |
+
],
|
| 1198 |
+
"bbox": [
|
| 1199 |
+
510,
|
| 1200 |
+
85,
|
| 1201 |
+
884,
|
| 1202 |
+
921
|
| 1203 |
+
],
|
| 1204 |
+
"page_idx": 8
|
| 1205 |
+
},
|
| 1206 |
+
{
|
| 1207 |
+
"type": "list",
|
| 1208 |
+
"sub_type": "ref_text",
|
| 1209 |
+
"list_items": [
|
| 1210 |
+
"Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. 2024. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. Advances in Neural Information Processing Systems, 36.",
|
| 1211 |
+
"Jiaqi Lun, Jia Zhu, Yong Tang, and Min Yang. 2020. Multiple data augmentation strategies for improving performance on automatic short answer scoring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13389-13396.",
|
| 1212 |
+
"Sandeep Mathias and Pushpak Bhattacharyya. 2018a. Asap++: Enriching the asap automated essay grading dataset with essay attribute scores. In Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018).",
|
| 1213 |
+
"Sandeep Mathias and Pushpak Bhattacharyya. 2018b. Thank \"goodness\"! a way to measure style in student essays. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 35-41.",
|
| 1214 |
+
"Atsushi Mizumoto and Masaki Eguchi. 2023. Exploring the potential of using an ai language model for automated essay scoring. Research Methods in Applied Linguistics, 2(2):100050.",
|
| 1215 |
+
"Ben Naismith, Phoebe Mulcaire, and Jill Burstein. 2023. Automated evaluation of written discourse coherence using GPT-4. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 394–403, Toronto, Canada. Association for Computational Linguistics.",
|
| 1216 |
+
"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.",
|
| 1217 |
+
"Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083.",
|
| 1218 |
+
"Dadi Ramesh and Suresh Kumar Sanampudi. 2022. An automated essay scoring systems: a systematic literature review. Artificial Intelligence Review, 55(3):2495-2527.",
|
| 1219 |
+
"Pedro Uria Rodriguez, Amir Jafari, and Christopher M Ormerod. 2019. Language models and automated essay scoring. arXiv preprint arXiv:1909.09482.",
|
| 1220 |
+
"Yafet Salim, Valdi Stevanus, Edwardo Barlian, Azani Cempaka Sari, and Derwin Suhartono. 2019. Automated english digital essay grader using machine learning. In 2019 IEEE International Conference on Engineering, Technology and Education (TALE), pages 1-6. IEEE."
|
| 1221 |
+
],
|
| 1222 |
+
"bbox": [
|
| 1223 |
+
115,
|
| 1224 |
+
85,
|
| 1225 |
+
485,
|
| 1226 |
+
919
|
| 1227 |
+
],
|
| 1228 |
+
"page_idx": 9
|
| 1229 |
+
},
|
| 1230 |
+
{
|
| 1231 |
+
"type": "list",
|
| 1232 |
+
"sub_type": "ref_text",
|
| 1233 |
+
"list_items": [
|
| 1234 |
+
"Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrievalaugmented black-box language models. arXiv preprint arXiv:2301.12652.",
|
| 1235 |
+
"Maja Stahl, Leon Biermann, Andreas Nehring, and Henning Wachsmuth. 2024. Exploring llm prompting strategies for joint essay scoring and feedback generation. arXiv preprint arXiv:2404.15845.",
|
| 1236 |
+
"Md Arafat Sultan, Cristobal Salazar, and Tamara Sumner. 2016. Fast and easy short answer grading with high accuracy. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1070-1075.",
|
| 1237 |
+
"Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1882-1891.",
|
| 1238 |
+
"Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, and Ming Cui. 2023. Duma: a dual-mind conversational agent with fast and slow thinking. arXiv preprint arXiv:2310.18075.",
|
| 1239 |
+
"Karel van den Bosch, Tjeerd Schoonderwoerd, Romy Blankendaal, and Mark Neerincx. 2019. Six challenges for human-ai co-learning. In Adaptive Instructional Systems: First International Conference, AIS 2019, Held as Part of the 21st HCI International Conference, HCI II 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings 21, pages 572–589. Springer.",
|
| 1240 |
+
"Yongjie Wang, Chuang Wang, Ruobing Li, and Hui Lin. 2022. On the use of bert for automated essay scoring: Joint learning of multi-scale essay representation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3416-3425, Seattle, United States. Association for Computational Linguistics.",
|
| 1241 |
+
"Peter C Wason and J St BT Evans. 1974. Dual processes in reasoning? Cognition, 3(2):141-154.",
|
| 1242 |
+
"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.",
|
| 1243 |
+
"Kevin P. Yancey, Geoffrey Laf Blair, Anthony Verardi, and Jill Burstein. 2023. Rating short L2 essays on the CEFR scale with GPT-4. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 576-584, Toronto, Canada. Association for Computational Linguistics.",
|
| 1244 |
+
"Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and"
|
| 1245 |
+
],
|
| 1246 |
+
"bbox": [
|
| 1247 |
+
510,
|
| 1248 |
+
85,
|
| 1249 |
+
880,
|
| 1250 |
+
919
|
| 1251 |
+
],
|
| 1252 |
+
"page_idx": 9
|
| 1253 |
+
},
|
| 1254 |
+
{
|
| 1255 |
+
"type": "text",
|
| 1256 |
+
"text": "ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1560-1569.",
|
| 1257 |
+
"bbox": [
|
| 1258 |
+
131,
|
| 1259 |
+
85,
|
| 1260 |
+
489,
|
| 1261 |
+
112
|
| 1262 |
+
],
|
| 1263 |
+
"page_idx": 10
|
| 1264 |
+
},
|
| 1265 |
+
{
|
| 1266 |
+
"type": "text",
|
| 1267 |
+
"text": "Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In *The Eleventh International Conference on Learning Representations*.",
|
| 1268 |
+
"bbox": [
|
| 1269 |
+
115,
|
| 1270 |
+
121,
|
| 1271 |
+
489,
|
| 1272 |
+
212
|
| 1273 |
+
],
|
| 1274 |
+
"page_idx": 10
|
| 1275 |
+
},
|
| 1276 |
+
{
|
| 1277 |
+
"type": "text",
|
| 1278 |
+
"text": "A Datasets",
|
| 1279 |
+
"text_level": 1,
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
509,
|
| 1282 |
+
84,
|
| 1283 |
+
623,
|
| 1284 |
+
98
|
| 1285 |
+
],
|
| 1286 |
+
"page_idx": 10
|
| 1287 |
+
},
|
| 1288 |
+
{
|
| 1289 |
+
"type": "text",
|
| 1290 |
+
"text": "The details of the ASAP dataset are presented in Table 6. As previously mentioned, this dataset is composed of 8 subsets, each with unique prompts and scoring rubrics. Our Chinese Student English Essay (CSEE) dataset consists of 13,372 essays, along with their corresponding scores carefully rated by experienced English teachers based on the scoring standards in the Chinese National College Entrance Examination (Table 8). The basic statistics of this dataset are outlined in Table 5.",
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
507,
|
| 1293 |
+
111,
|
| 1294 |
+
884,
|
| 1295 |
+
271
|
| 1296 |
+
],
|
| 1297 |
+
"page_idx": 10
|
| 1298 |
+
},
|
| 1299 |
+
{
|
| 1300 |
+
"type": "table",
|
| 1301 |
+
"img_path": "images/708edc3999c83c4d693be3bc9f5164747bb88492a91067f155267a7dd1582d64.jpg",
|
| 1302 |
+
"table_caption": [
|
| 1303 |
+
"Table 5: Descriptive statistics of our private dataset."
|
| 1304 |
+
],
|
| 1305 |
+
"table_footnote": [],
|
| 1306 |
+
"table_body": "<table><tr><td colspan=\"2\">Chinese Student English Essay Dataset</td></tr><tr><td># of schools</td><td>29</td></tr><tr><td># of essay prompts</td><td>2</td></tr><tr><td># of student essays</td><td>13,372</td></tr><tr><td>avg. essay length</td><td>124.74</td></tr><tr><td>avg. Overall score</td><td>10.72</td></tr><tr><td>avg. Content score</td><td>4.13</td></tr><tr><td>avg. Language score</td><td>4.05</td></tr><tr><td>avg. Structure score</td><td>2.55</td></tr></table>",
|
| 1307 |
+
"bbox": [
|
| 1308 |
+
557,
|
| 1309 |
+
307,
|
| 1310 |
+
833,
|
| 1311 |
+
432
|
| 1312 |
+
],
|
| 1313 |
+
"page_idx": 10
|
| 1314 |
+
},
|
| 1315 |
+
{
|
| 1316 |
+
"type": "text",
|
| 1317 |
+
"text": "B Details of BERT Classifier Baseline",
|
| 1318 |
+
"text_level": 1,
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
507,
|
| 1321 |
+
464,
|
| 1322 |
+
852,
|
| 1323 |
+
480
|
| 1324 |
+
],
|
| 1325 |
+
"page_idx": 10
|
| 1326 |
+
},
|
| 1327 |
+
{
|
| 1328 |
+
"type": "text",
|
| 1329 |
+
"text": "We employed the bert-base-uncased BERT model from the huggingface transformers library $^5$ using PyTorch. A simple fully connected layer was added to perform the classification task. The datasets were divided into training and testing sets at an 8:2 ratio. To ensure better reproducibility, we set all random seeds, including those for dataset splitting and model training, to the value 42. During training, we used cross-entropy loss as our loss function. We allowed BERT parameters to be finetuned, without freezing them, in line with the objective function. AdamW was chosen as the optimizer, with a learning rate set to $10^{-5}$ and epsilon at $10^{-6}$ . With a batch size of 16 and a maximum of 10 training epochs, we also integrated an early stopping strategy to mitigate potential overfitting. All the experiments of the BERT baseline were run with 2 RTX A4000 16G GPUs in around one week.",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
507,
|
| 1332 |
+
492,
|
| 1333 |
+
884,
|
| 1334 |
+
781
|
| 1335 |
+
],
|
| 1336 |
+
"page_idx": 10
|
| 1337 |
+
},
|
| 1338 |
+
{
|
| 1339 |
+
"type": "text",
|
| 1340 |
+
"text": "C Details of LLM-based Methods",
|
| 1341 |
+
"text_level": 1,
|
| 1342 |
+
"bbox": [
|
| 1343 |
+
507,
|
| 1344 |
+
797,
|
| 1345 |
+
818,
|
| 1346 |
+
812
|
| 1347 |
+
],
|
| 1348 |
+
"page_idx": 10
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "text",
|
| 1352 |
+
"text": "C.1 LLM Prompts",
|
| 1353 |
+
"text_level": 1,
|
| 1354 |
+
"bbox": [
|
| 1355 |
+
507,
|
| 1356 |
+
824,
|
| 1357 |
+
675,
|
| 1358 |
+
840
|
| 1359 |
+
],
|
| 1360 |
+
"page_idx": 10
|
| 1361 |
+
},
|
| 1362 |
+
{
|
| 1363 |
+
"type": "text",
|
| 1364 |
+
"text": "The prompts used for LLMs in our study fall into two distinct categories: firstly, the zero-shot and few-shot configurations of GPT-4; secondly, the",
|
| 1365 |
+
"bbox": [
|
| 1366 |
+
507,
|
| 1367 |
+
847,
|
| 1368 |
+
882,
|
| 1369 |
+
895
|
| 1370 |
+
],
|
| 1371 |
+
"page_idx": 10
|
| 1372 |
+
},
|
| 1373 |
+
{
|
| 1374 |
+
"type": "page_footnote",
|
| 1375 |
+
"text": "<sup>5</sup>https://huggingface.co/docs/transformers/",
|
| 1376 |
+
"bbox": [
|
| 1377 |
+
529,
|
| 1378 |
+
906,
|
| 1379 |
+
850,
|
| 1380 |
+
919
|
| 1381 |
+
],
|
| 1382 |
+
"page_idx": 10
|
| 1383 |
+
},
|
| 1384 |
+
{
|
| 1385 |
+
"type": "table",
|
| 1386 |
+
"img_path": "images/1ba6687786823af80309f62b758d9e8eb07a5af01420a58677738493d7a405e0.jpg",
|
| 1387 |
+
"table_caption": [
|
| 1388 |
+
"Table 6: Descriptive statistics of the ASAP dataset."
|
| 1389 |
+
],
|
| 1390 |
+
"table_footnote": [],
|
| 1391 |
+
"table_body": "<table><tr><td>Essay Set</td><td>Essay Type</td><td>Grade Level</td><td># of Essays</td><td>Avg. Length</td><td>Score Range</td></tr><tr><td>1</td><td>Persuasive/Narrative/Expository</td><td>8</td><td>1783</td><td>350</td><td>[2, 12]</td></tr><tr><td>2</td><td>Persuasive/Narrative/Expository</td><td>10</td><td>1800</td><td>350</td><td>[1, 6]</td></tr><tr><td>3</td><td>Source Dependent Responses</td><td>10</td><td>1726</td><td>150</td><td>[0, 3]</td></tr><tr><td>4</td><td>Source Dependent Responses</td><td>10</td><td>1772</td><td>150</td><td>[0, 3]</td></tr><tr><td>5</td><td>Source Dependent Responses</td><td>8</td><td>1805</td><td>150</td><td>[0, 4]</td></tr><tr><td>6</td><td>Source Dependent Responses</td><td>10</td><td>1800</td><td>150</td><td>[0, 4]</td></tr><tr><td>7</td><td>Persuasive/Narrative/Expository</td><td>7</td><td>1569</td><td>300</td><td>[0, 12]</td></tr><tr><td>8</td><td>Persuasive/Narrative/Expository</td><td>10</td><td>723</td><td>650</td><td>[0, 36]</td></tr></table>",
|
| 1392 |
+
"bbox": [
|
| 1393 |
+
171,
|
| 1394 |
+
107,
|
| 1395 |
+
826,
|
| 1396 |
+
231
|
| 1397 |
+
],
|
| 1398 |
+
"page_idx": 11
|
| 1399 |
+
},
|
| 1400 |
+
{
|
| 1401 |
+
"type": "text",
|
| 1402 |
+
"text": "instructions for fine-tuning and inference of GPT-3.5 and LLaMA3-8B. The prompts for the few-shot scenario incorporate those used in the zero-shot setting and overlap with the fine-tuning prompts. Therefore, for clarity and conciseness, we present examples of the GPT-4, few-shot, with rubrics and the inputs of fine-tuned LLaMA3-8B in Table 9.",
|
| 1403 |
+
"bbox": [
|
| 1404 |
+
112,
|
| 1405 |
+
255,
|
| 1406 |
+
487,
|
| 1407 |
+
367
|
| 1408 |
+
],
|
| 1409 |
+
"page_idx": 11
|
| 1410 |
+
},
|
| 1411 |
+
{
|
| 1412 |
+
"type": "text",
|
| 1413 |
+
"text": "C.2 Few-Shot GPT-4",
|
| 1414 |
+
"text_level": 1,
|
| 1415 |
+
"bbox": [
|
| 1416 |
+
112,
|
| 1417 |
+
378,
|
| 1418 |
+
294,
|
| 1419 |
+
393
|
| 1420 |
+
],
|
| 1421 |
+
"page_idx": 11
|
| 1422 |
+
},
|
| 1423 |
+
{
|
| 1424 |
+
"type": "text",
|
| 1425 |
+
"text": "In the few-shot setting of GPT-4 with $k$ essay examples, as indicated by prior studies in AES tasks (Yancey et al., 2023), increasing the value of $k$ did not consistently yield better results, showing a trend of diminishing marginal returns. Therefore, we choose a suitable $k = 3$ in the study.",
|
| 1426 |
+
"bbox": [
|
| 1427 |
+
112,
|
| 1428 |
+
399,
|
| 1429 |
+
487,
|
| 1430 |
+
495
|
| 1431 |
+
],
|
| 1432 |
+
"page_idx": 11
|
| 1433 |
+
},
|
| 1434 |
+
{
|
| 1435 |
+
"type": "text",
|
| 1436 |
+
"text": "We explored two sampling approaches. The first involved randomly selecting essays from various levels of quality to help LLM understand the approximate level of the target essay. The second method adopted a retrieval-based approach, which has been proven to be effective in enhancing LLM performance (Khandelwal et al., 2020; Shi et al., 2023; Ram et al., 2023). Leveraging OpenAI's text-embedding-ada-002 model, we calculated the embedding for each essay. This allowed us to identify the top $k$ similar essays based on cosine similarity (excluding the target essay). Our experiments demonstrated that this retrieval strategy consistently yielded superior results. Therefore, we focused on the latter approach in this study.",
|
| 1437 |
+
"bbox": [
|
| 1438 |
+
112,
|
| 1439 |
+
495,
|
| 1440 |
+
489,
|
| 1441 |
+
737
|
| 1442 |
+
],
|
| 1443 |
+
"page_idx": 11
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "text",
|
| 1447 |
+
"text": "C.3 Fine-tuning LLaMA3",
|
| 1448 |
+
"text_level": 1,
|
| 1449 |
+
"bbox": [
|
| 1450 |
+
112,
|
| 1451 |
+
747,
|
| 1452 |
+
334,
|
| 1453 |
+
764
|
| 1454 |
+
],
|
| 1455 |
+
"page_idx": 11
|
| 1456 |
+
},
|
| 1457 |
+
{
|
| 1458 |
+
"type": "text",
|
| 1459 |
+
"text": "We fine-tuned the llama-3-8b-bnb-4bit model using the unsloth framework. For this process, we employed a Parameter-Efficient Fine-Tuning (PEFT) approach with a rank of 16 and a LoRA alpha value of 16. We utilized an 8-bit AdamW optimizer, starting with an initial learning rate of $2 \\times 10^{-4}$ . After 50 warm-up steps, the learning rate was scheduled to decay linearly, with the weight decay parameter",
|
| 1460 |
+
"bbox": [
|
| 1461 |
+
112,
|
| 1462 |
+
769,
|
| 1463 |
+
487,
|
| 1464 |
+
898
|
| 1465 |
+
],
|
| 1466 |
+
"page_idx": 11
|
| 1467 |
+
},
|
| 1468 |
+
{
|
| 1469 |
+
"type": "text",
|
| 1470 |
+
"text": "set at 0.01. We maintained all random seeds at 3407 and completed the fine-tuning over 2 epochs. All experiments involving the fine-tuned LLaMA3-8B were conducted using a single RTX 4090 24GB GPU, spanning approximately three weeks.",
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
507,
|
| 1473 |
+
255,
|
| 1474 |
+
882,
|
| 1475 |
+
336
|
| 1476 |
+
],
|
| 1477 |
+
"page_idx": 11
|
| 1478 |
+
},
|
| 1479 |
+
{
|
| 1480 |
+
"type": "text",
|
| 1481 |
+
"text": "D Human-AI Co-Grading Details",
|
| 1482 |
+
"text_level": 1,
|
| 1483 |
+
"bbox": [
|
| 1484 |
+
507,
|
| 1485 |
+
347,
|
| 1486 |
+
816,
|
| 1487 |
+
363
|
| 1488 |
+
],
|
| 1489 |
+
"page_idx": 11
|
| 1490 |
+
},
|
| 1491 |
+
{
|
| 1492 |
+
"type": "text",
|
| 1493 |
+
"text": "In our LLM-assisted human grading experiment, the 10 college students were all from a Normal University in Beijing, and had a male-to-female ratio of 4:6, with ages ranging from 19 to 23 years (from freshmen to seniors). Their English capabilities were certified by China's College English Test (CET). None of the novices have the experience of grading student essays currently. The 5 expert evaluators comprised experienced English teachers from Beijing high schools, with teaching tenures ranging from 8 to 20 years. Before evaluation, all participants received training on the standard scoring rubrics. They were also incentivized with appropriate remuneration for their participation.",
|
| 1494 |
+
"bbox": [
|
| 1495 |
+
505,
|
| 1496 |
+
372,
|
| 1497 |
+
882,
|
| 1498 |
+
598
|
| 1499 |
+
],
|
| 1500 |
+
"page_idx": 11
|
| 1501 |
+
},
|
| 1502 |
+
{
|
| 1503 |
+
"type": "text",
|
| 1504 |
+
"text": "The instructions for the evaluators include the standard scoring rubrics of the College Entrance Examination in China and several grading examples. The writing exercise and the essays designated for assessment will be presented to the evaluators. Moreover, supplementary feedback (scores, output confidences, and explanations) will be provided for the experimental groups. To enhance the evaluators' comprehension and avoid possible misunderstandings, all the information provided has been translated into Chinese.",
|
| 1505 |
+
"bbox": [
|
| 1506 |
+
507,
|
| 1507 |
+
598,
|
| 1508 |
+
882,
|
| 1509 |
+
772
|
| 1510 |
+
],
|
| 1511 |
+
"page_idx": 11
|
| 1512 |
+
},
|
| 1513 |
+
{
|
| 1514 |
+
"type": "text",
|
| 1515 |
+
"text": "The results of Overall, Content, Language, and Structure scores in the human-AI co-grading experiment are presented in Figure 5. We observed that the Content and Language scores exhibit a similar trend as the Overall score discussed in the Results section. The expert evaluators noted that the Structure dimension is the most ambiguous and difficult part of the grading task which has the lowest QWK values among the three dimensions.",
|
| 1516 |
+
"bbox": [
|
| 1517 |
+
507,
|
| 1518 |
+
775,
|
| 1519 |
+
884,
|
| 1520 |
+
919
|
| 1521 |
+
],
|
| 1522 |
+
"page_idx": 11
|
| 1523 |
+
},
|
| 1524 |
+
{
|
| 1525 |
+
"type": "page_footnote",
|
| 1526 |
+
"text": "<sup>6</sup>https://github.com/unslothai/unsloth",
|
| 1527 |
+
"bbox": [
|
| 1528 |
+
134,
|
| 1529 |
+
906,
|
| 1530 |
+
418,
|
| 1531 |
+
920
|
| 1532 |
+
],
|
| 1533 |
+
"page_idx": 11
|
| 1534 |
+
},
|
| 1535 |
+
{
|
| 1536 |
+
"type": "table",
|
| 1537 |
+
"img_path": "images/e5a8650c367625befae144ae7dd3e4d0c4de9970bfb9c0770be180fc9b338c88.jpg",
|
| 1538 |
+
"table_caption": [
|
| 1539 |
+
"Table 7: Generalization comparison of QWK scores for the Fine-tuned GPT-3.5 and the BERT Classifier under the ASAP dataset."
|
| 1540 |
+
],
|
| 1541 |
+
"table_footnote": [],
|
| 1542 |
+
"table_body": "<table><tr><td></td><td></td><td>Set 1</td><td>Set 2</td><td>Set 3</td><td>Set 4</td><td>Set 5</td><td>Set 6</td><td>Set 7</td><td>Set 8</td></tr><tr><td rowspan=\"2\">Trained on Set 1</td><td>BERT Classifier</td><td>-</td><td>0.3299</td><td>0.1680</td><td>0.1380</td><td>0.3045</td><td>0.1234</td><td>0.3002</td><td>0.1541</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>-</td><td>0.5216</td><td>0.5405</td><td>0.4891</td><td>0.5076</td><td>0.6344</td><td>0.6306</td><td>0.3126</td></tr><tr><td rowspan=\"2\">Trained on Set 2</td><td>BERT Classifier</td><td>0.2776</td><td>-</td><td>0.1975</td><td>0.2392</td><td>0.1750</td><td>0.1453</td><td>0.2474</td><td>0.3783</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.4270</td><td>-</td><td>0.4131</td><td>0.4619</td><td>0.5958</td><td>0.5579</td><td>0.5438</td><td>0.6684</td></tr><tr><td rowspan=\"2\">Trained on Set 3</td><td>BERT Classifier</td><td>0.3468</td><td>0.4444</td><td>-</td><td>0.6230</td><td>0.6319</td><td>0.5299</td><td>0.4368</td><td>0.2427</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.3991</td><td>0.2488</td><td>-</td><td>0.7674</td><td>0.7714</td><td>0.7150</td><td>0.4964</td><td>0.1134</td></tr><tr><td rowspan=\"2\">Trained on Set 4</td><td>BERT Classifier</td><td>0.3257</td><td>0.5332</td><td>0.6267</td><td>-</td><td>0.5483</td><td>0.4959</td><td>0.4659</td><td>0.3204</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.0631</td><td>0.3493</td><td>0.4908</td><td>-</td><td>0.6515</td><td>0.7420</td><td>0.0865</td><td>0.3419</td></tr><tr><td rowspan=\"2\">Trained on Set 5</td><td>BERT Classifier</td><td>0.4051</td><td>0.3341</td><td>0.4264</td><td>0.4202</td><td>-</td><td>0.5243</td><td>0.3255</td><td>0.2035</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.4354</td><td>0.4301</td><td>0.5765</td><td>0.6877</td><td>-</td><td>0.7368</td><td>0.1061</td><td>0.3118</td></tr><tr><td rowspan=\"2\">Trained on Set 6</td><td>BERT Classifier</td><td>0.3164</td><td>0.3462</td><td>0.4000</td><td>0.3067</td><td>0.4882</td><td>-</td><td>0.2303</td><td>0.3047</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.1342</td><td>0.3607</td><td>0.4579</td><td>0.3157</td><td>0.3734</td><td>-</td><td>0.0061</td><td>0.0859</td></tr><tr><td rowspan=\"2\">Trained on Set 7</td><td>BERT Classifier</td><td>0.0975</td><td>0.0086</td><td>0.1854</td><td>0.0328</td><td>0.0554</td><td>0.1244</td><td>-</td><td>0.2917</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.5862</td><td>0.3993</td><td>0.4865</td><td>0.4425</td><td>0.4494</td><td>0.4417</td><td>-</td><td>0.2157</td></tr><tr><td rowspan=\"2\">Trained on Set 8</td><td>BERT Classifier</td><td>0.0560</td><td>0.1102</td><td>0.0110</td><td>0.0164</td><td>0.0371</td><td>0.0454</td><td>0.1777</td><td>-</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.2714</td><td>0.4822</td><td>0.4768</td><td>0.6009</td><td>0.4199</td><td>0.3231</td><td>0.5460</td><td>-</td></tr></table>",
|
| 1543 |
+
"bbox": [
|
| 1544 |
+
129,
|
| 1545 |
+
137,
|
| 1546 |
+
870,
|
| 1547 |
+
404
|
| 1548 |
+
],
|
| 1549 |
+
"page_idx": 12
|
| 1550 |
+
},
|
| 1551 |
+
{
|
| 1552 |
+
"type": "image",
|
| 1553 |
+
"img_path": "images/0c2b7b54617f11a813cc0970cbab482b50e17a7db9abd723463c71cf3fffca6d.jpg",
|
| 1554 |
+
"image_caption": [],
|
| 1555 |
+
"image_footnote": [],
|
| 1556 |
+
"bbox": [
|
| 1557 |
+
146,
|
| 1558 |
+
448,
|
| 1559 |
+
492,
|
| 1560 |
+
639
|
| 1561 |
+
],
|
| 1562 |
+
"page_idx": 12
|
| 1563 |
+
},
|
| 1564 |
+
{
|
| 1565 |
+
"type": "image",
|
| 1566 |
+
"img_path": "images/b57e796e2998bf8f44e2d69276d56bfad59f6dfc9db900fc3da4faa0f3468ac7.jpg",
|
| 1567 |
+
"image_caption": [],
|
| 1568 |
+
"image_footnote": [],
|
| 1569 |
+
"bbox": [
|
| 1570 |
+
517,
|
| 1571 |
+
448,
|
| 1572 |
+
863,
|
| 1573 |
+
639
|
| 1574 |
+
],
|
| 1575 |
+
"page_idx": 12
|
| 1576 |
+
},
|
| 1577 |
+
{
|
| 1578 |
+
"type": "image",
|
| 1579 |
+
"img_path": "images/04b34d9e0672513a43b4b41117ee8cf1324f02717aeb3476179d058f0895172d.jpg",
|
| 1580 |
+
"image_caption": [
|
| 1581 |
+
"Figure 5: LLM-assisted grading experiment results for the novice, expert, and GPT-4 evaluators. From the top left to the bottom right is the result of the Overall score, Content score, Language score, and Structure score, respectively."
|
| 1582 |
+
],
|
| 1583 |
+
"image_footnote": [],
|
| 1584 |
+
"bbox": [
|
| 1585 |
+
146,
|
| 1586 |
+
657,
|
| 1587 |
+
490,
|
| 1588 |
+
847
|
| 1589 |
+
],
|
| 1590 |
+
"page_idx": 12
|
| 1591 |
+
},
|
| 1592 |
+
{
|
| 1593 |
+
"type": "image",
|
| 1594 |
+
"img_path": "images/260a400b33fae509bc77195cb013ed54860fd32cf5128edd90463ef9573911f7.jpg",
|
| 1595 |
+
"image_caption": [],
|
| 1596 |
+
"image_footnote": [],
|
| 1597 |
+
"bbox": [
|
| 1598 |
+
517,
|
| 1599 |
+
657,
|
| 1600 |
+
863,
|
| 1601 |
+
847
|
| 1602 |
+
],
|
| 1603 |
+
"page_idx": 12
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "table",
|
| 1607 |
+
"img_path": "images/936b165c9d0170d52ea91c6e855b1694d0fc4105ac09ccf193de8db39898d99a.jpg",
|
| 1608 |
+
"table_caption": [
|
| 1609 |
+
"Table 8: Rubrics for evaluating high school student essays in our private dataset."
|
| 1610 |
+
],
|
| 1611 |
+
"table_footnote": [],
|
| 1612 |
+
"table_body": "<table><tr><td>Rubrics</td></tr><tr><td>Overall Score (20 points) = Content Score (8 points) + Language Score (8 points) + Structure Score (4 points) \nContent Dimension (8 points in total)</td></tr><tr><td>• 6-8 points: \n- Content is complete with appropriate details \n- Expression is closely related to the topic</td></tr><tr><td>• 3-5 points: \n- Content is mostly complete \n- Expression is fundamentally related to the topic</td></tr><tr><td>• 0-2 points: \n- Content is incomplete \n- Expression is barely related or completely unrelated to the topic</td></tr><tr><td>Language Dimension (8 points in total)</td></tr><tr><td>• 6-8 points: \n- Language is accurate with diverse sentence structures and little or no errors (2 errors or fewer, 8 points; 3-4 errors, 7 points; 5-6 errors, 6 points) \n- Language expression is mostly appropriate</td></tr><tr><td>• 3-5 points: \n- Language is not quite accurate, with some variation in sentence structures and several errors, but they don’t impede understanding (7-8 errors, 5 points; 9-10 errors, 4 points; 11-12 errors, 3 points) \n- Language expression is somewhat inappropriate</td></tr><tr><td>• 0-2 points: \n- Language is hopelessly inaccurate with numerous language errors, hindering understanding (more than 12 errors) \n- Language expression is completely inappropriate</td></tr><tr><td>Structure Dimension (4 points in total)</td></tr><tr><td>• 3-4 points: \n- Clearly and logically structured \n- Smooth and coherent transitions</td></tr><tr><td>• 1-2 points: \n- Mostly clearly and logically structured \n- Relatively smooth and coherent transitions</td></tr><tr><td>• 0-1 points: \n- Not clearly and logically structured \n- Fragmented and disconnected structures and sentences</td></tr></table>",
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
152,
|
| 1615 |
+
186,
|
| 1616 |
+
843,
|
| 1617 |
+
839
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 13
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "table",
|
| 1623 |
+
"img_path": "images/250b6968e33bd54fd8c9cb46ebd6ac131a0e861ce4b9c5b40c57e8bae32f7749.jpg",
|
| 1624 |
+
"table_caption": [
|
| 1625 |
+
"Table 9: The example prompt (the GPT-4, few-shot, with rubrics approach and the fine-tuned LLaMA3-8B) and the corresponding model output."
|
| 1626 |
+
],
|
| 1627 |
+
"table_footnote": [],
|
| 1628 |
+
"table_body": "<table><tr><td>Example Prompt</td><td>Example Output</td></tr><tr><td>As a virtual evaluator with expertise in English composition, your role is to critically analyze and grade student essays according to a predetermined set of rubrics. You are to act as an impartial judge and evaluate the essays based on the quality of the writing and adherence to the essay prompt.</td><td>Explanations: The student's essay responds accurately to Jim's request for suggestions on how to learn Chinese, providing thoughtful advice and support. Detailed suggestions include listening to Chinese radio, reading Chinese literature, and using mobile applications. The tone is hopeful and supportive. \nContent Score: 8</td></tr><tr><td>Here are the specific guidelines for each score: \n{Rubrics} \nSample Essay Prompt: \n{Essay Prompt} \nThe graded example essays: \n{Examples} \nStudent's Essay to Evaluate: \n{Target Student Essay}</td><td>Explanations: There are no language errors; the grammar and spelling are all accurate. The student used formal, respectful language suitable for this kind of communication and also included diversity in sentence structures. \nLanguage Score: 8</td></tr><tr><td>Task Breakdown: \n1. Carefully read the provided essay prompt, scoring guidelines, and the student's essay. \n2. In the Explanations part, identify specific elements in the essay referring to the rubrics. In the language dimension, list all the spelling and grammar errors, and count the number of them to determine the Language Score. The Explanations for each dimension should be as detailed as possible. \n3. Determine the appropriate scores according to the analysis above.</td><td>Explanations: The essay is well-structured and organized. It follows the accepted structure of a reply email, with an opening greeting, a body containing the main content, and a closing remark. Transitions are also smooth and coherent, logically linking one thought to another. \nStructure Score: 4</td></tr><tr><td>Please present your evaluation in the following manner: \nExplanations: ..., Content Score: ... \nExplanations: ..., Language Score: ... \nExplanations: ..., Structure Score: ... \nExplanations: ..., Total Score: ... \nYour final evaluation: \n[Total Score: ..., Content Score: ..., Language Score: ..., Structure Score: ...]</td><td>Total Score: This is the sum of the content, language, and structure scores. Total Score: 20 \nYour final evaluation: \n[Total Score: 20, Content Score: 8, Language Score: 8, Structure Score: 4]</td></tr><tr><td>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.</td><td rowspan=\"3\">{ 'content': {'completeness': 'The essay covers the basic requirements of the prompt by introducing design ideas for the graduation ceremony and explaining the reasons behind these designs. However, the explanations are quite brief and lack depth', 'topic_relevance': 'The essay is related to the given topic, as it discusses ideas for a graduation ceremony design', 'content_details': 'The details provided are minimal and lack depth. 
For example, the student mentions launching a ball in the dining hall and writing gratitude letters but does not elaborate on how these activities contribute to the overall design or the reasons behind these choices in detail', 'score_level': 'Level 2', 'score_point': 4.5}, 'language': {...}, 'structure': {...}, 'overall': {...} }</td></tr><tr><td>##Input: \nScoring rubrics: \n{Rubrics} \nEssay Prompt: \n{Essay Prompt} \nStudent's Essay to Evaluate: \n{Target Student Essay}</td></tr><tr><td>## Response: \n{ }</td></tr></table>",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
114,
|
| 1631 |
+
177,
|
| 1632 |
+
882,
|
| 1633 |
+
863
|
| 1634 |
+
],
|
| 1635 |
+
"page_idx": 14
|
| 1636 |
+
}
|
| 1637 |
+
]
|
2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06431/9aaa3bbe-b62b-4aa0-a9c8-f8fd06458f9c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6c6e63b43ac4299b1a70c27bf8695de792fa46e09697bfc619a16ec1e53ba319
|
| 3 |
+
size 1591141
|
2401.06xxx/2401.06431/full.md
ADDED
|
@@ -0,0 +1,359 @@
| 1 |
+
# Human-AI Collaborative Essay Scoring: A Dual-Process Framework with LLMs
|
| 2 |
+
|
| 3 |
+
Changrong Xiao $^{1}$ , Wenxing Ma $^{1}$ , Qingping Song $^{2}$ , Sean Xin Xu $^{1}$ , Kunpeng Zhang $^{3}$ , Yufang Wang $^{4}$ , Qi Fu $^{4}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ School of Economics and Management, Tsinghua University
|
| 6 |
+
|
| 7 |
+
$^{2}$ Department of Information Systems, City University of Hong Kong
|
| 8 |
+
|
| 9 |
+
$^{3}$ Department of Decision, Operations & Information Technologies, University of Maryland $^{4}$ Beijing Xicheng Educational Research Institute
|
| 10 |
+
|
| 11 |
+
xcr21@mails.tsinghua.edu.cn, mawx21@mails.tsinghua.edu.cn, qisong@cityu.edu.hk, xuxin@sem.tsinghua.edu.cn,
|
| 12 |
+
|
| 13 |
+
kpzhang@umd.edu, wangwang7587@163.com, lilyhoneypot@163.com
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
Receiving timely and personalized feedback is essential for second-language learners, especially when human instructors are unavailable. This study explores the effectiveness of Large Language Models (LLMs), including both proprietary and open-source models, for Automated Essay Scoring (AES). Through extensive experiments with public and private datasets, we find that while LLMs do not surpass conventional state-of-the-art (SOTA) grading models in performance, they exhibit notable consistency, generalizability, and explainability. We propose an open-source LLM-based AES system, inspired by the dual-process theory. Our system offers accurate grading and high-quality feedback, at least comparable to that of fine-tuned proprietary LLMs, in addition to its ability to alleviate misgrading. Furthermore, we conduct human-AI co-grading experiments with both novice and expert graders. We find that our system not only automates the grading process but also enhances the performance and efficiency of human graders, particularly for essays where the model has lower confidence. These results highlight the potential of LLMs to facilitate effective human-AI collaboration in the educational context, potentially transforming learning experiences through AI-generated feedback.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
Writing practice is an essential component of second-language learning. However, providing timely and reliable feedback poses a considerable challenge for educators in China due to the high student-teacher ratio. This limitation hampers students' academic progress, especially for those who are keen on self-directed learning. Automated Essay Scoring (AES) systems provide valuable assistance to students by offering immediate and consistent feedback on their work, while also simplifying the grading process for educators.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
Figure 1: Our study reveals that LLM-based essay scoring systems can not only automate the grading process, but also elevate novice evaluators to the level of experts.
|
| 25 |
+
|
| 26 |
+
However, implementing AES systems effectively in real-world educational scenarios presents several challenges. First, the diverse range of exercise contexts and the inherent ambiguity in scoring rubrics complicate the ability of traditional models to deliver accurate scores. Second, interviews with high school teachers indicate that despite receiving accurate score predictions, they must still review essays to mitigate potential errors from the models. Consequently, relying exclusively on this system without human supervision is impractical in real-world scenarios. Thus, there is a clear need for AES systems that not only predict scores accurately but also facilitate effective human-AI collaboration. This should be supported by natural language explanations and additional assistive features to enhance usability.
|
| 27 |
+
|
| 28 |
+
To effectively tackle these challenges, it is crucial to highlight the latest advancements in the field of Natural Language Processing (NLP), particularly focusing on the development of large language models (LLMs). LLMs, such as OpenAI's ChatGPT $^{1}$ , not only showcase impressive capabilities of robust logical reasoning but also exhibit a remarkable ability to comprehend and faithfully follow human instructions (Ouyang et al., 2022). Furthermore, recent studies have highlighted the
|
| 29 |
+
|
| 30 |
+
potential of leveraging LLMs in AES tasks (Mizumoto and Eguchi, 2023; Yancey et al., 2023; Naismith et al., 2023).
|
| 31 |
+
|
| 32 |
+
In this study, we explore the potential of proprietary and open-source LLMs such as GPT-3.5, GPT-4, and LLaMA3 for AES tasks. We conducted extensive experiments with public essay-scoring datasets as well as a private collection of student essays to assess the zero-shot and few-shot performance of these models. Additionally, we enhanced their effectiveness through supervised fine-tuning (SFT). Drawing inspiration from dual-process theory, we developed an AES system based on LLaMA3 that matches the grading accuracy and feedback quality of fine-tuned proprietary LLMs. Our human-LLM co-grading experiment further revealed that this system significantly improves the performance and efficiency of both novice and expert graders, offering valuable insights into the educational impacts and potential for effective human-AI collaboration. Overall, our study contributes three major advancements to the field:
|
| 33 |
+
|
| 34 |
+
- We pioneer the exploration of LLMs' capabilities as AES systems, especially in complex scenarios featuring tailored grading criteria. Leveraging dual-process theory, our novel AES framework demonstrates remarkable accuracy, efficiency, and explainability.
|
| 35 |
+
- We introduce an extensive essay-scoring dataset, which includes 13,372 essays written by Chinese high school students. These essays are evaluated with multi-dimensional scores by expert educators. This dataset significantly enhances the resources available for AI in Education (AIEd) $^2$ .
|
| 36 |
+
- Our findings from the human-LLM cograding task highlight the potential of LLM-generated feedback to elevate the proficiency of individuals with limited domain expertise to a level akin to that of experts. Additionally, it enhances the efficiency and robustness of human graders by integrating model confidence scores and explanations. These insights set the stage for future investigation into human-AI collaboration and AI-assisted learning within educational contexts.
|
| 37 |
+
|
| 38 |
+
# 2 Related Work
|
| 39 |
+
|
| 40 |
+
# 2.1 Automated Essay Scoring (AES)
|
| 41 |
+
|
| 42 |
+
Traditional Methods Automated Essay Scoring (AES) stands as a pivotal research area at the intersection of NLP and education. Traditional AES methods are usually regression-based or classification-based machine learning models (Sultan et al., 2016; Mathias and Bhattacharyya, 2018b,a; Salim et al., 2019) trained with textual features extracted from the target essays. With the advancement of deep learning, AES has witnessed the integration of advanced techniques such as convolutional neural networks (CNNs) (Dong and Zhang, 2016), long short-term memory networks (LSTMs) (Taghipour and Ng, 2016), and also pre-trained language models (Rodriguez et al., 2019; Lun et al., 2020). These innovations have led to more precise score predictions, and state-of-the-art methods are primarily based on Bidirectional Encoder Representations from Transformers (BERT) (Yang et al., 2020; Wang et al., 2022; Boquio and Naval, 2024).
|
| 43 |
+
|
| 44 |
+
LLM Applications in AES Recent studies have explored the potential of leveraging modern LLMs in AES tasks. Mizumoto and Eguchi (2023) provided ChatGPT with specific IELTS scoring rubrics for essay evaluation but found limited improvements when incorporating GPT scores into the regression model. In a different approach, Yancey et al. (2023) used GPT-4's few-shot capabilities to predict Common European Framework of Reference for Languages (CEFR) levels for short essays written by second-language learners. However, the Quadratic Weighted Kappa (QWK) scores still did not surpass those achieved by the XGBoost baseline model or human annotators. Similarly, Han et al. (2023); Stahl et al. (2024) introduced prompting frameworks that did not outperform the conventional baselines.
|
| 45 |
+
|
| 46 |
+
# 2.2 AI-Assisted Decision Making
|
| 47 |
+
|
| 48 |
+
Researchers have extensively investigated human-AI teams, in which AI supports the decision-making process by providing recommendations or suggestions, while the human remains responsible for the final decision (van den Bosch et al., 2019). The objective of such human-AI collaboration is to achieve complementary performance, where the combined team performance exceeds that of either party operating independently (Bansal et al., 2021). To realize this, it is crucial to design an
|
| 49 |
+
|
| 50 |
+
AI-assisted decision-making process that allows humans to effectively monitor and counteract any unpredictable or undesirable behavior exhibited by AI models (Eigner and Händler, 2024). This design aims to leverage the strengths of both humans and AI to enhance overall performance (Holstein and Aleven, 2022). To our knowledge, no studies have yet investigated AES systems from this angle of collaborative co-grading.
|
| 51 |
+
|
| 52 |
+
# 2.3 Dual-Process Theory
|
| 53 |
+
|
| 54 |
+
Recent studies have developed architectures that imitate human cognitive processes to enhance the capabilities of LLMs, particularly in reasoning and planning (Benfeghoul et al., 2024). According to dual-process theory in psychology (Wason and Evans, 1974; Kahneman, 2011), human cognition operates via two distinct systems: System 1 involves rapid, intuitive "Fast Thinking", while System 2 entails conscious and deliberate "Slow Thinking" processes. LLM architectures inspired by this theory have been implemented in complex interactive tasks (Lin et al., 2024; Tian et al., 2023), aiming to mitigate issues like social biases (Kamruzzaman and Kim, 2024) and hallucination (Bellini-Leite, 2023). These adaptations have demonstrated improved performances in various areas.
|
| 55 |
+
|
| 56 |
+
# 3 Data
|
| 57 |
+
|
| 58 |
+
ASAP dataset The Automated Student Assessment Prize (ASAP $^3$ ) dataset stands as one of the most commonly used publicly accessible resources for Automated Essay Scoring (AES) tasks. This comprehensive dataset comprises a total of 12,978 essays, encompassing responses to 8 distinct prompts. Each essay has been evaluated and scored by human annotators. Essay sets are also accompanied by detailed scoring rubrics, each tailored with unique scoring guidelines and score ranges. These intricacies are essential as they cater to the multifaceted requirements and diverse scenarios of AES.
|
| 59 |
+
|
| 60 |
+
Our Chinese Student English Essay (CSEE) dataset We have developed a novel English essay scoring dataset specifically designed for AES tasks. The dataset was carefully curated in collaboration with 29 high schools in China, encompassing a total of 13,372 student essays responding to two distinct prompts used in final exams. The evaluation of these essays was carried out by highly experienced
|
| 61 |
+
|
| 62 |
+
English teachers following the scoring guidelines of the Chinese National College Entrance Examination (Table 8). Scoring was comprehensively assessed across three critical dimensions: Content, Language, and Structure, with an Overall Score ranging from 0 to 20. More descriptions of the two datasets are presented in Appendix A.
|
| 63 |
+
|
| 64 |
+
# 4 Methods
|
| 65 |
+
|
| 66 |
+
In this section, we present the details of the models used in this study, including traditional AES baselines, LLM-based approaches, and our proposed Fast and Slow Thinking AES framework.
|
| 67 |
+
|
| 68 |
+
# 4.1 Traditional Essay Scoring Baselines
|
| 69 |
+
|
| 70 |
+
BERT Classifier Similar to the model used in Yang et al. (2020); Han et al. (2023)'s work, we implemented a simple yet effective baseline model for score prediction based on BERT. This model integrated a fully connected prediction layer following the BERT output, and the BERT parameters remained unfrozen during training. Both the BERT model and the prediction layer were jointly trained on the training essay set (details in Appendix B).
|
| 71 |
+
|
| 72 |
+
SOTA baselines We also incorporate models such as $R^2$ BERT (Yang et al., 2020) and Tran-BERT-MS-ML-R (Wang et al., 2022), which represent the highest levels of performance in the ASAP AES task. These models serve as the high-level benchmarks against which we evaluate the performance of our LLM-based models.
|
| 73 |
+
|
| 74 |
+
# 4.2 Prompting LLMs
|
| 75 |
+
|
| 76 |
+
We considered various prompting strategies including with or without detailed rubrics context, zero-shot or few-shot settings. An illustrative example of a prompt and its corresponding model-generated output can be found in Table 9 in the Appendices.
|
| 77 |
+
|
| 78 |
+
GPT-4, zero-shot, without rubrics In this setting, we simply provide the prompt and the target essay to GPT-4. The model then evaluates the essay and assigns a score based on its comprehension within the specified score range.
|
| 79 |
+
|
| 80 |
+
GPT-4, zero-shot, with rubrics Alongside the prompt and the target essay, we also provide GPT-4 with explicit scoring rubrics, guiding its evaluation.
|
| 81 |
+
|
| 82 |
+
GPT-4, few-shot, with rubrics In addition to the zero-shot settings, the few-shot prompts include sample essays and their corresponding scores. This
|
| 83 |
+
|
| 84 |
+

|
| 85 |
+
Figure 2: Our proposed Fast and Slow Thinking AES framework.
|
| 86 |
+
|
| 87 |
+
assists GPT-4 in understanding the latent scoring patterns. With the given prompt, target essay, scoring rubrics, and a set of $k$ essay examples, GPT-4 provides an appropriate score reflecting this enriched context. See Appendix C for details.
|
| 88 |
+
|
| 89 |
+
In all these configurations, we adopted the Chain-of-Thought (CoT) (Wei et al., 2022) strategy. This approach instructed the LLM to analyze and explain the provided materials before making final score determinations. Studies (Lampinen et al., 2022; Zhou et al., 2023; Li et al., 2023) have shown that this structured approach significantly enhances the capabilities of the LLM, optimizing performance in tasks that require inference and reasoning.
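For illustration, such a rubric-grounded, few-shot prompt with a CoT-style instruction could be assembled roughly as sketched below; the wording here is schematic, and the actual prompt used is shown in Table 9.

```python
def build_prompt(rubrics, essay_prompt, examples, target_essay):
    """Assemble a few-shot grading prompt; `examples` is a list of (essay_text, score) pairs."""
    example_block = "\n\n".join(
        f"Example essay:\n{text}\nScore: {score}" for text, score in examples
    )
    return (
        "You are an impartial evaluator of student essays.\n\n"
        f"Scoring rubrics:\n{rubrics}\n\n"
        f"Essay prompt:\n{essay_prompt}\n\n"
        f"Graded example essays:\n{example_block}\n\n"
        f"Student's essay to evaluate:\n{target_essay}\n\n"
        "First analyze the essay against each rubric dimension and explain your reasoning, "
        "then give the final score."
    )
```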
|
| 90 |
+
|
| 91 |
+
# 4.3 Fine-tuning LLMs
|
| 92 |
+
|
| 93 |
+
We conducted additional investigations into the effectiveness of supervised fine-tuning methods. Given that the ASAP and our CSEE dataset only include scores without expert explanations, we augmented these original datasets with explanations generated by GPT-4. To guide the explanation generation process, we provided GPT-4 with a few expert-curated explanations and a structured template. By organizing the data into an instructional format, we created fine-tuning inputs that enable the LLMs to not only generate accurate scores but also provide high-quality feedback.
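For concreteness, one explanation-augmented essay could be packed into an instruction-style record roughly as sketched below; the field names and instruction wording are illustrative assumptions, with the actual template shown in Table 9.

```python
import json

def to_finetuning_record(rubrics, essay_prompt, essay, explanations, scores):
    """Build one instruction-style training example from an explanation-augmented essay."""
    return {
        "instruction": "Grade the student's essay according to the scoring rubrics "
                       "and explain the score for each dimension.",
        "input": (f"Scoring rubrics:\n{rubrics}\n\n"
                  f"Essay prompt:\n{essay_prompt}\n\n"
                  f"Student's essay:\n{essay}"),
        "output": json.dumps({"explanations": explanations, "scores": scores},
                             ensure_ascii=False),
    }
```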
|
| 94 |
+
|
| 95 |
+
We first fine-tuned OpenAI's GPT-3.5-turbo, one of the best-performing LLMs. However, due to the proprietary nature of GPT-3.5 and considerations such as data privacy, training and inference costs, and flexibility in fine-tuning, we also fine-tuned an LLaMA3-8B $^{4}$ model with both the original and the augmented datasets. This recent open-source model mitigates these concerns and has remarkable capabilities, making it a more practical choice for use in educational scenarios.
|
| 96 |
+
|
| 97 |
+
# 4.4 Our Proposed Method
|
| 98 |
+
|
| 99 |
+
As previously mentioned, score prediction and explanation generation are distinct but interrelated tasks within the context of AES. Explanation generation, which covers the evaluation of content, language, and structure, necessitates deliberate and meticulous reasoning. On the other hand, score prediction can either be a swift process based on intuition and experiences or concluded after step-by-step inference. These features align with the idea of dual-process theory. Consequently, we have designed an integrated system that includes separate modules for each task: the Fast Module for quick score prediction and the Slow Module for detailed explanation generation. The framework of our proposed AES system is shown in Figure 2.
|
| 100 |
+
|
| 101 |
+
Slow Module: Fine-tuned LLM The Slow Module forms the core of our AES system, capable of analyzing essays in depth, providing evidence based on specific rubrics, and deriving appropriate scores. This detailed process is time-intensive but yields valuable natural language reasoning that informs the final grading decision. In this study, we implemented the fine-tuned LLaMA3-8B as the Slow Module. It is worth noting that this module is interchangeable with any other qualified LLM, demonstrating the flexibility of our framework.
|
| 102 |
+
|
| 103 |
+
Fast Module: Lightweight Classifier In many cases, swift score prediction is preferable to detailed reasoning. To reduce the time and computational cost associated with generating detailed explanations, we introduced a simple fully connected layer as a bypass before the initiation of text generation by the Slow Module. By using only the embeddings of the input text, the Fast Module not only conserves resources but also leverages the latent knowledge acquired during the fine-tuning of the Slow Module, aligning with the 'intuitive' facet of Fast Thinking.
|
| 104 |
+
|
| 105 |
+
When to switch from the Fast to Slow Thinking module is one of the challenges in the design of
|
| 106 |
+
|
| 107 |
+
a dual-process LLM. Previous frameworks employed heuristic rules or error feedback as the triggering criteria (Lin et al., 2024; Tian et al., 2023), which might be impractical in real-world cases. Our Fast Module also calculates the probabilities of each possible output score, which we standardize and treat as confidence scores. Predictions with low confidence are considered unreliable, triggering the Slow Module for self-reflection or passing the case to external judges (either human or AI). This design aims to enhance essay scoring accuracy and foster effective human-AI collaboration, potentially elevating the complementary team performance.
|
| 108 |
+
|
| 109 |
+
For training, we first fine-tune the Slow Module using our explanation-augmented dataset. Subsequently, we employ the Slow Module to derive input embeddings, which, paired with the rated scores, are used to train the Fast Classifier from scratch. During inference, essay inputs initially pass through the fine-tuned LLM and are transformed into the embedding format. They are then processed by the Fast Module to quickly derive scores. The Slow Module is activated only when prediction confidence is low or based on specific additional requirements.
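As an illustration of this routing logic (not the authors' exact code), a minimal sketch might look as follows; `embed`, `fast_head`, and `slow_module` are placeholders for the fine-tuned LLM's embedding function, the lightweight classifier, and the explanation-generating LLM, and the confidence threshold is an assumed value.

```python
import torch.nn.functional as F

def grade(essay_text, embed, fast_head, slow_module, threshold=0.7):
    """Fast pass first; fall back to the Slow Module when prediction confidence is low."""
    emb = embed(essay_text)                    # hidden representation from the fine-tuned LLM
    probs = F.softmax(fast_head(emb), dim=-1)  # standardized distribution over possible scores
    confidence, score = probs.max(dim=-1)
    if confidence.item() >= threshold:
        return {"score": int(score), "confidence": float(confidence), "explanation": None}
    # Low confidence: run the deliberate, rubric-grounded Slow Module (or defer to a human)
    explanation, slow_score = slow_module(essay_text)
    return {"score": slow_score, "confidence": float(confidence), "explanation": explanation}
```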
|
| 110 |
+
|
| 111 |
+
# 5 Experimental Results
|
| 112 |
+
|
| 113 |
+
# 5.1 Performance of LLM-based Methods
|
| 114 |
+
|
| 115 |
+
We conducted experiments across all eight subsets of the ASAP dataset using both the LLM-based methods and baseline approaches. We adopted Cohen's Quadratic Weighted Kappa (QWK) as our primary evaluation metric, which is the most widely recognized automatic metric in AES tasks (Ramesh and Sanampudi, 2022). A higher QWK value indicates a greater degree of agreement between the predicted score and the ground truth. For methods requiring a training dataset, we divided the data for each subset using an 80:20 split ratio between training and testing.
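For reference, QWK can be computed directly with scikit-learn's `cohen_kappa_score`; the score lists below are made-up values for illustration only.

```python
from sklearn.metrics import cohen_kappa_score

human_scores = [8, 10, 12, 15, 9]   # illustrative ground-truth scores
model_scores = [8, 11, 12, 14, 10]  # illustrative predictions
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```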
|
| 116 |
+
|
| 117 |
+
Our extensive experiments, as detailed in Table 1, revealed that despite using carefully curated prompts and providing detailed context, the zero-shot and few-shot capabilities of GPT-4 did not yield high QWK scores on the ASAP dataset. In zero-shot scenarios, GPT-4's performance was notably low, with some subsets scoring nearly as poorly as random guessing. For instance, Set 1 recorded a QWK of 0.0423 and Set 7 a QWK of 0.0809. This underperformance may be due to the broad scoring ranges and complex rubrics in
|
| 118 |
+
|
| 119 |
+
ASAP, suggesting that even advanced LLMs like GPT-4 may struggle to fully comprehend and adhere to complicated human instructions. In few-shot settings, although there was an improvement in scoring performance, particularly for Sets 4-6, GPT-4 still significantly lagged behind SOTA grading methods. This is consistent with findings from recent studies that utilize LLMs for essay scoring.
|
| 120 |
+
|
| 121 |
+
When fine-tuned with the training dataset, the LLMs demonstrated significantly improved performance compared to the zero-shot and few-shot results, with QWK scores generally exceeding 0.7. However, these fine-tuned LLMs still did not surpass traditional SOTA methods. Within our framework, the performance of the fine-tuned open-source LLaMA3-8B was comparable to that of fine-tuned proprietary models. Even simple supervised fine-tuning (SFT) of LLaMA3 achieved notable results, suggesting that open-source LLMs might be a cost-effective choice for AES tasks. The findings from our CSEE dataset (see Table 2) align with those on the ASAP dataset, indicating that our framework predicts reliable scores across content, language, and structure dimensions.
|
| 122 |
+
|
| 123 |
+
Although LLMs do not match traditional methods in terms of scoring accuracy, they excel at generating detailed explanations, benefiting both educators and students. Notably, when trained to produce both scores and explanations in a single output (our proposed Slow Module), LLaMA3-8B experienced a performance drop in grading accuracy. This decrease may be attributed to the model's optimization process, where numerical score values are treated similarly to textual data in the output, leading to suboptimal accuracy. In our Fast and Slow Thinking framework, however, separating numerical from textual outputs and integrating them based on a trigger condition improved the QWK scores, enhancing collaborative performance. Additionally, we evaluated the quality of explanations generated by our AES system against those produced by GPT-4. In a pairwise comparison by crowdsourced workers over 20 sets of paired essay-grading explanations, our system achieved a win rate of $35\%$, a tie rate of $40\%$, and a loss rate of $25\%$. These results demonstrate that our explanations are of high quality and comparable to those generated by GPT-4.
|
| 124 |
+
|
| 125 |
+
# 5.2 Further Analyses
|
| 126 |
+
|
| 127 |
+
Consistency To assess the consistency of scores predicted by LLM-based methods, we conducted
|
| 128 |
+
|
| 129 |
+
Table 1: Comparison of QWK scores for LLM-based methods and the baselines under the ASAP dataset. The "E." column indicates whether the model output includes natural language explanations alongside the predicted scores.
|
| 130 |
+
|
| 131 |
+
<table><tr><td></td><td>E.</td><td>Set 1</td><td>Set 2</td><td>Set 3</td><td>Set 4</td><td>Set 5</td><td>Set 6</td><td>Set 7</td><td>Set 8</td><td>Avg.</td></tr><tr><td>BERT Classifier</td><td>✘</td><td>0.6486</td><td>0.6284</td><td>0.7327</td><td>0.7669</td><td>0.7432</td><td>0.6810</td><td>0.7165</td><td>0.4624</td><td>0.6725</td></tr><tr><td>Tran-BERT-MS-ML-R</td><td>✘</td><td>0.8340</td><td>0.7160</td><td>0.7140</td><td>0.8120</td><td>0.8130</td><td>0.8360</td><td>0.8390</td><td>0.7660</td><td>0.7910</td></tr><tr><td>R2BERT</td><td>✘</td><td>0.8170</td><td>0.7190</td><td>0.6980</td><td>0.8450</td><td>0.8410</td><td>0.8470</td><td>0.8390</td><td>0.7440</td><td>0.7940</td></tr><tr><td>GPT-4, zero-shot, w/o rubrics</td><td>✓</td><td>0.0423</td><td>0.4017</td><td>0.2805</td><td>0.5571</td><td>0.3659</td><td>0.5021</td><td>0.0809</td><td>0.4188</td><td>0.3312</td></tr><tr><td>GPT-4, zero-shot, with rubrics</td><td>✓</td><td>0.0715</td><td>0.3003</td><td>0.3661</td><td>0.6266</td><td>0.5227</td><td>0.3448</td><td>0.1101</td><td>0.4072</td><td>0.3437</td></tr><tr><td>GPT-4, few-shot, with rubrics</td><td>✓</td><td>0.2801</td><td>0.3376</td><td>0.3308</td><td>0.7839</td><td>0.6226</td><td>0.7284</td><td>0.2570</td><td>0.4541</td><td>0.4743</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>✘</td><td>0.7406</td><td>0.6183</td><td>0.7041</td><td>0.8593</td><td>0.7959</td><td>0.8480</td><td>0.7271</td><td>0.6135</td><td>0.7384</td></tr><tr><td>Fine-tuned LLaMA3</td><td>✘</td><td>0.7137</td><td>0.6696</td><td>0.6558</td><td>0.7712</td><td>0.7452</td><td>0.7489</td><td>0.6938</td><td>0.2952</td><td>0.6617</td></tr><tr><td>Ours</td><td>✓</td><td>0.7612</td><td>0.6517</td><td>0.7238</td><td>0.8093</td><td>0.8118</td><td>0.7764</td><td>0.7071</td><td>0.4885</td><td>0.7162</td></tr><tr><td>Fast module</td><td>✘</td><td>0.7580</td><td>0.6395</td><td>0.7228</td><td>0.7995</td><td>0.8023</td><td>0.7753</td><td>0.7157</td><td>0.5075</td><td>0.7151</td></tr><tr><td>Slow module</td><td>✓</td><td>0.6048</td><td>0.5621</td><td>0.5700</td><td>0.6992</td><td>0.6774</td><td>0.5943</td><td>0.5772</td><td>0.2677</td><td>0.5691</td></tr></table>
|
| 132 |
+
|
| 133 |
+
Table 2: Comparison of QWK scores for LLM-based methods and the baselines under our CSEE dataset. The "E." column indicates whether the model output includes natural language explanations alongside the predicted scores.
|
| 134 |
+
|
| 135 |
+
<table><tr><td></td><td>E.</td><td>Overall</td><td>Content</td><td>Language</td><td>Structure</td></tr><tr><td>BERT Classifier</td><td>✘</td><td>0.7674</td><td>0.7312</td><td>0.7203</td><td>0.6650</td></tr><tr><td>GPT-4, zero-shot, w/o rubrics</td><td>✓</td><td>0.4688</td><td>0.4412</td><td>0.3081</td><td>0.5757</td></tr><tr><td>GPT-4, zero-shot, with rubrics</td><td>✓</td><td>0.5344</td><td>0.5391</td><td>0.4660</td><td>0.4256</td></tr><tr><td>GPT-4, few-shot, with rubrics</td><td>✓</td><td>0.6729</td><td>0.6484</td><td>0.6278</td><td>0.4661</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>✘</td><td>0.7532</td><td>0.7241</td><td>0.7513</td><td>0.6576</td></tr><tr><td>Fine-tuned LLaMA3</td><td>✘</td><td>0.7544</td><td>0.7321</td><td>0.7084</td><td>0.6461</td></tr><tr><td>Ours</td><td>✓</td><td>0.7634</td><td>0.7347</td><td>0.7192</td><td>0.6656</td></tr><tr><td>Fast module</td><td>✘</td><td>0.7364</td><td>0.7272</td><td>0.7072</td><td>0.6627</td></tr><tr><td>Slow module</td><td>✓</td><td>0.7310</td><td>0.6810</td><td>0.6990</td><td>0.6412</td></tr></table>
|
| 136 |
+
|
| 137 |
+
the same experiment three times, each with the temperature parameter of the LLMs set to 0. We observed that over $80\%$ of the ratings remained unchanged across these trials, indicating a high level of consistency. We then computed the average of these three values to determine the final results.
|
| 138 |
+
|
| 139 |
+
Generalizability The eight subsets of the ASAP dataset, featuring diverse scoring criteria and ranges, serve as an excellent framework for evaluating the generalization capabilities of models. For methods such as fine-tuning and traditional baselines that require training data, we first trained the models on one subset and then assessed their performance across the remaining datasets. For example, we trained on Set 1 and tested on Sets 2-8, keeping the model weights fixed. We selected fine-tuned GPT-3.5 and the BERT Classifier to represent LLM-based and traditional methods, respectively. As detailed in Table 7, our fine-tuned GPT-3.5 generally outperformed the BERT classifier, although
|
| 140 |
+
|
| 141 |
+
there were instances of underperformance, notably when trained on Set 4 and tested on Sets 1 and 7. The BERT classifier showed particularly weak generalization when trained on Sets 7 and 8, performing close to random guessing.
|
| 142 |
+
|
| 143 |
+
Prediction Confidence and Self-Reflection To assess the reliability of confidence scores, we segmented the test samples based on the output confidence and observed a strong correlation between these scores and model performance in Figure 3. The trigger condition for switching from the Fast to the Slow Module is set when the confidence score falls below 0.2. Although the Slow Module generally exhibits lower performance compared to the Fast Module, the overall performance of the integrated system improved. This enhancement suggests that employing detailed reasoning for cases with low confidence is an effective grading strategy.
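
A minimal sketch of the confidence analysis behind Figure 3: bin test predictions by the Fast Module's confidence and compute QWK within each bin. The bin edges are illustrative assumptions rather than the exact values used in the paper.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def qwk_by_confidence(y_true, y_pred, confidence, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Compute QWK separately for predictions whose confidence falls in each bin."""
    y_true, y_pred, confidence = map(np.asarray, (y_true, y_pred, confidence))
    results = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins [lo, hi); confidence exactly 1.0 is ignored for simplicity.
        mask = (confidence >= lo) & (confidence < hi)
        if mask.sum() > 1:  # need at least two samples for a meaningful kappa
            results[(lo, hi)] = cohen_kappa_score(
                y_true[mask], y_pred[mask], weights="quadratic"
            )
    return results
```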
|
| 144 |
+
|
| 145 |
+
Time Efficiency Training the Slow Module for each epoch with our explanation-augmented
|
| 146 |
+
|
| 147 |
+
dataset requires around 2 hours using an RTX 4090 24GB GPU, and the inference process consumes about 10 GPU hours. Meanwhile, training the Fast Module takes less than 0.5 hours, and scoring predictions are completed in just 0.2 hours. Our proposed framework, which incorporates a confidence trigger condition, offers an effective trade-off by enhancing both accuracy and efficiency.
|
| 148 |
+
|
| 149 |
+

|
| 150 |
+
Figure 3: QWK scores of our Fast module in different confidence levels.
|
| 151 |
+
|
| 152 |
+
# 6 Human-AI Co-Grading Experiment
|
| 153 |
+
|
| 154 |
+
Given that the AES system not only provides score predictions but also functions as a teammate to educators, we further explore the effectiveness of our proposed system in assisting human grading.
|
| 155 |
+
|
| 156 |
+
# 6.1 Experiment Design
|
| 157 |
+
|
| 158 |
+
To investigate the performance of human-only, AI-only, and human-AI team collaboration, we conducted a two-stage within-group experiment. We randomly selected 50 essays from the test set of our CSEE dataset, all on the same topic. We recruited 10 college students from a Normal University in Beijing, who are prospective high school teachers with no current grading experience, to serve as novice evaluators. Additionally, 5 experienced high school English teachers participated as expert evaluators. Initially, all evaluators graded the essays independently using standard rubrics. Subsequently, they were provided with the scores, prediction confidence levels, and explanations generated by our AES system and had the option to revise their initial scores based on this augmented information. To gather feedback on the process, we distributed questionnaires where evaluators rated their experience on a 5-point Likert scale, with higher scores indicating better-perceived performance.
|
| 159 |
+
|
| 160 |
+
In short, we mainly focus on the following research questions:
|
| 161 |
+
|
| 162 |
+
- Can novice and expert human evaluators achieve complementary performance in terms of accuracy and efficiency using the proposed AES system and collaborative workflow?
|
| 163 |
+
|
| 164 |
+
- Does the design of prediction confidence and explanation generation contribute to performance improvements?
|
| 165 |
+
|
| 166 |
+
# 6.2 Results
|
| 167 |
+
|
| 168 |
+
Feedback generated by LLM elevates novice evaluators to expert level. As depicted in Figure 4 and Table 3, our findings reveal that novice graders, with the assistance of LLM-generated feedback (including both scores and explanations), achieved a significant improvement in performance. Their average QWK improved from 0.5256 to 0.6609, with a p-value of less than 0.01. Furthermore, when comparing the performance of LLM-assisted novice evaluators (mean QWK of 0.6609) to that of expert graders (mean QWK of 0.7117), no statistical difference was found between the two groups $(\mathrm{p - value} = 0.27)$ . This indicates that with LLM support, novice evaluators achieved a level of grading proficiency comparable to that of experienced experts. Similar trends were observed in the scores for content, language, and structure, with detailed results presented in Table 5.
|
| 169 |
+
|
| 170 |
+
Table 3: $t$-test of QWK scores for different experimental groups. Diff. denotes the difference in mean QWK between the two groups.
|
| 171 |
+
|
| 172 |
+
<table><tr><td></td><td>Diff.</td><td>t statistic</td><td>p-value</td></tr><tr><td>Expert vs. Novice</td><td>0.1860***</td><td>3.2152</td><td>0.0068</td></tr><tr><td>Novice+LLM vs. Novice</td><td>0.1353***</td><td>2.8882</td><td>0.0098</td></tr><tr><td>Expert+LLM vs. Expert</td><td>0.0617</td><td>1.7128</td><td>0.1251</td></tr><tr><td>Novice+LLM vs. Expert</td><td>-0.0508</td><td>-1.1566</td><td>0.2682</td></tr></table>
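
The comparisons in Table 3 can be reproduced with standard two-sample t-tests over per-evaluator QWK scores; the sketch below uses placeholder values and an independent-samples test (a paired test may be more appropriate for the within-subject comparisons), so it illustrates the procedure rather than the exact analysis.

```python
from scipy import stats

# Placeholder per-evaluator QWK scores (not the actual experimental values).
novice_qwk = [0.51, 0.49, 0.55, 0.52, 0.54, 0.50, 0.53, 0.56, 0.52, 0.51]
novice_with_llm_qwk = [0.64, 0.67, 0.65, 0.66, 0.68, 0.63, 0.67, 0.66, 0.65, 0.70]

t_stat, p_value = stats.ttest_ind(novice_with_llm_qwk, novice_qwk)
diff = sum(novice_with_llm_qwk) / len(novice_with_llm_qwk) - sum(novice_qwk) / len(novice_qwk)
print(f"diff = {diff:.4f}, t = {t_stat:.4f}, p = {p_value:.4f}")
```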
|
| 173 |
+
|
| 174 |
+
Feedback generated by LLM boosts expert efficiency and consistency. The integration of LLM-generated feedback into the expert grading process led to an increase in the average QWK from 0.7117 to 0.7734, which also surpassed the performance of AES systems (a QWK of 0.7302) for these essay samples, thereby achieving superior complementary performance (where the Human-AI team outperforms both individual human and AI). Although this improvement is not statistically significant ( $p$ -value = 0.13), the benefits of LLM augmentation for experts were evident in several other aspects. According to self-report questionnaires (refer to Table 4), experts required less time to complete
|
| 175 |
+
|
| 176 |
+

|
| 177 |
+
Figure 4: QWK of the overall score in the LLM-assisted co-grading experiment for the novice and expert evaluators. The blue bar and orange bar of the LLM column indicate the performance of our Fast module and the integrated system, respectively.
|
| 178 |
+
|
| 179 |
+
grading tasks when assisted by the LLM. Furthermore, a reduction in the standard deviation of expert ratings was observed, indicating a higher level of consensus among experts. This suggests that LLM-generated feedback leads to more consistent evaluations of student essays. Experienced domain experts also commended the accuracy and practicality of the LLM-generated feedback, particularly praising the prediction confidence mechanism which alerted them to scrutinize more challenging cases. These findings highlight the potential to augment the human grading process with our AES system in real-world educational environments.
|
| 180 |
+
|
| 181 |
+
Table 4: Experts' feedback after grading student essays with the support of the LLM-based system.
|
| 182 |
+
|
| 183 |
+
<table><tr><td></td><td>Score</td></tr><tr><td>Perceived accuracy of LLM overall score</td><td>4.3/5</td></tr><tr><td>Perceived accuracy of LLM content score</td><td>4.0/5</td></tr><tr><td>Perceived accuracy of LLM language score</td><td>3.9/5</td></tr><tr><td>Perceived accuracy of LLM structure score</td><td>3.8/5</td></tr><tr><td>Helpfulness of the predicted scores</td><td>4.6/5</td></tr><tr><td>Helpfulness of the confidence scores</td><td>4.8/5</td></tr><tr><td>Helpfulness of LLM explanations</td><td>4.7/5</td></tr><tr><td>Efficiency of LLM assistance</td><td>4.4/5</td></tr><tr><td>Willingness to use our AES system</td><td>4.3/5</td></tr></table>
|
| 184 |
+
|
| 185 |
+
The Importance of Prediction Confidence and Explanations We previously assessed the reliability of prediction confidence from our Fast Module and noted a modest improvement in model performance after self-reflection by the Slow Module (as shown in the LLM column of Figure 4). In the context of human-AI collaboration, we focused on cases where the predicted scores presented to
|
| 186 |
+
|
| 187 |
+
human evaluators were of low confidence (below 0.2). We observed that the overall QWK scores for expert and novice evaluators were 0.6809 and 0.5680, respectively. These QWK values, lower than average human performance, suggest that these essays are inherently challenging to grade, even for humans. However, human performance still exceeded the LLM Slow Module's QWK of 0.5478, achieving complementary team performance. These findings support a practical, intuitive LLM-assisted decision-making workflow: the model manages routine cases with high confidence and minimal human intervention, while low-confidence cases are presented to human collaborators for in-depth analysis and final decision-making.
|
| 188 |
+
|
| 189 |
+
# 7 Conclusion
|
| 190 |
+
|
| 191 |
+
In this study, we explored the capabilities of LLMs within AES systems. With detailed contexts, clear rubrics, and high-quality examples, GPT-4 demonstrated satisfactory performance, consistency, and generalizability. Further accuracy enhancements were achieved through supervised fine-tuning using task-specific instruction datasets, bringing LLM performance close to conventional SOTA methods. To leverage the LLMs' ability to generate natural language explanations along with predicted scores, we introduced an open-source Fast and Slow Thinking AES framework. This framework not only matches the quality of proprietary models but also offers greater efficiency.
|
| 192 |
+
|
| 193 |
+
Our research extended into human-AI co-grading experiments within this new framework. A notable finding was that LLMs not only automated the grading process but also augmented the grading skills of human evaluators. Novice graders, with support from our AES framework, reached accuracy levels comparable to those of experienced graders, while expert graders showed gains in efficiency and consistency. The collaboration between humans and AI particularly enhanced performance in handling low-confidence cases, demonstrating a significant synergy that approached the upper limits of team performance. These results highlight the transformative potential of AI-assisted and human-centered decision-making workflows, especially in elevating those with limited domain knowledge to expert-level proficiency. This study illuminates promising future directions for human-AI collaboration and underscores the evolving role of AI in educational contexts.
|
| 194 |
+
|
| 195 |
+
# Limitations
|
| 196 |
+
|
| 197 |
+
This study has certain limitations. Firstly, although our CSEE dataset includes a substantial number of student essays, these essays originate from only two final exams designed for high school English learners in China. This raises concerns about the robustness of our proposed AES system when applied to a broader range of topics and diverse student demographics. Secondly, our human-AI collaboration experiment, while indicative of promising directions for future human-AI co-grading tasks, is a pilot study that yields general results. Further experiments are necessary to thoroughly explore the mechanisms of complementary team performance, such as identifying circumstances under which humans are likely to recognize and correct their errors following AI feedback, or instances where unreliable AI feedback could potentially mislead them. A deeper understanding of these collaboration mechanisms will enable researchers to develop AES systems that offer more effective support to educators.
|
| 198 |
+
|
| 199 |
+
# Ethical Considerations
|
| 200 |
+
|
| 201 |
+
We secured Institutional Review Board (IRB) approval for both the data collection and the human-AI co-grading experiment (details provided in the online materials). In our CSEE dataset, all personal information concerning the students has been anonymized to safeguard their privacy. The dataset consists solely of essays and the corresponding scores, omitting any additional information that might raise ethical concerns. However, details of the data annotation process remain undisclosed to us, including the number of teachers involved in the scoring and the level of inter-annotator agreement among them. We have also obtained explicit consent from both teachers and students to use the data exclusively for research purposes.
|
| 202 |
+
|
| 203 |
+
# References
|
| 204 |
+
|
| 205 |
+
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI conference on human factors in computing systems, pages 1-16.
|
| 206 |
+
Samuel C Bellini-Leite. 2023. Dual process theory for large language models: An overview of using
|
| 207 |
+
|
| 208 |
+
psychology to address hallucination and reliability issues. Adaptive Behavior.
|
| 209 |
+
Martin Benfeghoul, Umais Zahid, Qinghai Guo, and Zafeirios Fountas. 2024. When in doubt, think slow: Iterative reasoning with latent imagination. arXiv preprint arXiv:2402.15283.
|
| 210 |
+
Eujene Nikka V. Boquio and Prospero C. Naval, Jr. 2024. Beyond canonical fine-tuning: Leveraging hybrid multi-layer pooled representations of BERT for automated essay scoring. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2285-2295, Torino, Italia. ELRA and ICCL.
|
| 211 |
+
Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring—an empirical study. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1072-1077.
|
| 212 |
+
Eva Eigner and Thorsten Händler. 2024. Determinants of llm-assisted decision-making. arXiv preprint arXiv:2402.17385.
|
| 213 |
+
Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, Hyunseung Lim, Yoonsu Kim, Tak Yeon Lee, Hwajung Hong, Juho Kim, So-Yeon Ahn, et al. 2023. Fabric: Automated scoring and feedback generation for essays. arXiv preprint arXiv:2310.05191.
|
| 214 |
+
Kenneth Holstein and Vincent Aleven. 2022. Designing for human-ai complementarity in k-12 education. AI Magazine, 43(2):239-248.
|
| 215 |
+
D. Kahneman. 2011. Thinking, Fast and Slow. Harvard Library e-reader collection. Farrar, Straus and Giroux.
|
| 216 |
+
Mahammed Kamruzzaman and Gene Louis Kim. 2024. Prompting techniques for reducing social bias in llms through system 1 and system 2 cognitive processes. arXiv preprint arXiv:2404.17218.
|
| 217 |
+
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations.
|
| 218 |
+
Andrew Lampinen, Ishita Dasgupta, Stephanie Chan, Kory Mathewson, Mh Tessler, Antonia Creswell, James McClelland, Jane Wang, and Felix Hill. 2022. Can language models learn from explanations in context? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 537-563, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 219 |
+
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315-5333, Toronto, Canada. Association for Computational Linguistics.
|
| 220 |
+
|
| 221 |
+
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. 2024. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. Advances in Neural Information Processing Systems, 36.
|
| 222 |
+
Jiaqi Lun, Jia Zhu, Yong Tang, and Min Yang. 2020. Multiple data augmentation strategies for improving performance on automatic short answer scoring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13389-13396.
|
| 223 |
+
Sandeep Mathias and Pushpak Bhattacharyya. 2018a. Asap++: Enriching the asap automated essay grading dataset with essay attribute scores. In Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018).
|
| 224 |
+
Sandeep Mathias and Pushpak Bhattacharyya. 2018b. Thank "goodness"! a way to measure style in student essays. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 35-41.
|
| 225 |
+
Atsushi Mizumoto and Masaki Eguchi. 2023. Exploring the potential of using an ai language model for automated essay scoring. Research Methods in Applied Linguistics, 2(2):100050.
|
| 226 |
+
Ben Naismith, Phoebe Mulcaire, and Jill Burstein. 2023. Automated evaluation of written discourse coherence using GPT-4. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 394–403, Toronto, Canada. Association for Computational Linguistics.
|
| 227 |
+
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.
|
| 228 |
+
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083.
|
| 229 |
+
Dadi Ramesh and Suresh Kumar Sanampudi. 2022. An automated essay scoring systems: a systematic literature review. Artificial Intelligence Review, 55(3):2495-2527.
|
| 230 |
+
Pedro Uria Rodriguez, Amir Jafari, and Christopher M Ormerod. 2019. Language models and automated essay scoring. arXiv preprint arXiv:1909.09482.
|
| 231 |
+
Yafet Salim, Valdi Stevanus, Edwardo Barlian, Azani Cempaka Sari, and Derwin Suhartono. 2019. Automated english digital essay grader using machine learning. In 2019 IEEE International Conference on Engineering, Technology and Education (TALE), pages 1-6. IEEE.
|
| 232 |
+
|
| 233 |
+
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrievalaugmented black-box language models. arXiv preprint arXiv:2301.12652.
|
| 234 |
+
Maja Stahl, Leon Biermann, Andreas Nehring, and Henning Wachsmuth. 2024. Exploring llm prompting strategies for joint essay scoring and feedback generation. arXiv preprint arXiv:2404.15845.
|
| 235 |
+
Md Arafat Sultan, Cristobal Salazar, and Tamara Sumner. 2016. Fast and easy short answer grading with high accuracy. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1070-1075.
|
| 236 |
+
Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1882-1891.
|
| 237 |
+
Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, and Ming Cui. 2023. Duma: a dual-mind conversational agent with fast and slow thinking. arXiv preprint arXiv:2310.18075.
|
| 238 |
+
Karel van den Bosch, Tjeerd Schoonderwoerd, Romy Blankendaal, and Mark Neerincx. 2019. Six challenges for human-ai co-learning. In Adaptive Instructional Systems: First International Conference, AIS 2019, Held as Part of the 21st HCI International Conference, HCI II 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings 21, pages 572–589. Springer.
|
| 239 |
+
Yongjie Wang, Chuang Wang, Ruobing Li, and Hui Lin. 2022. On the use of bert for automated essay scoring: Joint learning of multi-scale essay representation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3416-3425, Seattle, United States. Association for Computational Linguistics.
|
| 240 |
+
Peter C Wason and J St BT Evans. 1974. Dual processes in reasoning? Cognition, 3(2):141-154.
|
| 241 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
|
| 242 |
+
Kevin P. Yancey, Geoffrey LaFlair, Anthony Verardi, and Jill Burstein. 2023. Rating short L2 essays on the CEFR scale with GPT-4. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 576-584, Toronto, Canada. Association for Computational Linguistics.
|
| 243 |
+
Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and
|
| 244 |
+
|
| 245 |
+
ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1560-1569.
|
| 246 |
+
|
| 247 |
+
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations.
|
| 248 |
+
|
| 249 |
+
# A Datasets
|
| 250 |
+
|
| 251 |
+
The details of the ASAP dataset are presented in Table 6. As previously mentioned, this dataset is composed of 8 subsets, each with unique prompts and scoring rubrics. Our Chinese Student English Essay (CSEE) dataset consists of 13,372 essays, along with their corresponding scores carefully rated by experienced English teachers based on the scoring standards in the Chinese National College Entrance Examination (Table 8). The basic statistics of this dataset are outlined in Table 5.
|
| 252 |
+
|
| 253 |
+
Table 5: Descriptive statistics of our private dataset.
|
| 254 |
+
|
| 255 |
+
<table><tr><td colspan="2">Chinese Student English Essay Dataset</td></tr><tr><td># of schools</td><td>29</td></tr><tr><td># of essay prompts</td><td>2</td></tr><tr><td># of student essays</td><td>13,372</td></tr><tr><td>avg. essay length</td><td>124.74</td></tr><tr><td>avg. Overall score</td><td>10.72</td></tr><tr><td>avg. Content score</td><td>4.13</td></tr><tr><td>avg. Language score</td><td>4.05</td></tr><tr><td>avg. Structure score</td><td>2.55</td></tr></table>
|
| 256 |
+
|
| 257 |
+
# B Details of BERT Classifier Baseline
|
| 258 |
+
|
| 259 |
+
We employed the bert-base-uncased BERT model from the huggingface transformers library using PyTorch. A simple fully connected layer was added to perform the classification task. The datasets were divided into training and testing sets at an 8:2 ratio. To ensure reproducibility, we set all random seeds, including those for dataset splitting and model training, to the value 42. During training, we used cross-entropy loss as our loss function. All BERT parameters were fine-tuned rather than frozen. AdamW was chosen as the optimizer, with a learning rate of $10^{-5}$ and an epsilon of $10^{-6}$. We used a batch size of 16 and a maximum of 10 training epochs, with an early stopping strategy to mitigate potential overfitting. All BERT baseline experiments were run on 2 RTX A4000 16GB GPUs over around one week.
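
A condensed sketch of this baseline configuration, assuming the standard huggingface transformers classification head (data loading, the training loop, and the early stopping logic are omitted):

```python
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

torch.manual_seed(42)  # all random seeds fixed to 42 for reproducibility

NUM_CLASSES = 11  # e.g. scores 2-12 for ASAP Set 1; this varies per subset

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# The classification head is a fully connected layer on top of BERT; all BERT
# parameters remain trainable, and cross-entropy is the default loss for this head.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_CLASSES
)

optimizer = AdamW(model.parameters(), lr=1e-5, eps=1e-6)
BATCH_SIZE = 16
MAX_EPOCHS = 10  # combined with early stopping on a held-out split
```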
|
| 260 |
+
|
| 261 |
+
# C Details of LLM-based Methods
|
| 262 |
+
|
| 263 |
+
# C.1 LLM Prompts
|
| 264 |
+
|
| 265 |
+
The prompts used for LLMs in our study fall into two distinct categories: firstly, the zero-shot and few-shot configurations of GPT-4; secondly, the
|
| 266 |
+
|
| 267 |
+
Table 6: Descriptive statistics of the ASAP dataset.
|
| 268 |
+
|
| 269 |
+
<table><tr><td>Essay Set</td><td>Essay Type</td><td>Grade Level</td><td># of Essays</td><td>Avg. Length</td><td>Score Range</td></tr><tr><td>1</td><td>Persuasive/Narrative/Expository</td><td>8</td><td>1783</td><td>350</td><td>[2, 12]</td></tr><tr><td>2</td><td>Persuasive/Narrative/Expository</td><td>10</td><td>1800</td><td>350</td><td>[1, 6]</td></tr><tr><td>3</td><td>Source Dependent Responses</td><td>10</td><td>1726</td><td>150</td><td>[0, 3]</td></tr><tr><td>4</td><td>Source Dependent Responses</td><td>10</td><td>1772</td><td>150</td><td>[0, 3]</td></tr><tr><td>5</td><td>Source Dependent Responses</td><td>8</td><td>1805</td><td>150</td><td>[0, 4]</td></tr><tr><td>6</td><td>Source Dependent Responses</td><td>10</td><td>1800</td><td>150</td><td>[0, 4]</td></tr><tr><td>7</td><td>Persuasive/Narrative/Expository</td><td>7</td><td>1569</td><td>300</td><td>[0, 12]</td></tr><tr><td>8</td><td>Persuasive/Narrative/Expository</td><td>10</td><td>723</td><td>650</td><td>[0, 36]</td></tr></table>
|
| 270 |
+
|
| 271 |
+
instructions for fine-tuning and inference of GPT-3.5 and LLaMA3-8B. The prompts for the few-shot scenario incorporate those used in the zero-shot setting and overlap with the fine-tuning prompts. Therefore, for clarity and conciseness, Table 9 presents examples of the GPT-4 few-shot-with-rubrics prompt and the inputs of the fine-tuned LLaMA3-8B.
|
| 272 |
+
|
| 273 |
+
# C.2 Few-Shot GPT-4
|
| 274 |
+
|
| 275 |
+
In the few-shot setting of GPT-4 with $k$ essay examples, as indicated by prior studies in AES tasks (Yancey et al., 2023), increasing the value of $k$ did not consistently yield better results, showing a trend of diminishing marginal returns. Therefore, we chose $k = 3$ in this study.
|
| 276 |
+
|
| 277 |
+
We explored two sampling approaches. The first involved randomly selecting essays from various levels of quality to help LLM understand the approximate level of the target essay. The second method adopted a retrieval-based approach, which has been proven to be effective in enhancing LLM performance (Khandelwal et al., 2020; Shi et al., 2023; Ram et al., 2023). Leveraging OpenAI's text-embedding-ada-002 model, we calculated the embedding for each essay. This allowed us to identify the top $k$ similar essays based on cosine similarity (excluding the target essay). Our experiments demonstrated that this retrieval strategy consistently yielded superior results. Therefore, we focused on the latter approach in this study.
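
A sketch of this retrieval step: embed each candidate essay, then select the top-$k$ essays most similar to the target by cosine similarity. The OpenAI client usage below assumes the current openai-python interface and is illustrative rather than the exact implementation.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

def top_k_similar(target_essay: str, train_essays: list, k: int = 3) -> list:
    target = embed(target_essay)
    train_vecs = np.stack([embed(e) for e in train_essays])
    # Cosine similarity between the target essay and every training essay.
    sims = train_vecs @ target / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(target)
    )
    return [train_essays[i] for i in np.argsort(sims)[::-1][:k]]
```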
|
| 278 |
+
|
| 279 |
+
# C.3 Fine-tuning LLaMA3
|
| 280 |
+
|
| 281 |
+
We fine-tuned the llama-3-8b-bnb-4bit model using the unsloth framework. For this process, we employed a Parameter-Efficient Fine-Tuning (PEFT) approach with a rank of 16 and a LoRA alpha value of 16. We utilized an 8-bit AdamW optimizer, starting with an initial learning rate of $2 \times 10^{-4}$ . After 50 warm-up steps, the learning rate was scheduled to decay linearly, with the weight decay parameter
|
| 282 |
+
|
| 283 |
+
set at 0.01. We maintained all random seeds at 3407 and completed the fine-tuning over 2 epochs. All experiments involving the fine-tuned LLaMA3-8B were conducted using a single RTX 4090 24GB GPU, spanning approximately three weeks.
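
A configuration sketch matching the reported hyperparameters, assuming unsloth's `FastLanguageModel` interface together with a trl `SFTTrainer`; argument names can differ across library versions, and `train_dataset`, the sequence length, and the batch size are placeholders not reported in the paper.

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,   # placeholder; not reported in the paper
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=3407,
)

args = TrainingArguments(
    output_dir="aes-slow-module",
    per_device_train_batch_size=2,  # placeholder; not reported in the paper
    learning_rate=2e-4,
    warmup_steps=50,
    lr_scheduler_type="linear",
    weight_decay=0.01,
    num_train_epochs=2,
    optim="adamw_8bit",
    seed=3407,
)

train_dataset = ...  # the explanation-augmented instruction dataset (placeholder)
trainer = SFTTrainer(model=model, tokenizer=tokenizer,
                     train_dataset=train_dataset, args=args)
trainer.train()
```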
|
| 284 |
+
|
| 285 |
+
# D Human-AI Co-Grading Details
|
| 286 |
+
|
| 287 |
+
In our LLM-assisted human grading experiment, the 10 college students were all from a Normal University in Beijing, with a male-to-female ratio of 4:6 and ages ranging from 19 to 23 years (freshmen to seniors). Their English capabilities were certified by China's College English Test (CET). None of the novices had prior experience grading student essays. The 5 expert evaluators comprised experienced English teachers from Beijing high schools, with teaching tenures ranging from 8 to 20 years. Before evaluation, all participants received training on the standard scoring rubrics. They were also incentivized with appropriate remuneration for their participation.
|
| 288 |
+
|
| 289 |
+
The instructions for the evaluators included the standard scoring rubrics of the College Entrance Examination in China and several grading examples. The writing exercise and the essays designated for assessment were presented to the evaluators, and supplementary feedback (scores, output confidences, and explanations) was additionally provided to the experimental groups. To enhance the evaluators' comprehension and avoid possible misunderstandings, all of the provided information was translated into Chinese.
|
| 290 |
+
|
| 291 |
+
The results of the Overall, Content, Language, and Structure scores in the human-AI co-grading experiment are presented in Figure 5. We observed that the Content and Language scores exhibit a similar trend to the Overall score discussed in the Results section. The expert evaluators noted that the Structure dimension is the most ambiguous and difficult part of the grading task, and it also has the lowest QWK values among the three dimensions.
|
| 292 |
+
|
| 293 |
+
Table 7: Generalization comparison of QWK scores for the Fine-tuned GPT-3.5 and the BERT Classifier under the ASAP dataset.
|
| 294 |
+
|
| 295 |
+
<table><tr><td></td><td></td><td>Set 1</td><td>Set 2</td><td>Set 3</td><td>Set 4</td><td>Set 5</td><td>Set 6</td><td>Set 7</td><td>Set 8</td></tr><tr><td rowspan="2">Trained on Set 1</td><td>BERT Classifier</td><td>-</td><td>0.3299</td><td>0.1680</td><td>0.1380</td><td>0.3045</td><td>0.1234</td><td>0.3002</td><td>0.1541</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>-</td><td>0.5216</td><td>0.5405</td><td>0.4891</td><td>0.5076</td><td>0.6344</td><td>0.6306</td><td>0.3126</td></tr><tr><td rowspan="2">Trained on Set 2</td><td>BERT Classifier</td><td>0.2776</td><td>-</td><td>0.1975</td><td>0.2392</td><td>0.1750</td><td>0.1453</td><td>0.2474</td><td>0.3783</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.4270</td><td>-</td><td>0.4131</td><td>0.4619</td><td>0.5958</td><td>0.5579</td><td>0.5438</td><td>0.6684</td></tr><tr><td rowspan="2">Trained on Set 3</td><td>BERT Classifier</td><td>0.3468</td><td>0.4444</td><td>-</td><td>0.6230</td><td>0.6319</td><td>0.5299</td><td>0.4368</td><td>0.2427</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.3991</td><td>0.2488</td><td>-</td><td>0.7674</td><td>0.7714</td><td>0.7150</td><td>0.4964</td><td>0.1134</td></tr><tr><td rowspan="2">Trained on Set 4</td><td>BERT Classifier</td><td>0.3257</td><td>0.5332</td><td>0.6267</td><td>-</td><td>0.5483</td><td>0.4959</td><td>0.4659</td><td>0.3204</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.0631</td><td>0.3493</td><td>0.4908</td><td>-</td><td>0.6515</td><td>0.7420</td><td>0.0865</td><td>0.3419</td></tr><tr><td rowspan="2">Trained on Set 5</td><td>BERT Classifier</td><td>0.4051</td><td>0.3341</td><td>0.4264</td><td>0.4202</td><td>-</td><td>0.5243</td><td>0.3255</td><td>0.2035</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.4354</td><td>0.4301</td><td>0.5765</td><td>0.6877</td><td>-</td><td>0.7368</td><td>0.1061</td><td>0.3118</td></tr><tr><td rowspan="2">Trained on Set 6</td><td>BERT Classifier</td><td>0.3164</td><td>0.3462</td><td>0.4000</td><td>0.3067</td><td>0.4882</td><td>-</td><td>0.2303</td><td>0.3047</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.1342</td><td>0.3607</td><td>0.4579</td><td>0.3157</td><td>0.3734</td><td>-</td><td>0.0061</td><td>0.0859</td></tr><tr><td rowspan="2">Trained on Set 7</td><td>BERT Classifier</td><td>0.0975</td><td>0.0086</td><td>0.1854</td><td>0.0328</td><td>0.0554</td><td>0.1244</td><td>-</td><td>0.2917</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.5862</td><td>0.3993</td><td>0.4865</td><td>0.4425</td><td>0.4494</td><td>0.4417</td><td>-</td><td>0.2157</td></tr><tr><td rowspan="2">Trained on Set 8</td><td>BERT Classifier</td><td>0.0560</td><td>0.1102</td><td>0.0110</td><td>0.0164</td><td>0.0371</td><td>0.0454</td><td>0.1777</td><td>-</td></tr><tr><td>Fine-tuned GPT-3.5</td><td>0.2714</td><td>0.4822</td><td>0.4768</td><td>0.6009</td><td>0.4199</td><td>0.3231</td><td>0.5460</td><td>-</td></tr></table>
|
| 296 |
+
|
| 297 |
+

|
| 298 |
+
|
| 299 |
+

|
| 300 |
+
|
| 301 |
+

|
| 302 |
+
Figure 5: LLM-assisted grading experiment results for the novice, expert, and GPT-4 evaluators. From the top left to the bottom right is the result of the Overall score, Content score, Language score, and Structure score, respectively.
|
| 303 |
+
|
| 304 |
+

|
| 305 |
+
|
| 306 |
+
Table 8: Rubrics for evaluating high school student essays in our private dataset.
|
| 307 |
+
|
| 308 |
+
<table><tr><td>Rubrics</td></tr><tr><td>Overall Score (20 points) = Content Score (8 points) + Language Score (8 points) + Structure Score (4 points)
|
| 309 |
+
Content Dimension (8 points in total)</td></tr><tr><td>• 6-8 points:
|
| 310 |
+
- Content is complete with appropriate details
|
| 311 |
+
- Expression is closely related to the topic</td></tr><tr><td>• 3-5 points:
|
| 312 |
+
- Content is mostly complete
|
| 313 |
+
- Expression is fundamentally related to the topic</td></tr><tr><td>• 0-2 points:
|
| 314 |
+
- Content is incomplete
|
| 315 |
+
- Expression is barely related or completely unrelated to the topic</td></tr><tr><td>Language Dimension (8 points in total)</td></tr><tr><td>• 6-8 points:
|
| 316 |
+
- Language is accurate with diverse sentence structures and little or no errors (2 errors or fewer, 8 points; 3-4 errors, 7 points; 5-6 errors, 6 points)
|
| 317 |
+
- Language expression is mostly appropriate</td></tr><tr><td>• 3-5 points:
|
| 318 |
+
- Language is not quite accurate, with some variation in sentence structures and several errors, but they don’t impede understanding (7-8 errors, 5 points; 9-10 errors, 4 points; 11-12 errors, 3 points)
|
| 319 |
+
- Language expression is somewhat inappropriate</td></tr><tr><td>• 0-2 points:
|
| 320 |
+
- Language is hopelessly inaccurate with numerous language errors, hindering understanding (more than 12 errors)
|
| 321 |
+
- Language expression is completely inappropriate</td></tr><tr><td>Structure Dimension (4 points in total)</td></tr><tr><td>• 3-4 points:
|
| 322 |
+
- Clearly and logically structured
|
| 323 |
+
- Smooth and coherent transitions</td></tr><tr><td>• 1-2 points:
|
| 324 |
+
- Mostly clearly and logically structured
|
| 325 |
+
- Relatively smooth and coherent transitions</td></tr><tr><td>• 0-1 points:
|
| 326 |
+
- Not clearly and logically structured
|
| 327 |
+
- Fragmented and disconnected structures and sentences</td></tr></table>
|
| 328 |
+
|
| 329 |
+
Table 9: The example prompt (the GPT-4, few-shot, with rubrics approach and the fine-tuned LLaMA3-8B) and the corresponding model output.
|
| 330 |
+
|
| 331 |
+
<table><tr><td>Example Prompt</td><td>Example Output</td></tr><tr><td>As a virtual evaluator with expertise in English composition, your role is to critically analyze and grade student essays according to a predetermined set of rubrics. You are to act as an impartial judge and evaluate the essays based on the quality of the writing and adherence to the essay prompt.</td><td>Explanations: The student's essay responds accurately to Jim's request for suggestions on how to learn Chinese, providing thoughtful advice and support. Detailed suggestions include listening to Chinese radio, reading Chinese literature, and using mobile applications. The tone is hopeful and supportive.
|
| 332 |
+
Content Score: 8</td></tr><tr><td>Here are the specific guidelines for each score:
|
| 333 |
+
{Rubrics}
|
| 334 |
+
Sample Essay Prompt:
|
| 335 |
+
{Essay Prompt}
|
| 336 |
+
The graded example essays:
|
| 337 |
+
{Examples}
|
| 338 |
+
Student's Essay to Evaluate:
|
| 339 |
+
{Target Student Essay}</td><td>Explanations: There are no language errors; the grammar and spelling are all accurate. The student used formal, respectful language suitable for this kind of communication and also included diversity in sentence structures.
|
| 340 |
+
Language Score: 8</td></tr><tr><td>Task Breakdown:
|
| 341 |
+
1. Carefully read the provided essay prompt, scoring guidelines, and the student's essay.
|
| 342 |
+
2. In the Explanations part, identify specific elements in the essay referring to the rubrics. In the language dimension, list all the spelling and grammar errors, and count the number of them to determine the Language Score. The Explanations for each dimension should be as detailed as possible.
|
| 343 |
+
3. Determine the appropriate scores according to the analysis above.</td><td>Explanations: The essay is well-structured and organized. It follows the accepted structure of a reply email, with an opening greeting, a body containing the main content, and a closing remark. Transitions are also smooth and coherent, logically linking one thought to another.
|
| 344 |
+
Structure Score: 4</td></tr><tr><td>Please present your evaluation in the following manner:
|
| 345 |
+
Explanations: ..., Content Score: ...
|
| 346 |
+
Explanations: ..., Language Score: ...
|
| 347 |
+
Explanations: ..., Structure Score: ...
|
| 348 |
+
Explanations: ..., Total Score: ...
|
| 349 |
+
Your final evaluation:
|
| 350 |
+
[Total Score: ..., Content Score: ..., Language Score: ..., Structure Score: ...]</td><td>Total Score: This is the sum of the content, language, and structure scores. Total Score: 20
|
| 351 |
+
Your final evaluation:
|
| 352 |
+
[Total Score: 20, Content Score: 8, Language Score: 8, Structure Score: 4]</td></tr><tr><td>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.</td><td rowspan="3">{ 'content': {'completeness': 'The essay covers the basic requirements of the prompt by introducing design ideas for the graduation ceremony and explaining the reasons behind these designs. However, the explanations are quite brief and lack depth', 'topic_relevance': 'The essay is related to the given topic, as it discusses ideas for a graduation ceremony design', 'content_details': 'The details provided are minimal and lack depth. For example, the student mentions launching a ball in the dining hall and writing gratitude letters but does not elaborate on how these activities contribute to the overall design or the reasons behind these choices in detail', 'score_level': 'Level 2', 'score_point': 4.5}, 'language': {...}, 'structure': {...}, 'overall': {...} }</td></tr><tr><td>##Input:
|
| 353 |
+
Scoring rubrics:
|
| 354 |
+
{Rubrics}
|
| 355 |
+
Essay Prompt:
|
| 356 |
+
{Essay Prompt}
|
| 357 |
+
Student's Essay to Evaluate:
|
| 358 |
+
{Target Student Essay}</td></tr><tr><td>## Response:
|
| 359 |
+
{ }</td></tr></table>
|
2401.06xxx/2401.06431/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:15833aa35eaadaa965e9224a2702b580730c6d891c12d55da52c005705b27d0f
|
| 3 |
+
size 1100995
|
2401.06xxx/2401.06431/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_content_list.json
ADDED
|
@@ -0,0 +1,1261 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
236,
|
| 8 |
+
91,
|
| 9 |
+
763,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Yae Jee Cho $^{1*}$ , Luyang Liu $^{2}$ , Zheng Xu $^{2}$ , Aldi Fahrezi $^{2}$ , Gauri Joshi $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
208,
|
| 19 |
+
152,
|
| 20 |
+
794,
|
| 21 |
+
170
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Carnegie Mellon University, $^{2}$ Google Research",
|
| 28 |
+
"bbox": [
|
| 29 |
+
305,
|
| 30 |
+
170,
|
| 31 |
+
695,
|
| 32 |
+
186
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "yaejeec@andrew.cmu.edu, {luyangliu,xuzheng,aldifahrezi} $@$ google.com,",
|
| 39 |
+
"bbox": [
|
| 40 |
+
191,
|
| 41 |
+
187,
|
| 42 |
+
806,
|
| 43 |
+
204
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "gaurij@andrew.cmu.edu",
|
| 50 |
+
"bbox": [
|
| 51 |
+
401,
|
| 52 |
+
205,
|
| 53 |
+
600,
|
| 54 |
+
219
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
260,
|
| 64 |
+
262,
|
| 65 |
+
339,
|
| 66 |
+
277
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and federated learning (FL) enables the potential for privacy-preserving fine-tuning of the FMs with on-device local data. For federated fine-tuning of FMs, we consider the FMs with small to medium parameter sizes of single digit billion at maximum, referred to as ondevice FMs (ODFMs) that can be deployed on devices for inference but can only be finetuned with parameter efficient methods. In our work, we tackle the data and system heterogeneity problem of federated fine-tuning of ODFMs by proposing a novel method using heterogeneous low-rank approximations (LoRAs), namely HETLORA. First, we show that the naive approach of using homogeneous LoRA ranks across devices face a trade-off between overfitting and slow convergence, and thus propose HETLORA, which allows heterogeneous ranks across client devices and efficiently aggregates and distributes these heterogeneous LoRA modules. By applying rank self-pruning locally and sparsity-weighted aggregation at the server, HETLORA combines the advantages of high and low-rank LoRAs, which achieves improved convergence speed and final performance compared to homogeneous LoRA. Furthermore, HETLORA offers enhanced computation efficiency compared to full fine-tuning, making it suitable for federated fine-tuning across heterogeneous devices.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
142,
|
| 75 |
+
288,
|
| 76 |
+
460,
|
| 77 |
+
746
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1 Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
115,
|
| 87 |
+
758,
|
| 88 |
+
260,
|
| 89 |
+
772
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "The emerging foundation models (FMs) (Bommasani et al., 2022; Zhou et al., 2023; Radford et al., 2021; Devlin et al., 2019; OpenAI, 2023; Google, 2022; Touvron et al., 2023; Brown",
|
| 96 |
+
"bbox": [
|
| 97 |
+
114,
|
| 98 |
+
784,
|
| 99 |
+
489,
|
| 100 |
+
848
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "et al., 2020; Google, 2022; Driess et al., 2023; Google, 2023) have shown remarkable zero/few shot learning capabilities, performing well on a variety of tasks including text/image generation with prompts, language translation, solving math problems, and conversing in natural language. Standard FMs, however, demand costly resources for directly fine-tuning their entire parameter space. To tackle this issue, many recent works have proposed different parameter-efficient fine-tuning (PEFT) methods of FMs such as prompt tuning (Lester et al., 2021), utilizing adapters (Houlsby et al., 2019), or low-rank adaptation (LoRA) of the original model (Hu et al., 2021) which freezes the original pre-trained parameters of the FM and train additional, smaller number of parameters instead.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
262,
|
| 110 |
+
882,
|
| 111 |
+
520
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "These PEFT methods, however, assume that i) FMs are deployed to and trained with the data of a single machine/client for adaptation to the downstream task and that ii) the client has enough resources to even fit a standard FM of hundred billion size for, at least, inference. In practice, there are frequently cases where we are interested in fine-tuning FMs for on-device private data that is distributed across multiple devices (clients). For instance, sensitive and private data such as medical information or law-related documents may be hard to collect centrally in a private manner and fine-tuning of the FMs may need to be done at the edge (Manoel et al., 2023; Shoham and Rappoport, 2023; Zhang et al., 2023c).",
|
| 118 |
+
"bbox": [
|
| 119 |
+
507,
|
| 120 |
+
523,
|
| 121 |
+
880,
|
| 122 |
+
765
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "In our work, we focus on such federated fine-tuning scenarios, where we train a set of parameters collaboratively across clients to obtain a global set of parameters that can be plugged in to the FM for the targeted downstream task. Note that federated fine-tuning is orthogonal to personalization of FMs in federated learning (FL) (Guo et al., 2023), which",
|
| 129 |
+
"bbox": [
|
| 130 |
+
507,
|
| 131 |
+
769,
|
| 132 |
+
880,
|
| 133 |
+
881
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "aside_text",
|
| 139 |
+
"text": "arXiv:2401.06432v2 [cs.LG] 20 Feb 2024",
|
| 140 |
+
"bbox": [
|
| 141 |
+
21,
|
| 142 |
+
307,
|
| 143 |
+
60,
|
| 144 |
+
724
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "page_footnote",
|
| 150 |
+
"text": "*Work done while at Google Research. Corresponding authors: yaejeec@andrew.cmu.edu, luyangliu@google.com",
|
| 151 |
+
"bbox": [
|
| 152 |
+
115,
|
| 153 |
+
854,
|
| 154 |
+
487,
|
| 155 |
+
882
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "table",
|
| 161 |
+
"img_path": "images/8e3e6953625345453eb40e22387406b6fffa439af169820f38ff9a37f2cea3ca.jpg",
|
| 162 |
+
"table_caption": [],
|
| 163 |
+
"table_footnote": [],
|
| 164 |
+
"table_body": "<table><tr><td></td><td>Zero-Shot</td><td>Few-Shot</td><td>Full-Training</td></tr><tr><td>PaLM 2 XXS</td><td>2930.23</td><td>2541.86</td><td>23.71</td></tr><tr><td>PaLM 2 XS</td><td>2712.86</td><td>481.95</td><td>18.32</td></tr></table>",
|
| 165 |
+
"bbox": [
|
| 166 |
+
124,
|
| 167 |
+
84,
|
| 168 |
+
475,
|
| 169 |
+
129
|
| 170 |
+
],
|
| 171 |
+
"page_idx": 1
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"type": "text",
|
| 175 |
+
"text": "Table 1: Perplexity of PaLM 2 for zero-shot, few-shot (5 communication rounds), and full federated fine-tuning (200 communication rounds) for chat response on the multi-session chat data (further experimental details are in Section 4.)",
|
| 176 |
+
"bbox": [
|
| 177 |
+
114,
|
| 178 |
+
134,
|
| 179 |
+
487,
|
| 180 |
+
184
|
| 181 |
+
],
|
| 182 |
+
"page_idx": 1
|
| 183 |
+
},
|
| 184 |
+
{
|
| 185 |
+
"type": "text",
|
| 186 |
+
"text": "aims to train parameters that perform well for individual clients rather than general downstream tasks. We also define on-device FMs (ODFMs) as models with few billion parameters at max that are able to fit into memory on limited capacity clients considering current hardwares.",
|
| 187 |
+
"bbox": [
|
| 188 |
+
114,
|
| 189 |
+
187,
|
| 190 |
+
487,
|
| 191 |
+
282
|
| 192 |
+
],
|
| 193 |
+
"page_idx": 1
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"type": "text",
|
| 197 |
+
"text": "Federated fine-tuning of ODFMs entails unique challenges non-present in either the standard PEFT of FMs or the standard federated training of models that are not FMs. First, FMs have their zero/few-shot learning capability often supported by their large parameter space that is trained on massive data. However, as we show in Table 1 and also presented by previous literature (Kojima et al., 2022), FMs' performance deteriorates as their sizes get smaller and federated fine-tuning may not merely be useful but inevitable for ODFMs to perform well for downstream tasks on devices.",
|
| 198 |
+
"bbox": [
|
| 199 |
+
114,
|
| 200 |
+
285,
|
| 201 |
+
487,
|
| 202 |
+
476
|
| 203 |
+
],
|
| 204 |
+
"page_idx": 1
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"type": "text",
|
| 208 |
+
"text": "Moreover, devices have limited and heterogeneous system capabilities (Wang et al., 2019; Bonawitz et al., 2016) and data distributions (Sahu et al., 2020). A suitable PEFT method that flexibly adapts to such heterogeneity across devices should be investigated for federated fine-tuning of ODFMs. Previous work evaluated PEFT with FL via performing a general evaluation over different PEFT methods naively combined with FL (Guo et al., 2022; Zhang et al., 2023d; Chen et al., 2022; Wortsman et al., 2023; Yu et al., 2023). However, they do not consider the practical setting for ODFMs where PEFT methods are catered to the system and data heterogeneity of clients.",
|
| 209 |
+
"bbox": [
|
| 210 |
+
114,
|
| 211 |
+
479,
|
| 212 |
+
489,
|
| 213 |
+
703
|
| 214 |
+
],
|
| 215 |
+
"page_idx": 1
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"type": "text",
|
| 219 |
+
"text": "In our work, we focus on one of the most prominent PEFT methods, LoRA (Hu et al., 2021) which proposes to train low-rank approximations of the original model. Using LoRA, the number of trainable parameters is greatly reduced to at most $0.02\\%$ of the original ODFM size (see Table 2). The simplest way to apply LoRA to federated fine-tuning is training with homogeneous rank $r$ across the clients as one would train any global model with FL. However, this does not cater to the heterogeneity in FL, where it is even difficult to choose the",
|
| 220 |
+
"bbox": [
|
| 221 |
+
114,
|
| 222 |
+
705,
|
| 223 |
+
489,
|
| 224 |
+
882
|
| 225 |
+
],
|
| 226 |
+
"page_idx": 1
|
| 227 |
+
},
|
| 228 |
+
{
|
| 229 |
+
"type": "image",
|
| 230 |
+
"img_path": "images/ef8a24ab12a92a5bd3140e6567c742443e30d009465bed0d1d009b0430f696fe.jpg",
|
| 231 |
+
"image_caption": [
|
| 232 |
+
"Figure 1: Overview of heterogeneous rank deployment of LoRA: the pretrained weights $\\mathbf{W}_0$ are stored on-device and heterogeneous ranks are assigned to different clients with $r_{\\mathrm{min}} = r_1 < r_2 < r_3 = r_{\\mathrm{max}}$ . In our proposed HETLORA, the server receives the trained heterogeneous LoRA modules and aggregates them with sparsity-weighted aggregation to update the global LoRA module."
|
| 233 |
+
],
|
| 234 |
+
"image_footnote": [],
|
| 235 |
+
"bbox": [
|
| 236 |
+
512,
|
| 237 |
+
83,
|
| 238 |
+
880,
|
| 239 |
+
229
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "right LoRA rank for resource limited mobile devices with natural system and data heterogeneity.",
|
| 246 |
+
"bbox": [
|
| 247 |
+
507,
|
| 248 |
+
325,
|
| 249 |
+
880,
|
| 250 |
+
357
|
| 251 |
+
],
|
| 252 |
+
"page_idx": 1
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"type": "text",
|
| 256 |
+
"text": "To this end, we propose heterogeneous LoRA, namely HETLORA in short, for federated fine-tuning to cater to system and data heterogeneity and outperform the naive combination of LoRA and federated fine-tuning where homogeneous ranks are applied across clients. We show the performance of PaLM 2 (Google, 2023) of XXS and XS size for chat responses on the multi-session chat data (Xu et al., 2021) and text summarization for the Reddit data (Volske et al., 2017), both which are real world data from clients. Our contributions can be summarized as follows:",
|
| 257 |
+
"bbox": [
|
| 258 |
+
507,
|
| 259 |
+
362,
|
| 260 |
+
880,
|
| 261 |
+
552
|
| 262 |
+
],
|
| 263 |
+
"page_idx": 1
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"type": "list",
|
| 267 |
+
"sub_type": "text",
|
| 268 |
+
"list_items": [
|
| 269 |
+
"- We propose HETLORA that can apply different rank LoRA modules to different clients to cater to the heterogeneous system capabilities and data complexities of the clients, via utilizing rank self-pruning and sparsity-weighted aggregation.",
|
| 270 |
+
"- We show the performance of naively applying LoRA with homogeneous ranks across clients for federated fine-tuning, and show that while large ranks help in speeding-up training, they lead to faster overfitting while smaller ranks are slower in training but does not suffer from overfitting.",
|
| 271 |
+
"- We then evaluate HETLORA to show that it outperforms naively applying homogeneous ranks across clients in terms of both training speed, communication/computation efficiency, and final performance, gaining the best of both worlds of homogeneous LoRA with high and low ranks."
|
| 272 |
+
],
|
| 273 |
+
"bbox": [
|
| 274 |
+
507,
|
| 275 |
+
577,
|
| 276 |
+
880,
|
| 277 |
+
881
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 1
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "page_number",
|
| 283 |
+
"text": "2",
|
| 284 |
+
"bbox": [
|
| 285 |
+
492,
|
| 286 |
+
903,
|
| 287 |
+
504,
|
| 288 |
+
915
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 1
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "table",
|
| 294 |
+
"img_path": "images/5bab3c18a2cd72e7a0ccb65ae6efc71c2bd4b7191c84d1365a5334a257470f33.jpg",
|
| 295 |
+
"table_caption": [],
|
| 296 |
+
"table_footnote": [],
|
| 297 |
+
"table_body": "<table><tr><td></td><td>r = 1</td><td>r = 5</td><td>r = 10</td><td>r = 20</td><td>r = 50</td><td>r = 100</td><td>r = 150</td><td>r = 200</td></tr><tr><td>PaLM 2 XXS, PaLM 2 XS</td><td>0.02%</td><td>0.11%</td><td>0.21%</td><td>0.42%</td><td>1.05%</td><td>2.10%</td><td>3.14%</td><td>4.19%</td></tr></table>",
|
| 298 |
+
"bbox": [
|
| 299 |
+
142,
|
| 300 |
+
93,
|
| 301 |
+
853,
|
| 302 |
+
130
|
| 303 |
+
],
|
| 304 |
+
"page_idx": 2
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"type": "text",
|
| 308 |
+
"text": "Table 2: Percentage of the LoRA parameters' size for different ranks $r$ compared to the original pre-trained ODFM's parameter size. Even for large ranks such as $r = {200}$ the trainable LoRA parameters' size compared to the original pre-trained ODFM size is less than 5% for both PaLM 2-XS and PaLM 2-XXS.",
|
| 309 |
+
"bbox": [
|
| 310 |
+
114,
|
| 311 |
+
137,
|
| 312 |
+
880,
|
| 313 |
+
174
|
| 314 |
+
],
|
| 315 |
+
"page_idx": 2
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"type": "text",
|
| 319 |
+
"text": "2 Related Work",
|
| 320 |
+
"text_level": 1,
|
| 321 |
+
"bbox": [
|
| 322 |
+
115,
|
| 323 |
+
187,
|
| 324 |
+
272,
|
| 325 |
+
204
|
| 326 |
+
],
|
| 327 |
+
"page_idx": 2
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"type": "text",
|
| 331 |
+
"text": "Parameter-Efficient Fine Tuning. There has been a plethora of recent work on PEFT which either trains a subset of parameters within the existing FM whilst other parameters are frozen or introduces an additional set of trainable parameters whilst keeping the original FM frozen. For the former, methods such as head or bias fine-tuning (Wei et al., 2021; Bu et al., 2022; Lee et al., 2019; Zaken et al., 2021) has been explored, while for the latter, methods such as adapters (Houlsby et al., 2019), prompt (Lester et al., 2021) or prefix-tuning (Li and Liang, 2021), and low-rank approximation (Hu et al., 2021) has been proposed. While these number of methods has been proven to perform as well as full model fine-tuning with just a few number of parameters for the centralized setting, it has not been thoroughly explored how these methods perform for a much smaller FM such as ODFMs, in the decentralized setting where clients' system capacities can be heterogeneous and much limited.",
|
| 332 |
+
"bbox": [
|
| 333 |
+
114,
|
| 334 |
+
218,
|
| 335 |
+
489,
|
| 336 |
+
539
|
| 337 |
+
],
|
| 338 |
+
"page_idx": 2
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"type": "text",
|
| 342 |
+
"text": "Federated Fine-Tuning. Recently, interest in the intersection of FMs and FL has notably increased (Zhou et al., 2023; Yu et al., 2023). Many recent work has proposed to combine the PEFT methods devised for the centralized setting to FL such as training prompts or adapters collaboratively with FL (Guo et al., 2022; Chen et al., 2022; Zhang et al., 2023a; Shysheya et al., 2023; Legate et al., 2023). Another line of work has proposed to perform a few-shot or nearly zero-shot training of FMs with FL for improved communication-efficiency (Wortsman et al., 2023; Zhang et al., 2023d). However, these work either overlooks that most devices do not have the resource to fit a general FM (Touvron et al., 2023; Brown et al., 2020) ( $>8\\mathrm{B}$ parameters) even for inference or does not consider the heterogeneous system capacities of the clients. It is detrimental to consider these factors since FMs that actually fits to the devices in FL are much smaller, making them weaker in the general intelligence capabilities, and also hetero-",
|
| 343 |
+
"bbox": [
|
| 344 |
+
114,
|
| 345 |
+
544,
|
| 346 |
+
489,
|
| 347 |
+
882
|
| 348 |
+
],
|
| 349 |
+
"page_idx": 2
|
| 350 |
+
},
|
| 351 |
+
{
|
| 352 |
+
"type": "text",
|
| 353 |
+
"text": "genuine system capacities may prohibit deploying same sized PEFT parameters across clients.",
|
| 354 |
+
"bbox": [
|
| 355 |
+
507,
|
| 356 |
+
190,
|
| 357 |
+
878,
|
| 358 |
+
221
|
| 359 |
+
],
|
| 360 |
+
"page_idx": 2
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"type": "text",
|
| 364 |
+
"text": "Only a few number of recent work has looked in to using LoRA for FL. For instance, in (Babakniya et al., 2023), the importance of the initialization for the LoRA modules is evaluated where they propose to train the LoRA modules with FL and then perform singular value decomposition (SVD) to gain a good initialization of the LoRA modules. However, the training process of LoRA itself is not altered to adapt to heterogeneous system capabilities of devices. Another recent work (Yi et al., 2023) has evaluated LoRA in the context of personalized FL, but other than applying LoRA to personalization, the LoRA method itself is, again, not changed. Our work proposes heterogeneous LoRA for federated fine-tuning where heterogeneous ranks are deployed and trained across clients by a new algorithm that includes rank self-pruning and sparsity weighted aggregation.",
|
| 365 |
+
"bbox": [
|
| 366 |
+
507,
|
| 367 |
+
223,
|
| 368 |
+
884,
|
| 369 |
+
512
|
| 370 |
+
],
|
| 371 |
+
"page_idx": 2
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"type": "text",
|
| 375 |
+
"text": "3 Federated Fine-Tuning with LoRA",
|
| 376 |
+
"text_level": 1,
|
| 377 |
+
"bbox": [
|
| 378 |
+
507,
|
| 379 |
+
525,
|
| 380 |
+
843,
|
| 381 |
+
542
|
| 382 |
+
],
|
| 383 |
+
"page_idx": 2
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"type": "text",
|
| 387 |
+
"text": "3.1 Preliminaries",
|
| 388 |
+
"text_level": 1,
|
| 389 |
+
"bbox": [
|
| 390 |
+
507,
|
| 391 |
+
552,
|
| 392 |
+
663,
|
| 393 |
+
565
|
| 394 |
+
],
|
| 395 |
+
"page_idx": 2
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"type": "text",
|
| 399 |
+
"text": "Formally, we define the pre-trained ODFM as $\\mathbf{W}_0\\in \\mathbb{R}^{d\\times l}$ and the trainable low-rank decomposed matrix as $\\Delta \\mathbf{W}\\in \\mathbb{R}^{d\\times l}$ . In standard LoRA (Hu et al., 2021) under the centralized setting, the low-rank decomposition of $\\Delta \\mathbf{W}$ is constructed such that $\\Delta \\mathbf{W} = \\mathbf{BA}$ where $\\mathbf{B}\\in \\mathbb{R}^{d\\times r}$ and $\\mathbf{A}\\in \\mathbb{R}^{r\\times l}$ are the low rank decomposition of $\\Delta \\mathbf{W}$ with identical rank $r$ . Now, let us consider LoRA for federated fine-tuning where there are $M$ total clients. Each client $k\\in [M]$ has private data $\\mathcal{B}_k$ and its corresponding local empirical loss function $F_{k}(\\mathbf{W}) = \\frac{1}{|\\mathcal{B}_{k}|}\\sum_{\\xi \\in \\mathcal{B}_{k}}\\ell (\\mathbf{W},\\xi)$ , where $\\ell (\\mathbf{W},\\xi)$ is the loss for model $\\mathbf{W}$ at data sample $\\xi$ . The optimization task for federated fine-tuning is to collaboratively find the global parameters which we define as $\\overline{\\mathbf{B}}$ and $\\overline{\\mathbf{A}}$ , given the pretrained knowledge $\\mathbf{W}_0$ that can minimize the global objective $F(\\overline{\\mathbf{W}}) = \\frac{1}{M}\\sum_{k = 1}^{M}F_{k}(\\overline{\\mathbf{W}})$ where $\\overline{\\mathbf{W}} = \\mathbf{W}_0 + \\overline{\\mathbf{B}}\\overline{\\mathbf{A}}$ . Later in the paper, when",
|
| 400 |
+
"bbox": [
|
| 401 |
+
507,
|
| 402 |
+
574,
|
| 403 |
+
882,
|
| 404 |
+
882
|
| 405 |
+
],
|
| 406 |
+
"page_idx": 2
|
| 407 |
+
},
|
| 408 |
+
{
|
| 409 |
+
"type": "page_number",
|
| 410 |
+
"text": "3",
|
| 411 |
+
"bbox": [
|
| 412 |
+
492,
|
| 413 |
+
903,
|
| 414 |
+
505,
|
| 415 |
+
915
|
| 416 |
+
],
|
| 417 |
+
"page_idx": 2
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"type": "text",
|
| 421 |
+
"text": "introducing heterogeneous LoRA we truncate the LoRA modules' rank dimension, for example from $\\mathbf{B} \\in \\mathbb{R}^{d \\times r}$ , $\\mathbf{A} \\in \\mathbb{R}^{r \\times l}$ to $\\mathbf{B}' \\in \\mathbb{R}^{d \\times r'}$ , $\\mathbf{A}' \\in \\mathbb{R}^{r' \\times l}$ where $r' < r$ . Throughout the paper, we denote such truncation of a matrix with the : symbol for each row and column at the subscript. For instance, for truncation to $r' < r$ at the column for the matrix $\\mathbf{B} \\in \\mathbb{R}^{d \\times r}$ , we keep all the columns until $r'$ and omit the last $r - r'$ columns and denote the resulting matrix it as $\\mathbf{B}_{:,:r'}$ .",
|
| 422 |
+
"bbox": [
|
| 423 |
+
114,
|
| 424 |
+
85,
|
| 425 |
+
489,
|
| 426 |
+
247
|
| 427 |
+
],
|
| 428 |
+
"page_idx": 3
|
| 429 |
+
},
|
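
To make the LoRA parameterization and the truncation notation above concrete, here is a minimal NumPy sketch; the toy dimensions (`d`, `l`, `r`, `r_prime`) are illustrative assumptions, not values from the paper, and the snippet only demonstrates reconstructing $\overline{\mathbf{W}} = \mathbf{W}_0 + \mathbf{B}\mathbf{A}$ and truncating the modules from rank $r$ to $r' < r$.

```python
import numpy as np

d, l, r, r_prime = 8, 6, 4, 2        # illustrative sizes; r_prime < r

rng = np.random.default_rng(0)
W0 = rng.normal(size=(d, l))         # frozen pre-trained ODFM weight
B = rng.normal(size=(d, r))          # trainable LoRA factor, d x r
A = rng.normal(size=(r, l))          # trainable LoRA factor, r x l

# Effective fine-tuned weight used in the forward pass.
W = W0 + B @ A

# Truncation to a smaller rank r' keeps the first r' columns of B and the
# first r' rows of A, i.e., B_{:,:r'} and A_{:r',:} in the paper's notation.
B_trunc = B[:, :r_prime]
A_trunc = A[:r_prime, :]
W_trunc = W0 + B_trunc @ A_trunc     # lower-rank adaptation of the same base model
```
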
| 430 |
+
{
|
| 431 |
+
"type": "text",
|
| 432 |
+
"text": "3.2 Naïve Case: Homogeneous LoRA",
|
| 433 |
+
"text_level": 1,
|
| 434 |
+
"bbox": [
|
| 435 |
+
115,
|
| 436 |
+
256,
|
| 437 |
+
421,
|
| 438 |
+
272
|
| 439 |
+
],
|
| 440 |
+
"page_idx": 3
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"type": "text",
|
| 444 |
+
"text": "A straightforward way to perform federated fine-tuning with LoRA is to train the LoRA modules $\\mathbf{B}$ , $\\mathbf{A}$ with homogeneous rank $r$ across all clients with standard FL (McMahan et al., 2017). Specifically, first the clients have the pre-trained ODFM weights $\\mathbf{W}_0$ stored in their devices prior to training for the forward pass when training the LoRA modules. Then, the server sends the global LoRA modules $\\overline{\\mathbf{B}}^{(t)}$ , $\\overline{\\mathbf{A}}^{(t)}$ to the set of $m$ selected clients $S^{(t)}$ per communication round $t$ . Each selected client $k \\in S^{(t)}$ trains the LoRA modules on their local data for a few local iterations (usually with mini-batch SGD) and send the updated modules $\\mathbf{B}_k^{(t)}$ , $\\mathbf{A}_k^{(t)}$ back to the server. The server then updates the global LoRA modules accordingly to $\\overline{\\mathbf{B}}^{(t+1)} = \\sum_{k \\in S^{(t)}} \\mathbf{B}_k^{(t)} / m$ , $\\overline{\\mathbf{A}}^{(t+1)} = \\sum_{k \\in S^{(t)}} \\mathbf{A}_k^{(t)} / m$ and sends back to the next set of selected clients for the next communication round. This training process is nearly identical to the standard FL algorithm (McMahan et al., 2017) except that the pretrained weights $\\mathbf{W}_0$ are frozen and locally stored in the clients' devices and only the LoRA moduels are trained and communicated.",
|
| 445 |
+
"bbox": [
|
| 446 |
+
114,
|
| 447 |
+
275,
|
| 448 |
+
489,
|
| 449 |
+
655
|
| 450 |
+
],
|
| 451 |
+
"page_idx": 3
|
| 452 |
+
},
|
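
A minimal sketch of one such homogeneous-rank round, assuming every selected client returns modules of the same rank $r$; `local_train` stands in for the clients' local mini-batch SGD steps with $\mathbf{W}_0$ frozen, and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

def homlora_round(B_global, A_global, clients, local_train):
    """One round of homogeneous-rank LoRA federated fine-tuning (simple averaging).

    B_global: (d, r), A_global: (r, l). Each client trains its copy locally and
    the server averages the returned modules, as in vanilla FedAvg.
    """
    updated = [local_train(B_global.copy(), A_global.copy(), c) for c in clients]
    B_new = np.mean([B for B, _ in updated], axis=0)   # \bar{B}^{(t+1)}
    A_new = np.mean([A for _, A in updated], axis=0)   # \bar{A}^{(t+1)}
    return B_new, A_new
```
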
| 453 |
+
{
|
| 454 |
+
"type": "text",
|
| 455 |
+
"text": "Instead of such homogeneous rank deployment across all clients, it is not only possible but more practical to use heterogeneous rank deployment for federated fine-tuning. This involves training LoRA modules with varying ranks on different clients, based on their system capabilities. Such setting is motivated and often required from the system constraints of the clients (Wang et al., 2021) where most of the clients are only capable of having smaller ranks while a few can handle larger ranks. However, this approach poses challenges in aggregating and redistributing the LoRA modules. To address these challenges, we introduce a solution called HETLORA, which pushes the limits",
|
| 456 |
+
"bbox": [
|
| 457 |
+
114,
|
| 458 |
+
657,
|
| 459 |
+
489,
|
| 460 |
+
882
|
| 461 |
+
],
|
| 462 |
+
"page_idx": 3
|
| 463 |
+
},
|
| 464 |
+
{
|
| 465 |
+
"type": "text",
|
| 466 |
+
"text": "beyond homogeneous LoRA deployment.",
|
| 467 |
+
"bbox": [
|
| 468 |
+
507,
|
| 469 |
+
85,
|
| 470 |
+
821,
|
| 471 |
+
101
|
| 472 |
+
],
|
| 473 |
+
"page_idx": 3
|
| 474 |
+
},
|
| 475 |
+
{
|
| 476 |
+
"type": "text",
|
| 477 |
+
"text": "3.3 Proposed Method: Heterogeneous LoRA",
|
| 478 |
+
"text_level": 1,
|
| 479 |
+
"bbox": [
|
| 480 |
+
507,
|
| 481 |
+
112,
|
| 482 |
+
875,
|
| 483 |
+
128
|
| 484 |
+
],
|
| 485 |
+
"page_idx": 3
|
| 486 |
+
},
|
| 487 |
+
{
|
| 488 |
+
"type": "text",
|
| 489 |
+
"text": "Overview. Our proposed heterogeneous LoRA method, namely HETLORA, is not restricted to any specific method to assign the ranks to the clients and the clients can decide their respective ranks themselves. For formality, in our paper, we formulate that each client has a rank denoted as $r_k$ , within a range of $r_k \\in [r_{\\min}, r_{\\max}]$ , $\\forall k$ (see Fig. 1). HETLORA comprises three steps: 1) Distribution via Truncation, 2) Local Training with Rank Self-Pruning, and 3) Sparsity-Weighted Aggregation of the LoRA modules. These steps are detailed further in the subsequent paragraphs. An overview of HETLORA is illustrated in Fig. 2.",
|
| 490 |
+
"bbox": [
|
| 491 |
+
505,
|
| 492 |
+
134,
|
| 493 |
+
882,
|
| 494 |
+
343
|
| 495 |
+
],
|
| 496 |
+
"page_idx": 3
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"type": "text",
|
| 500 |
+
"text": "1) Distribution via Truncation. At the beginning of each communication round $t$ , the server holds initial global LoRA modules $\\overline{\\mathbf{B}}^{(t)}$ , $\\overline{\\mathbf{A}}^{(t)}$ with a global rank $r^{(t)}$ . The value of the global rank $r^{(t)}$ depends on how we aggregate the heterogeneous rank LoRA modules which is elaborated on in step 3). The server then distributes these global LoRA modules to a subset of selected set of clients $S^{(t)}$ with heterogeneous ranks $r_k^{(t)}$ , $k \\in S^{(t)}$ for local training<sup>1</sup>. With the given global LoRA modules, we consider a simple and intuitive method of truncation where the server sends $\\overline{\\mathbf{B}}_{:,r_k}^{(t)}$ , $\\overline{\\mathbf{A}}_{:r_k}^{(t)}$ ; to each client $k$ with rank $r_k^{(t)}$ for local training where we omitted the superscript for $r_k$ for simplicity.",
|
| 501 |
+
"bbox": [
|
| 502 |
+
507,
|
| 503 |
+
344,
|
| 504 |
+
884,
|
| 505 |
+
577
|
| 506 |
+
],
|
| 507 |
+
"page_idx": 3
|
| 508 |
+
},
|
| 509 |
+
{
|
| 510 |
+
"type": "text",
|
| 511 |
+
"text": "2) Local Training with Rank Self-Pruning. After receiving LoRA modules from the server as $\\mathbf{B}_k^{(t,0)} = \\overline{\\mathbf{B}}_{\\therefore r_k}^{(t)}$ , $\\mathbf{A}_k^{(t,0)} = \\overline{\\mathbf{A}}_{r_k,\\cdot}$ , each client $k \\in S^{(t)}$ performs $\\tau$ local iterations of mini-batch SGD on their local data to minimize the local objective $\\frac{1}{|\\mathcal{B}_k|} \\sum_{\\xi \\in \\mathcal{B}_k} \\ell((\\mathbf{B}_k, \\mathbf{A}_k), \\xi | \\mathbf{W}_0)$ , and sends back the updated LoRA modules $\\mathbf{B}_k^{(t,\\tau)} \\in \\mathbb{R}^{d \\times r_k^{(t)}}$ and $\\mathbf{A}_k^{(t,\\tau)} \\in \\mathbb{R}^{r_k^{(t)} \\times l}$ to the server. This is the same process as the standard local training step in vanilla FedAvg (McMahan et al., 2017). However, we improve this vanilla local training step by adding a rank self-pruning mechanism where clients self-prune their respective ranks depending on the magnitude of the model parameters.",
|
| 512 |
+
"bbox": [
|
| 513 |
+
507,
|
| 514 |
+
580,
|
| 515 |
+
882,
|
| 516 |
+
821
|
| 517 |
+
],
|
| 518 |
+
"page_idx": 3
|
| 519 |
+
},
|
| 520 |
+
{
|
| 521 |
+
"type": "page_footnote",
|
| 522 |
+
"text": "There is a superscript $t$ for the ranks $r_k^{(t)}$ across clients which indicates that in HETLORA these heterogeneous ranks can be changed over the communication rounds via self-pruning explained in step 2).",
|
| 523 |
+
"bbox": [
|
| 524 |
+
507,
|
| 525 |
+
829,
|
| 526 |
+
882,
|
| 527 |
+
882
|
| 528 |
+
],
|
| 529 |
+
"page_idx": 3
|
| 530 |
+
},
|
| 531 |
+
{
|
| 532 |
+
"type": "page_number",
|
| 533 |
+
"text": "4",
|
| 534 |
+
"bbox": [
|
| 535 |
+
492,
|
| 536 |
+
903,
|
| 537 |
+
504,
|
| 538 |
+
915
|
| 539 |
+
],
|
| 540 |
+
"page_idx": 3
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"type": "image",
|
| 544 |
+
"img_path": "images/5909e0a929e5fcb47a9129d5abedd6e660b26aa2cb1e13f49f3c37d8a35634bb.jpg",
|
| 545 |
+
"image_caption": [
|
| 546 |
+
"Figure 2: Overview of the zero-padding, sparsity-weighted aggregation, and truncation method for HETLORA; (a): Zero-pad LoRA modules with smaller ranks to $r_{\\mathrm{max}}$ (clients with rank $r_{\\mathrm{max}}$ does not need padding) and calculate their sparsity by calculating the Frobenius norm of the reconstructed model $\\Delta \\mathbf{W}_k^{(t)} = \\mathbf{B}_k^{(t)}\\mathbf{A}_k^{(t)}$ ; (b): After padding, aggregate all of the clients' LoRA modules with the weights $\\| \\mathbf{S}_k^{(t)}\\| /\\mathbf{Z}^{(t)}$ calculated by $\\Delta \\mathbf{W}_k^{(t)}$ to get the global LoRA modules; (c): Truncate the global LoRA modules for the specific rank of the next selected client (example for client with rank $r_2$ )."
|
| 547 |
+
],
|
| 548 |
+
"image_footnote": [],
|
| 549 |
+
"bbox": [
|
| 550 |
+
117,
|
| 551 |
+
82,
|
| 552 |
+
571,
|
| 553 |
+
200
|
| 554 |
+
],
|
| 555 |
+
"page_idx": 4
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"type": "image",
|
| 559 |
+
"img_path": "images/6ccbf0685e991e5597744dd1103b252d28b910a95b9ece1f824df33735fcdef8.jpg",
|
| 560 |
+
"image_caption": [],
|
| 561 |
+
"image_footnote": [],
|
| 562 |
+
"bbox": [
|
| 563 |
+
576,
|
| 564 |
+
82,
|
| 565 |
+
737,
|
| 566 |
+
200
|
| 567 |
+
],
|
| 568 |
+
"page_idx": 4
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "image",
|
| 572 |
+
"img_path": "images/9062524d640118aeafcaf92eec7e6d50579ef057711d281da51048599b7c9753.jpg",
|
| 573 |
+
"image_caption": [],
|
| 574 |
+
"image_footnote": [],
|
| 575 |
+
"bbox": [
|
| 576 |
+
739,
|
| 577 |
+
82,
|
| 578 |
+
880,
|
| 579 |
+
200
|
| 580 |
+
],
|
| 581 |
+
"page_idx": 4
|
| 582 |
+
},
|
| 583 |
+
{
|
| 584 |
+
"type": "text",
|
| 585 |
+
"text": "Specifically, we add a regularization term to the original local objective to get $\\min_{\\mathbf{B}_k,\\mathbf{A}_k}\\frac{1}{|\\mathcal{B}_k|}\\sum_{\\xi \\in \\mathcal{B}_k}\\ell ((\\mathbf{B}_k,\\mathbf{A}_k),\\xi |\\mathbf{W}_0) + \\lambda \\| \\mathbf{B}_{k,:r_k\\gamma :r_k}\\| \\| \\mathbf{A}_{k,r_k\\gamma :r_k,:}\\|$ where $\\gamma < 1$ is a decay-factor that determines how aggressively we want to prune the ranks to a smaller value. The regularization term aims to minimize the norm of the last few ranks, which will become smaller if the first loss term $\\frac{1}{|\\mathcal{B}_k|}\\sum_{\\xi \\in \\mathcal{B}_k}\\ell ((\\mathbf{B}_k,\\mathbf{A}_k),\\xi |\\mathbf{W}_0)$ is not very large. After training with the new local objective we compare the norm of the updated LoRA modules' last few layers $\\| \\mathbf{B}_{k,:r_k\\gamma :r_k}\\| \\| \\mathbf{A}_{k,r_k\\gamma :r_k,:}\\|$ with the ones from the initially received LoRA modules. If the former is smaller we prune the last few layers (pruning intensity is determined by $\\gamma$ ) and send back the LoRA modules with a smaller rank. This means that for the LoRA modules which incurs a small local loss, i.e., well-trained on the clients' local data, the LoRA modules are more likely to be pruned to a smaller rank.",
|
| 586 |
+
"bbox": [
|
| 587 |
+
114,
|
| 588 |
+
284,
|
| 589 |
+
487,
|
| 590 |
+
624
|
| 591 |
+
],
|
| 592 |
+
"page_idx": 4
|
| 593 |
+
},
|
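
A minimal sketch of the rank self-pruning decision described above, assuming the regularized local objective has already been optimized; the norm comparison between the trailing fraction of ranks before and after training follows the text, while the helper names and the exact index handling are illustrative.

```python
import numpy as np

def self_prune_rank(B_init, A_init, B_trained, A_trained, gamma=0.99):
    """Prune the trailing ranks if their norm shrank during local training."""
    r = B_trained.shape[1]
    cut = max(1, int(np.floor(gamma * r)))      # ranks [cut, r) are pruning candidates

    def tail_norm(B, A):
        # ||B_{k,:,gamma*r_k:r_k}|| * ||A_{k,gamma*r_k:r_k,:}||
        return np.linalg.norm(B[:, cut:]) * np.linalg.norm(A[cut:, :])

    if tail_norm(B_trained, A_trained) < tail_norm(B_init, A_init):
        return B_trained[:, :cut], A_trained[:cut, :]   # send back a smaller rank
    return B_trained, A_trained                          # keep the current rank
```
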
| 594 |
+
{
|
| 595 |
+
"type": "list",
|
| 596 |
+
"sub_type": "text",
|
| 597 |
+
"list_items": [
|
| 598 |
+
"Such pruning allows HETLORA to reduce the noise in the LoRA modules introduced by clients having a larger rank than the actual rank that their data complexity requires, and also reduces the complexity of the LoRA modules to improve generalization and prevent overfitting (see Table 4). Once the rank is pruned for a client, the client saves the updated rank and uses it as the starting rank if selected for future communication rounds. The client then sends back their updated and possibly rank-pruned LoRA modules to the server for the modules to be processed in the next server-side aggregation step.",
|
| 599 |
+
"3) Sparsity-Weighted Aggregation. Finally, the last step of HETLORA is aggregating the received heterogeneous LoRA modules"
|
| 600 |
+
],
|
| 601 |
+
"bbox": [
|
| 602 |
+
115,
|
| 603 |
+
625,
|
| 604 |
+
489,
|
| 605 |
+
882
|
| 606 |
+
],
|
| 607 |
+
"page_idx": 4
|
| 608 |
+
},
|
| 609 |
+
{
|
| 610 |
+
"type": "text",
|
| 611 |
+
"text": "$\\mathbf{B}_k^{(t,\\tau)}$ , $\\mathbf{A}_k^{(t,\\tau)}$ , $k \\in S^{(t)}$ . A straightforward way to aggregate the heterogeneous LoRA modules is using zero-padding to all the received LoRA modules with $r_i^{(t)} < \\max \\{r_k^{(t)} | k \\in S^{(t)}\\}$ and then perform simple averaging over the modules. However, such naive aggregation can lead to biasing the model towards higher rank clients even when these clients may not hold valuable training information, i.e., having low data complexity, giving noisy updates.",
|
| 612 |
+
"bbox": [
|
| 613 |
+
507,
|
| 614 |
+
282,
|
| 615 |
+
880,
|
| 616 |
+
432
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 4
|
| 619 |
+
},
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "In an ideal scenario where we can deploy any rank to any client, deploying higher ranks to the clients with higher data complexity or larger local datasets can retrieve more informative and less sparse updates from the clients. Conversely if we assign higher ranks to the clients whose data complexity is low, the actual rank of the full model from the reconstructed LoRA modules can be smaller than the assigned rank. Thus the higher rank client's update may be unnecessarily over-emphasized in the naive zero padding method.",
|
| 623 |
+
"bbox": [
|
| 624 |
+
507,
|
| 625 |
+
432,
|
| 626 |
+
880,
|
| 627 |
+
609
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 4
|
| 630 |
+
},
|
| 631 |
+
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "Based on this insight we propose a sparsity-weighted aggregation scheme where the server reconstructs these LoRA modules to the full model as $\\Delta \\mathbf{W}_k^{(t)} = \\mathbf{B}_k^{(t)}\\mathbf{A}_k^{(t)}$ and gets the norm of the singular value vectors from the full models denoted as $\\mathbf{S}_k^{(t)}$ by calculating $\\| \\Delta \\mathbf{W}_k^{(t)}\\| _F$ . Note that the costly process of performing SVD for each of the full model $\\Delta \\mathbf{W}_k^{(t)}$ can be avoided by simply calculating the Frobenius norm of $\\Delta \\mathbf{W}_k^{(t)}$ (see Lemma 1.2 in (Guruswami and Kannan, 2012)). The server then weighs the LoRA modules with aggregation weight $p_k^{(t)}$ which is proportional to the norm of the singular value vectors. Formally, we have the global LoRA modules updated as $\\overline{\\mathbf{B}}^{(t + 1)} = \\sum_{k\\in \\mathcal{S}^{(t)}}p_k^{(t)}\\mathbf{B}_k^{(t)},\\overline{\\mathbf{A}}^{(t + 1)} = \\sum_{k\\in \\mathcal{S}^{(t)}}p_k^{(t)}\\mathbf{A}_k^{(t)}$ where $p_k^{(t)}\\coloneqq \\| \\mathbf{S}_k^{(t)}\\| /\\mathbf{Z}^{(t)}$ with",
|
| 634 |
+
"bbox": [
|
| 635 |
+
507,
|
| 636 |
+
609,
|
| 637 |
+
880,
|
| 638 |
+
884
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 4
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "page_number",
|
| 644 |
+
"text": "5",
|
| 645 |
+
"bbox": [
|
| 646 |
+
492,
|
| 647 |
+
903,
|
| 648 |
+
504,
|
| 649 |
+
915
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 4
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "normalizing factor $\\mathbf{Z}^{(t)}\\coloneqq \\sum_{k^{\\prime}\\in S^{(t)}}\\| \\mathbf{S}_{k^{\\prime}}^{(t)}\\|$ . This way, we can de-emphasize the larger rank assigned-clients that have rather less informative updates, and more emphasize the smaller rank assigned-clients that have more informative ones.",
|
| 656 |
+
"bbox": [
|
| 657 |
+
114,
|
| 658 |
+
84,
|
| 659 |
+
489,
|
| 660 |
+
167
|
| 661 |
+
],
|
| 662 |
+
"page_idx": 5
|
| 663 |
+
},
|
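
The zero-padding and sparsity-weighted aggregation above can be sketched as follows; this is one reading of the formulas (pad to the largest received rank, weight each client by $\|\Delta\mathbf{W}_k^{(t)}\|_F$ normalized by $\mathbf{Z}^{(t)}$) with illustrative names, not the authors' code.

```python
import numpy as np

def hetlora_aggregate(Bs, As):
    """Sparsity-weighted aggregation of heterogeneous-rank LoRA modules.

    Bs[k]: (d, r_k), As[k]: (r_k, l). Modules are zero-padded to the largest
    received rank and averaged with weights ||B_k A_k||_F / Z.
    """
    d, l = Bs[0].shape[0], As[0].shape[1]
    r_max = max(B.shape[1] for B in Bs)

    def pad(B, A):
        Bp = np.zeros((d, r_max)); Bp[:, :B.shape[1]] = B
        Ap = np.zeros((r_max, l)); Ap[:A.shape[0], :] = A
        return Bp, Ap

    # ||S_k|| equals the Frobenius norm of the reconstructed update B_k A_k.
    norms = np.array([np.linalg.norm(B @ A) for B, A in zip(Bs, As)])
    p = norms / norms.sum()                               # p_k = ||S_k|| / Z
    padded = [pad(B, A) for B, A in zip(Bs, As)]
    B_glob = sum(pk * Bp for pk, (Bp, _) in zip(p, padded))
    A_glob = sum(pk * Ap for pk, (_, Ap) in zip(p, padded))
    return B_glob, A_glob
```
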
| 664 |
+
{
|
| 665 |
+
"type": "text",
|
| 666 |
+
"text": "3.4 Why not Simply Reconstruct First, then Redistribute the LoRA modules?",
|
| 667 |
+
"text_level": 1,
|
| 668 |
+
"bbox": [
|
| 669 |
+
115,
|
| 670 |
+
183,
|
| 671 |
+
478,
|
| 672 |
+
212
|
| 673 |
+
],
|
| 674 |
+
"page_idx": 5
|
| 675 |
+
},
|
| 676 |
+
{
|
| 677 |
+
"type": "text",
|
| 678 |
+
"text": "One might ask why not simply reconstruct each of the LoRA modules to the full matrix and aggregate them. Here we show that reconstructing the LoRA modules and aggregating them to get the full model results in a different full model compared to when we aggregate the LoRA modules first and then reconstruct the final model. In Section 4 we also empirically show that reconstructing the LoRA modules to the full model and redistributing them after truncated SVD to the corresponding rank of the clients results in an overwhelming performance compared to HETLoRA.",
|
| 679 |
+
"bbox": [
|
| 680 |
+
114,
|
| 681 |
+
223,
|
| 682 |
+
489,
|
| 683 |
+
416
|
| 684 |
+
],
|
| 685 |
+
"page_idx": 5
|
| 686 |
+
},
|
| 687 |
+
{
|
| 688 |
+
"type": "text",
|
| 689 |
+
"text": "Let us consider a simple case where there are 2 clients with heterogeneous rank lora modules $\\mathbf{B}_1\\in$ $\\mathbb{R}^{d\\times 1}$ $\\mathbf{A}_1\\in \\mathbb{R}^{1\\times l}$ and $\\mathbf{B}_2\\in \\mathbb{R}^{d\\times 1}$ $\\mathbf{A}_2\\in \\mathbb{R}^{2\\times l}$ respectively for client 1 and client 2 where the former has rank 1 and latter has rank 2. We set the notation for the LoRA modules' $i^{th}$ row and $j^{th}$ column value for $\\mathbf{B}_k$ and $\\mathbf{A}_k$ as $b_{k,ij}$ and $a_{k,ij}$ respectively. Then with $d = 3$ $l = 2$ when we reconstruct each of the LoRA modules first and then aggregate the full model we have its $i^{th}$ row and $j^{th}$ column as $(\\sum_{k = 1}^{2}b_{k,i0}a_{k,0j}) + b_{2,i1}a_{2,1j}$ and aggregating the LoRA modules first and then reconstructing the model has the full model's $i^{th}$ row and $j^{th}$ column as $(\\sum_{k = 1}^{2}b_{k,i0})(\\sum_{k = 1}^{2}a_{k,0j}) + b_{2,i1}a_{2,1j}$ .",
|
| 690 |
+
"bbox": [
|
| 691 |
+
114,
|
| 692 |
+
418,
|
| 693 |
+
489,
|
| 694 |
+
645
|
| 695 |
+
],
|
| 696 |
+
"page_idx": 5
|
| 697 |
+
},
|
| 698 |
+
{
|
| 699 |
+
"type": "text",
|
| 700 |
+
"text": "One can observe that the difference between the two models are the cross-terms between the left and right module of different client 1 and 2, i.e., $b_{1,i0}a_{2,0j} + b_{2,i0}a_{1,0j}$ for the $i^{th}$ row and $j^{th}$ column. In other words, when we reconstruct the LoRA modules first and then aggregate them to get the full model, each term in the full model are cross-products between the left and right module of each client and not the cross-products between clients. Thus, reconstructing the LoRA modules loses information on the cross-relation across clients, only retaining the knowledge on the cross-relation between the LoRA modules B and A. Such observation is also corroborated by the reconstruction first's underwhelming performance in Table 3.",
|
| 701 |
+
"bbox": [
|
| 702 |
+
114,
|
| 703 |
+
646,
|
| 704 |
+
490,
|
| 705 |
+
887
|
| 706 |
+
],
|
| 707 |
+
"page_idx": 5
|
| 708 |
+
},
|
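
The cross-term argument can also be checked numerically; the small sketch below (random rank-1 and rank-2 modules with arbitrary toy dimensions) verifies that aggregating the modules first and then reconstructing differs from reconstructing first and then summing by exactly the cross-client terms.

```python
import numpy as np

rng = np.random.default_rng(0)
d, l = 3, 2
B1, A1 = rng.normal(size=(d, 1)), rng.normal(size=(1, l))   # client 1, rank 1
B2, A2 = rng.normal(size=(d, 2)), rng.normal(size=(2, l))   # client 2, rank 2

# Reconstruct each client's update first, then sum the full matrices.
recon_first = B1 @ A1 + B2 @ A2

# Zero-pad client 1 to rank 2, sum the modules, then reconstruct once.
B1p = np.hstack([B1, np.zeros((d, 1))])
A1p = np.vstack([A1, np.zeros((1, l))])
agg_first = (B1p + B2) @ (A1p + A2)

# The discrepancy is exactly the cross-client terms such as b_{1,i0} a_{2,0j}.
cross_terms = B1p @ A2 + B2 @ A1p
print(np.allclose(agg_first - recon_first, cross_terms))    # True
```
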
| 709 |
+
{
|
| 710 |
+
"type": "text",
|
| 711 |
+
"text": "4 Experiments",
|
| 712 |
+
"text_level": 1,
|
| 713 |
+
"bbox": [
|
| 714 |
+
507,
|
| 715 |
+
85,
|
| 716 |
+
655,
|
| 717 |
+
101
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 5
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "text",
|
| 723 |
+
"text": "In this section, we present results for HETLORA and its baselines in terms of the performance on training speed, computation/communication efficiency, and final achieved performance. First, we show the performance of homogeneous LoRA to show how LoRA in general performs for low and high rank values. Second, we demonstrate HETLORA's performance for different $r_{\\mathrm{min}}$ and $r_{\\mathrm{max}}$ values comparing them with full fine-tuning, homogeneous LoRA, and the reconstruction-first method elaborated in Section 3.4. We also conduct an ablation study on HETLORA with varying decay factor $\\gamma$ for the rank self-pruning step. The rank distribution across clients for HETLORA, unless mentioned otherwise, is set to a truncated power-law distribution with $\\alpha = 0.1$ in the range between $[r_{\\mathrm{min}}, r_{\\mathrm{max}}]$ (inclusively), where the small $\\alpha$ value makes the distribution skewed towards smaller ranks. All experiments were ran with 3 different random seeds and their average is shown along with the standard deviation.",
|
| 724 |
+
"bbox": [
|
| 725 |
+
507,
|
| 726 |
+
107,
|
| 727 |
+
882,
|
| 728 |
+
444
|
| 729 |
+
],
|
| 730 |
+
"page_idx": 5
|
| 731 |
+
},
|
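
For concreteness, one plausible way to realize the truncated power-law rank assignment described above is sketched below; the exact sampler is an assumption on our part (NumPy's power distribution with shape parameter $\alpha$, rescaled into $[r_{\min}, r_{\max}]$), chosen because a small $\alpha$ concentrates the mass on small ranks.

```python
import numpy as np

def sample_client_ranks(num_clients, r_min, r_max, alpha=0.1, seed=0):
    """Draw per-client LoRA ranks skewed towards r_min for small alpha.

    numpy's power distribution has density alpha * x**(alpha - 1) on [0, 1];
    for alpha < 1 most of the mass sits near 0, so after rescaling most
    clients receive ranks close to r_min while a few get larger ranks.
    """
    rng = np.random.default_rng(seed)
    x = rng.power(alpha, size=num_clients)                  # skewed towards 0
    ranks = r_min + np.round(x * (r_max - r_min)).astype(int)
    return np.clip(ranks, r_min, r_max)

# Example: 100 clients with ranks in [1, 50]; most end up near the minimum.
print(np.bincount(sample_client_ranks(100, 1, 50), minlength=51)[1:11])
```
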
| 732 |
+
{
|
| 733 |
+
"type": "text",
|
| 734 |
+
"text": "Model. We use the Transformer-based language model PaLM 2 (Google, 2023) of size XXS and XS for our experiments which are lightweight enough to fit in to the category of ODFMs (Google DeepMind, 2023) compared to standard FMs. The LoRA modules are applied to only the self-attention layers as proposed in the original LoRA paper (Hu et al., 2021), and their relative number of parameters compared to the original model are shown in Table 2.",
|
| 735 |
+
"bbox": [
|
| 736 |
+
507,
|
| 737 |
+
445,
|
| 738 |
+
882,
|
| 739 |
+
606
|
| 740 |
+
],
|
| 741 |
+
"page_idx": 5
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "text",
|
| 745 |
+
"text": "Tasks. The tasks we consider are the chat dialogue from the multi-session chat (MSC) dataset (Xu et al., 2021) and the text summarization task from the Reddit dataset (Volske et al., 2017). The MSC data is a collection of human-human interactions comprising numerous extended chat sessions, and we use perplexity (Zhang et al., 2018) as the metric which has been used to show the quality of chat responses from generative models from previous literature (Sedoc et al., 2019). We sample 100 users uniformly at random and partition their data for training and evaluation by each previous_DIALOG and dialog. The Reddit text summarization data consists of real users' reddit posts and their summarization, and we use RougeL (Lin, 2004) as the metric. We use 298 users from Reddit that have at least 100 data sam",
|
| 746 |
+
"bbox": [
|
| 747 |
+
507,
|
| 748 |
+
608,
|
| 749 |
+
882,
|
| 750 |
+
881
|
| 751 |
+
],
|
| 752 |
+
"page_idx": 5
|
| 753 |
+
},
|
| 754 |
+
{
|
| 755 |
+
"type": "page_number",
|
| 756 |
+
"text": "6",
|
| 757 |
+
"bbox": [
|
| 758 |
+
492,
|
| 759 |
+
903,
|
| 760 |
+
505,
|
| 761 |
+
915
|
| 762 |
+
],
|
| 763 |
+
"page_idx": 5
|
| 764 |
+
},
|
| 765 |
+
{
|
| 766 |
+
"type": "table",
|
| 767 |
+
"img_path": "images/89fdf03e26887956b635cf2b96e73d47888cd1e2db6262807b9e0e04748f7f3b.jpg",
|
| 768 |
+
"table_caption": [],
|
| 769 |
+
"table_footnote": [],
|
| 770 |
+
"table_body": "<table><tr><td></td><td colspan=\"2\">Reddit (RougeL)</td><td colspan=\"2\">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td></tr><tr><td>Full</td><td>94.56(±0.01)</td><td>94.87(±0.04)</td><td>32.70(±0.17)</td><td>23.40(±0.36)</td></tr><tr><td>HOMLORA r = 5</td><td>92.57(±1.56), ×0.001</td><td>92.89(±0.96)</td><td>80.51(±8.32), ×0.001</td><td>64.59(±9.31)</td></tr><tr><td>HOMLORA r = 50</td><td>70.57(±2.13), ×0.01</td><td>84.95(±1.59)</td><td>307.96(±11.43), ×0.01</td><td>167.46(±1.72)</td></tr><tr><td>Recon+SVD</td><td>63.28(±1.92), ×0.003</td><td>75.17(±1.25)</td><td>323.89(±20.57), ×0.002</td><td>215.63(±15.38)</td></tr><tr><td>HETLORA γ = 0.99</td><td>94.23(±0.03), ×0.003</td><td>94.41(±0.05)</td><td>53.93(±1.57), ×0.002</td><td>38.76(±0.52)</td></tr></table>",
|
| 771 |
+
"bbox": [
|
| 772 |
+
142,
|
| 773 |
+
83,
|
| 774 |
+
853,
|
| 775 |
+
178
|
| 776 |
+
],
|
| 777 |
+
"page_idx": 6
|
| 778 |
+
},
|
| 779 |
+
{
|
| 780 |
+
"type": "text",
|
| 781 |
+
"text": "Table 3: Final RougeL score for Reddit text summarization and perplexity for multi-session chat for different federated fine-tuning methods. The blue text indicates the ratio of trained number of parameters compared to the full fine-tuning case. HETLORA outperforms both HOMLORA and Recon+SVD method, but slightly underperforms the full fine-tuning case. However, compared to full fine-tuning the number of trained parameter is significantly smaller.",
|
| 782 |
+
"bbox": [
|
| 783 |
+
114,
|
| 784 |
+
187,
|
| 785 |
+
880,
|
| 786 |
+
237
|
| 787 |
+
],
|
| 788 |
+
"page_idx": 6
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "image",
|
| 792 |
+
"img_path": "images/a137447f508a53cc52e53c835f385e34031de5c4a2f5275c1476a0467243a2a0.jpg",
|
| 793 |
+
"image_caption": [
|
| 794 |
+
"(a) PaLM 2-XXS"
|
| 795 |
+
],
|
| 796 |
+
"image_footnote": [],
|
| 797 |
+
"bbox": [
|
| 798 |
+
119,
|
| 799 |
+
249,
|
| 800 |
+
297,
|
| 801 |
+
365
|
| 802 |
+
],
|
| 803 |
+
"page_idx": 6
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "image",
|
| 807 |
+
"img_path": "images/1d9997d2a18bb74b2b2df957a97f0f79da1e1a7d4061fb62bb34449500266356.jpg",
|
| 808 |
+
"image_caption": [
|
| 809 |
+
"(b) PaLM 2-XS",
|
| 810 |
+
"Figure 3: Performance of homogeneous LoRA for different rank $r$ . Higher ranks achieve better performance with fewer communication rounds than the lower ranks, but they overfit quickly. Conversely, the lowest rank $r = 1$ achieves low perplexity slower than higher ranks, but without overfitting."
|
| 811 |
+
],
|
| 812 |
+
"image_footnote": [],
|
| 813 |
+
"bbox": [
|
| 814 |
+
305,
|
| 815 |
+
250,
|
| 816 |
+
480,
|
| 817 |
+
365
|
| 818 |
+
],
|
| 819 |
+
"page_idx": 6
|
| 820 |
+
},
|
| 821 |
+
{
|
| 822 |
+
"type": "text",
|
| 823 |
+
"text": "plies as the training clients and use another 100 users with at least 100 data samples for evaluation.",
|
| 824 |
+
"bbox": [
|
| 825 |
+
115,
|
| 826 |
+
450,
|
| 827 |
+
487,
|
| 828 |
+
481
|
| 829 |
+
],
|
| 830 |
+
"page_idx": 6
|
| 831 |
+
},
|
| 832 |
+
{
|
| 833 |
+
"type": "text",
|
| 834 |
+
"text": "Local Training. We use mini-batch size 8 and number of local iterations $\\tau = 5$ with the feature length set to 1024. For the learning rate we perform grid search in $\\eta = \\{0.1, 0.01, 0.001, 0.0001\\}$ . For each MSC and Reddit task, we select 5 and 10 clients per communication round respectively.",
|
| 835 |
+
"bbox": [
|
| 836 |
+
114,
|
| 837 |
+
483,
|
| 838 |
+
487,
|
| 839 |
+
580
|
| 840 |
+
],
|
| 841 |
+
"page_idx": 6
|
| 842 |
+
},
|
| 843 |
+
{
|
| 844 |
+
"type": "text",
|
| 845 |
+
"text": "4.1 Experiment Results",
|
| 846 |
+
"text_level": 1,
|
| 847 |
+
"bbox": [
|
| 848 |
+
115,
|
| 849 |
+
590,
|
| 850 |
+
317,
|
| 851 |
+
606
|
| 852 |
+
],
|
| 853 |
+
"page_idx": 6
|
| 854 |
+
},
|
| 855 |
+
{
|
| 856 |
+
"type": "text",
|
| 857 |
+
"text": "Homogeneous LoRA and the Effect of Ranks",
|
| 858 |
+
"text_level": 1,
|
| 859 |
+
"bbox": [
|
| 860 |
+
114,
|
| 861 |
+
608,
|
| 862 |
+
489,
|
| 863 |
+
623
|
| 864 |
+
],
|
| 865 |
+
"page_idx": 6
|
| 866 |
+
},
|
| 867 |
+
{
|
| 868 |
+
"type": "text",
|
| 869 |
+
"text": "$r$ . First, we evaluate the performance of federated fine-tuning of the LoRA modules with homogeneous LoRA deployment across clients in Fig. 3 for different ranks $r \\in [1,5,20,50]$ . We observe that a higher rank $r$ for homogeneous LoRA achieves better perplexity floor with fewer communication rounds than the lower ranks but quickly overfits resulting in worse performance compared to the lower ranks after more communication rounds. On the other hand, while the lower rank cases need more communication rounds to achieve good performance, it does not have the problem of overfitting as the higher ranks. Hence for homogeneous LoRA, there is a trade-off to consider between low and high ranks, in terms of faster performance achievement and overfitting. Note that these ob",
|
| 870 |
+
"bbox": [
|
| 871 |
+
114,
|
| 872 |
+
624,
|
| 873 |
+
489,
|
| 874 |
+
881
|
| 875 |
+
],
|
| 876 |
+
"page_idx": 6
|
| 877 |
+
},
|
| 878 |
+
{
|
| 879 |
+
"type": "image",
|
| 880 |
+
"img_path": "images/89e6060679c6eaba50f632d83ec0208b8ade559368d86053b6cb1037b29fcf3d.jpg",
|
| 881 |
+
"image_caption": [
|
| 882 |
+
"(a) $r_{\\mathrm{max}} = 10$",
|
| 883 |
+
"Figure 4: Performance of HETLORA without rank pruning or and with simple average aggregation. Similar to homogeneous LoRA, larger $r_{\\mathrm{min}}$ leads to overfitting for heterogeneous LoRA, but it is not as severe as homogeneous LoRA even for larger maximum rank $r_{\\mathrm{max}} = 50$ showing that the smaller rank LoRA modules act as a regularizer for HETLORA."
|
| 884 |
+
],
|
| 885 |
+
"image_footnote": [],
|
| 886 |
+
"bbox": [
|
| 887 |
+
515,
|
| 888 |
+
250,
|
| 889 |
+
690,
|
| 890 |
+
341
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 6
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "image",
|
| 896 |
+
"img_path": "images/160c57883da8f71cb41c8e57238d11bcf9e2527f0ea7eca20a2fbc17673a32ad.jpg",
|
| 897 |
+
"image_caption": [
|
| 898 |
+
"(b) $r_{\\mathrm{max}} = 50$"
|
| 899 |
+
],
|
| 900 |
+
"image_footnote": [],
|
| 901 |
+
"bbox": [
|
| 902 |
+
699,
|
| 903 |
+
250,
|
| 904 |
+
873,
|
| 905 |
+
341
|
| 906 |
+
],
|
| 907 |
+
"page_idx": 6
|
| 908 |
+
},
|
| 909 |
+
{
|
| 910 |
+
"type": "text",
|
| 911 |
+
"text": "servations are consistent with previous literature in the centralized setting where a higher rank does not necessarily yields the best performance (Hu et al., 2021; Zhang et al., 2023b). Next, we show that HETLoRA achieves good performance quickly without this overfitting issue, showing better performance than the homogeneous LoRA case.",
|
| 912 |
+
"bbox": [
|
| 913 |
+
507,
|
| 914 |
+
438,
|
| 915 |
+
880,
|
| 916 |
+
550
|
| 917 |
+
],
|
| 918 |
+
"page_idx": 6
|
| 919 |
+
},
|
| 920 |
+
{
|
| 921 |
+
"type": "text",
|
| 922 |
+
"text": "Naïve Heterogeneous LoRA and the Effect of $r_{\\mathrm{min}}$ and $r_{\\mathrm{max}}$ . First, we show the performance of naïve heterogeneous LoRA without self rank-pruning and with only average aggregation instead of the sparsity-weighted aggregation in Fig. 4. We can see similar observations to those from homogeneous LoRA where a smaller minimum rank $r_{\\mathrm{min}} = 1$ leads to slower training but better performance while a larger maximum rank leads to faster training but worse performance. However, compared to homogeneous LoRA the overfitting does not get as severe for heterogeneous LoRA even with much larger ranks such as $r_{\\mathrm{max}} = 50$ . We can imply from this result that the smaller rank LoRA modules act as a regularizer in heterogeneous LoRA. Next, we show that by adding the self rank-pruning and sparsity-weighted aggregation, even with $r_{\\mathrm{min}} = 5$ we are able to prevent overfitting issues and achieve better training speed and final performance than other baselines.",
|
| 923 |
+
"bbox": [
|
| 924 |
+
507,
|
| 925 |
+
560,
|
| 926 |
+
882,
|
| 927 |
+
882
|
| 928 |
+
],
|
| 929 |
+
"page_idx": 6
|
| 930 |
+
},
|
| 931 |
+
{
|
| 932 |
+
"type": "page_number",
|
| 933 |
+
"text": "7",
|
| 934 |
+
"bbox": [
|
| 935 |
+
492,
|
| 936 |
+
903,
|
| 937 |
+
504,
|
| 938 |
+
915
|
| 939 |
+
],
|
| 940 |
+
"page_idx": 6
|
| 941 |
+
},
|
| 942 |
+
{
|
| 943 |
+
"type": "table",
|
| 944 |
+
"img_path": "images/43c5c455d4af7cdbd3d608494de688d9afd9680889c3a6355574bb3534ca5956.jpg",
|
| 945 |
+
"table_caption": [],
|
| 946 |
+
"table_footnote": [],
|
| 947 |
+
"table_body": "<table><tr><td></td><td colspan=\"2\">Reddit (RougeL)</td><td colspan=\"2\">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td></tr><tr><td>HETLORA, γ = 1</td><td>92.17 (±0.08)</td><td>91.95 (±0.03)</td><td>55.07 (±0.81)</td><td>40.92 (±0.58)</td></tr><tr><td>HETLORA, γ = 0.99</td><td>94.23 (±0.03)</td><td>94.41 (±0.05)</td><td>53.93 (±1.57)</td><td>38.76 (±0.52)</td></tr><tr><td>HETLORA, γ = 0.95</td><td>89.62 (±1.33)</td><td>83.19 (±1.70)</td><td>71.10 (±1.39)</td><td>46.39 (±0.87)</td></tr><tr><td>HETLORA, γ = 0.85</td><td>60.31 (±3.04)</td><td>53.28 (±2.47)</td><td>120.72 (±10.93)</td><td>59.67 (±1.98)</td></tr></table>",
|
| 948 |
+
"bbox": [
|
| 949 |
+
191,
|
| 950 |
+
83,
|
| 951 |
+
803,
|
| 952 |
+
162
|
| 953 |
+
],
|
| 954 |
+
"page_idx": 7
|
| 955 |
+
},
|
| 956 |
+
{
|
| 957 |
+
"type": "text",
|
| 958 |
+
"text": "Table 4: Ablation study on the effect of the decaying factor $\\gamma$ for HETLORA's self-rank pruning in the local training step. While aggressive pruning can be harmful to HETLORA's performance, pruning $\\left( {\\gamma = {0.99}}\\right)$ can outperform the case when there is no pruning at all $\\left( {\\gamma = 1}\\right)$ by reducing the noise introduced by large rank clients with low data complexity.",
|
| 959 |
+
"bbox": [
|
| 960 |
+
114,
|
| 961 |
+
167,
|
| 962 |
+
880,
|
| 963 |
+
206
|
| 964 |
+
],
|
| 965 |
+
"page_idx": 7
|
| 966 |
+
},
|
| 967 |
+
{
|
| 968 |
+
"type": "text",
|
| 969 |
+
"text": "Heterogeneous LoRA compared to Baselines. Finally, we compare our proposed HETLORA with other baselines in Table 3 and Fig. 5. We see that HETLORA with $r_{\\mathrm{min}} = 5$ and $r_{\\mathrm{max}} = 50$ achieves faster training as well as better performance than homogeneous LoRA cases with both edge cases of the ranks $r \\in \\{5, 50\\}$ and reconstruction+SVD which was explained in Section 3.4. This implies that HETLORA is not only practical in the sense that clients are allowed to have their own rank values, it can also outperform the limited case of homogeneous LoRA where all clients have $r = r_{\\mathrm{min}}$ or the impractical case where all clients have $r = r_{\\mathrm{max}}$ . We also observe that HETLORA achieves slightly lower performance than full fine-tuning. However, as shown in the blue text in Table 3 that shows the number of trained parameters compared to the full fine-tuning case, full fine-tuning requires to train a much larger number of parameters compared to HETLORA, making it infeasible to train with ODFMs in practice. We also show in Fig. 6 that to achieve the targeted performance for both Reddit and MSC task, HETLORA requires significantly less number of parameters to be trained and communicated compared to full fine-tuning. Although for Reddit, HOMLORA has a slightly less number of parameters to be trained, the final achieved RougeL is outperformed by HETLORA as shown in Table 3.",
|
| 970 |
+
"bbox": [
|
| 971 |
+
114,
|
| 972 |
+
214,
|
| 973 |
+
489,
|
| 974 |
+
681
|
| 975 |
+
],
|
| 976 |
+
"page_idx": 7
|
| 977 |
+
},
|
| 978 |
+
{
|
| 979 |
+
"type": "text",
|
| 980 |
+
"text": "Effect of the Decaying Factor $\\gamma$ . Lastly, we conduct an ablation study on the effect of the decaying factor $\\gamma$ of HETLORA's local training step with self-rank pruning in Table 4. We observed that aggressive pruning hurts the performance where $\\gamma = 0.85$ shows the worse performance across the varying $\\gamma$ values. On the other hand, no pruning at all ( $\\gamma = 1$ ) underperforms the case when there is pruning ( $\\gamma = 0.99$ ), showing that reducing the noise introduced by large rank clients which data complexity is actually not that high indeed improves the performance.",
|
| 981 |
+
"bbox": [
|
| 982 |
+
114,
|
| 983 |
+
692,
|
| 984 |
+
489,
|
| 985 |
+
885
|
| 986 |
+
],
|
| 987 |
+
"page_idx": 7
|
| 988 |
+
},
|
| 989 |
+
{
|
| 990 |
+
"type": "image",
|
| 991 |
+
"img_path": "images/3f2353e2680e18751150c6a2ac7afc07a0a12e3f9af54e3e82d3eb4bb3b77cff.jpg",
|
| 992 |
+
"image_caption": [
|
| 993 |
+
"(a) Reddit"
|
| 994 |
+
],
|
| 995 |
+
"image_footnote": [],
|
| 996 |
+
"bbox": [
|
| 997 |
+
515,
|
| 998 |
+
212,
|
| 999 |
+
694,
|
| 1000 |
+
349
|
| 1001 |
+
],
|
| 1002 |
+
"page_idx": 7
|
| 1003 |
+
},
|
| 1004 |
+
{
|
| 1005 |
+
"type": "image",
|
| 1006 |
+
"img_path": "images/35ec4921bcfd14ba238e8ef855771d4f0e047746baed4fbe5535537800d2367b.jpg",
|
| 1007 |
+
"image_caption": [
|
| 1008 |
+
"(b) MSC"
|
| 1009 |
+
],
|
| 1010 |
+
"image_footnote": [],
|
| 1011 |
+
"bbox": [
|
| 1012 |
+
697,
|
| 1013 |
+
214,
|
| 1014 |
+
875,
|
| 1015 |
+
349
|
| 1016 |
+
],
|
| 1017 |
+
"page_idx": 7
|
| 1018 |
+
},
|
| 1019 |
+
{
|
| 1020 |
+
"type": "image",
|
| 1021 |
+
"img_path": "images/27545a04b750ea500465f2e4d511fc61362c359fc4021f9e2d5942f926845714.jpg",
|
| 1022 |
+
"image_caption": [
|
| 1023 |
+
"Figure 5: Comparison of the performance across homogeneous LoRA, heterogeneous LoRA, and full fine-tuning. Heterogeneous LoRA achieves better performance than homogeneous LoRA with fewer number of communication rounds.",
|
| 1024 |
+
"Figure 6: Ratio of communicated number of parameters for different PEFT methods to full fine-tuning to achieve the target value for the metric where it is RougeL 80 for Reddit text summarization task and perplexity 150 for the multisession chat response task. The 'X' means that the target metric is not achieved even after convergence."
|
| 1025 |
+
],
|
| 1026 |
+
"image_footnote": [],
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
534,
|
| 1029 |
+
436,
|
| 1030 |
+
855,
|
| 1031 |
+
546
|
| 1032 |
+
],
|
| 1033 |
+
"page_idx": 7
|
| 1034 |
+
},
|
| 1035 |
+
{
|
| 1036 |
+
"type": "text",
|
| 1037 |
+
"text": "5 Discussions and Concluding Remarks",
|
| 1038 |
+
"text_level": 1,
|
| 1039 |
+
"bbox": [
|
| 1040 |
+
507,
|
| 1041 |
+
644,
|
| 1042 |
+
868,
|
| 1043 |
+
659
|
| 1044 |
+
],
|
| 1045 |
+
"page_idx": 7
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "text",
|
| 1049 |
+
"text": "In our work, we investigated federated fine-tuning for ODFMs that cater to device system and data heterogeneity with our proposed HETLORA. We show that HETLORA is not only practical but also achieves better training speed, communication/computation efficiency, and final performance compared to homogeneous LoRA. We also show interesting findings consistent with previous literature (Hu et al., 2021; Zhang et al., 2023b) that increasing ranks does not always help for homogeneous LoRA. Our findings in this work opens up several questions worth investigating. For instance, if the settings allow us to assign specific ranks to",
|
| 1050 |
+
"bbox": [
|
| 1051 |
+
505,
|
| 1052 |
+
673,
|
| 1053 |
+
882,
|
| 1054 |
+
882
|
| 1055 |
+
],
|
| 1056 |
+
"page_idx": 7
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "page_number",
|
| 1060 |
+
"text": "8",
|
| 1061 |
+
"bbox": [
|
| 1062 |
+
492,
|
| 1063 |
+
903,
|
| 1064 |
+
504,
|
| 1065 |
+
915
|
| 1066 |
+
],
|
| 1067 |
+
"page_idx": 7
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "text",
|
| 1071 |
+
"text": "clients what will be the effective way to assign the ranks across clients for better convergence and performance? Another important next step of our work includes pursuing the theoretical convergence and generalization of heterogeneous LoRA.",
|
| 1072 |
+
"bbox": [
|
| 1073 |
+
114,
|
| 1074 |
+
86,
|
| 1075 |
+
485,
|
| 1076 |
+
165
|
| 1077 |
+
],
|
| 1078 |
+
"page_idx": 8
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"text": "6 Limitations",
|
| 1083 |
+
"text_level": 1,
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
115,
|
| 1086 |
+
178,
|
| 1087 |
+
250,
|
| 1088 |
+
192
|
| 1089 |
+
],
|
| 1090 |
+
"page_idx": 8
|
| 1091 |
+
},
|
| 1092 |
+
{
|
| 1093 |
+
"type": "text",
|
| 1094 |
+
"text": "In this work, we address tackling system and data heterogeneity in federated fine-tuning of on-device foundation models. Our work is motivated by clients being able to carry different ranks for the LoRA fine-tuning method depending on their available resources, and thus exploiting this characteristic to improve federated fine-tuning with heterogeneous LoRA. However, our work assumes that the rank distribution across clients (which is analogous to how system resources are distributed across clients) is independent to the data distribution. There can be scenarios in which this is not necessarily the case where the rank and data distribution can be correlated. For instance, more affluent populations can have better off devices with larger resource capacity, and may have data distributions different to that of less affluent populations. Such correlation should be explored for future work to better understand the implications of heterogenous LoRA.",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
114,
|
| 1097 |
+
203,
|
| 1098 |
+
489,
|
| 1099 |
+
525
|
| 1100 |
+
],
|
| 1101 |
+
"page_idx": 8
|
| 1102 |
+
},
|
| 1103 |
+
{
|
| 1104 |
+
"type": "text",
|
| 1105 |
+
"text": "References",
|
| 1106 |
+
"text_level": 1,
|
| 1107 |
+
"bbox": [
|
| 1108 |
+
115,
|
| 1109 |
+
551,
|
| 1110 |
+
213,
|
| 1111 |
+
565
|
| 1112 |
+
],
|
| 1113 |
+
"page_idx": 8
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "list",
|
| 1117 |
+
"sub_type": "ref_text",
|
| 1118 |
+
"list_items": [
|
| 1119 |
+
"Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, and Salman Avestimehr. 2023. Slora: Federated parameter efficient fine-tuning of language models. CoRR, abs/2308.06522.",
|
| 1120 |
+
"Rishi Bommasani, Drew A. Hudson, and et. al. Ehsan Adeli. 2022. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.",
|
| 1121 |
+
"Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2016. Practical secure aggregation for federated learning on user-held data. In NIPS Workshop on Private Multi-Party Machine Learning.",
|
| 1122 |
+
"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,"
|
| 1123 |
+
],
|
| 1124 |
+
"bbox": [
|
| 1125 |
+
117,
|
| 1126 |
+
574,
|
| 1127 |
+
487,
|
| 1128 |
+
881
|
| 1129 |
+
],
|
| 1130 |
+
"page_idx": 8
|
| 1131 |
+
},
|
| 1132 |
+
{
|
| 1133 |
+
"type": "list",
|
| 1134 |
+
"sub_type": "ref_text",
|
| 1135 |
+
"list_items": [
|
| 1136 |
+
"Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
|
| 1137 |
+
"Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis. 2022. Differentially private bias-term only fine-tuning of foundation models. arXiv preprint arXiv:2210.00036.",
|
| 1138 |
+
"Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, and Haozhao Wang. 2022. Fedtune: A deep dive into efficient federated fine-tuning with pre-trained transformers. CoRR, abs/2211.08025.",
|
| 1139 |
+
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
|
| 1140 |
+
"Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. 2023. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378.",
|
| 1141 |
+
"Google. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.",
|
| 1142 |
+
"Google. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.1040.",
|
| 1143 |
+
"Google DeepMind. 2023. Introducing palm2. https://blog.google/technology/ai/ google-palm-2-ai-large-language-model/.",
|
| 1144 |
+
"Tao Guo, Song Guo, and Junxiao Wang. 2023. Pfed-prompt: Learning personalized prompt for vision-language models in federated learning. In Proceedings of the ACM Web Conference 2023, WWW '23, page 1364-1374, New York, NY, USA. Association for Computing Machinery.",
|
| 1145 |
+
"Tao Guo, Song Guo, Junxiao Wang, and Wenchao Xu. 2022. Promptfl: Let federated participants cooperatively learn prompts instead of models — federated learning in age of foundation model. CoRR, abs/2208.11625.",
|
| 1146 |
+
"Venkatesan Guruswami and Ravi Kannan. 2012. Lecture notes in computer science theory for the information age.",
|
| 1147 |
+
"Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019."
|
| 1148 |
+
],
|
| 1149 |
+
"bbox": [
|
| 1150 |
+
510,
|
| 1151 |
+
86,
|
| 1152 |
+
880,
|
| 1153 |
+
881
|
| 1154 |
+
],
|
| 1155 |
+
"page_idx": 8
|
| 1156 |
+
},
|
| 1157 |
+
{
|
| 1158 |
+
"type": "page_number",
|
| 1159 |
+
"text": "9",
|
| 1160 |
+
"bbox": [
|
| 1161 |
+
492,
|
| 1162 |
+
903,
|
| 1163 |
+
504,
|
| 1164 |
+
914
|
| 1165 |
+
],
|
| 1166 |
+
"page_idx": 8
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "list",
|
| 1170 |
+
"sub_type": "ref_text",
|
| 1171 |
+
"list_items": [
|
| 1172 |
+
"Parameter-efficient transfer learning for nlp. In Proceedings of the International Conference on Machine Learning (ICML).",
|
| 1173 |
+
"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR).",
|
| 1174 |
+
"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In The 36th Conference on Neural Information Processing Systems (NeurIPS 2022).",
|
| 1175 |
+
"Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would elsa do? freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090.",
|
| 1176 |
+
"Gwen Legate, Nicolas Bernier, Lucas Caccia, Edouard Oyallon, and Eugene Belilovsky. 2023. Guiding the last layer in federated learning with pre-trained models. In Workshop of Federated Learning and Analytics in Practice@ICML.",
|
| 1177 |
+
"Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Empirical Methods in Natural Language Processing (EMNLP).",
|
| 1178 |
+
"Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.",
|
| 1179 |
+
"Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
|
| 1180 |
+
"Andre Manoel, Mirian del Carmen Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Robert Sim, Dan Miller, Danny Karmon, and Dimitrios Dimitriadis. 2023. Federated multilingual models for medical transcript analysis. In Conference on Health, Inference, and Learning (CHIL), pages 147-162.",
|
| 1181 |
+
"H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agoury Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. International Conference on Artificial Intelligence and Statistics (AISTATS).",
|
| 1182 |
+
"OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:submit/4812508.",
|
| 1183 |
+
"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020."
|
| 1184 |
+
],
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
117,
|
| 1187 |
+
86,
|
| 1188 |
+
487,
|
| 1189 |
+
881
|
| 1190 |
+
],
|
| 1191 |
+
"page_idx": 9
|
| 1192 |
+
},
|
| 1193 |
+
{
|
| 1194 |
+
"type": "list",
|
| 1195 |
+
"sub_type": "ref_text",
|
| 1196 |
+
"list_items": [
|
| 1197 |
+
"Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization for heterogeneous networks. In Proceedings of the 3rd MLSys Conference.",
|
| 1198 |
+
"Joao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019. Chateval: A tool for chatbot evaluation. Proceedings of NAACL-HLT.",
|
| 1199 |
+
"Ofir Ben Shoham and Nadav Rappoport. 2023. Federated learning of medical concepts embedding using behrt. arXiv preprint arXiv:2305.13052.",
|
| 1200 |
+
"Aliaksandra Shysheya, John F Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, and Richard E Turner. 2023. Fit: Parameter efficient few-shot transfer learning for personalized and federated image classification. International Conference on Learning Representations (ICLR).",
|
| 1201 |
+
"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.",
|
| 1202 |
+
"Michael Volske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59-63, Copenhagen, Denmark. Association for Computational Linguistics.",
|
| 1203 |
+
"Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. 2021. A field guide to federated optimization. arXiv preprint arXiv:2107.06917.",
|
| 1204 |
+
"Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. 2019. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 37(6):1205-1221.",
|
| 1205 |
+
"Colin Wei, Sang Michael Xie, and Tengyu Ma. 2021. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. Advances in Neural Information Processing Systems, 34:16158-16170.",
|
| 1206 |
+
"Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael Rabbat, and Ari S. Morcos. 2023. lo-fi: distributed fine-tuning without communication. Transactions on Machine Learning Research (TMLR).",
|
| 1207 |
+
"Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567."
|
| 1208 |
+
],
|
| 1209 |
+
"bbox": [
|
| 1210 |
+
510,
|
| 1211 |
+
86,
|
| 1212 |
+
880,
|
| 1213 |
+
881
|
| 1214 |
+
],
|
| 1215 |
+
"page_idx": 9
|
| 1216 |
+
},
|
| 1217 |
+
{
|
| 1218 |
+
"type": "page_number",
|
| 1219 |
+
"text": "10",
|
| 1220 |
+
"bbox": [
|
| 1221 |
+
489,
|
| 1222 |
+
903,
|
| 1223 |
+
509,
|
| 1224 |
+
914
|
| 1225 |
+
],
|
| 1226 |
+
"page_idx": 9
|
| 1227 |
+
},
|
| 1228 |
+
{
|
| 1229 |
+
"type": "list",
|
| 1230 |
+
"sub_type": "ref_text",
|
| 1231 |
+
"list_items": [
|
| 1232 |
+
"Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. 2023. Fedlora: Model-heterogeneous personalized federated learning with lora tuning. arXiv preprint arXiv:2310.13283.",
|
| 1233 |
+
"Sixing Yu, J. Pablo Muñoz, and Ali Jannesari. 2023. Federated foundation models: Privacy-preserving and collaborative learning for large models. arXiv preprint arXiv:2305.11414.",
|
| 1234 |
+
"Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.",
|
| 1235 |
+
"Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, and Guoyin Wangand Yiran Chen. 2023a. Towards building the federated gpt: Federated instruction tuning. CoRR, abs/2305.05644.",
|
| 1236 |
+
"Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023b. Adaptive budget allocation for parameter-efficient fine-tuning. In The 11th International Conference on Learning Representations (ICLR).",
|
| 1237 |
+
"Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).",
|
| 1238 |
+
"Zhuo Zhang, Xiangjing Hu, Jingyuan Zhang, Yating Zhang, Hui Wang, Lizhen Qu, and Zenglin Xu. 2023c. Fedlegal: The first real-world federated learning benchmark for legal nlp. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL).",
|
| 1239 |
+
"Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. 2023d. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. Findings of the Association for Computational Linguistics (ACL).",
|
| 1240 |
+
"Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, Jianxin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, and Lichao Sun. 2023. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419."
|
| 1241 |
+
],
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
117,
|
| 1244 |
+
86,
|
| 1245 |
+
489,
|
| 1246 |
+
784
|
| 1247 |
+
],
|
| 1248 |
+
"page_idx": 10
|
| 1249 |
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "page_number",
|
| 1252 |
+
"text": "11",
|
| 1253 |
+
"bbox": [
|
| 1254 |
+
489,
|
| 1255 |
+
903,
|
| 1256 |
+
507,
|
| 1257 |
+
914
|
| 1258 |
+
],
|
| 1259 |
+
"page_idx": 10
|
| 1260 |
+
}
|
| 1261 |
+
]
|
2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_model.json
ADDED
|
@@ -0,0 +1,1861 @@
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.308,
|
| 8 |
+
0.061,
|
| 9 |
+
0.725
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2401.06432v2 [cs.LG] 20 Feb 2024"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.238,
|
| 18 |
+
0.092,
|
| 19 |
+
0.764,
|
| 20 |
+
0.131
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.21,
|
| 29 |
+
0.153,
|
| 30 |
+
0.796,
|
| 31 |
+
0.171
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Yae Jee Cho\\(^{1*}\\), Luyang Liu\\(^{2}\\), Zheng Xu\\(^{2}\\), Aldi Fahrezi\\(^{2}\\), Gauri Joshi\\(^{1}\\)"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.307,
|
| 40 |
+
0.171,
|
| 41 |
+
0.697,
|
| 42 |
+
0.187
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "\\(^{1}\\)Carnegie Mellon University, \\(^{2}\\)Google Research"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.192,
|
| 51 |
+
0.189,
|
| 52 |
+
0.808,
|
| 53 |
+
0.205
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "yaejeec@andrew.cmu.edu, {luyangliu,xuzheng,aldifahrezi} \\(@\\) google.com,"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.403,
|
| 62 |
+
0.206,
|
| 63 |
+
0.6,
|
| 64 |
+
0.22
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "gaurij@andrew.cmu.edu"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.262,
|
| 73 |
+
0.263,
|
| 74 |
+
0.341,
|
| 75 |
+
0.278
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Abstract"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.144,
|
| 84 |
+
0.29,
|
| 85 |
+
0.461,
|
| 86 |
+
0.747
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and federated learning (FL) enables the potential for privacy-preserving fine-tuning of the FMs with on-device local data. For federated fine-tuning of FMs, we consider the FMs with small to medium parameter sizes of single digit billion at maximum, referred to as ondevice FMs (ODFMs) that can be deployed on devices for inference but can only be finetuned with parameter efficient methods. In our work, we tackle the data and system heterogeneity problem of federated fine-tuning of ODFMs by proposing a novel method using heterogeneous low-rank approximations (LoRAs), namely HETLORA. First, we show that the naive approach of using homogeneous LoRA ranks across devices face a trade-off between overfitting and slow convergence, and thus propose HETLORA, which allows heterogeneous ranks across client devices and efficiently aggregates and distributes these heterogeneous LoRA modules. By applying rank self-pruning locally and sparsity-weighted aggregation at the server, HETLORA combines the advantages of high and low-rank LoRAs, which achieves improved convergence speed and final performance compared to homogeneous LoRA. Furthermore, HETLORA offers enhanced computation efficiency compared to full fine-tuning, making it suitable for federated fine-tuning across heterogeneous devices."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "title",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.117,
|
| 95 |
+
0.759,
|
| 96 |
+
0.262,
|
| 97 |
+
0.774
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "1 Introduction"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.115,
|
| 106 |
+
0.785,
|
| 107 |
+
0.49,
|
| 108 |
+
0.849
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "The emerging foundation models (FMs) (Bommasani et al., 2022; Zhou et al., 2023; Radford et al., 2021; Devlin et al., 2019; OpenAI, 2023; Google, 2022; Touvron et al., 2023; Brown"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.508,
|
| 117 |
+
0.263,
|
| 118 |
+
0.883,
|
| 119 |
+
0.521
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "et al., 2020; Google, 2022; Driess et al., 2023; Google, 2023) have shown remarkable zero/few shot learning capabilities, performing well on a variety of tasks including text/image generation with prompts, language translation, solving math problems, and conversing in natural language. Standard FMs, however, demand costly resources for directly fine-tuning their entire parameter space. To tackle this issue, many recent works have proposed different parameter-efficient fine-tuning (PEFT) methods of FMs such as prompt tuning (Lester et al., 2021), utilizing adapters (Houlsby et al., 2019), or low-rank adaptation (LoRA) of the original model (Hu et al., 2021) which freezes the original pre-trained parameters of the FM and train additional, smaller number of parameters instead."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.508,
|
| 128 |
+
0.524,
|
| 129 |
+
0.882,
|
| 130 |
+
0.766
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "These PEFT methods, however, assume that i) FMs are deployed to and trained with the data of a single machine/client for adaptation to the downstream task and that ii) the client has enough resources to even fit a standard FM of hundred billion size for, at least, inference. In practice, there are frequently cases where we are interested in fine-tuning FMs for on-device private data that is distributed across multiple devices (clients). For instance, sensitive and private data such as medical information or law-related documents may be hard to collect centrally in a private manner and fine-tuning of the FMs may need to be done at the edge (Manoel et al., 2023; Shoham and Rappoport, 2023; Zhang et al., 2023c)."
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "text",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.508,
|
| 139 |
+
0.77,
|
| 140 |
+
0.882,
|
| 141 |
+
0.882
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "In our work, we focus on such federated fine-tuning scenarios, where we train a set of parameters collaboratively across clients to obtain a global set of parameters that can be plugged in to the FM for the targeted downstream task. Note that federated fine-tuning is orthogonal to personalization of FMs in federated learning (FL) (Guo et al., 2023), which"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_footnote",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.116,
|
| 150 |
+
0.856,
|
| 151 |
+
0.488,
|
| 152 |
+
0.883
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "*Work done while at Google Research. Corresponding authors: yaejeec@andrew.cmu.edu, luyangliu@google.com"
|
| 156 |
+
}
|
| 157 |
+
],
|
| 158 |
+
[
|
| 159 |
+
{
|
| 160 |
+
"type": "table",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.126,
|
| 163 |
+
0.085,
|
| 164 |
+
0.477,
|
| 165 |
+
0.13
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "<table><tr><td></td><td>Zero-Shot</td><td>Few-Shot</td><td>Full-Training</td></tr><tr><td>PaLM 2 XXS</td><td>2930.23</td><td>2541.86</td><td>23.71</td></tr><tr><td>PaLM 2 XS</td><td>2712.86</td><td>481.95</td><td>18.32</td></tr></table>"
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "table_caption",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.115,
|
| 174 |
+
0.135,
|
| 175 |
+
0.488,
|
| 176 |
+
0.185
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "Table 1: Perplexity of PaLM 2 for zero-shot, few-shot (5 communication rounds), and full federated fine-tuning (200 communication rounds) for chat response on the multi-session chat data (further experimental details are in Section 4.)"
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.115,
|
| 185 |
+
0.189,
|
| 186 |
+
0.489,
|
| 187 |
+
0.284
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "aims to train parameters that perform well for individual clients rather than general downstream tasks. We also define on-device FMs (ODFMs) as models with few billion parameters at max that are able to fit into memory on limited capacity clients considering current hardwares."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.115,
|
| 196 |
+
0.286,
|
| 197 |
+
0.489,
|
| 198 |
+
0.478
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "Federated fine-tuning of ODFMs entails unique challenges non-present in either the standard PEFT of FMs or the standard federated training of models that are not FMs. First, FMs have their zero/few-shot learning capability often supported by their large parameter space that is trained on massive data. However, as we show in Table 1 and also presented by previous literature (Kojima et al., 2022), FMs' performance deteriorates as their sizes get smaller and federated fine-tuning may not merely be useful but inevitable for ODFMs to perform well for downstream tasks on devices."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.115,
|
| 207 |
+
0.48,
|
| 208 |
+
0.49,
|
| 209 |
+
0.705
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Moreover, devices have limited and heterogeneous system capabilities (Wang et al., 2019; Bonawitz et al., 2016) and data distributions (Sahu et al., 2020). A suitable PEFT method that flexibly adapts to such heterogeneity across devices should be investigated for federated fine-tuning of ODFMs. Previous work evaluated PEFT with FL via performing a general evaluation over different PEFT methods naively combined with FL (Guo et al., 2022; Zhang et al., 2023d; Chen et al., 2022; Wortsman et al., 2023; Yu et al., 2023). However, they do not consider the practical setting for ODFMs where PEFT methods are catered to the system and data heterogeneity of clients."
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.115,
|
| 218 |
+
0.706,
|
| 219 |
+
0.49,
|
| 220 |
+
0.883
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "In our work, we focus on one of the most prominent PEFT methods, LoRA (Hu et al., 2021) which proposes to train low-rank approximations of the original model. Using LoRA, the number of trainable parameters is greatly reduced to at most \\(0.02\\%\\) of the original ODFM size (see Table 2). The simplest way to apply LoRA to federated fine-tuning is training with homogeneous rank \\(r\\) across the clients as one would train any global model with FL. However, this does not cater to the heterogeneity in FL, where it is even difficult to choose the"
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "image",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.513,
|
| 229 |
+
0.084,
|
| 230 |
+
0.881,
|
| 231 |
+
0.23
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": null
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "image_caption",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.508,
|
| 240 |
+
0.235,
|
| 241 |
+
0.882,
|
| 242 |
+
0.32
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "Figure 1: Overview of heterogeneous rank deployment of LoRA: the pretrained weights \\(\\mathbf{W}_0\\) are stored on-device and heterogeneous ranks are assigned to different clients with \\(r_{\\mathrm{min}} = r_1 < r_2 < r_3 = r_{\\mathrm{max}}\\). In our proposed HETLORA, the server receives the trained heterogeneous LoRA modules and aggregates them with sparsity-weighted aggregation to update the global LoRA module."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.508,
|
| 251 |
+
0.326,
|
| 252 |
+
0.882,
|
| 253 |
+
0.358
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "right LoRA rank for resource limited mobile devices with natural system and data heterogeneity."
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.508,
|
| 262 |
+
0.363,
|
| 263 |
+
0.882,
|
| 264 |
+
0.553
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "To this end, we propose heterogeneous LoRA, namely HETLORA in short, for federated fine-tuning to cater to system and data heterogeneity and outperform the naive combination of LoRA and federated fine-tuning where homogeneous ranks are applied across clients. We show the performance of PaLM 2 (Google, 2023) of XXS and XS size for chat responses on the multi-session chat data (Xu et al., 2021) and text summarization for the Reddit data (Volske et al., 2017), both which are real world data from clients. Our contributions can be summarized as follows:"
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.509,
|
| 273 |
+
0.579,
|
| 274 |
+
0.881,
|
| 275 |
+
0.66
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "- We propose HETLORA that can apply different rank LoRA modules to different clients to cater to the heterogeneous system capabilities and data complexities of the clients, via utilizing rank self-pruning and sparsity-weighted aggregation."
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.509,
|
| 284 |
+
0.675,
|
| 285 |
+
0.881,
|
| 286 |
+
0.771
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "- We show the performance of naively applying LoRA with homogeneous ranks across clients for federated fine-tuning, and show that while large ranks help in speeding-up training, they lead to faster overfitting while smaller ranks are slower in training but does not suffer from overfitting."
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.509,
|
| 295 |
+
0.787,
|
| 296 |
+
0.882,
|
| 297 |
+
0.882
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "- We then evaluate HETLORA to show that it outperforms naively applying homogeneous ranks across clients in terms of both training speed, communication/computation efficiency, and final performance, gaining the best of both worlds of homogeneous LoRA with high and low ranks."
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "list",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.509,
|
| 306 |
+
0.579,
|
| 307 |
+
0.882,
|
| 308 |
+
0.882
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": null
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "page_number",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.493,
|
| 317 |
+
0.904,
|
| 318 |
+
0.505,
|
| 319 |
+
0.916
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "2"
|
| 323 |
+
}
|
| 324 |
+
],
|
| 325 |
+
[
|
| 326 |
+
{
|
| 327 |
+
"type": "table",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.143,
|
| 330 |
+
0.094,
|
| 331 |
+
0.855,
|
| 332 |
+
0.131
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "<table><tr><td></td><td>r = 1</td><td>r = 5</td><td>r = 10</td><td>r = 20</td><td>r = 50</td><td>r = 100</td><td>r = 150</td><td>r = 200</td></tr><tr><td>PaLM 2 XXS, PaLM 2 XS</td><td>0.02%</td><td>0.11%</td><td>0.21%</td><td>0.42%</td><td>1.05%</td><td>2.10%</td><td>3.14%</td><td>4.19%</td></tr></table>"
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "table_caption",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.115,
|
| 341 |
+
0.139,
|
| 342 |
+
0.881,
|
| 343 |
+
0.175
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "Table 2: Percentage of the LoRA parameters' size for different ranks \\( r \\) compared to the original pre-trained ODFM's parameter size. Even for large ranks such as \\( r = {200} \\) the trainable LoRA parameters' size compared to the original pre-trained ODFM size is less than 5% for both PaLM 2-XS and PaLM 2-XXS."
|
| 347 |
+
},
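The ratios reported in Table 2 follow from simple parameter counting: for a single d x l weight matrix, LoRA with rank r adds r*(d + l) trainable parameters against the d*l frozen ones. The sketch below reproduces that arithmetic; the dimensions are illustrative assumptions, not the actual PaLM 2 XXS/XS configuration.

```python
# Back-of-the-envelope arithmetic behind Table 2-style ratios: LoRA with rank r
# adds r*(d + l) trainable parameters for one d x l weight matrix, versus d*l
# frozen parameters. The dimensions are hypothetical, not PaLM 2's actual sizes.
def lora_param_ratio(d: int, l: int, r: int) -> float:
    return r * (d + l) / (d * l)

if __name__ == "__main__":
    for r in (1, 5, 10, 20, 50, 100, 150, 200):
        print(f"r={r:3d}: {100 * lora_param_ratio(4096, 4096, r):.2f}% of one matrix")
```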
|
| 348 |
+
{
|
| 349 |
+
"type": "title",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.116,
|
| 352 |
+
0.189,
|
| 353 |
+
0.273,
|
| 354 |
+
0.205
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "2 Related Work"
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "text",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.115,
|
| 363 |
+
0.219,
|
| 364 |
+
0.49,
|
| 365 |
+
0.541
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "Parameter-Efficient Fine Tuning. There has been a plethora of recent work on PEFT which either trains a subset of parameters within the existing FM whilst other parameters are frozen or introduces an additional set of trainable parameters whilst keeping the original FM frozen. For the former, methods such as head or bias fine-tuning (Wei et al., 2021; Bu et al., 2022; Lee et al., 2019; Zaken et al., 2021) has been explored, while for the latter, methods such as adapters (Houlsby et al., 2019), prompt (Lester et al., 2021) or prefix-tuning (Li and Liang, 2021), and low-rank approximation (Hu et al., 2021) has been proposed. While these number of methods has been proven to perform as well as full model fine-tuning with just a few number of parameters for the centralized setting, it has not been thoroughly explored how these methods perform for a much smaller FM such as ODFMs, in the decentralized setting where clients' system capacities can be heterogeneous and much limited."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.115,
|
| 374 |
+
0.545,
|
| 375 |
+
0.49,
|
| 376 |
+
0.883
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Federated Fine-Tuning. Recently, interest in the intersection of FMs and FL has notably increased (Zhou et al., 2023; Yu et al., 2023). Many recent work has proposed to combine the PEFT methods devised for the centralized setting to FL such as training prompts or adapters collaboratively with FL (Guo et al., 2022; Chen et al., 2022; Zhang et al., 2023a; Shysheya et al., 2023; Legate et al., 2023). Another line of work has proposed to perform a few-shot or nearly zero-shot training of FMs with FL for improved communication-efficiency (Wortsman et al., 2023; Zhang et al., 2023d). However, these work either overlooks that most devices do not have the resource to fit a general FM (Touvron et al., 2023; Brown et al., 2020) (\\(>8\\mathrm{B}\\) parameters) even for inference or does not consider the heterogeneous system capacities of the clients. It is detrimental to consider these factors since FMs that actually fits to the devices in FL are much smaller, making them weaker in the general intelligence capabilities, and also hetero-"
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.509,
|
| 385 |
+
0.191,
|
| 386 |
+
0.88,
|
| 387 |
+
0.222
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "genuine system capacities may prohibit deploying same sized PEFT parameters across clients."
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.508,
|
| 396 |
+
0.224,
|
| 397 |
+
0.885,
|
| 398 |
+
0.513
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "Only a few number of recent work has looked in to using LoRA for FL. For instance, in (Babakniya et al., 2023), the importance of the initialization for the LoRA modules is evaluated where they propose to train the LoRA modules with FL and then perform singular value decomposition (SVD) to gain a good initialization of the LoRA modules. However, the training process of LoRA itself is not altered to adapt to heterogeneous system capabilities of devices. Another recent work (Yi et al., 2023) has evaluated LoRA in the context of personalized FL, but other than applying LoRA to personalization, the LoRA method itself is, again, not changed. Our work proposes heterogeneous LoRA for federated fine-tuning where heterogeneous ranks are deployed and trained across clients by a new algorithm that includes rank self-pruning and sparsity weighted aggregation."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "title",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.509,
|
| 407 |
+
0.526,
|
| 408 |
+
0.844,
|
| 409 |
+
0.543
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "3 Federated Fine-Tuning with LoRA"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "title",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.509,
|
| 418 |
+
0.553,
|
| 419 |
+
0.664,
|
| 420 |
+
0.567
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "3.1 Preliminaries"
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.508,
|
| 429 |
+
0.575,
|
| 430 |
+
0.883,
|
| 431 |
+
0.883
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "Formally, we define the pre-trained ODFM as \\(\\mathbf{W}_0\\in \\mathbb{R}^{d\\times l}\\) and the trainable low-rank decomposed matrix as \\(\\Delta \\mathbf{W}\\in \\mathbb{R}^{d\\times l}\\). In standard LoRA (Hu et al., 2021) under the centralized setting, the low-rank decomposition of \\(\\Delta \\mathbf{W}\\) is constructed such that \\(\\Delta \\mathbf{W} = \\mathbf{BA}\\) where \\(\\mathbf{B}\\in \\mathbb{R}^{d\\times r}\\) and \\(\\mathbf{A}\\in \\mathbb{R}^{r\\times l}\\) are the low rank decomposition of \\(\\Delta \\mathbf{W}\\) with identical rank \\(r\\). Now, let us consider LoRA for federated fine-tuning where there are \\(M\\) total clients. Each client \\(k\\in [M]\\) has private data \\(\\mathcal{B}_k\\) and its corresponding local empirical loss function \\(F_{k}(\\mathbf{W}) = \\frac{1}{|\\mathcal{B}_{k}|}\\sum_{\\xi \\in \\mathcal{B}_{k}}\\ell (\\mathbf{W},\\xi)\\), where \\(\\ell (\\mathbf{W},\\xi)\\) is the loss for model \\(\\mathbf{W}\\) at data sample \\(\\xi\\). The optimization task for federated fine-tuning is to collaboratively find the global parameters which we define as \\(\\overline{\\mathbf{B}}\\) and \\(\\overline{\\mathbf{A}}\\), given the pretrained knowledge \\(\\mathbf{W}_0\\) that can minimize the global objective \\(F(\\overline{\\mathbf{W}}) = \\frac{1}{M}\\sum_{k = 1}^{M}F_{k}(\\overline{\\mathbf{W}})\\) where \\(\\overline{\\mathbf{W}} = \\mathbf{W}_0 + \\overline{\\mathbf{B}}\\overline{\\mathbf{A}}\\). Later in the paper, when"
|
| 435 |
+
},
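As a minimal illustration of the parameterization defined above (a frozen W0 plus a trainable low-rank update Delta_W = B A, and a global objective that averages the per-client losses), the following numpy sketch uses hypothetical shapes and initialization; it is not the paper's implementation.

```python
# Minimal sketch of the Sec. 3.1 parameterization: frozen W0 plus a trainable
# low-rank update Delta_W = B @ A, and the global objective as the average of
# per-client losses. Shapes and initialization are illustrative assumptions.
import numpy as np

d, l, r = 512, 256, 8                     # hypothetical dimensions and rank
rng = np.random.default_rng(0)

W0 = rng.normal(size=(d, l))              # frozen pre-trained ODFM weight
B = np.zeros((d, r))                      # LoRA factor B (d x r)
A = 0.01 * rng.normal(size=(r, l))        # LoRA factor A (r x l)

def effective_weight(W0, B, A):
    """Weight used in the forward pass: W = W0 + B A (W0 stays frozen)."""
    return W0 + B @ A

def global_objective(client_losses):
    """F(W) = (1/M) * sum_k F_k(W) over the M clients' empirical losses."""
    return sum(client_losses) / len(client_losses)
```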
|
| 436 |
+
{
|
| 437 |
+
"type": "page_number",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.493,
|
| 440 |
+
0.904,
|
| 441 |
+
0.506,
|
| 442 |
+
0.916
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "3"
|
| 446 |
+
}
|
| 447 |
+
],
|
| 448 |
+
[
|
| 449 |
+
{
|
| 450 |
+
"type": "text",
|
| 451 |
+
"bbox": [
|
| 452 |
+
0.115,
|
| 453 |
+
0.086,
|
| 454 |
+
0.49,
|
| 455 |
+
0.248
|
| 456 |
+
],
|
| 457 |
+
"angle": 0,
|
| 458 |
+
"content": "introducing heterogeneous LoRA we truncate the LoRA modules' rank dimension, for example from \\(\\mathbf{B} \\in \\mathbb{R}^{d \\times r}\\), \\(\\mathbf{A} \\in \\mathbb{R}^{r \\times l}\\) to \\(\\mathbf{B}' \\in \\mathbb{R}^{d \\times r'}\\), \\(\\mathbf{A}' \\in \\mathbb{R}^{r' \\times l}\\) where \\(r' < r\\). Throughout the paper, we denote such truncation of a matrix with the : symbol for each row and column at the subscript. For instance, for truncation to \\(r' < r\\) at the column for the matrix \\(\\mathbf{B} \\in \\mathbb{R}^{d \\times r}\\), we keep all the columns until \\(r'\\) and omit the last \\(r - r'\\) columns and denote the resulting matrix it as \\(\\mathbf{B}_{:,:r'}\\)."
|
| 459 |
+
},
|
| 460 |
+
{
|
| 461 |
+
"type": "title",
|
| 462 |
+
"bbox": [
|
| 463 |
+
0.116,
|
| 464 |
+
0.258,
|
| 465 |
+
0.423,
|
| 466 |
+
0.273
|
| 467 |
+
],
|
| 468 |
+
"angle": 0,
|
| 469 |
+
"content": "3.2 Naïve Case: Homogeneous LoRA"
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"bbox": [
|
| 474 |
+
0.115,
|
| 475 |
+
0.276,
|
| 476 |
+
0.49,
|
| 477 |
+
0.656
|
| 478 |
+
],
|
| 479 |
+
"angle": 0,
|
| 480 |
+
"content": "A straightforward way to perform federated fine-tuning with LoRA is to train the LoRA modules \\(\\mathbf{B}\\), \\(\\mathbf{A}\\) with homogeneous rank \\(r\\) across all clients with standard FL (McMahan et al., 2017). Specifically, first the clients have the pre-trained ODFM weights \\(\\mathbf{W}_0\\) stored in their devices prior to training for the forward pass when training the LoRA modules. Then, the server sends the global LoRA modules \\(\\overline{\\mathbf{B}}^{(t)}\\), \\(\\overline{\\mathbf{A}}^{(t)}\\) to the set of \\(m\\) selected clients \\(S^{(t)}\\) per communication round \\(t\\). Each selected client \\(k \\in S^{(t)}\\) trains the LoRA modules on their local data for a few local iterations (usually with mini-batch SGD) and send the updated modules \\(\\mathbf{B}_k^{(t)}\\), \\(\\mathbf{A}_k^{(t)}\\) back to the server. The server then updates the global LoRA modules accordingly to \\(\\overline{\\mathbf{B}}^{(t+1)} = \\sum_{k \\in S^{(t)}} \\mathbf{B}_k^{(t)} / m\\), \\(\\overline{\\mathbf{A}}^{(t+1)} = \\sum_{k \\in S^{(t)}} \\mathbf{A}_k^{(t)} / m\\) and sends back to the next set of selected clients for the next communication round. This training process is nearly identical to the standard FL algorithm (McMahan et al., 2017) except that the pretrained weights \\(\\mathbf{W}_0\\) are frozen and locally stored in the clients' devices and only the LoRA moduels are trained and communicated."
|
| 481 |
+
},
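A rough sketch of one communication round of this homogeneous-rank baseline is given below; `local_train` is a hypothetical stand-in for the client's local mini-batch SGD steps, and the server-side update is the plain averaging described above.

```python
# Sketch of one round of the homogeneous-LoRA baseline (Sec. 3.2): every client
# receives the same rank-r modules, trains locally, and the server averages the
# returned factors. `local_train` is a hypothetical callback, not a real API.
import numpy as np

def homogeneous_lora_round(B_global, A_global, selected_clients, local_train):
    B_updates, A_updates = [], []
    for client in selected_clients:
        B_k, A_k = local_train(client, B_global.copy(), A_global.copy())
        B_updates.append(B_k)
        A_updates.append(A_k)
    # FedAvg-style update: B^(t+1) = sum_k B_k / m, A^(t+1) = sum_k A_k / m.
    return np.mean(B_updates, axis=0), np.mean(A_updates, axis=0)
```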
|
| 482 |
+
{
|
| 483 |
+
"type": "text",
|
| 484 |
+
"bbox": [
|
| 485 |
+
0.115,
|
| 486 |
+
0.658,
|
| 487 |
+
0.49,
|
| 488 |
+
0.883
|
| 489 |
+
],
|
| 490 |
+
"angle": 0,
|
| 491 |
+
"content": "Instead of such homogeneous rank deployment across all clients, it is not only possible but more practical to use heterogeneous rank deployment for federated fine-tuning. This involves training LoRA modules with varying ranks on different clients, based on their system capabilities. Such setting is motivated and often required from the system constraints of the clients (Wang et al., 2021) where most of the clients are only capable of having smaller ranks while a few can handle larger ranks. However, this approach poses challenges in aggregating and redistributing the LoRA modules. To address these challenges, we introduce a solution called HETLORA, which pushes the limits"
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "text",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.509,
|
| 497 |
+
0.086,
|
| 498 |
+
0.822,
|
| 499 |
+
0.102
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": "beyond homogeneous LoRA deployment."
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "title",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.509,
|
| 508 |
+
0.114,
|
| 509 |
+
0.876,
|
| 510 |
+
0.129
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "3.3 Proposed Method: Heterogeneous LoRA"
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "text",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.507,
|
| 519 |
+
0.135,
|
| 520 |
+
0.883,
|
| 521 |
+
0.344
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": "Overview. Our proposed heterogeneous LoRA method, namely HETLORA, is not restricted to any specific method to assign the ranks to the clients and the clients can decide their respective ranks themselves. For formality, in our paper, we formulate that each client has a rank denoted as \\( r_k \\), within a range of \\( r_k \\in [r_{\\min}, r_{\\max}] \\), \\( \\forall k \\) (see Fig. 1). HETLORA comprises three steps: 1) Distribution via Truncation, 2) Local Training with Rank Self-Pruning, and 3) Sparsity-Weighted Aggregation of the LoRA modules. These steps are detailed further in the subsequent paragraphs. An overview of HETLORA is illustrated in Fig. 2."
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "text",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.508,
|
| 530 |
+
0.346,
|
| 531 |
+
0.885,
|
| 532 |
+
0.579
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "1) Distribution via Truncation. At the beginning of each communication round \\( t \\), the server holds initial global LoRA modules \\( \\overline{\\mathbf{B}}^{(t)} \\), \\( \\overline{\\mathbf{A}}^{(t)} \\) with a global rank \\( r^{(t)} \\). The value of the global rank \\( r^{(t)} \\) depends on how we aggregate the heterogeneous rank LoRA modules which is elaborated on in step 3). The server then distributes these global LoRA modules to a subset of selected set of clients \\( S^{(t)} \\) with heterogeneous ranks \\( r_k^{(t)} \\), \\( k \\in S^{(t)} \\) for local training<sup>1</sup>. With the given global LoRA modules, we consider a simple and intuitive method of truncation where the server sends \\( \\overline{\\mathbf{B}}_{:,r_k}^{(t)} \\), \\( \\overline{\\mathbf{A}}_{:r_k}^{(t)} \\); to each client \\( k \\) with rank \\( r_k^{(t)} \\) for local training where we omitted the superscript for \\( r_k \\) for simplicity."
|
| 536 |
+
},
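A small sketch of this truncation step, following the slicing notation B[:, :r_k], A[:r_k, :] defined earlier; the shapes, global rank, and client rank used here are illustrative assumptions.

```python
# Sketch of step 1 (distribution via truncation): the server keeps the global
# modules at the global rank r^(t) and sends each client only the first r_k
# columns of B and rows of A. Dimensions and ranks below are illustrative.
import numpy as np

def truncate_for_client(B_global, A_global, r_k):
    return B_global[:, :r_k].copy(), A_global[:r_k, :].copy()

B_global = np.random.randn(512, 20)    # global rank r^(t) = 20 (assumed)
A_global = np.random.randn(20, 256)
B_k, A_k = truncate_for_client(B_global, A_global, r_k=5)   # client with rank 5
```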
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.508,
|
| 541 |
+
0.581,
|
| 542 |
+
0.884,
|
| 543 |
+
0.822
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": "2) Local Training with Rank Self-Pruning. After receiving LoRA modules from the server as \\(\\mathbf{B}_k^{(t,0)} = \\overline{\\mathbf{B}}_{\\therefore r_k}^{(t)}\\), \\(\\mathbf{A}_k^{(t,0)} = \\overline{\\mathbf{A}}_{r_k,\\cdot}\\), each client \\(k \\in S^{(t)}\\) performs \\(\\tau\\) local iterations of mini-batch SGD on their local data to minimize the local objective \\(\\frac{1}{|\\mathcal{B}_k|} \\sum_{\\xi \\in \\mathcal{B}_k} \\ell((\\mathbf{B}_k, \\mathbf{A}_k), \\xi | \\mathbf{W}_0)\\), and sends back the updated LoRA modules \\(\\mathbf{B}_k^{(t,\\tau)} \\in \\mathbb{R}^{d \\times r_k^{(t)}}\\) and \\(\\mathbf{A}_k^{(t,\\tau)} \\in \\mathbb{R}^{r_k^{(t)} \\times l}\\) to the server. This is the same process as the standard local training step in vanilla FedAvg (McMahan et al., 2017). However, we improve this vanilla local training step by adding a rank self-pruning mechanism where clients self-prune their respective ranks depending on the magnitude of the model parameters."
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "page_footnote",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.508,
|
| 552 |
+
0.831,
|
| 553 |
+
0.883,
|
| 554 |
+
0.883
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "There is a superscript \\(t\\) for the ranks \\(r_k^{(t)}\\) across clients which indicates that in HETLORA these heterogeneous ranks can be changed over the communication rounds via self-pruning explained in step 2)."
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "page_number",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.494,
|
| 563 |
+
0.904,
|
| 564 |
+
0.505,
|
| 565 |
+
0.916
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "4"
|
| 569 |
+
}
|
| 570 |
+
],
|
| 571 |
+
[
|
| 572 |
+
{
|
| 573 |
+
"type": "image",
|
| 574 |
+
"bbox": [
|
| 575 |
+
0.118,
|
| 576 |
+
0.083,
|
| 577 |
+
0.573,
|
| 578 |
+
0.201
|
| 579 |
+
],
|
| 580 |
+
"angle": 0,
|
| 581 |
+
"content": null
|
| 582 |
+
},
|
| 583 |
+
{
|
| 584 |
+
"type": "image",
|
| 585 |
+
"bbox": [
|
| 586 |
+
0.578,
|
| 587 |
+
0.083,
|
| 588 |
+
0.738,
|
| 589 |
+
0.201
|
| 590 |
+
],
|
| 591 |
+
"angle": 0,
|
| 592 |
+
"content": null
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "image",
|
| 596 |
+
"bbox": [
|
| 597 |
+
0.741,
|
| 598 |
+
0.083,
|
| 599 |
+
0.881,
|
| 600 |
+
0.201
|
| 601 |
+
],
|
| 602 |
+
"angle": 0,
|
| 603 |
+
"content": null
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"type": "image_caption",
|
| 607 |
+
"bbox": [
|
| 608 |
+
0.115,
|
| 609 |
+
0.205,
|
| 610 |
+
0.882,
|
| 611 |
+
0.272
|
| 612 |
+
],
|
| 613 |
+
"angle": 0,
|
| 614 |
+
"content": "Figure 2: Overview of the zero-padding, sparsity-weighted aggregation, and truncation method for HETLORA; (a): Zero-pad LoRA modules with smaller ranks to \\( r_{\\mathrm{max}} \\) (clients with rank \\( r_{\\mathrm{max}} \\) does not need padding) and calculate their sparsity by calculating the Frobenius norm of the reconstructed model \\( \\Delta \\mathbf{W}_k^{(t)} = \\mathbf{B}_k^{(t)}\\mathbf{A}_k^{(t)} \\); (b): After padding, aggregate all of the clients' LoRA modules with the weights \\( \\| \\mathbf{S}_k^{(t)}\\| /\\mathbf{Z}^{(t)} \\) calculated by \\( \\Delta \\mathbf{W}_k^{(t)} \\) to get the global LoRA modules; (c): Truncate the global LoRA modules for the specific rank of the next selected client (example for client with rank \\( r_2 \\))."
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "text",
|
| 618 |
+
"bbox": [
|
| 619 |
+
0.115,
|
| 620 |
+
0.285,
|
| 621 |
+
0.489,
|
| 622 |
+
0.625
|
| 623 |
+
],
|
| 624 |
+
"angle": 0,
|
| 625 |
+
"content": "Specifically, we add a regularization term to the original local objective to get \\(\\min_{\\mathbf{B}_k,\\mathbf{A}_k}\\frac{1}{|\\mathcal{B}_k|}\\sum_{\\xi \\in \\mathcal{B}_k}\\ell ((\\mathbf{B}_k,\\mathbf{A}_k),\\xi |\\mathbf{W}_0) + \\lambda \\| \\mathbf{B}_{k,:r_k\\gamma :r_k}\\| \\| \\mathbf{A}_{k,r_k\\gamma :r_k,:}\\|\\) where \\(\\gamma < 1\\) is a decay-factor that determines how aggressively we want to prune the ranks to a smaller value. The regularization term aims to minimize the norm of the last few ranks, which will become smaller if the first loss term \\(\\frac{1}{|\\mathcal{B}_k|}\\sum_{\\xi \\in \\mathcal{B}_k}\\ell ((\\mathbf{B}_k,\\mathbf{A}_k),\\xi |\\mathbf{W}_0)\\) is not very large. After training with the new local objective we compare the norm of the updated LoRA modules' last few layers \\(\\| \\mathbf{B}_{k,:r_k\\gamma :r_k}\\| \\| \\mathbf{A}_{k,r_k\\gamma :r_k,:}\\|\\) with the ones from the initially received LoRA modules. If the former is smaller we prune the last few layers (pruning intensity is determined by \\(\\gamma\\)) and send back the LoRA modules with a smaller rank. This means that for the LoRA modules which incurs a small local loss, i.e., well-trained on the clients' local data, the LoRA modules are more likely to be pruned to a smaller rank."
|
| 626 |
+
},
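Read literally, the pruning test compares the norm product of the trailing ranks (the columns of B and rows of A from index γ·r_k to r_k) before and after local training, and drops those ranks if the norm shrank. A minimal sketch under that reading of the indexing, with our own function names:

```python
import numpy as np

def tail_norm(B: np.ndarray, A: np.ndarray, r_k: int, gamma: float) -> float:
    """||B[:, floor(gamma*r_k):r_k]|| * ||A[floor(gamma*r_k):r_k, :]||,
    i.e. the norm product of the last (1 - gamma) fraction of ranks.
    (Our reading of the regularizer; exact indexing is an assumption.)"""
    start = int(np.floor(gamma * r_k))
    return np.linalg.norm(B[:, start:r_k]) * np.linalg.norm(A[start:r_k, :])

def maybe_prune(B_init, A_init, B_new, A_new, r_k, gamma):
    """If the trained tail norm fell below its initial value, drop those ranks."""
    if tail_norm(B_new, A_new, r_k, gamma) < tail_norm(B_init, A_init, r_k, gamma):
        r_new = int(np.floor(gamma * r_k))
        return B_new[:, :r_new], A_new[:r_new, :], r_new
    return B_new, A_new, r_k
```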
|
| 627 |
+
{
|
| 628 |
+
"type": "text",
|
| 629 |
+
"bbox": [
|
| 630 |
+
0.115,
|
| 631 |
+
0.626,
|
| 632 |
+
0.489,
|
| 633 |
+
0.834
|
| 634 |
+
],
|
| 635 |
+
"angle": 0,
|
| 636 |
+
"content": "Such pruning allows HETLORA to reduce the noise in the LoRA modules introduced by clients having a larger rank than the actual rank that their data complexity requires, and also reduces the complexity of the LoRA modules to improve generalization and prevent overfitting (see Table 4). Once the rank is pruned for a client, the client saves the updated rank and uses it as the starting rank if selected for future communication rounds. The client then sends back their updated and possibly rank-pruned LoRA modules to the server for the modules to be processed in the next server-side aggregation step."
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "text",
|
| 640 |
+
"bbox": [
|
| 641 |
+
0.116,
|
| 642 |
+
0.835,
|
| 643 |
+
0.49,
|
| 644 |
+
0.883
|
| 645 |
+
],
|
| 646 |
+
"angle": 0,
|
| 647 |
+
"content": "3) Sparsity-Weighted Aggregation. Finally, the last step of HETLORA is aggregating the received heterogeneous LoRA modules"
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"type": "list",
|
| 651 |
+
"bbox": [
|
| 652 |
+
0.116,
|
| 653 |
+
0.626,
|
| 654 |
+
0.49,
|
| 655 |
+
0.883
|
| 656 |
+
],
|
| 657 |
+
"angle": 0,
|
| 658 |
+
"content": null
|
| 659 |
+
},
|
| 660 |
+
{
|
| 661 |
+
"type": "text",
|
| 662 |
+
"bbox": [
|
| 663 |
+
0.508,
|
| 664 |
+
0.283,
|
| 665 |
+
0.882,
|
| 666 |
+
0.433
|
| 667 |
+
],
|
| 668 |
+
"angle": 0,
|
| 669 |
+
"content": "\\(\\mathbf{B}_k^{(t,\\tau)}\\), \\(\\mathbf{A}_k^{(t,\\tau)}\\), \\(k \\in S^{(t)}\\). A straightforward way to aggregate the heterogeneous LoRA modules is using zero-padding to all the received LoRA modules with \\(r_i^{(t)} < \\max \\{r_k^{(t)} | k \\in S^{(t)}\\}\\) and then perform simple averaging over the modules. However, such naive aggregation can lead to biasing the model towards higher rank clients even when these clients may not hold valuable training information, i.e., having low data complexity, giving noisy updates."
|
| 670 |
+
},
|
| 671 |
+
{
|
| 672 |
+
"type": "text",
|
| 673 |
+
"bbox": [
|
| 674 |
+
0.508,
|
| 675 |
+
0.433,
|
| 676 |
+
0.882,
|
| 677 |
+
0.61
|
| 678 |
+
],
|
| 679 |
+
"angle": 0,
|
| 680 |
+
"content": "In an ideal scenario where we can deploy any rank to any client, deploying higher ranks to the clients with higher data complexity or larger local datasets can retrieve more informative and less sparse updates from the clients. Conversely if we assign higher ranks to the clients whose data complexity is low, the actual rank of the full model from the reconstructed LoRA modules can be smaller than the assigned rank. Thus the higher rank client's update may be unnecessarily over-emphasized in the naive zero padding method."
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"type": "text",
|
| 684 |
+
"bbox": [
|
| 685 |
+
0.508,
|
| 686 |
+
0.611,
|
| 687 |
+
0.882,
|
| 688 |
+
0.885
|
| 689 |
+
],
|
| 690 |
+
"angle": 0,
|
| 691 |
+
"content": "Based on this insight we propose a sparsity-weighted aggregation scheme where the server reconstructs these LoRA modules to the full model as \\(\\Delta \\mathbf{W}_k^{(t)} = \\mathbf{B}_k^{(t)}\\mathbf{A}_k^{(t)}\\) and gets the norm of the singular value vectors from the full models denoted as \\(\\mathbf{S}_k^{(t)}\\) by calculating \\(\\| \\Delta \\mathbf{W}_k^{(t)}\\| _F\\). Note that the costly process of performing SVD for each of the full model \\(\\Delta \\mathbf{W}_k^{(t)}\\) can be avoided by simply calculating the Frobenius norm of \\(\\Delta \\mathbf{W}_k^{(t)}\\) (see Lemma 1.2 in (Guruswami and Kannan, 2012)). The server then weighs the LoRA modules with aggregation weight \\(p_k^{(t)}\\) which is proportional to the norm of the singular value vectors. Formally, we have the global LoRA modules updated as \\(\\overline{\\mathbf{B}}^{(t + 1)} = \\sum_{k\\in \\mathcal{S}^{(t)}}p_k^{(t)}\\mathbf{B}_k^{(t)},\\overline{\\mathbf{A}}^{(t + 1)} = \\sum_{k\\in \\mathcal{S}^{(t)}}p_k^{(t)}\\mathbf{A}_k^{(t)}\\) where \\(p_k^{(t)}\\coloneqq \\| \\mathbf{S}_k^{(t)}\\| /\\mathbf{Z}^{(t)}\\) with"
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "page_number",
|
| 695 |
+
"bbox": [
|
| 696 |
+
0.493,
|
| 697 |
+
0.904,
|
| 698 |
+
0.505,
|
| 699 |
+
0.916
|
| 700 |
+
],
|
| 701 |
+
"angle": 0,
|
| 702 |
+
"content": "5"
|
| 703 |
+
}
|
| 704 |
+
],
|
| 705 |
+
[
|
| 706 |
+
{
|
| 707 |
+
"type": "text",
|
| 708 |
+
"bbox": [
|
| 709 |
+
0.115,
|
| 710 |
+
0.085,
|
| 711 |
+
0.49,
|
| 712 |
+
0.168
|
| 713 |
+
],
|
| 714 |
+
"angle": 0,
|
| 715 |
+
"content": "normalizing factor \\(\\mathbf{Z}^{(t)}\\coloneqq \\sum_{k^{\\prime}\\in S^{(t)}}\\| \\mathbf{S}_{k^{\\prime}}^{(t)}\\|\\). This way, we can de-emphasize the larger rank assigned-clients that have rather less informative updates, and more emphasize the smaller rank assigned-clients that have more informative ones."
|
| 716 |
+
},
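Putting step 3) together: the sketch below zero-pads each client's modules to r_max, uses the Frobenius norm of B_k A_k as ||S_k|| (avoiding an explicit SVD, as noted above), and forms the weighted averages for the new global modules. Function and variable names are ours; this illustrates the stated rule rather than the authors' implementation.

```python
import numpy as np

def sparsity_weighted_aggregate(updates, r_max):
    """Zero-pad heterogeneous-rank LoRA updates (B_k, A_k) to rank r_max and
    average them with weights p_k proportional to ||B_k A_k||_F."""
    padded, weights = [], []
    for B_k, A_k in updates:
        d, r_k = B_k.shape
        l = A_k.shape[1]
        B_pad = np.zeros((d, r_max)); B_pad[:, :r_k] = B_k
        A_pad = np.zeros((r_max, l)); A_pad[:r_k, :] = A_k
        padded.append((B_pad, A_pad))
        # ||S_k|| equals the Frobenius norm of the reconstructed update B_k A_k.
        weights.append(np.linalg.norm(B_k @ A_k, ord="fro"))
    weights = np.array(weights) / np.sum(weights)   # p_k = ||S_k|| / Z
    B_bar = sum(p * B for p, (B, _) in zip(weights, padded))
    A_bar = sum(p * A for p, (_, A) in zip(weights, padded))
    return B_bar, A_bar
```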
|
| 717 |
+
{
|
| 718 |
+
"type": "title",
|
| 719 |
+
"bbox": [
|
| 720 |
+
0.116,
|
| 721 |
+
0.184,
|
| 722 |
+
0.479,
|
| 723 |
+
0.214
|
| 724 |
+
],
|
| 725 |
+
"angle": 0,
|
| 726 |
+
"content": "3.4 Why not Simply Reconstruct First, then Redistribute the LoRA modules?"
|
| 727 |
+
},
|
| 728 |
+
{
|
| 729 |
+
"type": "text",
|
| 730 |
+
"bbox": [
|
| 731 |
+
0.115,
|
| 732 |
+
0.224,
|
| 733 |
+
0.49,
|
| 734 |
+
0.417
|
| 735 |
+
],
|
| 736 |
+
"angle": 0,
|
| 737 |
+
"content": "One might ask why not simply reconstruct each of the LoRA modules to the full matrix and aggregate them. Here we show that reconstructing the LoRA modules and aggregating them to get the full model results in a different full model compared to when we aggregate the LoRA modules first and then reconstruct the final model. In Section 4 we also empirically show that reconstructing the LoRA modules to the full model and redistributing them after truncated SVD to the corresponding rank of the clients results in an overwhelming performance compared to HETLoRA."
|
| 738 |
+
},
|
| 739 |
+
{
|
| 740 |
+
"type": "text",
|
| 741 |
+
"bbox": [
|
| 742 |
+
0.115,
|
| 743 |
+
0.419,
|
| 744 |
+
0.49,
|
| 745 |
+
0.646
|
| 746 |
+
],
|
| 747 |
+
"angle": 0,
|
| 748 |
+
"content": "Let us consider a simple case where there are 2 clients with heterogeneous rank lora modules \\(\\mathbf{B}_1\\in\\) \\(\\mathbb{R}^{d\\times 1}\\) \\(\\mathbf{A}_1\\in \\mathbb{R}^{1\\times l}\\) and \\(\\mathbf{B}_2\\in \\mathbb{R}^{d\\times 1}\\) \\(\\mathbf{A}_2\\in \\mathbb{R}^{2\\times l}\\) respectively for client 1 and client 2 where the former has rank 1 and latter has rank 2. We set the notation for the LoRA modules' \\(i^{th}\\) row and \\(j^{th}\\) column value for \\(\\mathbf{B}_k\\) and \\(\\mathbf{A}_k\\) as \\(b_{k,ij}\\) and \\(a_{k,ij}\\) respectively. Then with \\(d = 3\\) \\(l = 2\\) when we reconstruct each of the LoRA modules first and then aggregate the full model we have its \\(i^{th}\\) row and \\(j^{th}\\) column as \\((\\sum_{k = 1}^{2}b_{k,i0}a_{k,0j}) + b_{2,i1}a_{2,1j}\\) and aggregating the LoRA modules first and then reconstructing the model has the full model's \\(i^{th}\\) row and \\(j^{th}\\) column as \\((\\sum_{k = 1}^{2}b_{k,i0})(\\sum_{k = 1}^{2}a_{k,0j}) + b_{2,i1}a_{2,1j}\\)."
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"bbox": [
|
| 753 |
+
0.115,
|
| 754 |
+
0.647,
|
| 755 |
+
0.492,
|
| 756 |
+
0.888
|
| 757 |
+
],
|
| 758 |
+
"angle": 0,
|
| 759 |
+
"content": "One can observe that the difference between the two models are the cross-terms between the left and right module of different client 1 and 2, i.e., \\( b_{1,i0}a_{2,0j} + b_{2,i0}a_{1,0j} \\) for the \\( i^{th} \\) row and \\( j^{th} \\) column. In other words, when we reconstruct the LoRA modules first and then aggregate them to get the full model, each term in the full model are cross-products between the left and right module of each client and not the cross-products between clients. Thus, reconstructing the LoRA modules loses information on the cross-relation across clients, only retaining the knowledge on the cross-relation between the LoRA modules B and A. Such observation is also corroborated by the reconstruction first's underwhelming performance in Table 3."
|
| 760 |
+
},
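The cross-term argument above can be checked numerically in a few lines. The snippet below uses random matrices purely for illustration and aggregates by summation, matching the expressions in the text; it verifies that the two orders of operations differ exactly by the cross-client terms.

```python
import numpy as np

# Two clients with d = 3, l = 2: client 1 has rank 1, client 2 has rank 2.
rng = np.random.default_rng(0)
B1, A1 = rng.normal(size=(3, 1)), rng.normal(size=(1, 2))
B2, A2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 2))

# Reconstruct each full update first, then aggregate.
recon_then_agg = B1 @ A1 + B2 @ A2

# Zero-pad client 1 to rank 2, aggregate the modules, then reconstruct.
B1p = np.hstack([B1, np.zeros((3, 1))])
A1p = np.vstack([A1, np.zeros((1, 2))])
agg_then_recon = (B1p + B2) @ (A1p + A2)

# The gap is exactly the cross-client terms b_{1,i0} a_{2,0j} + b_{2,i0} a_{1,0j}.
cross_terms = B1 @ A2[:1, :] + B2[:, :1] @ A1
assert np.allclose(agg_then_recon - recon_then_agg, cross_terms)
```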
|
| 761 |
+
{
|
| 762 |
+
"type": "title",
|
| 763 |
+
"bbox": [
|
| 764 |
+
0.509,
|
| 765 |
+
0.086,
|
| 766 |
+
0.657,
|
| 767 |
+
0.102
|
| 768 |
+
],
|
| 769 |
+
"angle": 0,
|
| 770 |
+
"content": "4 Experiments"
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "text",
|
| 774 |
+
"bbox": [
|
| 775 |
+
0.508,
|
| 776 |
+
0.108,
|
| 777 |
+
0.883,
|
| 778 |
+
0.445
|
| 779 |
+
],
|
| 780 |
+
"angle": 0,
|
| 781 |
+
"content": "In this section, we present results for HETLORA and its baselines in terms of the performance on training speed, computation/communication efficiency, and final achieved performance. First, we show the performance of homogeneous LoRA to show how LoRA in general performs for low and high rank values. Second, we demonstrate HETLORA's performance for different \\( r_{\\mathrm{min}} \\) and \\( r_{\\mathrm{max}} \\) values comparing them with full fine-tuning, homogeneous LoRA, and the reconstruction-first method elaborated in Section 3.4. We also conduct an ablation study on HETLORA with varying decay factor \\( \\gamma \\) for the rank self-pruning step. The rank distribution across clients for HETLORA, unless mentioned otherwise, is set to a truncated power-law distribution with \\( \\alpha = 0.1 \\) in the range between \\( [r_{\\mathrm{min}}, r_{\\mathrm{max}}] \\) (inclusively), where the small \\( \\alpha \\) value makes the distribution skewed towards smaller ranks. All experiments were ran with 3 different random seeds and their average is shown along with the standard deviation."
|
| 782 |
+
},
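Since the excerpt does not spell out the exact parameterization of the truncated power law, the sketch below assumes a probability mass proportional to r^(α-1) on the integers in [r_min, r_max], which for α = 0.1 is heavily skewed toward smaller ranks; treat the functional form as an assumption rather than the paper's definition.

```python
import numpy as np

def sample_client_ranks(num_clients, r_min, r_max, alpha=0.1, seed=0):
    """Draw per-client LoRA ranks from a truncated power law on [r_min, r_max].
    Assumes p(r) proportional to r**(alpha - 1); for alpha < 1 this skews
    toward smaller ranks. The paper's exact parameterization may differ."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(r_min, r_max + 1)
    probs = ranks.astype(float) ** (alpha - 1.0)
    probs /= probs.sum()
    return rng.choice(ranks, size=num_clients, p=probs)

# Example: ranks for 100 clients in [5, 50]; most draws land near r_min.
client_ranks = sample_client_ranks(100, 5, 50)
```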
|
| 783 |
+
{
|
| 784 |
+
"type": "text",
|
| 785 |
+
"bbox": [
|
| 786 |
+
0.508,
|
| 787 |
+
0.447,
|
| 788 |
+
0.884,
|
| 789 |
+
0.607
|
| 790 |
+
],
|
| 791 |
+
"angle": 0,
|
| 792 |
+
"content": "Model. We use the Transformer-based language model PaLM 2 (Google, 2023) of size XXS and XS for our experiments which are lightweight enough to fit in to the category of ODFMs (Google DeepMind, 2023) compared to standard FMs. The LoRA modules are applied to only the self-attention layers as proposed in the original LoRA paper (Hu et al., 2021), and their relative number of parameters compared to the original model are shown in Table 2."
|
| 793 |
+
},
|
| 794 |
+
{
|
| 795 |
+
"type": "text",
|
| 796 |
+
"bbox": [
|
| 797 |
+
0.508,
|
| 798 |
+
0.609,
|
| 799 |
+
0.884,
|
| 800 |
+
0.882
|
| 801 |
+
],
|
| 802 |
+
"angle": 0,
|
| 803 |
+
"content": "Tasks. The tasks we consider are the chat dialogue from the multi-session chat (MSC) dataset (Xu et al., 2021) and the text summarization task from the Reddit dataset (Volske et al., 2017). The MSC data is a collection of human-human interactions comprising numerous extended chat sessions, and we use perplexity (Zhang et al., 2018) as the metric which has been used to show the quality of chat responses from generative models from previous literature (Sedoc et al., 2019). We sample 100 users uniformly at random and partition their data for training and evaluation by each previous_DIALOG and dialog. The Reddit text summarization data consists of real users' reddit posts and their summarization, and we use RougeL (Lin, 2004) as the metric. We use 298 users from Reddit that have at least 100 data sam"
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "page_number",
|
| 807 |
+
"bbox": [
|
| 808 |
+
0.494,
|
| 809 |
+
0.904,
|
| 810 |
+
0.506,
|
| 811 |
+
0.916
|
| 812 |
+
],
|
| 813 |
+
"angle": 0,
|
| 814 |
+
"content": "6"
|
| 815 |
+
}
|
| 816 |
+
],
|
| 817 |
+
[
|
| 818 |
+
{
|
| 819 |
+
"type": "table",
|
| 820 |
+
"bbox": [
|
| 821 |
+
0.143,
|
| 822 |
+
0.084,
|
| 823 |
+
0.855,
|
| 824 |
+
0.179
|
| 825 |
+
],
|
| 826 |
+
"angle": 0,
|
| 827 |
+
"content": "<table><tr><td></td><td colspan=\"2\">Reddit (RougeL)</td><td colspan=\"2\">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td><td>PaLM 2-XXS</td></tr><tr><td>Full</td><td>94.56(±0.01)</td><td>94.87(±0.04)</td><td>32.70(±0.17)</td><td>23.40(±0.36)</td></tr><tr><td>HOMLORA r = 5</td><td>92.57(±1.56), ×0.001</td><td>92.89(±0.96)</td><td>80.51(±8.32), ×0.001</td><td>64.59(±9.31)</td></tr><tr><td>HOMLORA r = 50</td><td>70.57(±2.13), ×0.01</td><td>84.95(±1.59)</td><td>307.96(±11.43), ×0.01</td><td>167.46(±1.72)</td></tr><tr><td>Recon+SVD</td><td>63.28(±1.92), ×0.003</td><td>75.17(±1.25)</td><td>323.89(±20.57), ×0.002</td><td>215.63(±15.38)</td></tr><tr><td>HETLORA γ = 0.99</td><td>94.23(±0.03), ×0.003</td><td>94.41(±0.05)</td><td>53.93(±1.57), ×0.002</td><td>38.76(±0.52)</td></tr></table>"
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "table_caption",
|
| 831 |
+
"bbox": [
|
| 832 |
+
0.115,
|
| 833 |
+
0.189,
|
| 834 |
+
0.882,
|
| 835 |
+
0.238
|
| 836 |
+
],
|
| 837 |
+
"angle": 0,
|
| 838 |
+
"content": "Table 3: Final RougeL score for Reddit text summarization and perplexity for multi-session chat for different federated fine-tuning methods. The blue text indicates the ratio of trained number of parameters compared to the full fine-tuning case. HETLORA outperforms both HOMLORA and Recon+SVD method, but slightly underperforms the full fine-tuning case. However, compared to full fine-tuning the number of trained parameter is significantly smaller."
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "image",
|
| 842 |
+
"bbox": [
|
| 843 |
+
0.121,
|
| 844 |
+
0.25,
|
| 845 |
+
0.299,
|
| 846 |
+
0.366
|
| 847 |
+
],
|
| 848 |
+
"angle": 0,
|
| 849 |
+
"content": null
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "image_caption",
|
| 853 |
+
"bbox": [
|
| 854 |
+
0.157,
|
| 855 |
+
0.368,
|
| 856 |
+
0.264,
|
| 857 |
+
0.38
|
| 858 |
+
],
|
| 859 |
+
"angle": 0,
|
| 860 |
+
"content": "(a) PaLM 2-XXS"
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "image",
|
| 864 |
+
"bbox": [
|
| 865 |
+
0.306,
|
| 866 |
+
0.251,
|
| 867 |
+
0.482,
|
| 868 |
+
0.366
|
| 869 |
+
],
|
| 870 |
+
"angle": 0,
|
| 871 |
+
"content": null
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "image_caption",
|
| 875 |
+
"bbox": [
|
| 876 |
+
0.345,
|
| 877 |
+
0.368,
|
| 878 |
+
0.443,
|
| 879 |
+
0.38
|
| 880 |
+
],
|
| 881 |
+
"angle": 0,
|
| 882 |
+
"content": "(b) PaLM 2-XS"
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "image_caption",
|
| 886 |
+
"bbox": [
|
| 887 |
+
0.115,
|
| 888 |
+
0.383,
|
| 889 |
+
0.489,
|
| 890 |
+
0.445
|
| 891 |
+
],
|
| 892 |
+
"angle": 0,
|
| 893 |
+
"content": "Figure 3: Performance of homogeneous LoRA for different rank \\( r \\). Higher ranks achieve better performance with fewer communication rounds than the lower ranks, but they overfit quickly. Conversely, the lowest rank \\( r = 1 \\) achieves low perplexity slower than higher ranks, but without overfitting."
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "text",
|
| 897 |
+
"bbox": [
|
| 898 |
+
0.116,
|
| 899 |
+
0.451,
|
| 900 |
+
0.489,
|
| 901 |
+
0.482
|
| 902 |
+
],
|
| 903 |
+
"angle": 0,
|
| 904 |
+
"content": "plies as the training clients and use another 100 users with at least 100 data samples for evaluation."
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "text",
|
| 908 |
+
"bbox": [
|
| 909 |
+
0.115,
|
| 910 |
+
0.485,
|
| 911 |
+
0.489,
|
| 912 |
+
0.581
|
| 913 |
+
],
|
| 914 |
+
"angle": 0,
|
| 915 |
+
"content": "Local Training. We use mini-batch size 8 and number of local iterations \\(\\tau = 5\\) with the feature length set to 1024. For the learning rate we perform grid search in \\(\\eta = \\{0.1, 0.01, 0.001, 0.0001\\}\\). For each MSC and Reddit task, we select 5 and 10 clients per communication round respectively."
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "title",
|
| 919 |
+
"bbox": [
|
| 920 |
+
0.116,
|
| 921 |
+
0.592,
|
| 922 |
+
0.318,
|
| 923 |
+
0.607
|
| 924 |
+
],
|
| 925 |
+
"angle": 0,
|
| 926 |
+
"content": "4.1 Experiment Results"
|
| 927 |
+
},
|
| 928 |
+
{
|
| 929 |
+
"type": "title",
|
| 930 |
+
"bbox": [
|
| 931 |
+
0.115,
|
| 932 |
+
0.609,
|
| 933 |
+
0.49,
|
| 934 |
+
0.624
|
| 935 |
+
],
|
| 936 |
+
"angle": 0,
|
| 937 |
+
"content": "Homogeneous LoRA and the Effect of Ranks"
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "text",
|
| 941 |
+
"bbox": [
|
| 942 |
+
0.115,
|
| 943 |
+
0.625,
|
| 944 |
+
0.49,
|
| 945 |
+
0.882
|
| 946 |
+
],
|
| 947 |
+
"angle": 0,
|
| 948 |
+
"content": "\\(r\\). First, we evaluate the performance of federated fine-tuning of the LoRA modules with homogeneous LoRA deployment across clients in Fig. 3 for different ranks \\(r \\in [1,5,20,50]\\). We observe that a higher rank \\(r\\) for homogeneous LoRA achieves better perplexity floor with fewer communication rounds than the lower ranks but quickly overfits resulting in worse performance compared to the lower ranks after more communication rounds. On the other hand, while the lower rank cases need more communication rounds to achieve good performance, it does not have the problem of overfitting as the higher ranks. Hence for homogeneous LoRA, there is a trade-off to consider between low and high ranks, in terms of faster performance achievement and overfitting. Note that these ob"
|
| 949 |
+
},
|
| 950 |
+
{
|
| 951 |
+
"type": "image",
|
| 952 |
+
"bbox": [
|
| 953 |
+
0.516,
|
| 954 |
+
0.251,
|
| 955 |
+
0.692,
|
| 956 |
+
0.342
|
| 957 |
+
],
|
| 958 |
+
"angle": 0,
|
| 959 |
+
"content": null
|
| 960 |
+
},
|
| 961 |
+
{
|
| 962 |
+
"type": "image_caption",
|
| 963 |
+
"bbox": [
|
| 964 |
+
0.562,
|
| 965 |
+
0.344,
|
| 966 |
+
0.646,
|
| 967 |
+
0.357
|
| 968 |
+
],
|
| 969 |
+
"angle": 0,
|
| 970 |
+
"content": "(a) \\(r_{\\mathrm{max}} = 10\\)"
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "image",
|
| 974 |
+
"bbox": [
|
| 975 |
+
0.7,
|
| 976 |
+
0.251,
|
| 977 |
+
0.875,
|
| 978 |
+
0.342
|
| 979 |
+
],
|
| 980 |
+
"angle": 0,
|
| 981 |
+
"content": null
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "image_caption",
|
| 985 |
+
"bbox": [
|
| 986 |
+
0.744,
|
| 987 |
+
0.344,
|
| 988 |
+
0.83,
|
| 989 |
+
0.357
|
| 990 |
+
],
|
| 991 |
+
"angle": 0,
|
| 992 |
+
"content": "(b) \\(r_{\\mathrm{max}} = 50\\)"
|
| 993 |
+
},
|
| 994 |
+
{
|
| 995 |
+
"type": "image_caption",
|
| 996 |
+
"bbox": [
|
| 997 |
+
0.508,
|
| 998 |
+
0.359,
|
| 999 |
+
0.882,
|
| 1000 |
+
0.432
|
| 1001 |
+
],
|
| 1002 |
+
"angle": 0,
|
| 1003 |
+
"content": "Figure 4: Performance of HETLORA without rank pruning or and with simple average aggregation. Similar to homogeneous LoRA, larger \\( r_{\\mathrm{min}} \\) leads to overfitting for heterogeneous LoRA, but it is not as severe as homogeneous LoRA even for larger maximum rank \\( r_{\\mathrm{max}} = 50 \\) showing that the smaller rank LoRA modules act as a regularizer for HETLORA."
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "text",
|
| 1007 |
+
"bbox": [
|
| 1008 |
+
0.508,
|
| 1009 |
+
0.439,
|
| 1010 |
+
0.882,
|
| 1011 |
+
0.551
|
| 1012 |
+
],
|
| 1013 |
+
"angle": 0,
|
| 1014 |
+
"content": "servations are consistent with previous literature in the centralized setting where a higher rank does not necessarily yields the best performance (Hu et al., 2021; Zhang et al., 2023b). Next, we show that HETLoRA achieves good performance quickly without this overfitting issue, showing better performance than the homogeneous LoRA case."
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "text",
|
| 1018 |
+
"bbox": [
|
| 1019 |
+
0.508,
|
| 1020 |
+
0.561,
|
| 1021 |
+
0.883,
|
| 1022 |
+
0.883
|
| 1023 |
+
],
|
| 1024 |
+
"angle": 0,
|
| 1025 |
+
"content": "Naïve Heterogeneous LoRA and the Effect of \\( r_{\\mathrm{min}} \\) and \\( r_{\\mathrm{max}} \\). First, we show the performance of naïve heterogeneous LoRA without self rank-pruning and with only average aggregation instead of the sparsity-weighted aggregation in Fig. 4. We can see similar observations to those from homogeneous LoRA where a smaller minimum rank \\( r_{\\mathrm{min}} = 1 \\) leads to slower training but better performance while a larger maximum rank leads to faster training but worse performance. However, compared to homogeneous LoRA the overfitting does not get as severe for heterogeneous LoRA even with much larger ranks such as \\( r_{\\mathrm{max}} = 50 \\). We can imply from this result that the smaller rank LoRA modules act as a regularizer in heterogeneous LoRA. Next, we show that by adding the self rank-pruning and sparsity-weighted aggregation, even with \\( r_{\\mathrm{min}} = 5 \\) we are able to prevent overfitting issues and achieve better training speed and final performance than other baselines."
|
| 1026 |
+
},
|
| 1027 |
+
{
|
| 1028 |
+
"type": "page_number",
|
| 1029 |
+
"bbox": [
|
| 1030 |
+
0.493,
|
| 1031 |
+
0.904,
|
| 1032 |
+
0.505,
|
| 1033 |
+
0.916
|
| 1034 |
+
],
|
| 1035 |
+
"angle": 0,
|
| 1036 |
+
"content": "7"
|
| 1037 |
+
}
|
| 1038 |
+
],
|
| 1039 |
+
[
|
| 1040 |
+
{
|
| 1041 |
+
"type": "table",
|
| 1042 |
+
"bbox": [
|
| 1043 |
+
0.192,
|
| 1044 |
+
0.084,
|
| 1045 |
+
0.805,
|
| 1046 |
+
0.164
|
| 1047 |
+
],
|
| 1048 |
+
"angle": 0,
|
| 1049 |
+
"content": "<table><tr><td></td><td colspan=\"2\">Reddit (RougeL)</td><td colspan=\"2\">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td></tr><tr><td>HETLORA, γ = 1</td><td>92.17 (±0.08)</td><td>91.95 (±0.03)</td><td>55.07 (±0.81)</td><td>40.92 (±0.58)</td></tr><tr><td>HETLORA, γ = 0.99</td><td>94.23 (±0.03)</td><td>94.41 (±0.05)</td><td>53.93 (±1.57)</td><td>38.76 (±0.52)</td></tr><tr><td>HETLORA, γ = 0.95</td><td>89.62 (±1.33)</td><td>83.19 (±1.70)</td><td>71.10 (±1.39)</td><td>46.39 (±0.87)</td></tr><tr><td>HETLORA, γ = 0.85</td><td>60.31 (±3.04)</td><td>53.28 (±2.47)</td><td>120.72 (±10.93)</td><td>59.67 (±1.98)</td></tr></table>"
|
| 1050 |
+
},
|
| 1051 |
+
{
|
| 1052 |
+
"type": "table_caption",
|
| 1053 |
+
"bbox": [
|
| 1054 |
+
0.115,
|
| 1055 |
+
0.168,
|
| 1056 |
+
0.882,
|
| 1057 |
+
0.207
|
| 1058 |
+
],
|
| 1059 |
+
"angle": 0,
|
| 1060 |
+
"content": "Table 4: Ablation study on the effect of the decaying factor \\( \\gamma \\) for HETLORA's self-rank pruning in the local training step. While aggressive pruning can be harmful to HETLORA's performance, pruning \\( \\left( {\\gamma = {0.99}}\\right) \\) can outperform the case when there is no pruning at all \\( \\left( {\\gamma = 1}\\right) \\) by reducing the noise introduced by large rank clients with low data complexity."
|
| 1061 |
+
},
|
| 1062 |
+
{
|
| 1063 |
+
"type": "text",
|
| 1064 |
+
"bbox": [
|
| 1065 |
+
0.115,
|
| 1066 |
+
0.215,
|
| 1067 |
+
0.49,
|
| 1068 |
+
0.682
|
| 1069 |
+
],
|
| 1070 |
+
"angle": 0,
|
| 1071 |
+
"content": "Heterogeneous LoRA compared to Baselines. Finally, we compare our proposed HETLORA with other baselines in Table 3 and Fig. 5. We see that HETLORA with \\( r_{\\mathrm{min}} = 5 \\) and \\( r_{\\mathrm{max}} = 50 \\) achieves faster training as well as better performance than homogeneous LoRA cases with both edge cases of the ranks \\( r \\in \\{5, 50\\} \\) and reconstruction+SVD which was explained in Section 3.4. This implies that HETLORA is not only practical in the sense that clients are allowed to have their own rank values, it can also outperform the limited case of homogeneous LoRA where all clients have \\( r = r_{\\mathrm{min}} \\) or the impractical case where all clients have \\( r = r_{\\mathrm{max}} \\). We also observe that HETLORA achieves slightly lower performance than full fine-tuning. However, as shown in the blue text in Table 3 that shows the number of trained parameters compared to the full fine-tuning case, full fine-tuning requires to train a much larger number of parameters compared to HETLORA, making it infeasible to train with ODFMs in practice. We also show in Fig. 6 that to achieve the targeted performance for both Reddit and MSC task, HETLORA requires significantly less number of parameters to be trained and communicated compared to full fine-tuning. Although for Reddit, HOMLORA has a slightly less number of parameters to be trained, the final achieved RougeL is outperformed by HETLORA as shown in Table 3."
|
| 1072 |
+
},
|
| 1073 |
+
{
|
| 1074 |
+
"type": "text",
|
| 1075 |
+
"bbox": [
|
| 1076 |
+
0.115,
|
| 1077 |
+
0.693,
|
| 1078 |
+
0.49,
|
| 1079 |
+
0.887
|
| 1080 |
+
],
|
| 1081 |
+
"angle": 0,
|
| 1082 |
+
"content": "Effect of the Decaying Factor \\(\\gamma\\). Lastly, we conduct an ablation study on the effect of the decaying factor \\(\\gamma\\) of HETLORA's local training step with self-rank pruning in Table 4. We observed that aggressive pruning hurts the performance where \\(\\gamma = 0.85\\) shows the worse performance across the varying \\(\\gamma\\) values. On the other hand, no pruning at all (\\(\\gamma = 1\\)) underperforms the case when there is pruning (\\(\\gamma = 0.99\\)), showing that reducing the noise introduced by large rank clients which data complexity is actually not that high indeed improves the performance."
|
| 1083 |
+
},
|
| 1084 |
+
{
|
| 1085 |
+
"type": "image",
|
| 1086 |
+
"bbox": [
|
| 1087 |
+
0.516,
|
| 1088 |
+
0.214,
|
| 1089 |
+
0.695,
|
| 1090 |
+
0.35
|
| 1091 |
+
],
|
| 1092 |
+
"angle": 0,
|
| 1093 |
+
"content": null
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "image_caption",
|
| 1097 |
+
"bbox": [
|
| 1098 |
+
0.572,
|
| 1099 |
+
0.352,
|
| 1100 |
+
0.637,
|
| 1101 |
+
0.363
|
| 1102 |
+
],
|
| 1103 |
+
"angle": 0,
|
| 1104 |
+
"content": "(a) Reddit"
|
| 1105 |
+
},
|
| 1106 |
+
{
|
| 1107 |
+
"type": "image",
|
| 1108 |
+
"bbox": [
|
| 1109 |
+
0.699,
|
| 1110 |
+
0.215,
|
| 1111 |
+
0.877,
|
| 1112 |
+
0.35
|
| 1113 |
+
],
|
| 1114 |
+
"angle": 0,
|
| 1115 |
+
"content": null
|
| 1116 |
+
},
|
| 1117 |
+
{
|
| 1118 |
+
"type": "image_caption",
|
| 1119 |
+
"bbox": [
|
| 1120 |
+
0.759,
|
| 1121 |
+
0.352,
|
| 1122 |
+
0.816,
|
| 1123 |
+
0.363
|
| 1124 |
+
],
|
| 1125 |
+
"angle": 0,
|
| 1126 |
+
"content": "(b) MSC"
|
| 1127 |
+
},
|
| 1128 |
+
{
|
| 1129 |
+
"type": "image_caption",
|
| 1130 |
+
"bbox": [
|
| 1131 |
+
0.509,
|
| 1132 |
+
0.367,
|
| 1133 |
+
0.882,
|
| 1134 |
+
0.415
|
| 1135 |
+
],
|
| 1136 |
+
"angle": 0,
|
| 1137 |
+
"content": "Figure 5: Comparison of the performance across homogeneous LoRA, heterogeneous LoRA, and full fine-tuning. Heterogeneous LoRA achieves better performance than homogeneous LoRA with fewer number of communication rounds."
|
| 1138 |
+
},
|
| 1139 |
+
{
|
| 1140 |
+
"type": "image",
|
| 1141 |
+
"bbox": [
|
| 1142 |
+
0.536,
|
| 1143 |
+
0.437,
|
| 1144 |
+
0.857,
|
| 1145 |
+
0.548
|
| 1146 |
+
],
|
| 1147 |
+
"angle": 0,
|
| 1148 |
+
"content": null
|
| 1149 |
+
},
|
| 1150 |
+
{
|
| 1151 |
+
"type": "image_caption",
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
0.508,
|
| 1154 |
+
0.551,
|
| 1155 |
+
0.882,
|
| 1156 |
+
0.623
|
| 1157 |
+
],
|
| 1158 |
+
"angle": 0,
|
| 1159 |
+
"content": "Figure 6: Ratio of communicated number of parameters for different PEFT methods to full fine-tuning to achieve the target value for the metric where it is RougeL 80 for Reddit text summarization task and perplexity 150 for the multisession chat response task. The 'X' means that the target metric is not achieved even after convergence."
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "title",
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
0.509,
|
| 1165 |
+
0.645,
|
| 1166 |
+
0.869,
|
| 1167 |
+
0.661
|
| 1168 |
+
],
|
| 1169 |
+
"angle": 0,
|
| 1170 |
+
"content": "5 Discussions and Concluding Remarks"
|
| 1171 |
+
},
|
| 1172 |
+
{
|
| 1173 |
+
"type": "text",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
0.507,
|
| 1176 |
+
0.674,
|
| 1177 |
+
0.883,
|
| 1178 |
+
0.883
|
| 1179 |
+
],
|
| 1180 |
+
"angle": 0,
|
| 1181 |
+
"content": "In our work, we investigated federated fine-tuning for ODFMs that cater to device system and data heterogeneity with our proposed HETLORA. We show that HETLORA is not only practical but also achieves better training speed, communication/computation efficiency, and final performance compared to homogeneous LoRA. We also show interesting findings consistent with previous literature (Hu et al., 2021; Zhang et al., 2023b) that increasing ranks does not always help for homogeneous LoRA. Our findings in this work opens up several questions worth investigating. For instance, if the settings allow us to assign specific ranks to"
|
| 1182 |
+
},
|
| 1183 |
+
{
|
| 1184 |
+
"type": "page_number",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
0.493,
|
| 1187 |
+
0.904,
|
| 1188 |
+
0.505,
|
| 1189 |
+
0.916
|
| 1190 |
+
],
|
| 1191 |
+
"angle": 0,
|
| 1192 |
+
"content": "8"
|
| 1193 |
+
}
|
| 1194 |
+
],
|
| 1195 |
+
[
|
| 1196 |
+
{
|
| 1197 |
+
"type": "text",
|
| 1198 |
+
"bbox": [
|
| 1199 |
+
0.115,
|
| 1200 |
+
0.087,
|
| 1201 |
+
0.486,
|
| 1202 |
+
0.166
|
| 1203 |
+
],
|
| 1204 |
+
"angle": 0,
|
| 1205 |
+
"content": "clients what will be the effective way to assign the ranks across clients for better convergence and performance? Another important next step of our work includes pursuing the theoretical convergence and generalization of heterogeneous LoRA."
|
| 1206 |
+
},
|
| 1207 |
+
{
|
| 1208 |
+
"type": "title",
|
| 1209 |
+
"bbox": [
|
| 1210 |
+
0.116,
|
| 1211 |
+
0.179,
|
| 1212 |
+
0.251,
|
| 1213 |
+
0.193
|
| 1214 |
+
],
|
| 1215 |
+
"angle": 0,
|
| 1216 |
+
"content": "6 Limitations"
|
| 1217 |
+
},
|
| 1218 |
+
{
|
| 1219 |
+
"type": "text",
|
| 1220 |
+
"bbox": [
|
| 1221 |
+
0.115,
|
| 1222 |
+
0.204,
|
| 1223 |
+
0.49,
|
| 1224 |
+
0.526
|
| 1225 |
+
],
|
| 1226 |
+
"angle": 0,
|
| 1227 |
+
"content": "In this work, we address tackling system and data heterogeneity in federated fine-tuning of on-device foundation models. Our work is motivated by clients being able to carry different ranks for the LoRA fine-tuning method depending on their available resources, and thus exploiting this characteristic to improve federated fine-tuning with heterogeneous LoRA. However, our work assumes that the rank distribution across clients (which is analogous to how system resources are distributed across clients) is independent to the data distribution. There can be scenarios in which this is not necessarily the case where the rank and data distribution can be correlated. For instance, more affluent populations can have better off devices with larger resource capacity, and may have data distributions different to that of less affluent populations. Such correlation should be explored for future work to better understand the implications of heterogenous LoRA."
|
| 1228 |
+
},
|
| 1229 |
+
{
|
| 1230 |
+
"type": "title",
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
0.117,
|
| 1233 |
+
0.552,
|
| 1234 |
+
0.215,
|
| 1235 |
+
0.567
|
| 1236 |
+
],
|
| 1237 |
+
"angle": 0,
|
| 1238 |
+
"content": "References"
|
| 1239 |
+
},
|
| 1240 |
+
{
|
| 1241 |
+
"type": "ref_text",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
0.118,
|
| 1244 |
+
0.575,
|
| 1245 |
+
0.489,
|
| 1246 |
+
0.641
|
| 1247 |
+
],
|
| 1248 |
+
"angle": 0,
|
| 1249 |
+
"content": "Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, and Salman Avestimehr. 2023. Slora: Federated parameter efficient fine-tuning of language models. CoRR, abs/2308.06522."
|
| 1250 |
+
},
|
| 1251 |
+
{
|
| 1252 |
+
"type": "ref_text",
|
| 1253 |
+
"bbox": [
|
| 1254 |
+
0.118,
|
| 1255 |
+
0.651,
|
| 1256 |
+
0.489,
|
| 1257 |
+
0.704
|
| 1258 |
+
],
|
| 1259 |
+
"angle": 0,
|
| 1260 |
+
"content": "Rishi Bommasani, Drew A. Hudson, and et. al. Ehsan Adeli. 2022. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258."
|
| 1261 |
+
},
|
| 1262 |
+
{
|
| 1263 |
+
"type": "ref_text",
|
| 1264 |
+
"bbox": [
|
| 1265 |
+
0.118,
|
| 1266 |
+
0.714,
|
| 1267 |
+
0.489,
|
| 1268 |
+
0.793
|
| 1269 |
+
],
|
| 1270 |
+
"angle": 0,
|
| 1271 |
+
"content": "Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2016. Practical secure aggregation for federated learning on user-held data. In NIPS Workshop on Private Multi-Party Machine Learning."
|
| 1272 |
+
},
|
| 1273 |
+
{
|
| 1274 |
+
"type": "ref_text",
|
| 1275 |
+
"bbox": [
|
| 1276 |
+
0.118,
|
| 1277 |
+
0.803,
|
| 1278 |
+
0.489,
|
| 1279 |
+
0.882
|
| 1280 |
+
],
|
| 1281 |
+
"angle": 0,
|
| 1282 |
+
"content": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,"
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "list",
|
| 1286 |
+
"bbox": [
|
| 1287 |
+
0.118,
|
| 1288 |
+
0.575,
|
| 1289 |
+
0.489,
|
| 1290 |
+
0.882
|
| 1291 |
+
],
|
| 1292 |
+
"angle": 0,
|
| 1293 |
+
"content": null
|
| 1294 |
+
},
|
| 1295 |
+
{
|
| 1296 |
+
"type": "ref_text",
|
| 1297 |
+
"bbox": [
|
| 1298 |
+
0.529,
|
| 1299 |
+
0.087,
|
| 1300 |
+
0.882,
|
| 1301 |
+
0.167
|
| 1302 |
+
],
|
| 1303 |
+
"angle": 0,
|
| 1304 |
+
"content": "Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165."
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "ref_text",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
0.511,
|
| 1310 |
+
0.178,
|
| 1311 |
+
0.882,
|
| 1312 |
+
0.23
|
| 1313 |
+
],
|
| 1314 |
+
"angle": 0,
|
| 1315 |
+
"content": "Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis. 2022. Differentially private bias-term only fine-tuning of foundation models. arXiv preprint arXiv:2210.00036."
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "ref_text",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
0.511,
|
| 1321 |
+
0.241,
|
| 1322 |
+
0.882,
|
| 1323 |
+
0.294
|
| 1324 |
+
],
|
| 1325 |
+
"angle": 0,
|
| 1326 |
+
"content": "Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, and Haozhao Wang. 2022. Fedtune: A deep dive into efficient federated fine-tuning with pre-trained transformers. CoRR, abs/2211.08025."
|
| 1327 |
+
},
|
| 1328 |
+
{
|
| 1329 |
+
"type": "ref_text",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
0.511,
|
| 1332 |
+
0.305,
|
| 1333 |
+
0.882,
|
| 1334 |
+
0.358
|
| 1335 |
+
],
|
| 1336 |
+
"angle": 0,
|
| 1337 |
+
"content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805."
|
| 1338 |
+
},
|
| 1339 |
+
{
|
| 1340 |
+
"type": "ref_text",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
0.511,
|
| 1343 |
+
0.369,
|
| 1344 |
+
0.882,
|
| 1345 |
+
0.486
|
| 1346 |
+
],
|
| 1347 |
+
"angle": 0,
|
| 1348 |
+
"content": "Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. 2023. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378."
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "ref_text",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
0.511,
|
| 1354 |
+
0.498,
|
| 1355 |
+
0.88,
|
| 1356 |
+
0.526
|
| 1357 |
+
],
|
| 1358 |
+
"angle": 0,
|
| 1359 |
+
"content": "Google. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311."
|
| 1360 |
+
},
|
| 1361 |
+
{
|
| 1362 |
+
"type": "ref_text",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
0.511,
|
| 1365 |
+
0.536,
|
| 1366 |
+
0.88,
|
| 1367 |
+
0.562
|
| 1368 |
+
],
|
| 1369 |
+
"angle": 0,
|
| 1370 |
+
"content": "Google. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.1040."
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "ref_text",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
0.511,
|
| 1376 |
+
0.574,
|
| 1377 |
+
0.882,
|
| 1378 |
+
0.613
|
| 1379 |
+
],
|
| 1380 |
+
"angle": 0,
|
| 1381 |
+
"content": "Google DeepMind. 2023. Introducing palm2. https://blog.google/technology/ai/ google-palm-2-ai-large-language-model/."
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "ref_text",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
0.511,
|
| 1387 |
+
0.624,
|
| 1388 |
+
0.882,
|
| 1389 |
+
0.703
|
| 1390 |
+
],
|
| 1391 |
+
"angle": 0,
|
| 1392 |
+
"content": "Tao Guo, Song Guo, and Junxiao Wang. 2023. Pfed-prompt: Learning personalized prompt for vision-language models in federated learning. In Proceedings of the ACM Web Conference 2023, WWW '23, page 1364-1374, New York, NY, USA. Association for Computing Machinery."
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "ref_text",
|
| 1396 |
+
"bbox": [
|
| 1397 |
+
0.511,
|
| 1398 |
+
0.714,
|
| 1399 |
+
0.882,
|
| 1400 |
+
0.78
|
| 1401 |
+
],
|
| 1402 |
+
"angle": 0,
|
| 1403 |
+
"content": "Tao Guo, Song Guo, Junxiao Wang, and Wenchao Xu. 2022. Promptfl: Let federated participants cooperatively learn prompts instead of models — federated learning in age of foundation model. CoRR, abs/2208.11625."
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "ref_text",
|
| 1407 |
+
"bbox": [
|
| 1408 |
+
0.511,
|
| 1409 |
+
0.791,
|
| 1410 |
+
0.882,
|
| 1411 |
+
0.831
|
| 1412 |
+
],
|
| 1413 |
+
"angle": 0,
|
| 1414 |
+
"content": "Venkatesan Guruswami and Ravi Kannan. 2012. Lecture notes in computer science theory for the information age."
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "ref_text",
|
| 1418 |
+
"bbox": [
|
| 1419 |
+
0.511,
|
| 1420 |
+
0.842,
|
| 1421 |
+
0.882,
|
| 1422 |
+
0.882
|
| 1423 |
+
],
|
| 1424 |
+
"angle": 0,
|
| 1425 |
+
"content": "Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019."
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "list",
|
| 1429 |
+
"bbox": [
|
| 1430 |
+
0.511,
|
| 1431 |
+
0.087,
|
| 1432 |
+
0.882,
|
| 1433 |
+
0.882
|
| 1434 |
+
],
|
| 1435 |
+
"angle": 0,
|
| 1436 |
+
"content": null
|
| 1437 |
+
},
|
| 1438 |
+
{
|
| 1439 |
+
"type": "page_number",
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
0.494,
|
| 1442 |
+
0.904,
|
| 1443 |
+
0.505,
|
| 1444 |
+
0.915
|
| 1445 |
+
],
|
| 1446 |
+
"angle": 0,
|
| 1447 |
+
"content": "9"
|
| 1448 |
+
}
|
| 1449 |
+
],
|
| 1450 |
+
[
|
| 1451 |
+
{
|
| 1452 |
+
"type": "ref_text",
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
0.135,
|
| 1455 |
+
0.087,
|
| 1456 |
+
0.489,
|
| 1457 |
+
0.128
|
| 1458 |
+
],
|
| 1459 |
+
"angle": 0,
|
| 1460 |
+
"content": "Parameter-efficient transfer learning for nlp. In Proceedings of the International Conference on Machine Learning (ICML)."
|
| 1461 |
+
},
|
| 1462 |
+
{
|
| 1463 |
+
"type": "ref_text",
|
| 1464 |
+
"bbox": [
|
| 1465 |
+
0.119,
|
| 1466 |
+
0.137,
|
| 1467 |
+
0.487,
|
| 1468 |
+
0.203
|
| 1469 |
+
],
|
| 1470 |
+
"angle": 0,
|
| 1471 |
+
"content": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR)."
|
| 1472 |
+
},
|
| 1473 |
+
{
|
| 1474 |
+
"type": "ref_text",
|
| 1475 |
+
"bbox": [
|
| 1476 |
+
0.119,
|
| 1477 |
+
0.213,
|
| 1478 |
+
0.487,
|
| 1479 |
+
0.279
|
| 1480 |
+
],
|
| 1481 |
+
"angle": 0,
|
| 1482 |
+
"content": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In The 36th Conference on Neural Information Processing Systems (NeurIPS 2022)."
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "ref_text",
|
| 1486 |
+
"bbox": [
|
| 1487 |
+
0.119,
|
| 1488 |
+
0.289,
|
| 1489 |
+
0.486,
|
| 1490 |
+
0.328
|
| 1491 |
+
],
|
| 1492 |
+
"angle": 0,
|
| 1493 |
+
"content": "Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would elsa do? freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090."
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"type": "ref_text",
|
| 1497 |
+
"bbox": [
|
| 1498 |
+
0.119,
|
| 1499 |
+
0.338,
|
| 1500 |
+
0.487,
|
| 1501 |
+
0.404
|
| 1502 |
+
],
|
| 1503 |
+
"angle": 0,
|
| 1504 |
+
"content": "Gwen Legate, Nicolas Bernier, Lucas Caccia, Edouard Oyallon, and Eugene Belilovsky. 2023. Guiding the last layer in federated learning with pre-trained models. In Workshop of Federated Learning and Analytics in Practice@ICML."
|
| 1505 |
+
},
|
| 1506 |
+
{
|
| 1507 |
+
"type": "ref_text",
|
| 1508 |
+
"bbox": [
|
| 1509 |
+
0.119,
|
| 1510 |
+
0.414,
|
| 1511 |
+
0.487,
|
| 1512 |
+
0.466
|
| 1513 |
+
],
|
| 1514 |
+
"angle": 0,
|
| 1515 |
+
"content": "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Empirical Methods in Natural Language Processing (EMNLP)."
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "ref_text",
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
0.119,
|
| 1521 |
+
0.476,
|
| 1522 |
+
0.487,
|
| 1523 |
+
0.516
|
| 1524 |
+
],
|
| 1525 |
+
"angle": 0,
|
| 1526 |
+
"content": "Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190."
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "ref_text",
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
0.119,
|
| 1532 |
+
0.526,
|
| 1533 |
+
0.487,
|
| 1534 |
+
0.579
|
| 1535 |
+
],
|
| 1536 |
+
"angle": 0,
|
| 1537 |
+
"content": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics."
|
| 1538 |
+
},
|
| 1539 |
+
{
|
| 1540 |
+
"type": "ref_text",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
0.119,
|
| 1543 |
+
0.589,
|
| 1544 |
+
0.487,
|
| 1545 |
+
0.668
|
| 1546 |
+
],
|
| 1547 |
+
"angle": 0,
|
| 1548 |
+
"content": "Andre Manoel, Mirian del Carmen Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Robert Sim, Dan Miller, Danny Karmon, and Dimitrios Dimitriadis. 2023. Federated multilingual models for medical transcript analysis. In Conference on Health, Inference, and Learning (CHIL), pages 147-162."
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "ref_text",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
0.119,
|
| 1554 |
+
0.678,
|
| 1555 |
+
0.487,
|
| 1556 |
+
0.756
|
| 1557 |
+
],
|
| 1558 |
+
"angle": 0,
|
| 1559 |
+
"content": "H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agoury Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. International Conference on Artificial Intelligence and Statistics (AISTATS)."
|
| 1560 |
+
},
|
| 1561 |
+
{
|
| 1562 |
+
"type": "ref_text",
|
| 1563 |
+
"bbox": [
|
| 1564 |
+
0.119,
|
| 1565 |
+
0.766,
|
| 1566 |
+
0.486,
|
| 1567 |
+
0.792
|
| 1568 |
+
],
|
| 1569 |
+
"angle": 0,
|
| 1570 |
+
"content": "OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:submit/4812508."
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "ref_text",
|
| 1574 |
+
"bbox": [
|
| 1575 |
+
0.119,
|
| 1576 |
+
0.803,
|
| 1577 |
+
0.487,
|
| 1578 |
+
0.882
|
| 1579 |
+
],
|
| 1580 |
+
"angle": 0,
|
| 1581 |
+
"content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020."
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "list",
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
0.119,
|
| 1587 |
+
0.087,
|
| 1588 |
+
0.489,
|
| 1589 |
+
0.882
|
| 1590 |
+
],
|
| 1591 |
+
"angle": 0,
|
| 1592 |
+
"content": null
|
| 1593 |
+
},
|
| 1594 |
+
{
|
| 1595 |
+
"type": "ref_text",
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
0.512,
|
| 1598 |
+
0.087,
|
| 1599 |
+
0.881,
|
| 1600 |
+
0.14
|
| 1601 |
+
],
|
| 1602 |
+
"angle": 0,
|
| 1603 |
+
"content": "Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization for heterogeneous networks. In Proceedings of the 3rd MLSys Conference."
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "ref_text",
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
0.512,
|
| 1609 |
+
0.15,
|
| 1610 |
+
0.881,
|
| 1611 |
+
0.202
|
| 1612 |
+
],
|
| 1613 |
+
"angle": 0,
|
| 1614 |
+
"content": "Joao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019. Chateval: A tool for chatbot evaluation. Proceedings of NAACL-HLT."
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "ref_text",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
0.512,
|
| 1620 |
+
0.213,
|
| 1621 |
+
0.881,
|
| 1622 |
+
0.251
|
| 1623 |
+
],
|
| 1624 |
+
"angle": 0,
|
| 1625 |
+
"content": "Ofir Ben Shoham and Nadav Rappoport. 2023. Federated learning of medical concepts embedding using behrt. arXiv preprint arXiv:2305.13052."
|
| 1626 |
+
},
|
| 1627 |
+
{
|
| 1628 |
+
"type": "ref_text",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
0.512,
|
| 1631 |
+
0.262,
|
| 1632 |
+
0.881,
|
| 1633 |
+
0.34
|
| 1634 |
+
],
|
| 1635 |
+
"angle": 0,
|
| 1636 |
+
"content": "Aliaksandra Shysheya, John F Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, and Richard E Turner. 2023. Fit: Parameter efficient few-shot transfer learning for personalized and federated image classification. International Conference on Learning Representations (ICLR)."
|
| 1637 |
+
},
|
| 1638 |
+
{
|
| 1639 |
+
"type": "ref_text",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
0.512,
|
| 1642 |
+
0.35,
|
| 1643 |
+
0.881,
|
| 1644 |
+
0.428
|
| 1645 |
+
],
|
| 1646 |
+
"angle": 0,
|
| 1647 |
+
"content": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288."
|
| 1648 |
+
},
|
| 1649 |
+
{
|
| 1650 |
+
"type": "ref_text",
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
0.512,
|
| 1653 |
+
0.439,
|
| 1654 |
+
0.881,
|
| 1655 |
+
0.517
|
| 1656 |
+
],
|
| 1657 |
+
"angle": 0,
|
| 1658 |
+
"content": "Michael Volske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59-63, Copenhagen, Denmark. Association for Computational Linguistics."
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "ref_text",
|
| 1662 |
+
"bbox": [
|
| 1663 |
+
0.512,
|
| 1664 |
+
0.527,
|
| 1665 |
+
0.881,
|
| 1666 |
+
0.593
|
| 1667 |
+
],
|
| 1668 |
+
"angle": 0,
|
| 1669 |
+
"content": "Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. 2021. A field guide to federated optimization. arXiv preprint arXiv:2107.06917."
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "ref_text",
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
0.512,
|
| 1675 |
+
0.603,
|
| 1676 |
+
0.881,
|
| 1677 |
+
0.68
|
| 1678 |
+
],
|
| 1679 |
+
"angle": 0,
|
| 1680 |
+
"content": "Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. 2019. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 37(6):1205-1221."
|
| 1681 |
+
},
|
| 1682 |
+
{
|
| 1683 |
+
"type": "ref_text",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
0.512,
|
| 1686 |
+
0.691,
|
| 1687 |
+
0.881,
|
| 1688 |
+
0.756
|
| 1689 |
+
],
|
| 1690 |
+
"angle": 0,
|
| 1691 |
+
"content": "Colin Wei, Sang Michael Xie, and Tengyu Ma. 2021. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. Advances in Neural Information Processing Systems, 34:16158-16170."
|
| 1692 |
+
},
|
| 1693 |
+
{
|
| 1694 |
+
"type": "ref_text",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
0.512,
|
| 1697 |
+
0.766,
|
| 1698 |
+
0.881,
|
| 1699 |
+
0.832
|
| 1700 |
+
],
|
| 1701 |
+
"angle": 0,
|
| 1702 |
+
"content": "Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael Rabbat, and Ari S. Morcos. 2023. lo-fi: distributed fine-tuning without communication. Transactions on Machine Learning Research (TMLR)."
|
| 1703 |
+
},
|
| 1704 |
+
{
|
| 1705 |
+
"type": "ref_text",
|
| 1706 |
+
"bbox": [
|
| 1707 |
+
0.512,
|
| 1708 |
+
0.842,
|
| 1709 |
+
0.881,
|
| 1710 |
+
0.882
|
| 1711 |
+
],
|
| 1712 |
+
"angle": 0,
|
| 1713 |
+
"content": "Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567."
|
| 1714 |
+
},
|
| 1715 |
+
{
|
| 1716 |
+
"type": "list",
|
| 1717 |
+
"bbox": [
|
| 1718 |
+
0.512,
|
| 1719 |
+
0.087,
|
| 1720 |
+
0.881,
|
| 1721 |
+
0.882
|
| 1722 |
+
],
|
| 1723 |
+
"angle": 0,
|
| 1724 |
+
"content": null
|
| 1725 |
+
},
|
| 1726 |
+
{
|
| 1727 |
+
"type": "page_number",
|
| 1728 |
+
"bbox": [
|
| 1729 |
+
0.49,
|
| 1730 |
+
0.904,
|
| 1731 |
+
0.51,
|
| 1732 |
+
0.915
|
| 1733 |
+
],
|
| 1734 |
+
"angle": 0,
|
| 1735 |
+
"content": "10"
|
| 1736 |
+
}
|
| 1737 |
+
],
|
| 1738 |
+
[
|
| 1739 |
+
{
|
| 1740 |
+
"type": "ref_text",
|
| 1741 |
+
"bbox": [
|
| 1742 |
+
0.118,
|
| 1743 |
+
0.087,
|
| 1744 |
+
0.49,
|
| 1745 |
+
0.14
|
| 1746 |
+
],
|
| 1747 |
+
"angle": 0,
|
| 1748 |
+
"content": "Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. 2023. Fedlora: Model-heterogeneous personalized federated learning with lora tuning. arXiv preprint arXiv:2310.13283."
|
| 1749 |
+
},
|
| 1750 |
+
{
|
| 1751 |
+
"type": "ref_text",
|
| 1752 |
+
"bbox": [
|
| 1753 |
+
0.118,
|
| 1754 |
+
0.15,
|
| 1755 |
+
0.49,
|
| 1756 |
+
0.204
|
| 1757 |
+
],
|
| 1758 |
+
"angle": 0,
|
| 1759 |
+
"content": "Sixing Yu, J. Pablo Muñoz, and Ali Jannesari. 2023. Federated foundation models: Privacy-preserving and collaborative learning for large models. arXiv preprint arXiv:2305.11414."
|
| 1760 |
+
},
|
| 1761 |
+
{
|
| 1762 |
+
"type": "ref_text",
|
| 1763 |
+
"bbox": [
|
| 1764 |
+
0.118,
|
| 1765 |
+
0.213,
|
| 1766 |
+
0.49,
|
| 1767 |
+
0.266
|
| 1768 |
+
],
|
| 1769 |
+
"angle": 0,
|
| 1770 |
+
"content": "Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199."
|
| 1771 |
+
},
|
| 1772 |
+
{
|
| 1773 |
+
"type": "ref_text",
|
| 1774 |
+
"bbox": [
|
| 1775 |
+
0.118,
|
| 1776 |
+
0.276,
|
| 1777 |
+
0.49,
|
| 1778 |
+
0.328
|
| 1779 |
+
],
|
| 1780 |
+
"angle": 0,
|
| 1781 |
+
"content": "Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, and Guoyin Wangand Yiran Chen. 2023a. Towards building the federated gpt: Federated instruction tuning. CoRR, abs/2305.05644."
|
| 1782 |
+
},
|
| 1783 |
+
{
|
| 1784 |
+
"type": "ref_text",
|
| 1785 |
+
"bbox": [
|
| 1786 |
+
0.118,
|
| 1787 |
+
0.338,
|
| 1788 |
+
0.49,
|
| 1789 |
+
0.417
|
| 1790 |
+
],
|
| 1791 |
+
"angle": 0,
|
| 1792 |
+
"content": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023b. Adaptive budget allocation for parameter-efficient fine-tuning. In The 11th International Conference on Learning Representations (ICLR)."
|
| 1793 |
+
},
|
| 1794 |
+
{
|
| 1795 |
+
"type": "ref_text",
|
| 1796 |
+
"bbox": [
|
| 1797 |
+
0.118,
|
| 1798 |
+
0.427,
|
| 1799 |
+
0.49,
|
| 1800 |
+
0.506
|
| 1801 |
+
],
|
| 1802 |
+
"angle": 0,
|
| 1803 |
+
"content": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)."
|
| 1804 |
+
},
|
| 1805 |
+
{
|
| 1806 |
+
"type": "ref_text",
|
| 1807 |
+
"bbox": [
|
| 1808 |
+
0.118,
|
| 1809 |
+
0.516,
|
| 1810 |
+
0.49,
|
| 1811 |
+
0.595
|
| 1812 |
+
],
|
| 1813 |
+
"angle": 0,
|
| 1814 |
+
"content": "Zhuo Zhang, Xiangjing Hu, Jingyuan Zhang, Yating Zhang, Hui Wang, Lizhen Qu, and Zenglin Xu. 2023c. Fedlegal: The first real-world federated learning benchmark for legal nlp. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)."
|
| 1815 |
+
},
|
| 1816 |
+
{
|
| 1817 |
+
"type": "ref_text",
|
| 1818 |
+
"bbox": [
|
| 1819 |
+
0.118,
|
| 1820 |
+
0.605,
|
| 1821 |
+
0.49,
|
| 1822 |
+
0.684
|
| 1823 |
+
],
|
| 1824 |
+
"angle": 0,
|
| 1825 |
+
"content": "Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. 2023d. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. Findings of the Association for Computational Linguistics (ACL)."
|
| 1826 |
+
},
|
| 1827 |
+
{
|
| 1828 |
+
"type": "ref_text",
|
| 1829 |
+
"bbox": [
|
| 1830 |
+
0.118,
|
| 1831 |
+
0.693,
|
| 1832 |
+
0.49,
|
| 1833 |
+
0.785
|
| 1834 |
+
],
|
| 1835 |
+
"angle": 0,
|
| 1836 |
+
"content": "Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, Jianxin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, and Lichao Sun. 2023. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419."
|
| 1837 |
+
},
|
| 1838 |
+
{
|
| 1839 |
+
"type": "list",
|
| 1840 |
+
"bbox": [
|
| 1841 |
+
0.118,
|
| 1842 |
+
0.087,
|
| 1843 |
+
0.49,
|
| 1844 |
+
0.785
|
| 1845 |
+
],
|
| 1846 |
+
"angle": 0,
|
| 1847 |
+
"content": null
|
| 1848 |
+
},
|
| 1849 |
+
{
|
| 1850 |
+
"type": "page_number",
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
0.49,
|
| 1853 |
+
0.904,
|
| 1854 |
+
0.508,
|
| 1855 |
+
0.915
|
| 1856 |
+
],
|
| 1857 |
+
"angle": 0,
|
| 1858 |
+
"content": "11"
|
| 1859 |
+
}
|
| 1860 |
+
]
|
| 1861 |
+
]
|
2401.06xxx/2401.06432/0c08f168-4d4b-467a-927e-a93fb082f76c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2d72c4cc459202f5aa50098cd089ee3e00fb55beb4ffb3a866e692442ed2bb19
|
| 3 |
+
size 1426882
|
2401.06xxx/2401.06432/full.md
ADDED
|
@@ -0,0 +1,235 @@
|
| 1 |
+
# Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models
|
| 2 |
+
|
| 3 |
+
Yae Jee Cho $^{1*}$ , Luyang Liu $^{2}$ , Zheng Xu $^{2}$ , Aldi Fahrezi $^{2}$ , Gauri Joshi $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Carnegie Mellon University, $^{2}$ Google Research
|
| 6 |
+
|
| 7 |
+
yaejeec@andrew.cmu.edu, {luyangliu,xuzheng,aldifahrezi}@google.com,
|
| 8 |
+
|
| 9 |
+
gaurij@andrew.cmu.edu
|
| 10 |
+
|
| 11 |
+
# Abstract
|
| 12 |
+
|
| 13 |
+
Foundation models (FMs) adapt well to specific domains or tasks with fine-tuning, and federated learning (FL) enables privacy-preserving fine-tuning of FMs with on-device local data. For federated fine-tuning of FMs, we consider FMs with small to medium parameter sizes of at most a single-digit billion, referred to as on-device FMs (ODFMs), which can be deployed on devices for inference but can only be fine-tuned with parameter-efficient methods. In our work, we tackle the data and system heterogeneity problem of federated fine-tuning of ODFMs by proposing a novel method based on heterogeneous low-rank approximations (LoRAs), namely HETLORA. First, we show that the naive approach of using homogeneous LoRA ranks across devices faces a trade-off between overfitting and slow convergence, and we thus propose HETLORA, which allows heterogeneous ranks across client devices and efficiently aggregates and distributes these heterogeneous LoRA modules. By applying rank self-pruning locally and sparsity-weighted aggregation at the server, HETLORA combines the advantages of high- and low-rank LoRAs, achieving improved convergence speed and final performance compared to homogeneous LoRA. Furthermore, HETLORA offers enhanced computation efficiency compared to full fine-tuning, making it suitable for federated fine-tuning across heterogeneous devices.
|
| 14 |
+
|
| 15 |
+
# 1 Introduction
|
| 16 |
+
|
| 17 |
+
The emerging foundation models (FMs) (Bommasani et al., 2022; Zhou et al., 2023; Radford et al., 2021; Devlin et al., 2019; OpenAI, 2023; Google, 2022; Touvron et al., 2023; Brown
|
| 18 |
+
|
| 19 |
+
et al., 2020; Driess et al., 2023; Google, 2023) have shown remarkable zero/few-shot learning capabilities, performing well on a variety of tasks including text/image generation with prompts, language translation, solving math problems, and conversing in natural language. Standard FMs, however, demand costly resources for directly fine-tuning their entire parameter space. To tackle this issue, many recent works have proposed different parameter-efficient fine-tuning (PEFT) methods for FMs, such as prompt tuning (Lester et al., 2021), utilizing adapters (Houlsby et al., 2019), or low-rank adaptation (LoRA) of the original model (Hu et al., 2021), which freeze the original pre-trained parameters of the FM and instead train a smaller number of additional parameters.
|
| 20 |
+
|
| 21 |
+
These PEFT methods, however, assume that i) FMs are deployed to and trained with the data of a single machine/client for adaptation to the downstream task, and that ii) the client has enough resources to fit a standard FM with hundreds of billions of parameters, at least for inference. In practice, there are frequently cases where we are interested in fine-tuning FMs on on-device private data that is distributed across multiple devices (clients). For instance, sensitive and private data such as medical information or law-related documents may be hard to collect centrally in a private manner, and fine-tuning of the FMs may need to be done at the edge (Manoel et al., 2023; Shoham and Rappoport, 2023; Zhang et al., 2023c).
|
| 22 |
+
|
| 23 |
+
In our work, we focus on such federated fine-tuning scenarios, where clients collaboratively train a global set of parameters that can be plugged into the FM for the targeted downstream task. Note that federated fine-tuning is orthogonal to personalization of FMs in federated learning (FL) (Guo et al., 2023), which
|
| 24 |
+
|
| 25 |
+
<table><tr><td></td><td>Zero-Shot</td><td>Few-Shot</td><td>Full-Training</td></tr><tr><td>PaLM 2 XXS</td><td>2930.23</td><td>2541.86</td><td>23.71</td></tr><tr><td>PaLM 2 XS</td><td>2712.86</td><td>481.95</td><td>18.32</td></tr></table>
|
| 26 |
+
|
| 27 |
+
Table 1: Perplexity of PaLM 2 for zero-shot, few-shot (5 communication rounds), and full federated fine-tuning (200 communication rounds) for chat response on the multi-session chat data (further experimental details are in Section 4).
|
| 28 |
+
|
| 29 |
+
aims to train parameters that perform well for individual clients rather than for general downstream tasks. We also define on-device FMs (ODFMs) as models with at most a few billion parameters that are able to fit into memory on limited-capacity clients given current hardware.
|
| 30 |
+
|
| 31 |
+
Federated fine-tuning of ODFMs entails unique challenges not present in either the standard PEFT of FMs or the standard federated training of models that are not FMs. First, the zero/few-shot learning capability of FMs is often supported by their large parameter space trained on massive data. However, as we show in Table 1 and as presented in previous literature (Kojima et al., 2022), FMs' performance deteriorates as their sizes get smaller, and federated fine-tuning may be not merely useful but necessary for ODFMs to perform well on downstream tasks on devices.
|
| 32 |
+
|
| 33 |
+
Moreover, devices have limited and heterogeneous system capabilities (Wang et al., 2019; Bonawitz et al., 2016) and data distributions (Sahu et al., 2020). A suitable PEFT method that flexibly adapts to such heterogeneity across devices should be investigated for federated fine-tuning of ODFMs. Previous work evaluated PEFT with FL by broadly evaluating different PEFT methods naively combined with FL (Guo et al., 2022; Zhang et al., 2023d; Chen et al., 2022; Wortsman et al., 2023; Yu et al., 2023). However, these works do not consider the practical setting for ODFMs where PEFT methods are catered to the system and data heterogeneity of clients.
|
| 34 |
+
|
| 35 |
+
In our work, we focus on one of the most prominent PEFT methods, LoRA (Hu et al., 2021), which trains low-rank approximations of the original model. Using LoRA, the number of trainable parameters is greatly reduced, to as little as $0.02\%$ of the original ODFM size (see Table 2). The simplest way to apply LoRA to federated fine-tuning is training with a homogeneous rank $r$ across the clients, as one would train any global model with FL. However, this does not cater to the heterogeneity in FL, where it is even difficult to choose the
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
Figure 1: Overview of heterogeneous rank deployment of LoRA: the pretrained weights $\mathbf{W}_0$ are stored on-device and heterogeneous ranks are assigned to different clients with $r_{\mathrm{min}} = r_1 < r_2 < r_3 = r_{\mathrm{max}}$ . In our proposed HETLORA, the server receives the trained heterogeneous LoRA modules and aggregates them with sparsity-weighted aggregation to update the global LoRA module.
|
| 39 |
+
|
| 40 |
+
right LoRA rank for resource-limited mobile devices with natural system and data heterogeneity.
|
| 41 |
+
|
| 42 |
+
To this end, we propose heterogeneous LoRA, HETLORA for short, for federated fine-tuning that caters to system and data heterogeneity and outperforms the naive combination of LoRA and federated fine-tuning where homogeneous ranks are applied across clients. We show the performance of PaLM 2 (Google, 2023) of XXS and XS size for chat responses on the multi-session chat data (Xu et al., 2021) and for text summarization on the Reddit data (Volske et al., 2017), both of which are real-world data from clients. Our contributions can be summarized as follows:
|
| 43 |
+
|
| 44 |
+
- We propose HETLORA that can apply different rank LoRA modules to different clients to cater to the heterogeneous system capabilities and data complexities of the clients, via utilizing rank self-pruning and sparsity-weighted aggregation.
|
| 45 |
+
- We show the performance of naively applying LoRA with homogeneous ranks across clients for federated fine-tuning, and show that while large ranks help speed up training, they lead to faster overfitting, whereas smaller ranks are slower in training but do not suffer from overfitting.
|
| 46 |
+
- We then evaluate HETLORA to show that it outperforms naively applying homogeneous ranks across clients in terms of training speed, communication/computation efficiency, and final performance, gaining the best of both worlds of homogeneous LoRA with high and low ranks.
|
| 47 |
+
|
| 48 |
+
<table><tr><td></td><td>r = 1</td><td>r = 5</td><td>r = 10</td><td>r = 20</td><td>r = 50</td><td>r = 100</td><td>r = 150</td><td>r = 200</td></tr><tr><td>PaLM 2 XXS, PaLM 2 XS</td><td>0.02%</td><td>0.11%</td><td>0.21%</td><td>0.42%</td><td>1.05%</td><td>2.10%</td><td>3.14%</td><td>4.19%</td></tr></table>
|
| 49 |
+
|
| 50 |
+
Table 2: Percentage of the LoRA parameters' size for different ranks $r$ compared to the original pre-trained ODFM's parameter size. Even for large ranks such as $r = {200}$ the trainable LoRA parameters' size compared to the original pre-trained ODFM size is less than 5% for both PaLM 2-XS and PaLM 2-XXS.
|
| 51 |
+
|
| 52 |
+
# 2 Related Work
|
| 53 |
+
|
| 54 |
+
Parameter-Efficient Fine-Tuning. There has been a plethora of recent work on PEFT, which either trains a subset of parameters within the existing FM while the other parameters are frozen, or introduces an additional set of trainable parameters while keeping the original FM frozen. For the former, methods such as head or bias fine-tuning (Wei et al., 2021; Bu et al., 2022; Lee et al., 2019; Zaken et al., 2021) have been explored, while for the latter, methods such as adapters (Houlsby et al., 2019), prompt (Lester et al., 2021) or prefix-tuning (Li and Liang, 2021), and low-rank approximation (Hu et al., 2021) have been proposed. While these methods have been shown to perform as well as full-model fine-tuning with only a small number of trainable parameters in the centralized setting, it has not been thoroughly explored how they perform for much smaller FMs such as ODFMs, in the decentralized setting where clients' system capacities can be heterogeneous and far more limited.
|
| 55 |
+
|
| 56 |
+
Federated Fine-Tuning. Recently, interest in the intersection of FMs and FL has notably increased (Zhou et al., 2023; Yu et al., 2023). Many recent works have proposed to combine PEFT methods devised for the centralized setting with FL, such as training prompts or adapters collaboratively (Guo et al., 2022; Chen et al., 2022; Zhang et al., 2023a; Shysheya et al., 2023; Legate et al., 2023). Another line of work has proposed few-shot or nearly zero-shot training of FMs with FL for improved communication efficiency (Wortsman et al., 2023; Zhang et al., 2023d). However, these works either overlook that most devices do not have the resources to fit a general FM (Touvron et al., 2023; Brown et al., 2020) ( $>8\mathrm{B}$ parameters) even for inference, or do not consider the heterogeneous system capacities of the clients. It is critical to consider these factors since the FMs that actually fit on the devices in FL are much smaller, making them weaker in general intelligence capabilities, and also since heterogeneous
|
| 57 |
+
|
| 58 |
+
system capacities may prohibit deploying same-sized PEFT parameters across clients.
|
| 59 |
+
|
| 60 |
+
Only a small number of recent works have looked into using LoRA for FL. For instance, in (Babakniya et al., 2023), the importance of the initialization of the LoRA modules is evaluated: the authors propose to train the LoRA modules with FL and then perform singular value decomposition (SVD) to obtain a good initialization of the LoRA modules. However, the training process of LoRA itself is not altered to adapt to the heterogeneous system capabilities of devices. Another recent work (Yi et al., 2023) has evaluated LoRA in the context of personalized FL, but other than applying LoRA to personalization, the LoRA method itself is, again, not changed. Our work proposes heterogeneous LoRA for federated fine-tuning, where heterogeneous ranks are deployed and trained across clients with a new algorithm that includes rank self-pruning and sparsity-weighted aggregation.
|
| 61 |
+
|
| 62 |
+
# 3 Federated Fine-Tuning with LoRA
|
| 63 |
+
|
| 64 |
+
# 3.1 Preliminaries
|
| 65 |
+
|
| 66 |
+
Formally, we define the pre-trained ODFM as $\mathbf{W}_0\in \mathbb{R}^{d\times l}$ and the trainable low-rank decomposed matrix as $\Delta \mathbf{W}\in \mathbb{R}^{d\times l}$. In standard LoRA (Hu et al., 2021) under the centralized setting, the low-rank decomposition of $\Delta \mathbf{W}$ is constructed such that $\Delta \mathbf{W} = \mathbf{BA}$, where $\mathbf{B}\in \mathbb{R}^{d\times r}$ and $\mathbf{A}\in \mathbb{R}^{r\times l}$ are low-rank factors with rank $r$. Now, let us consider LoRA for federated fine-tuning where there are $M$ total clients. Each client $k\in [M]$ has private data $\mathcal{B}_k$ and its corresponding local empirical loss function $F_{k}(\mathbf{W}) = \frac{1}{|\mathcal{B}_{k}|}\sum_{\xi \in \mathcal{B}_{k}}\ell (\mathbf{W},\xi)$, where $\ell (\mathbf{W},\xi)$ is the loss for model $\mathbf{W}$ at data sample $\xi$. The optimization task for federated fine-tuning is to collaboratively find the global parameters, which we define as $\overline{\mathbf{B}}$ and $\overline{\mathbf{A}}$, that, given the pretrained knowledge $\mathbf{W}_0$, minimize the global objective $F(\overline{\mathbf{W}}) = \frac{1}{M}\sum_{k = 1}^{M}F_{k}(\overline{\mathbf{W}})$ where $\overline{\mathbf{W}} = \mathbf{W}_0 + \overline{\mathbf{B}}\overline{\mathbf{A}}$. Later in the paper, when
|
| 67 |
+
|
| 68 |
+
introducing heterogeneous LoRA, we truncate the LoRA modules' rank dimension, for example from $\mathbf{B} \in \mathbb{R}^{d \times r}$, $\mathbf{A} \in \mathbb{R}^{r \times l}$ to $\mathbf{B}' \in \mathbb{R}^{d \times r'}$, $\mathbf{A}' \in \mathbb{R}^{r' \times l}$ where $r' < r$. Throughout the paper, we denote such truncation of a matrix with the $:$ symbol in the row and column positions of the subscript. For instance, for truncation of the matrix $\mathbf{B} \in \mathbb{R}^{d \times r}$ to $r' < r$ columns, we keep the first $r'$ columns, omit the last $r - r'$ columns, and denote the resulting matrix as $\mathbf{B}_{:,:r'}$.
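To make the parameterization and the truncation notation concrete, the following NumPy sketch (illustrative dimensions and helper names only, not the authors' implementation) builds a rank-$r$ LoRA pair and truncates it to a smaller rank $r'$ by keeping the first $r'$ columns of $\mathbf{B}$ and the first $r'$ rows of $\mathbf{A}$:

```python
import numpy as np

d, l, r = 512, 512, 20                  # illustrative layer dimensions and LoRA rank
rng = np.random.default_rng(0)

W0 = rng.normal(size=(d, l))            # frozen pre-trained weight W_0
B = rng.normal(size=(d, r)) * 0.01      # trainable LoRA factor in R^{d x r}
A = np.zeros((r, l))                    # trainable LoRA factor in R^{r x l}

def truncate(B, A, r_prime):
    """Keep the first r' rank dimensions: returns B_{:, :r'} and A_{:r', :}."""
    return B[:, :r_prime], A[:r_prime, :]

B_t, A_t = truncate(B, A, r_prime=5)
W_eff = W0 + B_t @ A_t                  # effective weight a client would use
print(B_t.shape, A_t.shape, W_eff.shape)  # (512, 5) (5, 512) (512, 512)
```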
|
| 69 |
+
|
| 70 |
+
# 3.2 Naïve Case: Homogeneous LoRA
|
| 71 |
+
|
| 72 |
+
A straightforward way to perform federated fine-tuning with LoRA is to train the LoRA modules $\mathbf{B}$, $\mathbf{A}$ with a homogeneous rank $r$ across all clients with standard FL (McMahan et al., 2017). Specifically, the clients first have the pre-trained ODFM weights $\mathbf{W}_0$ stored on their devices prior to training, for the forward pass when training the LoRA modules. Then, the server sends the global LoRA modules $\overline{\mathbf{B}}^{(t)}$, $\overline{\mathbf{A}}^{(t)}$ to the set of $m$ selected clients $S^{(t)}$ per communication round $t$. Each selected client $k \in S^{(t)}$ trains the LoRA modules on its local data for a few local iterations (usually with mini-batch SGD) and sends the updated modules $\mathbf{B}_k^{(t)}$, $\mathbf{A}_k^{(t)}$ back to the server. The server then updates the global LoRA modules to $\overline{\mathbf{B}}^{(t+1)} = \sum_{k \in S^{(t)}} \mathbf{B}_k^{(t)} / m$, $\overline{\mathbf{A}}^{(t+1)} = \sum_{k \in S^{(t)}} \mathbf{A}_k^{(t)} / m$ and sends them to the next set of selected clients for the next communication round. This training process is nearly identical to the standard FL algorithm (McMahan et al., 2017), except that the pretrained weights $\mathbf{W}_0$ are frozen and locally stored on the clients' devices and only the LoRA modules are trained and communicated.
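As a rough sketch of one such communication round (the local update is stubbed out as a callback; all names here are illustrative, not the authors' code):

```python
import numpy as np

def homlora_round(B_global, A_global, selected_clients, local_train):
    """One homogeneous-LoRA round: broadcast, local training, simple averaging.

    `local_train(k, B, A)` stands in for a client's tau steps of mini-batch SGD
    on its private data, with the pre-trained W_0 kept frozen on-device.
    """
    B_updates, A_updates = [], []
    for k in selected_clients:
        B_k, A_k = local_train(k, B_global.copy(), A_global.copy())
        B_updates.append(B_k)
        A_updates.append(A_k)
    # server update: plain average over the m selected clients
    return np.mean(B_updates, axis=0), np.mean(A_updates, axis=0)
```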
|
| 73 |
+
|
| 74 |
+
Instead of such homogeneous rank deployment across all clients, it is not only possible but more practical to use heterogeneous rank deployment for federated fine-tuning. This involves training LoRA modules with varying ranks on different clients, based on their system capabilities. Such a setting is motivated by, and often required by, the system constraints of the clients (Wang et al., 2021), where most of the clients are only capable of handling smaller ranks while a few can handle larger ranks. However, this approach poses challenges in aggregating and redistributing the LoRA modules. To address these challenges, we introduce a solution called HETLORA, which pushes the limits
|
| 75 |
+
|
| 76 |
+
beyond homogeneous LoRA deployment.
|
| 77 |
+
|
| 78 |
+
# 3.3 Proposed Method: Heterogeneous LoRA
|
| 79 |
+
|
| 80 |
+
Overview. Our proposed heterogeneous LoRA method, HETLORA, is not restricted to any specific method for assigning ranks to the clients, and the clients can decide their respective ranks themselves. Formally, each client has a rank denoted as $r_k$, within a range $r_k \in [r_{\min}, r_{\max}]$, $\forall k$ (see Fig. 1). HETLORA comprises three steps: 1) Distribution via Truncation, 2) Local Training with Rank Self-Pruning, and 3) Sparsity-Weighted Aggregation of the LoRA modules. These steps are detailed in the subsequent paragraphs. An overview of HETLORA is illustrated in Fig. 2.
|
| 81 |
+
|
| 82 |
+
1) Distribution via Truncation. At the beginning of each communication round $t$, the server holds initial global LoRA modules $\overline{\mathbf{B}}^{(t)}$, $\overline{\mathbf{A}}^{(t)}$ with a global rank $r^{(t)}$. The value of the global rank $r^{(t)}$ depends on how we aggregate the heterogeneous-rank LoRA modules, which is elaborated on in step 3). The server then distributes these global LoRA modules to the selected subset of clients $S^{(t)}$ with heterogeneous ranks $r_k^{(t)}$, $k \in S^{(t)}$, for local training<sup>1</sup>. Given the global LoRA modules, we consider a simple and intuitive truncation method where the server sends $\overline{\mathbf{B}}_{:,:r_k}^{(t)}$, $\overline{\mathbf{A}}_{:r_k,:}^{(t)}$ to each client $k$ with rank $r_k^{(t)}$ for local training, where we omit the superscript of $r_k$ for simplicity.
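A minimal sketch of this distribution step, assuming the server keeps a dictionary of per-client ranks (the names are hypothetical):

```python
def distribute_truncated(B_global, A_global, client_ranks):
    """Send each selected client k the truncated modules B_{:, :r_k}, A_{:r_k, :}."""
    return {
        k: (B_global[:, :r_k].copy(), A_global[:r_k, :].copy())
        for k, r_k in client_ranks.items()
    }

# e.g. client_ranks = {3: 1, 17: 5, 42: 50} for r_min = 1 and r_max = 50
```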
|
| 83 |
+
|
| 84 |
+
2) Local Training with Rank Self-Pruning. After receiving the LoRA modules from the server as $\mathbf{B}_k^{(t,0)} = \overline{\mathbf{B}}_{:,:r_k}^{(t)}$, $\mathbf{A}_k^{(t,0)} = \overline{\mathbf{A}}_{:r_k,:}^{(t)}$, each client $k \in S^{(t)}$ performs $\tau$ local iterations of mini-batch SGD on its local data to minimize the local objective $\frac{1}{|\mathcal{B}_k|} \sum_{\xi \in \mathcal{B}_k} \ell((\mathbf{B}_k, \mathbf{A}_k), \xi | \mathbf{W}_0)$, and sends the updated LoRA modules $\mathbf{B}_k^{(t,\tau)} \in \mathbb{R}^{d \times r_k^{(t)}}$ and $\mathbf{A}_k^{(t,\tau)} \in \mathbb{R}^{r_k^{(t)} \times l}$ back to the server. This is the same process as the standard local training step in vanilla FedAvg (McMahan et al., 2017). However, we improve this vanilla local training step by adding a rank self-pruning mechanism, where clients self-prune their respective ranks depending on the magnitude of the model parameters.
|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
Figure 2: Overview of the zero-padding, sparsity-weighted aggregation, and truncation method for HETLORA; (a): Zero-pad LoRA modules with smaller ranks to $r_{\mathrm{max}}$ (clients with rank $r_{\mathrm{max}}$ do not need padding) and calculate their sparsity via the Frobenius norm of the reconstructed model $\Delta \mathbf{W}_k^{(t)} = \mathbf{B}_k^{(t)}\mathbf{A}_k^{(t)}$; (b): After padding, aggregate all of the clients' LoRA modules with the weights $\| \mathbf{S}_k^{(t)}\| /\mathbf{Z}^{(t)}$ calculated from $\Delta \mathbf{W}_k^{(t)}$ to get the global LoRA modules; (c): Truncate the global LoRA modules to the specific rank of the next selected client (example for a client with rank $r_2$).
|
| 88 |
+
|
| 89 |
+

|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
|
| 93 |
+
Specifically, we add a regularization term to the original local objective to get $\min_{\mathbf{B}_k,\mathbf{A}_k}\frac{1}{|\mathcal{B}_k|}\sum_{\xi \in \mathcal{B}_k}\ell ((\mathbf{B}_k,\mathbf{A}_k),\xi |\mathbf{W}_0) + \lambda \| \mathbf{B}_{k,:,\gamma r_k:r_k}\| \| \mathbf{A}_{k,\gamma r_k:r_k,:}\|$, where $\gamma < 1$ is a decay factor that determines how aggressively we want to prune the ranks to a smaller value. The regularization term aims to minimize the norm of the last few rank dimensions, which will become smaller if the first loss term $\frac{1}{|\mathcal{B}_k|}\sum_{\xi \in \mathcal{B}_k}\ell ((\mathbf{B}_k,\mathbf{A}_k),\xi |\mathbf{W}_0)$ is not very large. After training with the new local objective, we compare the norm of the updated LoRA modules' last few rank dimensions, $\| \mathbf{B}_{k,:,\gamma r_k:r_k}\| \| \mathbf{A}_{k,\gamma r_k:r_k,:}\|$, with that of the initially received LoRA modules. If the former is smaller, we prune the last few rank dimensions (pruning intensity is determined by $\gamma$) and send back the LoRA modules with a smaller rank. This means that LoRA modules which incur a small local loss, i.e., are well-trained on the client's local data, are more likely to be pruned to a smaller rank.
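A sketch of this pruning decision is given below, assuming (as in our reading of the objective) that the penalized tail spans the rank indices from $\gamma r_k$ to $r_k$; the regularized local training itself is abstracted away and all names are hypothetical:

```python
import numpy as np

def tail_norm(B, A, start, r_k):
    """||B_{:, start:r_k}|| * ||A_{start:r_k, :}||, as in the local regularizer."""
    return np.linalg.norm(B[:, start:r_k]) * np.linalg.norm(A[start:r_k, :])

def self_prune(B_init, A_init, B_new, A_new, r_k, gamma=0.99):
    """Drop the last rank dimensions if their norm shrank during local training."""
    start = int(np.floor(gamma * r_k))
    if start >= r_k:            # nothing to prune for this gamma and rank
        return B_new, A_new, r_k
    if tail_norm(B_new, A_new, start, r_k) < tail_norm(B_init, A_init, start, r_k):
        # a small local loss kept the tail small -> prune and report the new rank
        return B_new[:, :start], A_new[:start, :], start
    return B_new, A_new, r_k
```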
|
| 94 |
+
|
| 95 |
+
Such pruning allows HETLORA to reduce the noise in the LoRA modules introduced by clients having a larger rank than the actual rank that their data complexity requires, and also reduces the complexity of the LoRA modules to improve generalization and prevent overfitting (see Table 4). Once the rank is pruned for a client, the client saves the updated rank and uses it as the starting rank if selected for future communication rounds. The client then sends back their updated and possibly rank-pruned LoRA modules to the server for the modules to be processed in the next server-side aggregation step.
|
| 96 |
+
3) Sparsity-Weighted Aggregation. Finally, the last step of HETLORA is aggregating the received heterogeneous LoRA modules
|
| 97 |
+
|
| 98 |
+
$\mathbf{B}_k^{(t,\tau)}$, $\mathbf{A}_k^{(t,\tau)}$, $k \in S^{(t)}$. A straightforward way to aggregate the heterogeneous LoRA modules is to zero-pad all the received LoRA modules with $r_i^{(t)} < \max \{r_k^{(t)} | k \in S^{(t)}\}$ and then perform simple averaging over the modules. However, such naive aggregation can bias the model towards higher-rank clients even when these clients do not hold valuable training information, i.e., they have low data complexity and give noisy updates.
|
| 99 |
+
|
| 100 |
+
In an ideal scenario where we can deploy any rank to any client, deploying higher ranks to the clients with higher data complexity or larger local datasets can retrieve more informative and less sparse updates from the clients. Conversely if we assign higher ranks to the clients whose data complexity is low, the actual rank of the full model from the reconstructed LoRA modules can be smaller than the assigned rank. Thus the higher rank client's update may be unnecessarily over-emphasized in the naive zero padding method.
|
| 101 |
+
|
| 102 |
+
Based on this insight we propose a sparsity-weighted aggregation scheme where the server reconstructs these LoRA modules to the full model as $\Delta \mathbf{W}_k^{(t)} = \mathbf{B}_k^{(t)}\mathbf{A}_k^{(t)}$ and gets the norm of the singular value vectors from the full models denoted as $\mathbf{S}_k^{(t)}$ by calculating $\| \Delta \mathbf{W}_k^{(t)}\| _F$ . Note that the costly process of performing SVD for each of the full model $\Delta \mathbf{W}_k^{(t)}$ can be avoided by simply calculating the Frobenius norm of $\Delta \mathbf{W}_k^{(t)}$ (see Lemma 1.2 in (Guruswami and Kannan, 2012)). The server then weighs the LoRA modules with aggregation weight $p_k^{(t)}$ which is proportional to the norm of the singular value vectors. Formally, we have the global LoRA modules updated as $\overline{\mathbf{B}}^{(t + 1)} = \sum_{k\in \mathcal{S}^{(t)}}p_k^{(t)}\mathbf{B}_k^{(t)},\overline{\mathbf{A}}^{(t + 1)} = \sum_{k\in \mathcal{S}^{(t)}}p_k^{(t)}\mathbf{A}_k^{(t)}$ where $p_k^{(t)}\coloneqq \| \mathbf{S}_k^{(t)}\| /\mathbf{Z}^{(t)}$ with
|
| 103 |
+
|
| 104 |
+
normalizing factor $\mathbf{Z}^{(t)}\coloneqq \sum_{k^{\prime}\in S^{(t)}}\| \mathbf{S}_{k^{\prime}}^{(t)}\|$. This way, we can de-emphasize clients assigned larger ranks whose updates are less informative, and emphasize clients assigned smaller ranks whose updates are more informative.
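Putting the zero-padding and the sparsity weighting together, a server-side aggregation sketch could look as follows (illustrative only; the weight $p_k$ is the normalized Frobenius norm of the reconstructed update $\Delta\mathbf{W}_k = \mathbf{B}_k\mathbf{A}_k$):

```python
import numpy as np

def sparsity_weighted_aggregate(client_modules, r_max):
    """Aggregate a list of (B_k, A_k) pairs whose ranks r_k may differ (all <= r_max)."""
    padded, norms = [], []
    for B_k, A_k in client_modules:
        r_k = B_k.shape[1]
        B_pad = np.pad(B_k, ((0, 0), (0, r_max - r_k)))   # zero-pad columns to r_max
        A_pad = np.pad(A_k, ((0, r_max - r_k), (0, 0)))   # zero-pad rows to r_max
        padded.append((B_pad, A_pad))
        # ||S_k|| equals the Frobenius norm of the reconstructed update B_k A_k
        norms.append(np.linalg.norm(B_k @ A_k, ord="fro"))
    p = np.array(norms) / np.sum(norms)                   # p_k = ||S_k|| / Z
    B_global = sum(p_k * B for p_k, (B, _) in zip(p, padded))
    A_global = sum(p_k * A for p_k, (_, A) in zip(p, padded))
    return B_global, A_global
```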
|
| 105 |
+
|
| 106 |
+
# 3.4 Why not Simply Reconstruct First, then Redistribute the LoRA modules?
|
| 107 |
+
|
| 108 |
+
One might ask why we do not simply reconstruct each of the LoRA modules to the full matrix and aggregate them. Here we show that reconstructing the LoRA modules and then aggregating them to get the full model results in a different full model compared to aggregating the LoRA modules first and then reconstructing the final model. In Section 4 we also empirically show that reconstructing the LoRA modules to the full model and redistributing them, after truncated SVD to the corresponding rank of each client, results in underwhelming performance compared to HETLORA.
|
| 109 |
+
|
| 110 |
+
Let us consider a simple case where there are 2 clients with heterogeneous-rank LoRA modules $\mathbf{B}_1\in \mathbb{R}^{d\times 1}$, $\mathbf{A}_1\in \mathbb{R}^{1\times l}$ and $\mathbf{B}_2\in \mathbb{R}^{d\times 2}$, $\mathbf{A}_2\in \mathbb{R}^{2\times l}$ for client 1 and client 2 respectively, where the former has rank 1 and the latter has rank 2. We denote the value in the $i^{th}$ row and $j^{th}$ column of $\mathbf{B}_k$ and $\mathbf{A}_k$ as $b_{k,ij}$ and $a_{k,ij}$ respectively. Then, with $d = 3$ and $l = 2$, when we reconstruct each of the LoRA modules first and then aggregate the full model, its $i^{th}$ row and $j^{th}$ column entry is $(\sum_{k = 1}^{2}b_{k,i0}a_{k,0j}) + b_{2,i1}a_{2,1j}$, whereas aggregating the (zero-padded) LoRA modules first and then reconstructing the model gives the full model's $i^{th}$ row and $j^{th}$ column entry as $(\sum_{k = 1}^{2}b_{k,i0})(\sum_{k = 1}^{2}a_{k,0j}) + b_{2,i1}a_{2,1j}$.
|
| 111 |
+
|
| 112 |
+
One can observe that the difference between the two models is the cross-terms between the left and right modules of different clients 1 and 2, i.e., $b_{1,i0}a_{2,0j} + b_{2,i0}a_{1,0j}$ for the $i^{th}$ row and $j^{th}$ column. In other words, when we reconstruct the LoRA modules first and then aggregate them to get the full model, each term in the full model is a cross-product between the left and right modules of a single client, and not a cross-product between clients. Thus, reconstructing the LoRA modules first loses information on the cross-relation across clients, only retaining knowledge of the cross-relation between each client's own LoRA modules B and A. This observation is also corroborated by the reconstruction-first method's underwhelming performance in Table 3.
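This cross-term argument can be checked numerically. The snippet below (toy values; averaging factors are omitted, as in the illustration above) compares the two aggregation orders for clients of rank 1 and 2 and verifies that the gap equals the cross-terms $b_{1,i0}a_{2,0j} + b_{2,i0}a_{1,0j}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, l = 3, 2
B1, A1 = rng.normal(size=(d, 1)), rng.normal(size=(1, l))   # client 1, rank 1
B2, A2 = rng.normal(size=(d, 2)), rng.normal(size=(2, l))   # client 2, rank 2

# zero-pad client 1's modules to rank 2 for the aggregate-first variant
B1p = np.pad(B1, ((0, 0), (0, 1)))
A1p = np.pad(A1, ((0, 1), (0, 0)))

recon_first = B1 @ A1 + B2 @ A2          # reconstruct each client, then sum
agg_first = (B1p + B2) @ (A1p + A2)      # sum the modules, then reconstruct

# the difference is exactly the inter-client cross-terms on the shared first rank
cross_terms = B1 @ A2[:1, :] + B2[:, :1] @ A1
assert np.allclose(agg_first - recon_first, cross_terms)
```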
|
| 113 |
+
|
| 114 |
+
# 4 Experiments
|
| 115 |
+
|
| 116 |
+
In this section, we present results for HETLORA and its baselines in terms of training speed, computation/communication efficiency, and final achieved performance. First, we show how homogeneous LoRA performs for low and high rank values in general. Second, we demonstrate HETLORA's performance for different $r_{\mathrm{min}}$ and $r_{\mathrm{max}}$ values, comparing it with full fine-tuning, homogeneous LoRA, and the reconstruction-first method elaborated in Section 3.4. We also conduct an ablation study on HETLORA with a varying decay factor $\gamma$ for the rank self-pruning step. The rank distribution across clients for HETLORA, unless mentioned otherwise, is set to a truncated power-law distribution with $\alpha = 0.1$ in the range $[r_{\mathrm{min}}, r_{\mathrm{max}}]$ (inclusive), where the small $\alpha$ value makes the distribution skewed towards smaller ranks. All experiments were run with 3 different random seeds, and their average is shown along with the standard deviation.
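One way to draw such client ranks, assuming a density proportional to $r^{\alpha - 1}$ over the integer range (the exact parameterization of the truncated power law is not spelled out here, so this is only a plausible sketch with hypothetical names):

```python
import numpy as np

def sample_client_ranks(num_clients, r_min, r_max, alpha=0.1, seed=0):
    """Draw integer ranks in [r_min, r_max] from a truncated power law.

    With p(r) proportional to r**(alpha - 1), a small alpha skews the draws
    towards r_min, i.e. many low-resource clients and a few capable ones.
    """
    rng = np.random.default_rng(seed)
    support = np.arange(r_min, r_max + 1)
    probs = support.astype(float) ** (alpha - 1.0)
    probs /= probs.sum()
    return rng.choice(support, size=num_clients, p=probs)

ranks = sample_client_ranks(num_clients=100, r_min=5, r_max=50)
```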
|
| 117 |
+
|
| 118 |
+
Model. We use the Transformer-based language model PaLM 2 (Google, 2023) of sizes XXS and XS for our experiments, which are lightweight enough to fit into the category of ODFMs (Google DeepMind, 2023) compared to standard FMs. The LoRA modules are applied only to the self-attention layers, as proposed in the original LoRA paper (Hu et al., 2021), and their relative number of parameters compared to the original model is shown in Table 2.
|
| 119 |
+
|
| 120 |
+
Tasks. The tasks we consider are chat dialogue from the multi-session chat (MSC) dataset (Xu et al., 2021) and text summarization from the Reddit dataset (Volske et al., 2017). The MSC data is a collection of human-human interactions comprising numerous extended chat sessions, and we use perplexity (Zhang et al., 2018) as the metric, which has been used in previous literature to assess the quality of chat responses from generative models (Sedoc et al., 2019). We sample 100 users uniformly at random and partition their data for training and evaluation by each previous_DIALOG and dialog. The Reddit text summarization data consists of real users' Reddit posts and their summaries, and we use RougeL (Lin, 2004) as the metric. We use the 298 Reddit users that have at least 100 data samples
|
| 121 |
+
|
| 122 |
+
<table><tr><td></td><td colspan="2">Reddit (RougeL)</td><td colspan="2">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td></tr><tr><td>Full</td><td>94.56(±0.01)</td><td>94.87(±0.04)</td><td>32.70(±0.17)</td><td>23.40(±0.36)</td></tr><tr><td>HOMLORA r = 5</td><td>92.57(±1.56), ×0.001</td><td>92.89(±0.96)</td><td>80.51(±8.32), ×0.001</td><td>64.59(±9.31)</td></tr><tr><td>HOMLORA r = 50</td><td>70.57(±2.13), ×0.01</td><td>84.95(±1.59)</td><td>307.96(±11.43), ×0.01</td><td>167.46(±1.72)</td></tr><tr><td>Recon+SVD</td><td>63.28(±1.92), ×0.003</td><td>75.17(±1.25)</td><td>323.89(±20.57), ×0.002</td><td>215.63(±15.38)</td></tr><tr><td>HETLORA γ = 0.99</td><td>94.23(±0.03), ×0.003</td><td>94.41(±0.05)</td><td>53.93(±1.57), ×0.002</td><td>38.76(±0.52)</td></tr></table>
|
| 123 |
+
|
| 124 |
+
Table 3: Final RougeL score for Reddit text summarization and perplexity for multi-session chat for different federated fine-tuning methods. The blue text indicates the ratio of the number of trained parameters to that of the full fine-tuning case. HETLORA outperforms both HOMLORA and the Recon+SVD method, but slightly underperforms full fine-tuning. However, compared to full fine-tuning, the number of trained parameters is significantly smaller.
|
| 125 |
+
|
| 126 |
+

|
| 127 |
+
(a) PaLM 2-XXS
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
(b) PaLM 2-XS
|
| 131 |
+
Figure 3: Performance of homogeneous LoRA for different ranks $r$. Higher ranks achieve better performance with fewer communication rounds than lower ranks, but they overfit quickly. Conversely, the lowest rank $r = 1$ reaches low perplexity more slowly than the higher ranks, but without overfitting.
|
| 132 |
+
|
| 133 |
+
as the training clients and use another 100 users with at least 100 data samples for evaluation.
|
| 134 |
+
|
| 135 |
+
Local Training. We use a mini-batch size of 8 and $\tau = 5$ local iterations, with the feature length set to 1024. For the learning rate, we perform a grid search over $\eta = \{0.1, 0.01, 0.001, 0.0001\}$. For the MSC and Reddit tasks, we select 5 and 10 clients per communication round, respectively.
|
| 136 |
+
|
| 137 |
+
# 4.1 Experiment Results
|
| 138 |
+
|
| 139 |
+
# Homogeneous LoRA and the Effect of Rank $r$
|
| 140 |
+
|
| 141 |
+
First, we evaluate the performance of federated fine-tuning of the LoRA modules with homogeneous LoRA deployment across clients in Fig. 3 for different ranks $r \in \{1,5,20,50\}$. We observe that a higher rank $r$ for homogeneous LoRA achieves a better perplexity floor with fewer communication rounds than the lower ranks, but quickly overfits, resulting in worse performance than the lower ranks after more communication rounds. On the other hand, while the lower-rank cases need more communication rounds to achieve good performance, they do not have the overfitting problem of the higher ranks. Hence, for homogeneous LoRA, there is a trade-off to consider between low and high ranks in terms of training speed and overfitting. Note that these
|
| 142 |
+
|
| 143 |
+

|
| 144 |
+
(a) $r_{\mathrm{max}} = 10$
|
| 145 |
+
Figure 4: Performance of HETLORA without rank pruning and with simple average aggregation. Similar to homogeneous LoRA, a larger $r_{\mathrm{min}}$ leads to overfitting for heterogeneous LoRA, but it is not as severe as for homogeneous LoRA even for a larger maximum rank $r_{\mathrm{max}} = 50$, showing that the smaller-rank LoRA modules act as a regularizer for HETLORA.
|
| 146 |
+
|
| 147 |
+

|
| 148 |
+
(b) $r_{\mathrm{max}} = 50$
|
| 149 |
+
|
| 150 |
+
observations are consistent with previous literature in the centralized setting, where a higher rank does not necessarily yield the best performance (Hu et al., 2021; Zhang et al., 2023b). Next, we show that HETLORA achieves good performance quickly without this overfitting issue, performing better than the homogeneous LoRA cases.
|
| 151 |
+
|
| 152 |
+
Naïve Heterogeneous LoRA and the Effect of $r_{\mathrm{min}}$ and $r_{\mathrm{max}}$. We first show the performance of naïve heterogeneous LoRA without rank self-pruning and with simple average aggregation instead of sparsity-weighted aggregation in Fig. 4. We see observations similar to those for homogeneous LoRA, where a smaller minimum rank $r_{\mathrm{min}} = 1$ leads to slower training but better performance, while a larger maximum rank leads to faster training but worse performance. However, compared to homogeneous LoRA, the overfitting is not as severe for heterogeneous LoRA even with much larger ranks such as $r_{\mathrm{max}} = 50$. We can infer from this result that the smaller-rank LoRA modules act as a regularizer in heterogeneous LoRA. Next, we show that by adding rank self-pruning and sparsity-weighted aggregation, even with $r_{\mathrm{min}} = 5$ we are able to prevent overfitting and achieve better training speed and final performance than the other baselines.
|
| 153 |
+
|
| 154 |
+
<table><tr><td></td><td colspan="2">Reddit (RougeL)</td><td colspan="2">Multi-Session Chat (Perplexity)</td></tr><tr><td></td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td><td>PaLM 2-XXS</td><td>PaLM 2-XS</td></tr><tr><td>HETLORA, γ = 1</td><td>92.17 (±0.08)</td><td>91.95 (±0.03)</td><td>55.07 (±0.81)</td><td>40.92 (±0.58)</td></tr><tr><td>HETLORA, γ = 0.99</td><td>94.23 (±0.03)</td><td>94.41 (±0.05)</td><td>53.93 (±1.57)</td><td>38.76 (±0.52)</td></tr><tr><td>HETLORA, γ = 0.95</td><td>89.62 (±1.33)</td><td>83.19 (±1.70)</td><td>71.10 (±1.39)</td><td>46.39 (±0.87)</td></tr><tr><td>HETLORA, γ = 0.85</td><td>60.31 (±3.04)</td><td>53.28 (±2.47)</td><td>120.72 (±10.93)</td><td>59.67 (±1.98)</td></tr></table>
|
| 155 |
+
|
| 156 |
+
Table 4: Ablation study on the effect of the decay factor $\gamma$ for HETLORA's rank self-pruning in the local training step. While aggressive pruning can be harmful to HETLORA's performance, moderate pruning ($\gamma = 0.99$) can outperform the case with no pruning at all ($\gamma = 1$) by reducing the noise introduced by large-rank clients with low data complexity.
|
| 157 |
+
|
| 158 |
+
Heterogeneous LoRA compared to Baselines. Finally, we compare our proposed HETLORA with other baselines in Table 3 and Fig. 5. We see that HETLORA with $r_{\mathrm{min}} = 5$ and $r_{\mathrm{max}} = 50$ achieves faster training as well as better performance than homogeneous LoRA with both edge cases of the ranks $r \in \{5, 50\}$, and than the reconstruction+SVD method explained in Section 3.4. This implies that HETLORA is not only practical in the sense that clients are allowed to have their own rank values, but it can also outperform the limited case of homogeneous LoRA where all clients have $r = r_{\mathrm{min}}$ and the impractical case where all clients have $r = r_{\mathrm{max}}$. We also observe that HETLORA achieves slightly lower performance than full fine-tuning. However, as shown in the blue text in Table 3, which gives the number of trained parameters relative to full fine-tuning, full fine-tuning requires training a much larger number of parameters than HETLORA, making it infeasible for ODFMs in practice. We also show in Fig. 6 that, to achieve the targeted performance for both the Reddit and MSC tasks, HETLORA requires significantly fewer parameters to be trained and communicated than full fine-tuning. Although for Reddit, HOMLORA trains slightly fewer parameters, its final achieved RougeL is outperformed by HETLORA as shown in Table 3.
|
| 159 |
+
|
| 160 |
+
Effect of the Decay Factor $\gamma$. Lastly, we conduct an ablation study on the effect of the decay factor $\gamma$ in HETLORA's local training step with rank self-pruning in Table 4. We observe that aggressive pruning hurts performance, with $\gamma = 0.85$ showing the worst performance across the varying $\gamma$ values. On the other hand, no pruning at all ($\gamma = 1$) underperforms the case with pruning ($\gamma = 0.99$), showing that reducing the noise introduced by large-rank clients whose data complexity is actually low indeed improves performance.
|
| 161 |
+
|
| 162 |
+

|
| 163 |
+
(a) Reddit
|
| 164 |
+
|
| 165 |
+

|
| 166 |
+
(b) MSC
|
| 167 |
+
|
| 168 |
+

|
| 169 |
+
Figure 5: Comparison of the performance across homogeneous LoRA, heterogeneous LoRA, and full fine-tuning. Heterogeneous LoRA achieves better performance than homogeneous LoRA with fewer communication rounds.
|
| 170 |
+
Figure 6: Ratio of the number of communicated parameters for different PEFT methods to that of full fine-tuning when reaching the target metric value: RougeL 80 for the Reddit text summarization task and perplexity 150 for the multi-session chat response task. The 'X' means that the target metric is not achieved even after convergence.
|
| 171 |
+
|
| 172 |
+
# 5 Discussions and Concluding Remarks
|
| 173 |
+
|
| 174 |
+
In our work, we investigated federated fine-tuning for ODFMs that caters to device system and data heterogeneity with our proposed HETLORA. We show that HETLORA is not only practical but also achieves better training speed, communication/computation efficiency, and final performance compared to homogeneous LoRA. We also show findings consistent with previous literature (Hu et al., 2021; Zhang et al., 2023b) that increasing ranks does not always help for homogeneous LoRA. Our findings open up several questions worth investigating. For instance, if the setting allows us to assign specific ranks to
|
| 175 |
+
|
| 176 |
+
clients, what would be an effective way to assign ranks across clients for better convergence and performance? Another important next step of our work is pursuing the theoretical convergence and generalization analysis of heterogeneous LoRA.
|
| 177 |
+
|
| 178 |
+
# 6 Limitations
|
| 179 |
+
|
| 180 |
+
In this work, we tackle system and data heterogeneity in federated fine-tuning of on-device foundation models. Our work is motivated by clients being able to carry different ranks for the LoRA fine-tuning method depending on their available resources, and it exploits this characteristic to improve federated fine-tuning with heterogeneous LoRA. However, our work assumes that the rank distribution across clients (which is analogous to how system resources are distributed across clients) is independent of the data distribution. There can be scenarios where this is not the case and the rank and data distributions are correlated. For instance, more affluent populations can have better devices with larger resource capacity, and may have data distributions different from those of less affluent populations. Such correlations should be explored in future work to better understand the implications of heterogeneous LoRA.
|
| 181 |
+
|
| 182 |
+
# References
|
| 183 |
+
|
| 184 |
+
Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, and Salman Avestimehr. 2023. Slora: Federated parameter efficient fine-tuning of language models. CoRR, abs/2308.06522.
|
| 185 |
+
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. 2022. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
|
| 186 |
+
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2016. Practical secure aggregation for federated learning on user-held data. In NIPS Workshop on Private Multi-Party Machine Learning.
|
| 187 |
+
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
|
| 188 |
+
|
| 189 |
+
Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
|
| 190 |
+
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis. 2022. Differentially private bias-term only fine-tuning of foundation models. arXiv preprint arXiv:2210.00036.
|
| 191 |
+
Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, and Haozhao Wang. 2022. Fedtune: A deep dive into efficient federated fine-tuning with pre-trained transformers. CoRR, abs/2211.08025.
|
| 192 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
|
| 193 |
+
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. 2023. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378.
|
| 194 |
+
Google. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
|
| 195 |
+
Google. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.1040.
|
| 196 |
+
Google DeepMind. 2023. Introducing palm2. https://blog.google/technology/ai/ google-palm-2-ai-large-language-model/.
|
| 197 |
+
Tao Guo, Song Guo, and Junxiao Wang. 2023. Pfed-prompt: Learning personalized prompt for vision-language models in federated learning. In Proceedings of the ACM Web Conference 2023, WWW '23, page 1364-1374, New York, NY, USA. Association for Computing Machinery.
|
| 198 |
+
Tao Guo, Song Guo, Junxiao Wang, and Wenchao Xu. 2022. Promptfl: Let federated participants cooperatively learn prompts instead of models — federated learning in age of foundation model. CoRR, abs/2208.11625.
|
| 199 |
+
Venkatesan Guruswami and Ravi Kannan. 2012. Lecture notes in computer science theory for the information age.
|
| 200 |
+
Neil Houlsby, Andrei Giurgiu, Stanisław Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019.
|
| 201 |
+
|
| 202 |
+
Parameter-efficient transfer learning for nlp. In Proceedings of the International Conference on Machine Learning (ICML).
|
| 203 |
+
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR).
|
| 204 |
+
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In The 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
|
| 205 |
+
Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would elsa do? freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090.
|
| 206 |
+
Gwen Legate, Nicolas Bernier, Lucas Caccia, Edouard Oyallon, and Eugene Belilovsky. 2023. Guiding the last layer in federated learning with pre-trained models. In Workshop of Federated Learning and Analytics in Practice@ICML.
|
| 207 |
+
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Empirical Methods in Natural Language Processing (EMNLP).
|
| 208 |
+
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
|
| 209 |
+
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
|
| 210 |
+
Andre Manoel, Mirian del Carmen Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Robert Sim, Dan Miller, Danny Karmon, and Dimitrios Dimitriadis. 2023. Federated multilingual models for medical transcript analysis. In Conference on Health, Inference, and Learning (CHIL), pages 147-162.
|
| 211 |
+
H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. International Conference on Artificial Intelligence and Statistics (AISTATS).
|
| 212 |
+
OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:submit/4812508.
|
| 213 |
+
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
|
| 214 |
+
|
| 215 |
+
Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization for heterogeneous networks. In Proceedings of the 3rd MLSys Conference.
|
| 216 |
+
Joao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019. Chateval: A tool for chatbot evaluation. Proceedings of NAACL-HLT.
|
| 217 |
+
Ofir Ben Shoham and Nadav Rappoport. 2023. Federated learning of medical concepts embedding using behrt. arXiv preprint arXiv:2305.13052.
|
| 218 |
+
Aliaksandra Shysheya, John F Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, and Richard E Turner. 2023. Fit: Parameter efficient few-shot transfer learning for personalized and federated image classification. International Conference on Learning Representations (ICLR).
|
| 219 |
+
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
|
| 220 |
+
Michael Volske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59-63, Copenhagen, Denmark. Association for Computational Linguistics.
|
| 221 |
+
Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. 2021. A field guide to federated optimization. arXiv preprint arXiv:2107.06917.
|
| 222 |
+
Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. 2019. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 37(6):1205-1221.
|
| 223 |
+
Colin Wei, Sang Michael Xie, and Tengyu Ma. 2021. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. Advances in Neural Information Processing Systems, 34:16158-16170.
|
| 224 |
+
Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael Rabbat, and Ari S. Morcos. 2023. lo-fi: distributed fine-tuning without communication. Transactions on Machine Learning Research (TMLR).
|
| 225 |
+
Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567.
|
| 226 |
+
|
| 227 |
+
Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. 2023. Fedlora: Model-heterogeneous personalized federated learning with lora tuning. arXiv preprint arXiv:2310.13283.
|
| 228 |
+
Sixing Yu, J. Pablo Muñoz, and Ali Jannesari. 2023. Federated foundation models: Privacy-preserving and collaborative learning for large models. arXiv preprint arXiv:2305.11414.
|
| 229 |
+
Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.
|
| 230 |
+
Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Guoyin Wang, and Yiran Chen. 2023a. Towards building the federated gpt: Federated instruction tuning. CoRR, abs/2305.05644.
|
| 231 |
+
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023b. Adaptive budget allocation for parameter-efficient fine-tuning. In The 11th International Conference on Learning Representations (ICLR).
|
| 232 |
+
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).
|
| 233 |
+
Zhuo Zhang, Xiangjing Hu, Jingyuan Zhang, Yating Zhang, Hui Wang, Lizhen Qu, and Zenglin Xu. 2023c. Fedlegal: The first real-world federated learning benchmark for legal nlp. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL).
|
| 234 |
+
Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. 2023d. Fedpetuning: When federated learning meets the parameter-efficient tuning methods of pre-trained language models. Findings of the Association for Computational Linguistics (ACL).
|
| 235 |
+
Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, Jianxin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, and Lichao Sun. 2023. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. arXiv preprint arXiv:2302.09419.
|
2401.06xxx/2401.06432/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a6e3efbaa27706de4a203848a586179cb7cacfd94a9a457aa28fe2cca4f112f9
|
| 3 |
+
size 312989
|
2401.06xxx/2401.06432/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2401.06xxx/2401.06468/0fc0ae98-47c2-47d6-8b41-aa05f4217658_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|