Add Batch c03a5288-1fd8-4eb2-bcca-549eebbdc628
This view is limited to 50 files because it contains too many changes.
- .gitattributes +63 -0
- 2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_content_list.json +1880 -0
- 2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_model.json +2556 -0
- 2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_origin.pdf +3 -0
- 2402.05xxx/2402.05201/full.md +349 -0
- 2402.05xxx/2402.05201/images.zip +3 -0
- 2402.05xxx/2402.05201/layout.json +0 -0
- 2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_content_list.json +1209 -0
- 2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_model.json +1568 -0
- 2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_origin.pdf +3 -0
- 2402.05xxx/2402.05210/full.md +192 -0
- 2402.05xxx/2402.05210/images.zip +3 -0
- 2402.05xxx/2402.05210/layout.json +0 -0
- 2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_content_list.json +0 -0
- 2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_model.json +0 -0
- 2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_origin.pdf +3 -0
- 2402.05xxx/2402.05235/full.md +656 -0
- 2402.05xxx/2402.05235/images.zip +3 -0
- 2402.05xxx/2402.05235/layout.json +0 -0
- 2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_content_list.json +0 -0
- 2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_model.json +0 -0
- 2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_origin.pdf +3 -0
- 2402.05xxx/2402.05239/full.md +703 -0
- 2402.05xxx/2402.05239/images.zip +3 -0
- 2402.05xxx/2402.05239/layout.json +0 -0
- 2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_content_list.json +0 -0
- 2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_model.json +0 -0
- 2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_origin.pdf +3 -0
- 2402.05xxx/2402.05319/full.md +477 -0
- 2402.05xxx/2402.05319/images.zip +3 -0
- 2402.05xxx/2402.05319/layout.json +0 -0
- 2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_content_list.json +0 -0
- 2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_model.json +0 -0
- 2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_origin.pdf +3 -0
- 2402.05xxx/2402.05369/full.md +716 -0
- 2402.05xxx/2402.05369/images.zip +3 -0
- 2402.05xxx/2402.05369/layout.json +0 -0
- 2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_content_list.json +0 -0
- 2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_model.json +0 -0
- 2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_origin.pdf +3 -0
- 2402.05xxx/2402.05375/full.md +655 -0
- 2402.05xxx/2402.05375/images.zip +3 -0
- 2402.05xxx/2402.05375/layout.json +0 -0
- 2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_content_list.json +1815 -0
- 2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_model.json +0 -0
- 2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_origin.pdf +3 -0
- 2402.05xxx/2402.05382/full.md +332 -0
- 2402.05xxx/2402.05382/images.zip +3 -0
- 2402.05xxx/2402.05382/layout.json +0 -0
- 2402.05xxx/2402.05388/4a5d066f-517b-40fb-a726-5553033aca85_content_list.json +0 -0
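
Each paper directory in this batch follows the same layout: a `*_content_list.json` of parsed layout blocks, a `*_model.json`, the `*_origin.pdf` (tracked via Git LFS, per the `.gitattributes` change below), a `full.md` rendering, an `images.zip`, and a `layout.json`. As a rough orientation, here is a minimal Python sketch of reading one content list after cloning; the local path is a placeholder, and the block schema is inferred from the JSON shown below, so treat both as assumptions rather than a documented API.

```python
import json
from pathlib import Path

# Placeholder path to a local checkout of this dataset (not part of the commit).
paper_dir = Path("~/data/arxiv-batch/2402.05xxx/2402.05201").expanduser()

# Each *_content_list.json is a JSON array of layout blocks, e.g.
# {"type": "text", "text": ..., "bbox": [x0, y0, x1, y1], "page_idx": ...}.
content_list_path = next(paper_dir.glob("*_content_list.json"))
blocks = json.loads(content_list_path.read_text(encoding="utf-8"))

# Example: reassemble the body text of page 0, skipping headers,
# footnotes, page numbers, and other non-body block types.
page0_text = [
    b["text"]
    for b in blocks
    if b.get("page_idx") == 0 and b.get("type") == "text"
]
print("\n\n".join(page0_text))
```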
.gitattributes
CHANGED
@@ -9559,3 +9559,66 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2403.00xxx/2403.00769/60c24029-d52b-4208-8b8b-30dac03098e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2403.07xxx/2403.07194/bcb6164c-8d18-43ea-8cff-65fe65a6d681_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2403.12xxx/2403.12068/bfa94e8d-4f38-4d5b-99ca-18dbe2591186_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05388/4a5d066f-517b-40fb-a726-5553033aca85_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05391/e8c039d2-8ac5-46fe-a8ef-9680df205256_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05403/200c3039-8025-451f-ab74-96343f82f335_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05406/fc69f64f-7bef-45e9-b659-8a4829719928_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05408/21ed81dd-1371-4ccd-9129-1b8357f38314_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05445/3da77cd1-d4b0-48bb-aaa8-6052143a7422_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05457/d3eaa06a-c237-41fc-b3b7-70eba990b259_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05472/2c28a4c0-e669-4cc7-a2eb-7ae5f159a2f3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05483/78aa3a36-c43b-47f7-939c-9f82a18e4aad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05495/3bb5ccb2-9378-4caf-96de-c3e3c3bd1b28_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05499/ef3f038e-08d7-406e-a1b5-7082a663de3a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05501/77a587cf-1fee-4be1-b254-0129da698230_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05519/2070fb4e-a4c0-44d1-9168-22ff75e8272c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05546/f63308ee-8ee3-4b16-b794-f7933e5fc557_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05571/0eceb7d0-abfe-44d9-8dbc-c2acbd6ec77b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05602/a5c56112-383e-4e0a-be30-1e7143dcad55_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05608/b6a6b9b9-e9fa-461f-a8a7-efe4d3ae7bf3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05650/c21e7b9e-5272-4e91-bb39-2729329a2169_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05660/e29f549b-6c3d-4dcd-95e5-de35479b51bf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05668/54e7673c-1ddc-4c9f-a607-c2006276688b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05672/8ad71ce2-510a-45a3-a93a-fb03d15d41b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05699/62049bc9-d284-4939-af58-dfec573199d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05722/02686ea3-7347-4b76-a492-fcc402e5bda9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05738/e9aba932-f401-4f9c-bf8e-88dee1720963_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05741/fe403bb9-26a9-4303-a014-e8c33c56f75b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05746/b4b49fc1-b78a-4a5c-9031-c2419d33b918_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05749/942ff7f0-8942-484c-8c59-e188e4a58794_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05755/700957d5-efa8-4e12-8003-4e9d130ada9a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05779/5ab987ba-6c1a-4210-b6d9-027876ec840d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05808/29bb0a7e-2b34-4c23-ae8e-0a88085e663c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05809/257d4161-0d5a-4446-b06b-bc342ba704ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05813/70ba250d-3b6a-4a76-8090-8041f62fbd6c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05841/9248081c-d818-446a-9849-9e5b0b173b9e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05859/a32a8073-afbc-4c83-b2d6-aafcc57b7f68_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05861/4dbfed59-c049-422c-9def-08137d3403a5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05862/5606bafc-3699-4ef4-88f4-f5f8cccc1010_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05863/5f8d4f8e-0685-4cf4-baf7-3431c0337379_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05892/1ef988e9-0aea-42cc-af9c-682ec0341131_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05904/87f6ecdc-f998-4ddd-bce3-231b527da0f4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05929/d93f1e23-6698-451a-b865-9651fb265123_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05930/999cb7fa-d17e-4b7e-8f13-0a3253939658_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05932/a750ad82-ee68-44cc-a3cf-1610dfd70d62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05933/2e36c421-e894-445f-a930-1dff0c529106_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05935/bad47389-8928-4594-863f-d78852cb8487_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05937/e33e035b-7d6c-458b-ab6c-ab7116095cda_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05xxx/2402.05977/f3bbd971-cd60-4c84-9f89-205aaaa6eb0c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.06xxx/2402.06013/32a128ee-b485-44cd-b7ce-d66031cd03ee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.06xxx/2402.06044/9f421e08-1478-42c6-9849-9616ba09751a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.06xxx/2402.06067/29201205-d23f-470a-ad95-188f51b5a4e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.07xxx/2402.07939/7344d9aa-e6d2-48b6-9cda-0ea4e672b0e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2402.10xxx/2402.10086/c883286c-5bb2-410b-a689-1651ec4ba297_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2403.05xxx/2403.05552/cafa83fc-b407-4034-b563-f52fa87a6d24_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2403.15xxx/2403.15392/82474d7e-e320-4647-9b11-5de9804cea51_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2403.18xxx/2403.18833/a95014ff-dc4e-4269-a9c7-fee7776224be_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2405.02xxx/2405.02292/ea2897b4-2438-468a-a30a-ff7a37643b1f_origin.pdf filter=lfs diff=lfs merge=lfs -text
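
Every rule added above has the standard Git LFS form `<path> filter=lfs diff=lfs merge=lfs -text`, which is exactly what `git lfs track <path>` appends. A minimal sketch of how rules like these could be generated for a batch of new `*_origin.pdf` files; the helper and repo path are illustrative, not part of this commit:

```python
from pathlib import Path

# Illustrative helper (not from this repo): append one Git LFS rule per
# origin PDF to .gitattributes, mirroring `git lfs track <path>`.
LFS_RULE = "{path} filter=lfs diff=lfs merge=lfs -text\n"

def track_origin_pdfs(repo_root: Path) -> None:
    attributes = repo_root / ".gitattributes"
    existing = attributes.read_text(encoding="utf-8") if attributes.exists() else ""
    with attributes.open("a", encoding="utf-8") as out:
        # Layout assumed from this batch: <prefix>/<arxiv_id>/<uuid>_origin.pdf
        for pdf in sorted(repo_root.glob("*/*/*_origin.pdf")):
            rel = pdf.relative_to(repo_root).as_posix()
            if rel not in existing:  # skip rules that are already present
                out.write(LFS_RULE.format(path=rel))

track_origin_pdfs(Path("."))
```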
2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_content_list.json
ADDED
@@ -0,0 +1,1880 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
212,
|
| 8 |
+
119,
|
| 9 |
+
785,
|
| 10 |
+
162
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Matthew Renze",
|
| 17 |
+
"bbox": [
|
| 18 |
+
285,
|
| 19 |
+
226,
|
| 20 |
+
401,
|
| 21 |
+
239
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Johns Hopkins University",
|
| 28 |
+
"bbox": [
|
| 29 |
+
254,
|
| 30 |
+
241,
|
| 31 |
+
424,
|
| 32 |
+
255
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "mrenze1@jhu.edu",
|
| 39 |
+
"bbox": [
|
| 40 |
+
274,
|
| 41 |
+
256,
|
| 42 |
+
406,
|
| 43 |
+
268
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Erhan Guven",
|
| 50 |
+
"bbox": [
|
| 51 |
+
609,
|
| 52 |
+
227,
|
| 53 |
+
709,
|
| 54 |
+
239
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Johns Hopkins University",
|
| 61 |
+
"bbox": [
|
| 62 |
+
571,
|
| 63 |
+
241,
|
| 64 |
+
741,
|
| 65 |
+
255
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "eguven2@jhu.edu",
|
| 72 |
+
"bbox": [
|
| 73 |
+
594,
|
| 74 |
+
256,
|
| 75 |
+
725,
|
| 76 |
+
270
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "Abstract",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
459,
|
| 86 |
+
318,
|
| 87 |
+
537,
|
| 88 |
+
333
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "In this research study, we empirically investigate the effect of sampling temperature on the performance of Large Language Models (LLMs) on various problem-solving tasks. We created a multiple-choice question-and-answer (MCQA) exam by randomly sampling problems from standard LLM benchmarks. Then, we used nine popular LLMs with five prompt-engineering techniques to solve the MCQA problems while increasing the sampling temperature from 0.0 to 1.6. Despite anecdotal reports to the contrary, our empirical results indicate that changes in temperature from 0.0 to 1.0 do not have a statistically significant impact on LLM performance for problem-solving tasks. In addition, these results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. All code, data, and supplemental materials are available on GitHub at: https://github.com/matthewrenze/jhu-llm-temperature.",
|
| 95 |
+
"bbox": [
|
| 96 |
+
169,
|
| 97 |
+
340,
|
| 98 |
+
826,
|
| 99 |
+
481
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "1 Introduction",
|
| 106 |
+
"text_level": 1,
|
| 107 |
+
"bbox": [
|
| 108 |
+
112,
|
| 109 |
+
500,
|
| 110 |
+
254,
|
| 111 |
+
513
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "1.1 Background",
|
| 118 |
+
"text_level": 1,
|
| 119 |
+
"bbox": [
|
| 120 |
+
112,
|
| 121 |
+
531,
|
| 122 |
+
240,
|
| 123 |
+
546
|
| 124 |
+
],
|
| 125 |
+
"page_idx": 0
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"type": "text",
|
| 129 |
+
"text": "In recent years, Large Language Models (LLMs) have revolutionized the field of artificial intelligence. The availability of open-source LLMs and pay-per-use APIs has allowed engineers to incorporate LLMs in their AI systems. However, prompt engineering and hyperparameter tuning are required to work effectively with LLMs.",
|
| 130 |
+
"bbox": [
|
| 131 |
+
111,
|
| 132 |
+
556,
|
| 133 |
+
883,
|
| 134 |
+
599
|
| 135 |
+
],
|
| 136 |
+
"page_idx": 0
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"type": "text",
|
| 140 |
+
"text": "Prompt-engineering techniques help LLMs solve complex problems, avoid hallucinations, and provide more accurate responses. For example, we can use techniques like chain-of-thought, tree-of-thought, self-criticism, and self-consistency to improve LLM performance [1, 2].",
|
| 141 |
+
"bbox": [
|
| 142 |
+
111,
|
| 143 |
+
604,
|
| 144 |
+
883,
|
| 145 |
+
648
|
| 146 |
+
],
|
| 147 |
+
"page_idx": 0
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"type": "text",
|
| 151 |
+
"text": "In addition, several inference hyperparameters can be adjusted to modify the LLM's output at runtime. For example, hyperparameters like sampling temperature, top-k sampling, repetition penalty, and maximum token length all affect the LLM's output and performance [3-5].",
|
| 152 |
+
"bbox": [
|
| 153 |
+
111,
|
| 154 |
+
652,
|
| 155 |
+
883,
|
| 156 |
+
696
|
| 157 |
+
],
|
| 158 |
+
"page_idx": 0
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"type": "text",
|
| 162 |
+
"text": "Despite significant interest in LLMs and progress in LLM best practices, many open questions remain about optimal prompt-engineering techniques and inference hyperparameters for LLMs. To complicate matters, various local optima may exist for LLMs, prompt types, and problem domains [5].",
|
| 163 |
+
"bbox": [
|
| 164 |
+
111,
|
| 165 |
+
700,
|
| 166 |
+
882,
|
| 167 |
+
744
|
| 168 |
+
],
|
| 169 |
+
"page_idx": 0
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"type": "text",
|
| 173 |
+
"text": "The prompt-engineering community has an abundance of opinions and anecdotal evidence regarding optimal prompt-engineering techniques and inference hyperparameter settings. However, we currently lack systematic studies and empirical evidence to support many of these claims.",
|
| 174 |
+
"bbox": [
|
| 175 |
+
111,
|
| 176 |
+
750,
|
| 177 |
+
883,
|
| 178 |
+
792
|
| 179 |
+
],
|
| 180 |
+
"page_idx": 0
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"type": "text",
|
| 184 |
+
"text": "As a result, this paper aims to address the open question of the optimal LLM sampling temperature for problem-solving tasks. In addition, we aim to provide a systematic study with empirical results to add to the growing body of knowledge used to create LLM and prompt-engineering best practices.",
|
| 185 |
+
"bbox": [
|
| 186 |
+
111,
|
| 187 |
+
797,
|
| 188 |
+
883,
|
| 189 |
+
840
|
| 190 |
+
],
|
| 191 |
+
"page_idx": 0
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"type": "text",
|
| 195 |
+
"text": "1.2 Sampling Temperature",
|
| 196 |
+
"text_level": 1,
|
| 197 |
+
"bbox": [
|
| 198 |
+
112,
|
| 199 |
+
857,
|
| 200 |
+
316,
|
| 201 |
+
872
|
| 202 |
+
],
|
| 203 |
+
"page_idx": 0
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"type": "text",
|
| 207 |
+
"text": "Sampling temperature is a hyperparameter of an LLM used in a temperature-based sampling process. It controls the randomness of the model's output at inference time [5-8].",
|
| 208 |
+
"bbox": [
|
| 209 |
+
111,
|
| 210 |
+
883,
|
| 211 |
+
883,
|
| 212 |
+
912
|
| 213 |
+
],
|
| 214 |
+
"page_idx": 0
|
| 215 |
+
},
|
| 216 |
+
{
|
| 217 |
+
"type": "aside_text",
|
| 218 |
+
"text": "arXiv:2402.05201v3 [cs.CL] 2 Oct 2024",
|
| 219 |
+
"bbox": [
|
| 220 |
+
22,
|
| 221 |
+
272,
|
| 222 |
+
58,
|
| 223 |
+
700
|
| 224 |
+
],
|
| 225 |
+
"page_idx": 0
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"type": "text",
|
| 229 |
+
"text": "During each step of an LLM's decoding process, the LLM uses the previous tokens to choose the next output token. The final layer of the LLM uses a softmax function to convert raw scores (logits) into probabilities.",
|
| 230 |
+
"bbox": [
|
| 231 |
+
109,
|
| 232 |
+
90,
|
| 233 |
+
887,
|
| 234 |
+
119
|
| 235 |
+
],
|
| 236 |
+
"page_idx": 1
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"type": "text",
|
| 240 |
+
"text": "In greedy sampling, the model will always choose the most likely next token. However, for probabilistic sampling, the next token is selected from a probability distribution.",
|
| 241 |
+
"bbox": [
|
| 242 |
+
109,
|
| 243 |
+
126,
|
| 244 |
+
883,
|
| 245 |
+
156
|
| 246 |
+
],
|
| 247 |
+
"page_idx": 1
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"type": "text",
|
| 251 |
+
"text": "Temperature sampling is a modification to the softmax function, which adjusts the resulting probability mass functions. In this modified softmax function, $v_{k}$ is the $k$ -th vocabulary token, $l_{k}$ is the token's logit, and $\\tau$ is a constant temperature. See equation 1.",
|
| 252 |
+
"bbox": [
|
| 253 |
+
109,
|
| 254 |
+
160,
|
| 255 |
+
885,
|
| 256 |
+
204
|
| 257 |
+
],
|
| 258 |
+
"page_idx": 1
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"type": "equation",
|
| 262 |
+
"text": "\n$$\n\\Pr \\left(v _ {k}\\right) = \\frac {e ^ {l _ {k} / \\tau}}{\\sum_ {i} e ^ {l _ {i} / \\tau}} \\tag {1}\n$$\n",
|
| 263 |
+
"text_format": "latex",
|
| 264 |
+
"bbox": [
|
| 265 |
+
429,
|
| 266 |
+
219,
|
| 267 |
+
885,
|
| 268 |
+
256
|
| 269 |
+
],
|
| 270 |
+
"page_idx": 1
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"type": "text",
|
| 274 |
+
"text": "A lower temperature makes the output of the LLM more deterministic, thus favoring the most likely predictions. This conservativeness is captured by the model's tendency to produce more repetitive, focused, and less diverse output based on the patterns most commonly seen in the training data [5, 7, 8].",
|
| 275 |
+
"bbox": [
|
| 276 |
+
109,
|
| 277 |
+
266,
|
| 278 |
+
883,
|
| 279 |
+
309
|
| 280 |
+
],
|
| 281 |
+
"page_idx": 1
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"type": "text",
|
| 285 |
+
"text": "A higher temperature increases the randomness of the output, thus favoring more \"creative\" predictions. This creativity is captured by the model's willingness to explore more unconventional and less likely outputs. Higher temperatures can lead to novel text, diverse ideas, and creative solutions to problems [5, 7, 8].",
|
| 286 |
+
"bbox": [
|
| 287 |
+
109,
|
| 288 |
+
314,
|
| 289 |
+
883,
|
| 290 |
+
357
|
| 291 |
+
],
|
| 292 |
+
"page_idx": 1
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"type": "text",
|
| 296 |
+
"text": "In the context of problem-solving, temperature can be seen as a trade-off between exploring and exploiting possible solutions within the solution space. Lower temperatures tend to exploit more probable solutions; higher temperatures explore the solution space more broadly.",
|
| 297 |
+
"bbox": [
|
| 298 |
+
109,
|
| 299 |
+
362,
|
| 300 |
+
883,
|
| 301 |
+
405
|
| 302 |
+
],
|
| 303 |
+
"page_idx": 1
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"type": "text",
|
| 307 |
+
"text": "1.3 Choosing a Sampling Temperature",
|
| 308 |
+
"text_level": 1,
|
| 309 |
+
"bbox": [
|
| 310 |
+
112,
|
| 311 |
+
421,
|
| 312 |
+
397,
|
| 313 |
+
436
|
| 314 |
+
],
|
| 315 |
+
"page_idx": 1
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"type": "text",
|
| 319 |
+
"text": "Within the prompt-engineering community, there are a variety of opinions and best practices regarding the ideal sampling temperature for various problem-solving tasks [9, 10].",
|
| 320 |
+
"bbox": [
|
| 321 |
+
109,
|
| 322 |
+
446,
|
| 323 |
+
883,
|
| 324 |
+
477
|
| 325 |
+
],
|
| 326 |
+
"page_idx": 1
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"type": "text",
|
| 330 |
+
"text": "Low sampling temperatures are recommended for tasks requiring precision and factual accuracy, such as technical writing, code generation, or question-answering [11, 12]. However, higher temperatures are recommended for tasks requiring creativity, such as writing poetry, creating stories, or brainstorming.",
|
| 331 |
+
"bbox": [
|
| 332 |
+
109,
|
| 333 |
+
481,
|
| 334 |
+
883,
|
| 335 |
+
525
|
| 336 |
+
],
|
| 337 |
+
"page_idx": 1
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"type": "text",
|
| 341 |
+
"text": "Higher temperatures also increase the probability of model hallucination. Hallucination is a phenomenon where an LLM produces statistically probable responses that are factually incorrect or nonsensical. As a result, optimal temperature selection is also a balance between creativity and hallucination [13].",
|
| 342 |
+
"bbox": [
|
| 343 |
+
109,
|
| 344 |
+
529,
|
| 345 |
+
883,
|
| 346 |
+
571
|
| 347 |
+
],
|
| 348 |
+
"page_idx": 1
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"type": "text",
|
| 352 |
+
"text": "Practical guidelines for choosing a sampling temperature for a specific task or problem domain are often vague or anecdotal. Prompt-engineering guides often provide hypothetical examples of optimal sampling temperatures for various tasks. However, they rarely cite any sources or provide empirical evidence.<sup>1</sup>",
|
| 353 |
+
"bbox": [
|
| 354 |
+
109,
|
| 355 |
+
578,
|
| 356 |
+
883,
|
| 357 |
+
621
|
| 358 |
+
],
|
| 359 |
+
"page_idx": 1
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"type": "text",
|
| 363 |
+
"text": "As a result, the current state of choosing the optimal sampling temperature for specific problems is largely based on guesswork, gut instinct, non-systematic experimentation, and iterative refinement.[2,3]",
|
| 364 |
+
"bbox": [
|
| 365 |
+
109,
|
| 366 |
+
626,
|
| 367 |
+
883,
|
| 368 |
+
656
|
| 369 |
+
],
|
| 370 |
+
"page_idx": 1
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"type": "text",
|
| 374 |
+
"text": "2 Methods",
|
| 375 |
+
"text_level": 1,
|
| 376 |
+
"bbox": [
|
| 377 |
+
112,
|
| 378 |
+
674,
|
| 379 |
+
220,
|
| 380 |
+
690
|
| 381 |
+
],
|
| 382 |
+
"page_idx": 1
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"type": "text",
|
| 386 |
+
"text": "2.1 Models",
|
| 387 |
+
"text_level": 1,
|
| 388 |
+
"bbox": [
|
| 389 |
+
112,
|
| 390 |
+
705,
|
| 391 |
+
205,
|
| 392 |
+
720
|
| 393 |
+
],
|
| 394 |
+
"page_idx": 1
|
| 395 |
+
},
|
| 396 |
+
{
|
| 397 |
+
"type": "text",
|
| 398 |
+
"text": "The models used in this research project comprise nine widely-used foundational LLMs. To complement our analysis, we also conducted experiments using five prompts created using commonly used prompt-engineering techniques.",
|
| 399 |
+
"bbox": [
|
| 400 |
+
109,
|
| 401 |
+
731,
|
| 402 |
+
883,
|
| 403 |
+
762
|
| 404 |
+
],
|
| 405 |
+
"page_idx": 1
|
| 406 |
+
},
|
| 407 |
+
{
|
| 408 |
+
"type": "text",
|
| 409 |
+
"text": "First, we reviewed the prior literature to identify candidate LLMs commonly used for problem-solving tasks. We limited our candidate models to those that allowed the model's sampling temperature to be specified via their API [4, 16-18]. See Table 1 for a list of LLMs used in the experiment.",
|
| 410 |
+
"bbox": [
|
| 411 |
+
109,
|
| 412 |
+
766,
|
| 413 |
+
883,
|
| 414 |
+
809
|
| 415 |
+
],
|
| 416 |
+
"page_idx": 1
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"type": "header",
|
| 420 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 421 |
+
"bbox": [
|
| 422 |
+
218,
|
| 423 |
+
42,
|
| 424 |
+
777,
|
| 425 |
+
56
|
| 426 |
+
],
|
| 427 |
+
"page_idx": 1
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"type": "page_footnote",
|
| 431 |
+
"text": "<sup>1</sup>A few empirical studies exist that indicate sampling temperature does have an effect on LLM performance on some types of problem-solving tasks (e.g., code generation, engineering exams, etc.) [11, 12, 14].",
|
| 432 |
+
"bbox": [
|
| 433 |
+
111,
|
| 434 |
+
819,
|
| 435 |
+
885,
|
| 436 |
+
845
|
| 437 |
+
],
|
| 438 |
+
"page_idx": 1
|
| 439 |
+
},
|
| 440 |
+
{
|
| 441 |
+
"type": "page_footnote",
|
| 442 |
+
"text": "For example, OpenAI's GPT-3.5 API allowed users to set the sampling temperature from 0.0 to 1.0 with a default of 0.7. GPT-4's API expanded this range from 0.0 to 2.0 with a default of 1.0. No explanation from OpenAI has been provided for these default values or their change from GPT-3.5 to GPT-4 [15].",
|
| 443 |
+
"bbox": [
|
| 444 |
+
111,
|
| 445 |
+
845,
|
| 446 |
+
883,
|
| 447 |
+
883
|
| 448 |
+
],
|
| 449 |
+
"page_idx": 1
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"type": "page_footnote",
|
| 453 |
+
"text": "<sup>3</sup>Even the GPT-4 Technical Report explains that the authors used their \"best-guess\" when choosing sampling temperatures while evaluating GPT-4 on various benchmarks. See Appendix A in the GPT-4 Technical Report [16].",
|
| 454 |
+
"bbox": [
|
| 455 |
+
111,
|
| 456 |
+
885,
|
| 457 |
+
883,
|
| 458 |
+
912
|
| 459 |
+
],
|
| 460 |
+
"page_idx": 1
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"type": "page_number",
|
| 464 |
+
"text": "2",
|
| 465 |
+
"bbox": [
|
| 466 |
+
493,
|
| 467 |
+
935,
|
| 468 |
+
504,
|
| 469 |
+
946
|
| 470 |
+
],
|
| 471 |
+
"page_idx": 1
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"type": "table",
|
| 475 |
+
"img_path": "images/c46bafe6d86231e9317e6f3f32162f42597c0cf0025d6a41443f10fe4db77264.jpg",
|
| 476 |
+
"table_caption": [],
|
| 477 |
+
"table_footnote": [],
|
| 478 |
+
"table_body": "<table><tr><td>Name</td><td>Vendor</td><td>Released</td><td>License</td><td>Source</td></tr><tr><td>Claude 3 Opus</td><td>Anthropic</td><td>2024-03-04</td><td>Closed</td><td>[19,20]</td></tr><tr><td>Command R+</td><td>Cohere</td><td>2024-04-04</td><td>Open</td><td>[21,22]</td></tr><tr><td>Gemini 1.0 Pro</td><td>Google</td><td>2023-12-06</td><td>Closed</td><td>[23,24]</td></tr><tr><td>Gemini 1.5 Pro (Preview)</td><td>Google</td><td>2024-02-15</td><td>Closed</td><td>[25,26]</td></tr><tr><td>GPT-3.5 Turbo</td><td>OpenAI</td><td>2022-11-30</td><td>Closed</td><td>[17,27]</td></tr><tr><td>GPT-4</td><td>OpenAI</td><td>2023-03-14</td><td>Closed</td><td>[16,18]</td></tr><tr><td>Llama 2 7B Chat</td><td>Meta</td><td>2023-07-18</td><td>Open</td><td>[4,28]</td></tr><tr><td>Llama 2 70B Chat</td><td>Meta</td><td>2023-07-18</td><td>Open</td><td>[4,28]</td></tr><tr><td>Mistral Large</td><td>Mistral AI</td><td>2024-02-26</td><td>Closed</td><td>[29]</td></tr></table>",
|
| 479 |
+
"bbox": [
|
| 480 |
+
261,
|
| 481 |
+
88,
|
| 482 |
+
736,
|
| 483 |
+
229
|
| 484 |
+
],
|
| 485 |
+
"page_idx": 2
|
| 486 |
+
},
|
| 487 |
+
{
|
| 488 |
+
"type": "text",
|
| 489 |
+
"text": "Table 1: LLMs used in the experiment.",
|
| 490 |
+
"bbox": [
|
| 491 |
+
367,
|
| 492 |
+
243,
|
| 493 |
+
625,
|
| 494 |
+
258
|
| 495 |
+
],
|
| 496 |
+
"page_idx": 2
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"type": "text",
|
| 500 |
+
"text": "Next, we reviewed the existing literature for commonly used prompt-engineering techniques. We limited our candidate prompts to those that could be performed in a single request-and-response cycle with one-shot in-context learning. We excluded multi-step agents, few-shot learning, and model fine-tuning.",
|
| 501 |
+
"bbox": [
|
| 502 |
+
109,
|
| 503 |
+
281,
|
| 504 |
+
883,
|
| 505 |
+
324
|
| 506 |
+
],
|
| 507 |
+
"page_idx": 2
|
| 508 |
+
},
|
| 509 |
+
{
|
| 510 |
+
"type": "text",
|
| 511 |
+
"text": "As a result, we selected five prompt-engineering techniques to construct our system prompts:",
|
| 512 |
+
"bbox": [
|
| 513 |
+
109,
|
| 514 |
+
330,
|
| 515 |
+
723,
|
| 516 |
+
344
|
| 517 |
+
],
|
| 518 |
+
"page_idx": 2
|
| 519 |
+
},
|
| 520 |
+
{
|
| 521 |
+
"type": "list",
|
| 522 |
+
"sub_type": "text",
|
| 523 |
+
"list_items": [
|
| 524 |
+
"- Baseline - no prompt engineering; the LLM is instructed to return only a single multiple-choice answer as its output (e.g., 'Answer(\"C\")').",
|
| 525 |
+
"- Domain Expertise – the system prompt specifies that the LLM is an expert in the problem domain of the exam (e.g., “medicine”) or the topic of the problem (e.g., “anatomy”) [2].",
|
| 526 |
+
"- Self-recitation – the system prompt instructs the LLM to recite its own internal knowledge about the problem before answering the question [2, 30].",
|
| 527 |
+
"- Chain-of-Thought (CoT) – the system prompt instructs the LLM to “think step-by-step” to encourage it to reason through the problem procedurally [31, 32].",
|
| 528 |
+
"- Composite – the system prompt combines domain expertise, self-recitation, chain-of-thought, and adds self-criticism [33, 34]."
|
| 529 |
+
],
|
| 530 |
+
"bbox": [
|
| 531 |
+
156,
|
| 532 |
+
357,
|
| 533 |
+
880,
|
| 534 |
+
515
|
| 535 |
+
],
|
| 536 |
+
"page_idx": 2
|
| 537 |
+
},
|
| 538 |
+
{
|
| 539 |
+
"type": "text",
|
| 540 |
+
"text": "Finally, we provided the LLM with a single example problem-and-solution pair for one-shot in-context learning. The example solution was adapted for each prompt based on the prompt-engineering technique used. For example, the CoT prompt included a chain of thought in its solution. See Figure 10 in the Appendix for a sample prompt.",
|
| 541 |
+
"bbox": [
|
| 542 |
+
109,
|
| 543 |
+
529,
|
| 544 |
+
883,
|
| 545 |
+
574
|
| 546 |
+
],
|
| 547 |
+
"page_idx": 2
|
| 548 |
+
},
|
| 549 |
+
{
|
| 550 |
+
"type": "text",
|
| 551 |
+
"text": "2.2 Data",
|
| 552 |
+
"text_level": 1,
|
| 553 |
+
"bbox": [
|
| 554 |
+
112,
|
| 555 |
+
594,
|
| 556 |
+
187,
|
| 557 |
+
607
|
| 558 |
+
],
|
| 559 |
+
"page_idx": 2
|
| 560 |
+
},
|
| 561 |
+
{
|
| 562 |
+
"type": "text",
|
| 563 |
+
"text": "The test dataset used in this research study consists of a series of Multiple-Choice Question-and-Answer (MCQA) exams derived from widely used LLM performance benchmarks.",
|
| 564 |
+
"bbox": [
|
| 565 |
+
109,
|
| 566 |
+
621,
|
| 567 |
+
883,
|
| 568 |
+
650
|
| 569 |
+
],
|
| 570 |
+
"page_idx": 2
|
| 571 |
+
},
|
| 572 |
+
{
|
| 573 |
+
"type": "text",
|
| 574 |
+
"text": "First, we reviewed the prior literature to identify benchmarks frequently used to evaluate LLMs. We limited our candidate benchmarks to those containing MCQA problems so that we could use correct-answer accuracy as our primary performance metric.",
|
| 575 |
+
"bbox": [
|
| 576 |
+
109,
|
| 577 |
+
656,
|
| 578 |
+
883,
|
| 579 |
+
698
|
| 580 |
+
],
|
| 581 |
+
"page_idx": 2
|
| 582 |
+
},
|
| 583 |
+
{
|
| 584 |
+
"type": "text",
|
| 585 |
+
"text": "Next, we selected a set of problems that covered a range of problem domains (e.g., math, science, law, etc.) and difficulty levels (e.g., secondary school, university, etc.) These problem sets can be seen in Table 2.",
|
| 586 |
+
"bbox": [
|
| 587 |
+
109,
|
| 588 |
+
704,
|
| 589 |
+
880,
|
| 590 |
+
733
|
| 591 |
+
],
|
| 592 |
+
"page_idx": 2
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "text",
|
| 596 |
+
"text": "Then, we converted the benchmark problems from their original data format into a standardized data structure using the JSON Lines (JSON-L) format [41]. Our standardized set of exams allowed us to use the exams interchangeably without modifying the code in the test harness. See Figure 11 in the Appendix for a sample of an MCQA problem.",
|
| 597 |
+
"bbox": [
|
| 598 |
+
109,
|
| 599 |
+
738,
|
| 600 |
+
880,
|
| 601 |
+
781
|
| 602 |
+
],
|
| 603 |
+
"page_idx": 2
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"type": "text",
|
| 607 |
+
"text": "Finally, we created two MCQA exams of different sizes. We created a large exam with 1,000 questions by randomly sampling 100 problems from each of the ten problem sets. This 1,000-question (large) exam was used with GPT-3.5 to perform a detailed analysis of temperature across problem domains.",
|
| 608 |
+
"bbox": [
|
| 609 |
+
109,
|
| 610 |
+
787,
|
| 611 |
+
880,
|
| 612 |
+
829
|
| 613 |
+
],
|
| 614 |
+
"page_idx": 2
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "text",
|
| 618 |
+
"text": "Additionally, we created a smaller exam of 100 questions by randomly sampling ten questions from each of the ten domain-specific problem sets. This 100-question (small) exam was used for our high-level analysis of sampling temperature across all nine models, all five prompt-engineering techniques, and extended temperature range (0.0-1.6).<sup>4</sup>",
|
| 619 |
+
"bbox": [
|
| 620 |
+
109,
|
| 621 |
+
835,
|
| 622 |
+
880,
|
| 623 |
+
878
|
| 624 |
+
],
|
| 625 |
+
"page_idx": 2
|
| 626 |
+
},
|
| 627 |
+
{
|
| 628 |
+
"type": "header",
|
| 629 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 630 |
+
"bbox": [
|
| 631 |
+
218,
|
| 632 |
+
42,
|
| 633 |
+
777,
|
| 634 |
+
56
|
| 635 |
+
],
|
| 636 |
+
"page_idx": 2
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "page_footnote",
|
| 640 |
+
"text": "<sup>4</sup>We used the smaller 100-question exam due to cost and runtime considerations.",
|
| 641 |
+
"bbox": [
|
| 642 |
+
133,
|
| 643 |
+
897,
|
| 644 |
+
609,
|
| 645 |
+
911
|
| 646 |
+
],
|
| 647 |
+
"page_idx": 2
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"type": "page_number",
|
| 651 |
+
"text": "3",
|
| 652 |
+
"bbox": [
|
| 653 |
+
493,
|
| 654 |
+
935,
|
| 655 |
+
503,
|
| 656 |
+
946
|
| 657 |
+
],
|
| 658 |
+
"page_idx": 2
|
| 659 |
+
},
|
| 660 |
+
{
|
| 661 |
+
"type": "table",
|
| 662 |
+
"img_path": "images/2d6f1b7a9b684d1fc335d54a231d35bafe66d22f17b2ceb45f0e4788c4a3b926.jpg",
|
| 663 |
+
"table_caption": [],
|
| 664 |
+
"table_footnote": [],
|
| 665 |
+
"table_body": "<table><tr><td>Problem Set</td><td>Benchmark</td><td>Domain</td><td>Questions</td><td>License</td><td>Source</td></tr><tr><td>ARC Challenge Test</td><td>ARC</td><td>Science</td><td>1,173</td><td>CC BY-SA</td><td>[35]</td></tr><tr><td>AQUA-RAT</td><td>AGI Eval</td><td>Math</td><td>254</td><td>Apache v2.0</td><td>[36]</td></tr><tr><td>Hellaswag Val</td><td>Hellaswag</td><td>Common Sense Reasoning</td><td>10,042</td><td>MIT</td><td>[37]</td></tr><tr><td>LogiQA (English)</td><td>AGI Eval</td><td>Logic</td><td>651</td><td>GitHub</td><td>[36, 38]</td></tr><tr><td>LSAT-AR</td><td>AGI Eval</td><td>Law (Analytic Reasoning)</td><td>230</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>LSAT-LR</td><td>AGI Eval</td><td>Law (Logical Reasoning)</td><td>510</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>LSAT-RC</td><td>AGI Eval</td><td>Law (Reading Comprehension)</td><td>260</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>MedMCQA Valid</td><td>MedMCQA</td><td>Medicine</td><td>6,150</td><td>MIT</td><td>[40]</td></tr><tr><td>SAT-English</td><td>AGI Eval</td><td>English</td><td>206</td><td>MIT</td><td>[36]</td></tr><tr><td>SAT-Math</td><td>AGI Eval</td><td>Math</td><td>220</td><td>MIT</td><td>[36]</td></tr></table>",
|
| 666 |
+
"bbox": [
|
| 667 |
+
158,
|
| 668 |
+
88,
|
| 669 |
+
841,
|
| 670 |
+
244
|
| 671 |
+
],
|
| 672 |
+
"page_idx": 3
|
| 673 |
+
},
|
| 674 |
+
{
|
| 675 |
+
"type": "text",
|
| 676 |
+
"text": "Note: The GitHub repository for LogiQA does not include a license file. However, both the paper and readme.md file states that \"The dataset is freely available.\"",
|
| 677 |
+
"bbox": [
|
| 678 |
+
111,
|
| 679 |
+
279,
|
| 680 |
+
883,
|
| 681 |
+
306
|
| 682 |
+
],
|
| 683 |
+
"page_idx": 3
|
| 684 |
+
},
|
| 685 |
+
{
|
| 686 |
+
"type": "text",
|
| 687 |
+
"text": "2.3 Process",
|
| 688 |
+
"text_level": 1,
|
| 689 |
+
"bbox": [
|
| 690 |
+
112,
|
| 691 |
+
334,
|
| 692 |
+
207,
|
| 693 |
+
348
|
| 694 |
+
],
|
| 695 |
+
"page_idx": 3
|
| 696 |
+
},
|
| 697 |
+
{
|
| 698 |
+
"type": "text",
|
| 699 |
+
"text": "Our experiment was designed to test the problem-solving performance of LLMs across ten models, five prompt-engineering techniques, ten problem domains, 100 problems within each problem domain, and all viable sampling temperatures. For each combination of model, prompt, exam, and temperature, we instructed the LLM to answer each question ten times so we could assess the average correct-answer accuracy.",
|
| 700 |
+
"bbox": [
|
| 701 |
+
109,
|
| 702 |
+
359,
|
| 703 |
+
883,
|
| 704 |
+
417
|
| 705 |
+
],
|
| 706 |
+
"page_idx": 3
|
| 707 |
+
},
|
| 708 |
+
{
|
| 709 |
+
"type": "text",
|
| 710 |
+
"text": "The full experiment setup can be seen in Figure 1 and Algorithm 1. However, due to cost and runtime considerations, we conducted a subset of the full experiment designed to capture the most valuable information as efficiently as possible.",
|
| 711 |
+
"bbox": [
|
| 712 |
+
111,
|
| 713 |
+
422,
|
| 714 |
+
883,
|
| 715 |
+
452
|
| 716 |
+
],
|
| 717 |
+
"page_idx": 3
|
| 718 |
+
},
|
| 719 |
+
{
|
| 720 |
+
"type": "image",
|
| 721 |
+
"img_path": "images/09a5aa74d8ac507fd0111994f51072c7d8100a8fda55ac73a82c5d90b1931a48.jpg",
|
| 722 |
+
"image_caption": [
|
| 723 |
+
"Figure 1: Diagram of the full experiment."
|
| 724 |
+
],
|
| 725 |
+
"image_footnote": [],
|
| 726 |
+
"bbox": [
|
| 727 |
+
133,
|
| 728 |
+
496,
|
| 729 |
+
468,
|
| 730 |
+
676
|
| 731 |
+
],
|
| 732 |
+
"page_idx": 3
|
| 733 |
+
},
|
| 734 |
+
{
|
| 735 |
+
"type": "table",
|
| 736 |
+
"img_path": "images/383d5fc97a6f658f9cff921bb78c6fa35e70e87b2f2588900f4eae29cb2b6a31.jpg",
|
| 737 |
+
"table_caption": [
|
| 738 |
+
"Table 2: Problem sets used to create the multi-domain MCQA exam."
|
| 739 |
+
],
|
| 740 |
+
"table_footnote": [],
|
| 741 |
+
"table_body": "<table><tr><td colspan=\"2\">Algorithm 1 Full LLM Temperature Experiment</td></tr><tr><td>1:</td><td>for each model m in M do ▷ 10 models</td></tr><tr><td>2:</td><td>for each prompt p in P do ▷ 5 prompts</td></tr><tr><td>3:</td><td>for each exam e in E do ▷ 10 exams</td></tr><tr><td>4:</td><td>for each temperature τ in T do ▷ 16 temps</td></tr><tr><td>5:</td><td>for each problem q in Q do ▷ 100 prob</td></tr><tr><td>6:</td><td>for each attempt a in A do ▷ 10 attempts</td></tr><tr><td>7:</td><td>Create the prompt</td></tr><tr><td>8:</td><td>Answer the question</td></tr><tr><td>9:</td><td>Record the answer</td></tr><tr><td>10:</td><td>end for</td></tr><tr><td>11:</td><td>end for</td></tr><tr><td>12:</td><td>end for</td></tr><tr><td>13:</td><td>Save the results</td></tr><tr><td>14:</td><td>end for</td></tr><tr><td>15:</td><td>end for</td></tr><tr><td>16:</td><td>end for</td></tr><tr><td>17:</td><td>Process the results</td></tr><tr><td>18:</td><td>Analyze the results</td></tr></table>",
|
| 742 |
+
"bbox": [
|
| 743 |
+
511,
|
| 744 |
+
481,
|
| 745 |
+
883,
|
| 746 |
+
757
|
| 747 |
+
],
|
| 748 |
+
"page_idx": 3
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"text": "First, we instructed GPT-3.5 to complete the 100-question (small) exam using the CoT prompt with temperatures ranging from 0.0 to 2.0 in increments of 0.1. This allowed us to determine the range of viable sampling temperatures to explore.",
|
| 753 |
+
"bbox": [
|
| 754 |
+
111,
|
| 755 |
+
772,
|
| 756 |
+
883,
|
| 757 |
+
815
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 3
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"text": "Performance began to drop rapidly after a temperature of 1.0 until the generated text became incoherent at 1.6. As a result, we stopped the initial temperature sweep at 1.6 and limited the rest of our sweeps from 0.0 to 1.0.",
|
| 764 |
+
"bbox": [
|
| 765 |
+
111,
|
| 766 |
+
821,
|
| 767 |
+
883,
|
| 768 |
+
849
|
| 769 |
+
],
|
| 770 |
+
"page_idx": 3
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "text",
|
| 774 |
+
"text": "Next, we instructed the other eight LLMs to complete the 100-question (small) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize to other LLMs.",
|
| 775 |
+
"bbox": [
|
| 776 |
+
111,
|
| 777 |
+
854,
|
| 778 |
+
883,
|
| 779 |
+
883
|
| 780 |
+
],
|
| 781 |
+
"page_idx": 3
|
| 782 |
+
},
|
| 783 |
+
{
|
| 784 |
+
"type": "header",
|
| 785 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 786 |
+
"bbox": [
|
| 787 |
+
218,
|
| 788 |
+
42,
|
| 789 |
+
774,
|
| 790 |
+
56
|
| 791 |
+
],
|
| 792 |
+
"page_idx": 3
|
| 793 |
+
},
|
| 794 |
+
{
|
| 795 |
+
"type": "page_footnote",
|
| 796 |
+
"text": "For this experiment, we fixed all other sampling parameters (e.g., top-k, top-p, etc.) to isolate the effect of temperature.",
|
| 797 |
+
"bbox": [
|
| 798 |
+
132,
|
| 799 |
+
896,
|
| 800 |
+
839,
|
| 801 |
+
912
|
| 802 |
+
],
|
| 803 |
+
"page_idx": 3
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "page_number",
|
| 807 |
+
"text": "4",
|
| 808 |
+
"bbox": [
|
| 809 |
+
493,
|
| 810 |
+
935,
|
| 811 |
+
503,
|
| 812 |
+
946
|
| 813 |
+
],
|
| 814 |
+
"page_idx": 3
|
| 815 |
+
},
|
| 816 |
+
{
|
| 817 |
+
"type": "text",
|
| 818 |
+
"text": "Then, we instructed GPT-3.5 to complete the 100-question (small) exam using each of the five prompts over temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize over various prompt-engineering techniques.",
|
| 819 |
+
"bbox": [
|
| 820 |
+
109,
|
| 821 |
+
90,
|
| 822 |
+
883,
|
| 823 |
+
119
|
| 824 |
+
],
|
| 825 |
+
"page_idx": 4
|
| 826 |
+
},
|
| 827 |
+
{
|
| 828 |
+
"type": "text",
|
| 829 |
+
"text": "Finally, we instructed GPT-3.5 to complete the 1,000-question (large) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results were domain-specific or generalized across problem domains.",
|
| 830 |
+
"bbox": [
|
| 831 |
+
109,
|
| 832 |
+
126,
|
| 833 |
+
883,
|
| 834 |
+
156
|
| 835 |
+
],
|
| 836 |
+
"page_idx": 4
|
| 837 |
+
},
|
| 838 |
+
{
|
| 839 |
+
"type": "text",
|
| 840 |
+
"text": "2.4 Metrics",
|
| 841 |
+
"text_level": 1,
|
| 842 |
+
"bbox": [
|
| 843 |
+
112,
|
| 844 |
+
172,
|
| 845 |
+
209,
|
| 846 |
+
186
|
| 847 |
+
],
|
| 848 |
+
"page_idx": 4
|
| 849 |
+
},
|
| 850 |
+
{
|
| 851 |
+
"type": "text",
|
| 852 |
+
"text": "To test our hypothesis, we measured the LLM's correct-answer accuracy as our primary performance metric. For each combination of model, prompt, exam, and temperature, we calculated the accuracy as the number of correct answers from ten attempts at each problem. Then, we computed the average (mean) accuracy across all problems.",
|
| 853 |
+
"bbox": [
|
| 854 |
+
109,
|
| 855 |
+
200,
|
| 856 |
+
883,
|
| 857 |
+
244
|
| 858 |
+
],
|
| 859 |
+
"page_idx": 4
|
| 860 |
+
},
|
| 861 |
+
{
|
| 862 |
+
"type": "text",
|
| 863 |
+
"text": "To further support our findings, we also measured the similarity of the LLM's responses using a series of text-similarity metrics. These metrics are defined as follows:",
|
| 864 |
+
"bbox": [
|
| 865 |
+
109,
|
| 866 |
+
250,
|
| 867 |
+
883,
|
| 868 |
+
277
|
| 869 |
+
],
|
| 870 |
+
"page_idx": 4
|
| 871 |
+
},
|
| 872 |
+
{
|
| 873 |
+
"type": "list",
|
| 874 |
+
"sub_type": "text",
|
| 875 |
+
"list_items": [
|
| 876 |
+
"- Jaccard similarity - the ratio of the intersection to the union of word sets in the output text [42].",
|
| 877 |
+
"- Bag-of-Words (BoW) similarity - comparison of the frequency of each word, ignoring order [43].",
|
| 878 |
+
"- TF-IDF similarity - comparison of word frequency weighted by inverse document frequency [44].",
|
| 879 |
+
"Levenshtein similarity - the number of edits needed to change one string of text into the other [45].",
|
| 880 |
+
"- BLEU score - comparison of similarity based on n-gram overlap [46].",
|
| 881 |
+
"- SBERT similarity - semantic similarity computed using Sentence-BERT embeddings [47]."
|
| 882 |
+
],
|
| 883 |
+
"bbox": [
|
| 884 |
+
156,
|
| 885 |
+
290,
|
| 886 |
+
830,
|
| 887 |
+
393
|
| 888 |
+
],
|
| 889 |
+
"page_idx": 4
|
| 890 |
+
},
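A sketch of two of these metrics, using a whitespace word split for Jaccard similarity and scikit-learn for TF-IDF cosine similarity. The paper does not specify its implementation, so treat this as one reasonable reconstruction rather than the authors' code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard_similarity(a: str, b: str) -> float:
    """Ratio of the intersection to the union of the two word sets [42]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def mean_pairwise_tfidf_similarity(responses: list[str]) -> float:
    """Average pairwise TF-IDF cosine similarity over a set of responses [44]."""
    if len(responses) < 2:
        return 1.0
    tfidf = TfidfVectorizer().fit_transform(responses)
    sims = cosine_similarity(tfidf)
    n = len(responses)
    pairs = [sims[i, j] for i in range(n) for j in range(i + 1, n)]
    return float(sum(pairs) / len(pairs))
```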
|
| 891 |
+
{
|
| 892 |
+
"type": "text",
|
| 893 |
+
"text": "2.5 Analysis",
|
| 894 |
+
"text_level": 1,
|
| 895 |
+
"bbox": [
|
| 896 |
+
112,
|
| 897 |
+
414,
|
| 898 |
+
215,
|
| 899 |
+
429
|
| 900 |
+
],
|
| 901 |
+
"page_idx": 4
|
| 902 |
+
},
|
| 903 |
+
{
|
| 904 |
+
"type": "text",
|
| 905 |
+
"text": "We used the Kruskal-Wallis test to evaluate the statistical significance of any changes in accuracy as a function of temperature [48]. We chose the Kruskal-Wallis test because the data (i.e., correct-answer accuracy by question) were not normally distributed. Rather, they were bimodally distributed with centers at 0.0 and 1.0.",
|
| 906 |
+
"bbox": [
|
| 907 |
+
109,
|
| 908 |
+
440,
|
| 909 |
+
883,
|
| 910 |
+
484
|
| 911 |
+
],
|
| 912 |
+
"page_idx": 4
|
| 913 |
+
},
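A sketch of how this test could be run with SciPy, assuming one group of per-question accuracies per temperature; eleven groups covering 0.0 through 1.0 yield the H(10) statistics reported below. The grouping and variable names are illustrative.

```python
from scipy import stats

def kruskal_wallis_by_temperature(per_question_accuracy: dict[float, list[float]]):
    """Kruskal-Wallis H-test across temperature groups [48].

    `per_question_accuracy[t]` holds the per-question accuracies observed at
    temperature t. With k groups the statistic has k - 1 degrees of freedom.
    """
    groups = [per_question_accuracy[t] for t in sorted(per_question_accuracy)]
    h_stat, p_value = stats.kruskal(*groups)
    return h_stat, p_value  # compare p_value against the 0.05 threshold
```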
|
| 914 |
+
{
|
| 915 |
+
"type": "text",
|
| 916 |
+
"text": "3 Results",
|
| 917 |
+
"text_level": 1,
|
| 918 |
+
"bbox": [
|
| 919 |
+
112,
|
| 920 |
+
506,
|
| 921 |
+
209,
|
| 922 |
+
522
|
| 923 |
+
],
|
| 924 |
+
"page_idx": 4
|
| 925 |
+
},
|
| 926 |
+
{
|
| 927 |
+
"type": "text",
|
| 928 |
+
"text": "3.1 Accuracy vs. Temperature",
|
| 929 |
+
"text_level": 1,
|
| 930 |
+
"bbox": [
|
| 931 |
+
112,
|
| 932 |
+
539,
|
| 933 |
+
339,
|
| 934 |
+
555
|
| 935 |
+
],
|
| 936 |
+
"page_idx": 4
|
| 937 |
+
},
|
| 938 |
+
{
|
| 939 |
+
"type": "text",
|
| 940 |
+
"text": "Our analysis revealed that the problem-solving performance of LLMs remained relatively stable across sampling temperatures from 0.0 to 1.0 for all LLMs, prompt-engineering techniques, and problem domains. Using GPT-3.5 with a CoT prompt on the 1,000-question exam from 0.0 to 1.0, the Kruskal-Wallis test yielded $H(10) = 10.439$ , $p = 0.403$ .",
|
| 941 |
+
"bbox": [
|
| 942 |
+
109,
|
| 943 |
+
566,
|
| 944 |
+
883,
|
| 945 |
+
609
|
| 946 |
+
],
|
| 947 |
+
"page_idx": 4
|
| 948 |
+
},
|
| 949 |
+
{
|
| 950 |
+
"type": "text",
|
| 951 |
+
"text": "First, we analyzed the performance of GPT-3.5 using the CoT prompt on the 100-question exam. Accuracy remained stable over temperatures from 0.0 to 1.0. However, after a temperature of 1.0, the text rapidly became incoherent, and the accuracy began to drop until it reached zero around a temperature of 1.6. See Figure 2.",
|
| 952 |
+
"bbox": [
|
| 953 |
+
109,
|
| 954 |
+
614,
|
| 955 |
+
883,
|
| 956 |
+
657
|
| 957 |
+
],
|
| 958 |
+
"page_idx": 4
|
| 959 |
+
},
|
| 960 |
+
{
|
| 961 |
+
"type": "image",
|
| 962 |
+
"img_path": "images/d812f7b0d424e7278bea2190a0fe91de1b38c1eb05999c95698ef8d54461fd57.jpg",
|
| 963 |
+
"image_caption": [
|
| 964 |
+
"Figure 2: Accuracy by temperature from 0.0 to 1.6 for GPT-3.5 using the CoT prompt on the 100-question exam."
|
| 965 |
+
],
|
| 966 |
+
"image_footnote": [],
|
| 967 |
+
"bbox": [
|
| 968 |
+
321,
|
| 969 |
+
684,
|
| 970 |
+
676,
|
| 971 |
+
875
|
| 972 |
+
],
|
| 973 |
+
"page_idx": 4
|
| 974 |
+
},
|
| 975 |
+
{
|
| 976 |
+
"type": "header",
|
| 977 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 978 |
+
"bbox": [
|
| 979 |
+
218,
|
| 980 |
+
42,
|
| 981 |
+
777,
|
| 982 |
+
56
|
| 983 |
+
],
|
| 984 |
+
"page_idx": 4
|
| 985 |
+
},
|
| 986 |
+
{
|
| 987 |
+
"type": "page_number",
|
| 988 |
+
"text": "5",
|
| 989 |
+
"bbox": [
|
| 990 |
+
493,
|
| 991 |
+
935,
|
| 992 |
+
503,
|
| 993 |
+
946
|
| 994 |
+
],
|
| 995 |
+
"page_idx": 4
|
| 996 |
+
},
|
| 997 |
+
{
|
| 998 |
+
"type": "text",
|
| 999 |
+
"text": "Second, we analyzed the performance of all nine LLMs using the CoT prompt on the 100-question exam. Accuracy also remained stable across all of the LLMs, except for Llama 2 7B. The performance of most LLMs showed a gradual (non-significant) decrease in performance as a function of temperature. See Figure 3 and Table 3.",
|
| 1000 |
+
"bbox": [
|
| 1001 |
+
109,
|
| 1002 |
+
90,
|
| 1003 |
+
883,
|
| 1004 |
+
133
|
| 1005 |
+
],
|
| 1006 |
+
"page_idx": 5
|
| 1007 |
+
},
|
| 1008 |
+
{
|
| 1009 |
+
"type": "image",
|
| 1010 |
+
"img_path": "images/0964e010d94e722ebbeb21c71512a7614ea6ad219e4e5f8fff96b42fe3411661.jpg",
|
| 1011 |
+
"image_caption": [
|
| 1012 |
+
"Figure 3: Accuracy by temperature and model using the CoT prompt on the 100-question exam."
|
| 1013 |
+
],
|
| 1014 |
+
"image_footnote": [],
|
| 1015 |
+
"bbox": [
|
| 1016 |
+
130,
|
| 1017 |
+
161,
|
| 1018 |
+
550,
|
| 1019 |
+
383
|
| 1020 |
+
],
|
| 1021 |
+
"page_idx": 5
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "table",
|
| 1025 |
+
"img_path": "images/5069e8abb8ca887571d1a3191384649e3fb29ca737e69bfa31bb40124fd95cd5.jpg",
|
| 1026 |
+
"table_caption": [],
|
| 1027 |
+
"table_footnote": [],
|
| 1028 |
+
"table_body": "<table><tr><td>Model</td><td>H(10)</td><td>p-value</td></tr><tr><td>Claude 3 Opus</td><td>1.735</td><td>0.998</td></tr><tr><td>Command R+</td><td>1.771</td><td>0.998</td></tr><tr><td>Gemini Pro 1.0</td><td>7.379</td><td>0.689</td></tr><tr><td>Gemini Pro 1.5</td><td>3.119</td><td>0.978</td></tr><tr><td>GPT-3.5 Turbo</td><td>2.042</td><td>0.996</td></tr><tr><td>GPT-4</td><td>3.789</td><td>0.956</td></tr><tr><td>Llama 2 70B</td><td>3.677</td><td>0.961</td></tr><tr><td>Llama 2 7B</td><td>17.086</td><td>0.072</td></tr><tr><td>Mistral Large</td><td>3.069</td><td>0.980</td></tr></table>",
|
| 1029 |
+
"bbox": [
|
| 1030 |
+
604,
|
| 1031 |
+
167,
|
| 1032 |
+
867,
|
| 1033 |
+
323
|
| 1034 |
+
],
|
| 1035 |
+
"page_idx": 5
|
| 1036 |
+
},
|
| 1037 |
+
{
|
| 1038 |
+
"type": "text",
|
| 1039 |
+
"text": "Llama 2 7B did not perform better than statistically random guesses. Its poor performance was due to generating incorrectly formatted answers (39%) and correctly formatted but incorrect answers (36%). Its all-or-nothing behavior at a temperature of 0.0 versus more random behavior from 0.1 to 1.0 led to a much lower, yet still non-significant, p-value.",
|
| 1040 |
+
"bbox": [
|
| 1041 |
+
109,
|
| 1042 |
+
446,
|
| 1043 |
+
883,
|
| 1044 |
+
489
|
| 1045 |
+
],
|
| 1046 |
+
"page_idx": 5
|
| 1047 |
+
},
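Tallying incorrectly formatted answers requires parsing the `Action: Answer("...")` format shown in the appendix prompt. A hedged sketch follows; the regex is an assumed reconstruction of that format, not the authors' parser.

```python
import re

# Accepts minor quoting variations such as Answer("C"), Answer('C'), Answer(C).
ANSWER_RE = re.compile(r'Answer\(\s*["\']?([A-Za-z0-9])["\']?\s*\)')

def extract_choice(response: str):
    """Pull the selected choice out of an 'Action: Answer("...")' line.

    Returns None for incorrectly formatted responses, which is how a
    malformed-answer rate like the 39% reported for Llama 2 7B could be
    tallied.
    """
    match = ANSWER_RE.search(response)
    return match.group(1).upper() if match else None
```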
|
| 1048 |
+
{
|
| 1049 |
+
"type": "text",
|
| 1050 |
+
"text": "Third, we analyzed the performance of GPT-3.5 using each of the five prompts on the 100-question exam. Accuracy remained stable for all temperatures across all prompt-engineering techniques. The CoT prompt outperformed the other four prompts. As a result, we used the CoT prompt for all single-prompt experiments. See Figure 4 and Table 4.",
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
109,
|
| 1053 |
+
494,
|
| 1054 |
+
883,
|
| 1055 |
+
537
|
| 1056 |
+
],
|
| 1057 |
+
"page_idx": 5
|
| 1058 |
+
},
|
| 1059 |
+
{
|
| 1060 |
+
"type": "image",
|
| 1061 |
+
"img_path": "images/5ea7eb3dd21fabd38ab396af1763fa0063e76730c7949637cc7cf2b640a04854.jpg",
|
| 1062 |
+
"image_caption": [
|
| 1063 |
+
"Figure 4: Accuracy by temperature and prompt for GPT-3.5 on the 100-question exam."
|
| 1064 |
+
],
|
| 1065 |
+
"image_footnote": [],
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
130,
|
| 1068 |
+
563,
|
| 1069 |
+
470,
|
| 1070 |
+
779
|
| 1071 |
+
],
|
| 1072 |
+
"page_idx": 5
|
| 1073 |
+
},
|
| 1074 |
+
{
|
| 1075 |
+
"type": "table",
|
| 1076 |
+
"img_path": "images/14175b2ea07a452c2bad817afb85f7f30a38c9ec800d7337103ce1f02cde8b93.jpg",
|
| 1077 |
+
"table_caption": [
|
| 1078 |
+
"Table 3: Kruskal-Wallis test results by model using the CoT prompt on the 100-question exam."
|
| 1079 |
+
],
|
| 1080 |
+
"table_footnote": [],
|
| 1081 |
+
"table_body": "<table><tr><td>Prompt</td><td>H(10)</td><td>p-value</td></tr><tr><td>Baseline</td><td>0.420</td><td>1.000</td></tr><tr><td>Domain Expert</td><td>0.548</td><td>1.000</td></tr><tr><td>Self-recitation</td><td>1.403</td><td>0.999</td></tr><tr><td>Chain of Thought</td><td>2.042</td><td>0.996</td></tr><tr><td>Composite</td><td>1.000</td><td>1.000</td></tr></table>",
|
| 1082 |
+
"bbox": [
|
| 1083 |
+
563,
|
| 1084 |
+
566,
|
| 1085 |
+
834,
|
| 1086 |
+
667
|
| 1087 |
+
],
|
| 1088 |
+
"page_idx": 5
|
| 1089 |
+
},
|
| 1101 |
+
{
|
| 1102 |
+
"type": "text",
|
| 1103 |
+
"text": "Finally, we analyzed the performance of GPT-3.5 using the CoT prompt on all ten exams. Accuracy remained stable for all temperatures across all problem domains based on visual analysis. However, the LSAT-AR and SAT-Math exams showed statistically significant differences in the Kruskal-Wallis p-values. See Figure 5 and Table 5.",
|
| 1104 |
+
"bbox": [
|
| 1105 |
+
109,
|
| 1106 |
+
842,
|
| 1107 |
+
883,
|
| 1108 |
+
886
|
| 1109 |
+
],
|
| 1110 |
+
"page_idx": 5
|
| 1111 |
+
},
|
| 1112 |
+
{
|
| 1113 |
+
"type": "header",
|
| 1114 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
218,
|
| 1117 |
+
42,
|
| 1118 |
+
774,
|
| 1119 |
+
56
|
| 1120 |
+
],
|
| 1121 |
+
"page_idx": 5
|
| 1122 |
+
},
|
| 1123 |
+
{
|
| 1124 |
+
"type": "page_footnote",
|
| 1125 |
+
"text": "<sup>6</sup>We considered the ARC Challenge results to be non-significant since they were greater than the significance threshold of 0.05.",
|
| 1126 |
+
"bbox": [
|
| 1127 |
+
132,
|
| 1128 |
+
897,
|
| 1129 |
+
880,
|
| 1130 |
+
911
|
| 1131 |
+
],
|
| 1132 |
+
"page_idx": 5
|
| 1133 |
+
},
|
| 1134 |
+
{
|
| 1135 |
+
"type": "page_number",
|
| 1136 |
+
"text": "6",
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
493,
|
| 1139 |
+
936,
|
| 1140 |
+
503,
|
| 1141 |
+
946
|
| 1142 |
+
],
|
| 1143 |
+
"page_idx": 5
|
| 1144 |
+
},
|
| 1145 |
+
{
|
| 1146 |
+
"type": "image",
|
| 1147 |
+
"img_path": "images/f83d94bffc91236c411c4cf1fc8af484da9d9015f3ed54f55a9d065987d95bd6.jpg",
|
| 1148 |
+
"image_caption": [
|
| 1149 |
+
"Figure 5: Accuracy by temperature and exam for GPT-3.5 using the CoT prompt."
|
| 1150 |
+
],
|
| 1151 |
+
"image_footnote": [],
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
130,
|
| 1154 |
+
99,
|
| 1155 |
+
552,
|
| 1156 |
+
324
|
| 1157 |
+
],
|
| 1158 |
+
"page_idx": 6
|
| 1159 |
+
},
|
| 1160 |
+
{
|
| 1161 |
+
"type": "table",
|
| 1162 |
+
"img_path": "images/b5be77b7c29e13d18495475294e54920424e132062c2361510639f8833a3c11a.jpg",
|
| 1163 |
+
"table_caption": [],
|
| 1164 |
+
"table_footnote": [],
|
| 1165 |
+
"table_body": "<table><tr><td>Exam</td><td>H(10)</td><td>p-value</td></tr><tr><td>AQUA-RAT</td><td>10.320</td><td>0.413</td></tr><tr><td>ARC Challenge</td><td>16.390</td><td>0.089</td></tr><tr><td>Hellaswag</td><td>4.473</td><td>0.924</td></tr><tr><td>LogiQA</td><td>3.208</td><td>0.976</td></tr><tr><td>LSAT-AR</td><td>37.874</td><td>< 0.001</td></tr><tr><td>LSAT-LR</td><td>7.816</td><td>0.647</td></tr><tr><td>LSAT-RC</td><td>4.037</td><td>0.946</td></tr><tr><td>MedMCQA</td><td>2.334</td><td>0.993</td></tr><tr><td>SAT-English</td><td>3.937</td><td>0.950</td></tr><tr><td>SAT-Math</td><td>21.276</td><td>0.019</td></tr></table>",
|
| 1166 |
+
"bbox": [
|
| 1167 |
+
604,
|
| 1168 |
+
102,
|
| 1169 |
+
867,
|
| 1170 |
+
271
|
| 1171 |
+
],
|
| 1172 |
+
"page_idx": 6
|
| 1173 |
+
},
|
| 1174 |
+
{
|
| 1175 |
+
"type": "text",
|
| 1176 |
+
"text": "Table 5: Kruskal-Wallis test results by exam for GPT-3.5 using the CoT prompt.",
|
| 1177 |
+
"bbox": [
|
| 1178 |
+
584,
|
| 1179 |
+
277,
|
| 1180 |
+
880,
|
| 1181 |
+
306
|
| 1182 |
+
],
|
| 1183 |
+
"page_idx": 6
|
| 1184 |
+
},
|
| 1185 |
+
{
|
| 1186 |
+
"type": "text",
|
| 1187 |
+
"text": "We performed the Dunn-Bonferroni test on the LSAT-AR and SAT-Math results [49]. It revealed that the all-or-nothing behavior of responses generated at a temperature of 0.0 versus the more random responses from 0.1 to 1.0 caused the anomaly. The correct-answer accuracy for each individual problem varied significantly when evaluated pairwise across temperatures. However, the average accuracy for all problems remained similar across temperatures.",
|
| 1188 |
+
"bbox": [
|
| 1189 |
+
109,
|
| 1190 |
+
402,
|
| 1191 |
+
883,
|
| 1192 |
+
460
|
| 1193 |
+
],
|
| 1194 |
+
"page_idx": 6
|
| 1195 |
+
},
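A sketch of this post-hoc pairwise comparison using the third-party `scikit-posthocs` package; this choice of library is an assumption, as the paper does not name an implementation. The DataFrame layout and column names are illustrative.

```python
import pandas as pd
import scikit_posthocs as sp

def dunn_bonferroni(df: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Dunn tests across temperatures with Bonferroni correction [49].

    `df` is assumed to have one row per (question, temperature) observation
    with columns 'accuracy' and 'temperature'. Returns a matrix of adjusted
    p-values for every temperature pair.
    """
    return sp.posthoc_dunn(df, val_col="accuracy", group_col="temperature",
                           p_adjust="bonferroni")
```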
|
| 1196 |
+
{
|
| 1197 |
+
"type": "text",
|
| 1198 |
+
"text": "3.2 Text Variability vs. Temperature",
|
| 1199 |
+
"text_level": 1,
|
| 1200 |
+
"bbox": [
|
| 1201 |
+
112,
|
| 1202 |
+
482,
|
| 1203 |
+
382,
|
| 1204 |
+
498
|
| 1205 |
+
],
|
| 1206 |
+
"page_idx": 6
|
| 1207 |
+
},
|
| 1208 |
+
{
|
| 1209 |
+
"type": "text",
|
| 1210 |
+
"text": "To further support our results, we analyzed text variability as a function of temperature. Our findings show a clear trend of decreasing text similarity (thus increasing text variability) as temperature increases. Text similarity decreases rapidly after a temperature of 1.0, corresponding to the rapid decrease in accuracy observed above $\\tau = 1.0$ . See Figure 6.",
|
| 1211 |
+
"bbox": [
|
| 1212 |
+
109,
|
| 1213 |
+
511,
|
| 1214 |
+
883,
|
| 1215 |
+
555
|
| 1216 |
+
],
|
| 1217 |
+
"page_idx": 6
|
| 1218 |
+
},
|
| 1219 |
+
{
|
| 1220 |
+
"type": "text",
|
| 1221 |
+
"text": "These results are consistent with our understanding of sampling temperature, indicating that higher temperatures produce more widely varied outputs. Furthermore, these results hold regardless of the LLM, prompt-engineering technique, or problem domain. See Figures 7, 8, and 9.",
|
| 1222 |
+
"bbox": [
|
| 1223 |
+
109,
|
| 1224 |
+
559,
|
| 1225 |
+
883,
|
| 1226 |
+
602
|
| 1227 |
+
],
|
| 1228 |
+
"page_idx": 6
|
| 1229 |
+
},
|
| 1230 |
+
{
|
| 1231 |
+
"type": "image",
|
| 1232 |
+
"img_path": "images/26a27cc7dac86fc1154e35cdd04c8b0c517de38b934d2c9937a8a9f52cbf701a.jpg",
|
| 1233 |
+
"image_caption": [
|
| 1234 |
+
"Figure 6: Text similarity by temperature and metric for GPT-3.5 using CoT prompting on the 100-question exam over sampling temperatures from 0.0 to 1.6."
|
| 1235 |
+
],
|
| 1236 |
+
"image_footnote": [],
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
135,
|
| 1239 |
+
630,
|
| 1240 |
+
488,
|
| 1241 |
+
832
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 6
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "image",
|
| 1247 |
+
"img_path": "images/e4aed5087d5f558eae62a83882431a25b6d1759ac8d43d8a28e2c6b8f2485a5d.jpg",
|
| 1248 |
+
"image_caption": [
|
| 1249 |
+
"Figure 7: TF-IDF text similarity by temperature and model using the CoT prompt on the 100-question exam over sampling temperatures from 0.0 to 1.0"
|
| 1250 |
+
],
|
| 1251 |
+
"image_footnote": [],
|
| 1252 |
+
"bbox": [
|
| 1253 |
+
549,
|
| 1254 |
+
627,
|
| 1255 |
+
872,
|
| 1256 |
+
835
|
| 1257 |
+
],
|
| 1258 |
+
"page_idx": 6
|
| 1259 |
+
},
|
| 1260 |
+
{
|
| 1261 |
+
"type": "header",
|
| 1262 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1263 |
+
"bbox": [
|
| 1264 |
+
218,
|
| 1265 |
+
42,
|
| 1266 |
+
774,
|
| 1267 |
+
56
|
| 1268 |
+
],
|
| 1269 |
+
"page_idx": 6
|
| 1270 |
+
},
|
| 1271 |
+
{
|
| 1272 |
+
"type": "page_number",
|
| 1273 |
+
"text": "7",
|
| 1274 |
+
"bbox": [
|
| 1275 |
+
493,
|
| 1276 |
+
935,
|
| 1277 |
+
503,
|
| 1278 |
+
946
|
| 1279 |
+
],
|
| 1280 |
+
"page_idx": 6
|
| 1281 |
+
},
|
| 1282 |
+
{
|
| 1283 |
+
"type": "image",
|
| 1284 |
+
"img_path": "images/19e333c27965deeee0ae3c8742179ca531e006674f51d16a047d050424db43fd.jpg",
|
| 1285 |
+
"image_caption": [
|
| 1286 |
+
"Figure 8: TF-IDF text similarity by temperature and prompt for GPT-3.5 on the 100-question exam over sampling temperatures from 0.0 to 1.0."
|
| 1287 |
+
],
|
| 1288 |
+
"image_footnote": [],
|
| 1289 |
+
"bbox": [
|
| 1290 |
+
130,
|
| 1291 |
+
95,
|
| 1292 |
+
475,
|
| 1293 |
+
316
|
| 1294 |
+
],
|
| 1295 |
+
"page_idx": 7
|
| 1296 |
+
},
|
| 1297 |
+
{
|
| 1298 |
+
"type": "image",
|
| 1299 |
+
"img_path": "images/e13dc861301a1e47f3dc182adf5b5376ca494b549bca97b9dd195cac87b0d182.jpg",
|
| 1300 |
+
"image_caption": [
|
| 1301 |
+
"Figure 9: TF-IDF text similarity by temperature and exam for GPT-3.5 using the CoT prompt over sampling temperatures from 0.0 to 1.0"
|
| 1302 |
+
],
|
| 1303 |
+
"image_footnote": [],
|
| 1304 |
+
"bbox": [
|
| 1305 |
+
526,
|
| 1306 |
+
95,
|
| 1307 |
+
874,
|
| 1308 |
+
318
|
| 1309 |
+
],
|
| 1310 |
+
"page_idx": 7
|
| 1311 |
+
},
|
| 1312 |
+
{
|
| 1313 |
+
"type": "text",
|
| 1314 |
+
"text": "4 Discussion",
|
| 1315 |
+
"text_level": 1,
|
| 1316 |
+
"bbox": [
|
| 1317 |
+
112,
|
| 1318 |
+
412,
|
| 1319 |
+
235,
|
| 1320 |
+
429
|
| 1321 |
+
],
|
| 1322 |
+
"page_idx": 7
|
| 1323 |
+
},
|
| 1324 |
+
{
|
| 1325 |
+
"type": "text",
|
| 1326 |
+
"text": "4.1 Interpretation",
|
| 1327 |
+
"text_level": 1,
|
| 1328 |
+
"bbox": [
|
| 1329 |
+
112,
|
| 1330 |
+
452,
|
| 1331 |
+
253,
|
| 1332 |
+
468
|
| 1333 |
+
],
|
| 1334 |
+
"page_idx": 7
|
| 1335 |
+
},
|
| 1336 |
+
{
|
| 1337 |
+
"type": "text",
|
| 1338 |
+
"text": "Based on these results, changes in temperature from 0.0 to 1.0 do not have a statistically significant effect on the problem-solving performance of LLMs. These results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. However, there are a few potential exceptions to these general findings.",
|
| 1339 |
+
"bbox": [
|
| 1340 |
+
111,
|
| 1341 |
+
483,
|
| 1342 |
+
885,
|
| 1343 |
+
527
|
| 1344 |
+
],
|
| 1345 |
+
"page_idx": 7
|
| 1346 |
+
},
|
| 1347 |
+
{
|
| 1348 |
+
"type": "text",
|
| 1349 |
+
"text": "Therefore, we recommend setting an LLM's sampling temperature to 0.0 for problem-solving tasks. This temperature maximizes reproducibility without compromising accuracy. In addition, it helps avoid the inevitable drop-off in performance that occurs beyond a temperature of 1.0. However, exceptions to this guidance should be taken into consideration.",
|
| 1350 |
+
"bbox": [
|
| 1351 |
+
111,
|
| 1352 |
+
531,
|
| 1353 |
+
883,
|
| 1354 |
+
588
|
| 1355 |
+
],
|
| 1356 |
+
"page_idx": 7
|
| 1357 |
+
},
|
| 1358 |
+
{
|
| 1359 |
+
"type": "text",
|
| 1360 |
+
"text": "4.2 Limitations",
|
| 1361 |
+
"text_level": 1,
|
| 1362 |
+
"bbox": [
|
| 1363 |
+
112,
|
| 1364 |
+
617,
|
| 1365 |
+
235,
|
| 1366 |
+
632
|
| 1367 |
+
],
|
| 1368 |
+
"page_idx": 7
|
| 1369 |
+
},
|
| 1370 |
+
{
|
| 1371 |
+
"type": "text",
|
| 1372 |
+
"text": "There were several limitations to our research study due to cost and runtime considerations:",
|
| 1373 |
+
"bbox": [
|
| 1374 |
+
111,
|
| 1375 |
+
648,
|
| 1376 |
+
712,
|
| 1377 |
+
662
|
| 1378 |
+
],
|
| 1379 |
+
"page_idx": 7
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "text",
|
| 1383 |
+
"text": "First, our study was limited to a subset of popular LLMs. As a result, our findings may not hold for other LLMs that were excluded from our experiment.",
|
| 1384 |
+
"bbox": [
|
| 1385 |
+
111,
|
| 1386 |
+
669,
|
| 1387 |
+
883,
|
| 1388 |
+
698
|
| 1389 |
+
],
|
| 1390 |
+
"page_idx": 7
|
| 1391 |
+
},
|
| 1392 |
+
{
|
| 1393 |
+
"type": "text",
|
| 1394 |
+
"text": "Second, we only explored a subset of prompt-engineering techniques using a single prompt-and-response cycle with one-shot in-context learning. As a result, there may be more complex prompts or agent architectures that better leverage sampling temperature for creativity in their problem-solving capabilities.",
|
| 1395 |
+
"bbox": [
|
| 1396 |
+
111,
|
| 1397 |
+
703,
|
| 1398 |
+
883,
|
| 1399 |
+
747
|
| 1400 |
+
],
|
| 1401 |
+
"page_idx": 7
|
| 1402 |
+
},
|
| 1403 |
+
{
|
| 1404 |
+
"type": "text",
|
| 1405 |
+
"text": "Third, our study was limited to a subset of problems, problem domains, and problem-solving tasks. As a result, our findings may not hold for larger data sets, different problem domains, or other types of problem-solving tasks.",
|
| 1406 |
+
"bbox": [
|
| 1407 |
+
111,
|
| 1408 |
+
752,
|
| 1409 |
+
882,
|
| 1410 |
+
781
|
| 1411 |
+
],
|
| 1412 |
+
"page_idx": 7
|
| 1413 |
+
},
|
| 1414 |
+
{
|
| 1415 |
+
"type": "text",
|
| 1416 |
+
"text": "Fourth, due to time and cost constraints, we limited our study to two test sets of 1,000 and 100 randomly selected questions from standard benchmarks. These limited sample sizes may have introduced bias into the results. Utilizing a larger and more diverse test set would enhance the statistical reliability of our findings.",
|
| 1417 |
+
"bbox": [
|
| 1418 |
+
111,
|
| 1419 |
+
786,
|
| 1420 |
+
883,
|
| 1421 |
+
829
|
| 1422 |
+
],
|
| 1423 |
+
"page_idx": 7
|
| 1424 |
+
},
|
| 1425 |
+
{
|
| 1426 |
+
"type": "text",
|
| 1427 |
+
"text": "Fifth, we had to limit the sampling temperature range we explored from 0.0 to 1.0 for all combinations of models, prompts, and exams, except for GPT-3.5 using CoT prompting on the 100-question exam. As a result, the temperature hyperparameter of other LLMs may operate differently at temperatures above 1.0.",
|
| 1428 |
+
"bbox": [
|
| 1429 |
+
111,
|
| 1430 |
+
835,
|
| 1431 |
+
883,
|
| 1432 |
+
878
|
| 1433 |
+
],
|
| 1434 |
+
"page_idx": 7
|
| 1435 |
+
},
|
| 1436 |
+
{
|
| 1437 |
+
"type": "text",
|
| 1438 |
+
"text": "Sixth, we fixed all other sampling parameters (e.g., top-p, top-k, repetition penalty, etc.) to isolate the effect of sampling temperature. As a result, there may be combinations of sampling parameters that result in different outcomes.",
|
| 1439 |
+
"bbox": [
|
| 1440 |
+
111,
|
| 1441 |
+
883,
|
| 1442 |
+
883,
|
| 1443 |
+
912
|
| 1444 |
+
],
|
| 1445 |
+
"page_idx": 7
|
| 1446 |
+
},
|
| 1447 |
+
{
|
| 1448 |
+
"type": "header",
|
| 1449 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1450 |
+
"bbox": [
|
| 1451 |
+
218,
|
| 1452 |
+
42,
|
| 1453 |
+
774,
|
| 1454 |
+
56
|
| 1455 |
+
],
|
| 1456 |
+
"page_idx": 7
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"type": "page_number",
|
| 1460 |
+
"text": "8",
|
| 1461 |
+
"bbox": [
|
| 1462 |
+
493,
|
| 1463 |
+
935,
|
| 1464 |
+
504,
|
| 1465 |
+
946
|
| 1466 |
+
],
|
| 1467 |
+
"page_idx": 7
|
| 1468 |
+
},
|
| 1469 |
+
{
|
| 1470 |
+
"type": "text",
|
| 1471 |
+
"text": "Finally, we could only explore a subset of the various combinations of models, prompts, exams, and temperatures. As a result, other combinations of LLMs, prompt-engineering techniques, and problem domains may exist where temperature plays a more important role in problem-solving performance.",
|
| 1472 |
+
"bbox": [
|
| 1473 |
+
111,
|
| 1474 |
+
90,
|
| 1475 |
+
883,
|
| 1476 |
+
133
|
| 1477 |
+
],
|
| 1478 |
+
"page_idx": 8
|
| 1479 |
+
},
|
| 1480 |
+
{
|
| 1481 |
+
"type": "text",
|
| 1482 |
+
"text": "4.3 Implications",
|
| 1483 |
+
"text_level": 1,
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
112,
|
| 1486 |
+
148,
|
| 1487 |
+
243,
|
| 1488 |
+
165
|
| 1489 |
+
],
|
| 1490 |
+
"page_idx": 8
|
| 1491 |
+
},
|
| 1492 |
+
{
|
| 1493 |
+
"type": "text",
|
| 1494 |
+
"text": "This research study provides empirical evidence that changes in sampling temperature in the range of 0.0 to 1.0 do not significantly impact the problem-solving capabilities of LLMs on MCQA problems.",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
111,
|
| 1497 |
+
175,
|
| 1498 |
+
883,
|
| 1499 |
+
203
|
| 1500 |
+
],
|
| 1501 |
+
"page_idx": 8
|
| 1502 |
+
},
|
| 1503 |
+
{
|
| 1504 |
+
"type": "text",
|
| 1505 |
+
"text": "Answering this question may save AI engineers significant time and resources evaluating various sampling temperatures for their LLM agents and applications. In addition, it may reduce unproductive debates in the prompt-engineering community regarding the optimal sampling temperatures for various problem-solving tasks.",
|
| 1506 |
+
"bbox": [
|
| 1507 |
+
111,
|
| 1508 |
+
209,
|
| 1509 |
+
883,
|
| 1510 |
+
252
|
| 1511 |
+
],
|
| 1512 |
+
"page_idx": 8
|
| 1513 |
+
},
|
| 1514 |
+
{
|
| 1515 |
+
"type": "text",
|
| 1516 |
+
"text": "This research also provides broader insights for AI researchers studying model hallucination and problem-solution state-space search with LLMs. Our results show that increasing LLM temperature up to 1.0 does not cause the LLM to hallucinate in ways that lead to incorrect MCQA solutions. In addition, higher temperatures do not appear to improve MCQA solution-space search in ways that lead to correct solutions more often than lower temperatures.",
|
| 1517 |
+
"bbox": [
|
| 1518 |
+
111,
|
| 1519 |
+
257,
|
| 1520 |
+
883,
|
| 1521 |
+
314
|
| 1522 |
+
],
|
| 1523 |
+
"page_idx": 8
|
| 1524 |
+
},
|
| 1525 |
+
{
|
| 1526 |
+
"type": "text",
|
| 1527 |
+
"text": "4.4 Future Research",
|
| 1528 |
+
"text_level": 1,
|
| 1529 |
+
"bbox": [
|
| 1530 |
+
112,
|
| 1531 |
+
330,
|
| 1532 |
+
271,
|
| 1533 |
+
344
|
| 1534 |
+
],
|
| 1535 |
+
"page_idx": 8
|
| 1536 |
+
},
|
| 1537 |
+
{
|
| 1538 |
+
"type": "text",
|
| 1539 |
+
"text": "To improve upon this research, we propose the following follow-up experiments:",
|
| 1540 |
+
"bbox": [
|
| 1541 |
+
111,
|
| 1542 |
+
354,
|
| 1543 |
+
645,
|
| 1544 |
+
371
|
| 1545 |
+
],
|
| 1546 |
+
"page_idx": 8
|
| 1547 |
+
},
|
| 1548 |
+
{
|
| 1549 |
+
"type": "text",
|
| 1550 |
+
"text": "First, we recommend conducting this experiment with additional LLMs. Other proprietary and open-source LLMs may utilize temperature in ways that benefit their specific models but did not benefit the LLMs we tested.",
|
| 1551 |
+
"bbox": [
|
| 1552 |
+
111,
|
| 1553 |
+
376,
|
| 1554 |
+
883,
|
| 1555 |
+
404
|
| 1556 |
+
],
|
| 1557 |
+
"page_idx": 8
|
| 1558 |
+
},
|
| 1559 |
+
{
|
| 1560 |
+
"type": "text",
|
| 1561 |
+
"text": "Second, we recommend expanding beyond MCQA problems to other types of problem-solving tasks whose correct answers are more open-ended. The limited effects of sampling temperature in our experiments may have simply resulted from the constraints imposed by the structure of MCQA problems.",
|
| 1562 |
+
"bbox": [
|
| 1563 |
+
111,
|
| 1564 |
+
410,
|
| 1565 |
+
883,
|
| 1566 |
+
452
|
| 1567 |
+
],
|
| 1568 |
+
"page_idx": 8
|
| 1569 |
+
},
|
| 1570 |
+
{
|
| 1571 |
+
"type": "text",
|
| 1572 |
+
"text": "Third, we recommend conducting additional experiments with more MCQA problems and problem domains. We recommend specifically targeting tasks and problem domains that require more creative solutions or lateral \"out-of-the-box\" thinking.",
|
| 1573 |
+
"bbox": [
|
| 1574 |
+
111,
|
| 1575 |
+
458,
|
| 1576 |
+
883,
|
| 1577 |
+
501
|
| 1578 |
+
],
|
| 1579 |
+
"page_idx": 8
|
| 1580 |
+
},
|
| 1581 |
+
{
|
| 1582 |
+
"type": "text",
|
| 1583 |
+
"text": "Fourth, we recommend extending the sampling temperature range until accuracy drops to zero for each LLM, prompt, and exam. However, it should be noted that as the generated text becomes more random, the number of tokens in each response increases significantly, leading to a considerable increase in runtime and cost to explore temperatures above 1.0.",
|
| 1584 |
+
"bbox": [
|
| 1585 |
+
111,
|
| 1586 |
+
507,
|
| 1587 |
+
883,
|
| 1588 |
+
561
|
| 1589 |
+
],
|
| 1590 |
+
"page_idx": 8
|
| 1591 |
+
},
|
| 1592 |
+
{
|
| 1593 |
+
"type": "text",
|
| 1594 |
+
"text": "Finally, we recommend a more in-depth error analysis to determine if any sub-types of problems within these problem domains benefit from changes to sampling temperature. It is possible that statistical noise or averaging may have hidden individual problems that were sensitive to changes in sampling temperature.",
|
| 1595 |
+
"bbox": [
|
| 1596 |
+
111,
|
| 1597 |
+
569,
|
| 1598 |
+
883,
|
| 1599 |
+
612
|
| 1600 |
+
],
|
| 1601 |
+
"page_idx": 8
|
| 1602 |
+
},
|
| 1603 |
+
{
|
| 1604 |
+
"type": "text",
|
| 1605 |
+
"text": "5 Conclusion",
|
| 1606 |
+
"text_level": 1,
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
112,
|
| 1609 |
+
630,
|
| 1610 |
+
241,
|
| 1611 |
+
646
|
| 1612 |
+
],
|
| 1613 |
+
"page_idx": 8
|
| 1614 |
+
},
|
| 1615 |
+
{
|
| 1616 |
+
"type": "text",
|
| 1617 |
+
"text": "This research study empirically investigated the effect of sampling temperature on the problem-solving performance of LLMs across multiple problem domains.",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
111,
|
| 1620 |
+
661,
|
| 1621 |
+
883,
|
| 1622 |
+
690
|
| 1623 |
+
],
|
| 1624 |
+
"page_idx": 8
|
| 1625 |
+
},
|
| 1626 |
+
{
|
| 1627 |
+
"type": "text",
|
| 1628 |
+
"text": "We demonstrated that changes in sampling temperature from 0.0 to 1.0 do not produce statistically significant differences in problem-solving performance on MCQA problems across multiple LLMs, prompt-engineering techniques, and problem domains.",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
111,
|
| 1631 |
+
696,
|
| 1632 |
+
883,
|
| 1633 |
+
738
|
| 1634 |
+
],
|
| 1635 |
+
"page_idx": 8
|
| 1636 |
+
},
|
| 1637 |
+
{
|
| 1638 |
+
"type": "text",
|
| 1639 |
+
"text": "These results have practical implications for AI engineers using LLMs to develop new AI systems. Additionally, they have theoretical implications for AI researchers studying model hallucination and solution-space search with LLMs.",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
111,
|
| 1642 |
+
744,
|
| 1643 |
+
883,
|
| 1644 |
+
772
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 8
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "text",
|
| 1650 |
+
"text": "References",
|
| 1651 |
+
"text_level": 1,
|
| 1652 |
+
"bbox": [
|
| 1653 |
+
112,
|
| 1654 |
+
791,
|
| 1655 |
+
209,
|
| 1656 |
+
808
|
| 1657 |
+
],
|
| 1658 |
+
"page_idx": 8
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "list",
|
| 1662 |
+
"sub_type": "ref_text",
|
| 1663 |
+
"list_items": [
|
| 1664 |
+
"[1] G. Mialon, R. Dessi, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Roziere, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, E. Grave, Y. LeCun, and T. Scialom, \"Augmented language models: a survey,\" arXiv, 2 2023.",
|
| 1665 |
+
"[2] J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, and D. C. Schmidt, “A prompt pattern catalog to enhance prompt engineering with ChatGPT,” arXiv, 2 2023.",
|
| 1666 |
+
"[3] OpenAI, \"OpenAI - API reference,\" 2023. [Online]. Available: https://platform.openai.com/docs/api-reference/chat/create"
|
| 1667 |
+
],
|
| 1668 |
+
"bbox": [
|
| 1669 |
+
119,
|
| 1670 |
+
821,
|
| 1671 |
+
883,
|
| 1672 |
+
910
|
| 1673 |
+
],
|
| 1674 |
+
"page_idx": 8
|
| 1675 |
+
},
|
| 1676 |
+
{
|
| 1677 |
+
"type": "header",
|
| 1678 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1679 |
+
"bbox": [
|
| 1680 |
+
218,
|
| 1681 |
+
42,
|
| 1682 |
+
776,
|
| 1683 |
+
56
|
| 1684 |
+
],
|
| 1685 |
+
"page_idx": 8
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "page_number",
|
| 1689 |
+
"text": "9",
|
| 1690 |
+
"bbox": [
|
| 1691 |
+
493,
|
| 1692 |
+
935,
|
| 1693 |
+
504,
|
| 1694 |
+
946
|
| 1695 |
+
],
|
| 1696 |
+
"page_idx": 8
|
| 1697 |
+
},
|
| 1698 |
+
{
|
| 1699 |
+
"type": "list",
|
| 1700 |
+
"sub_type": "ref_text",
|
| 1701 |
+
"list_items": [
|
| 1702 |
+
"[4] Llama-2-Team, \"Llama 2: Open foundation and fine-tuned chat models,\" arXiv, 7 2023.",
|
| 1703 |
+
"[5] C. Wang, S. X. Liu, and A. H. Awadallah, \"Cost-effective hyperparameter optimization for large language model generation inference,\" 2023.",
|
| 1704 |
+
"[6] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, “A learning algorithm for Boltzmann machines,” Cognitive Science, vol. 9, pp. 147–169, 1985.",
|
| 1705 |
+
"[7] G. Hinton, O. Vinyals, and J. Dean, \"Distilling the knowledge in a neural network,\" arXiv, 3 2015.",
|
| 1706 |
+
"[8] P.-H. Wang, S.-I. Hsieh, S.-C. Chang, Y.-T. Chen, J.-Y. Pan, W. Wei, and D.-C. Juan, “Contextual temperature for language modeling,” arXiv, 12 2020.",
|
| 1707 |
+
"[9] Microsoft, \"Completions - learn how to generate or manipulate text,\" 2023. [Online]. Available: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/completions",
|
| 1708 |
+
"[10] J. Shieh, \"Best practices for prompt engineering with OpenAI API,\" 2024. [Online]. Available: https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api",
|
| 1709 |
+
"[11] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, “A systematic evaluation of large language models of code,” in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. Association for Computing Machinery, 2022, pp. 1–10.",
|
| 1710 |
+
"[12] Y. Zhu, J. Li, G. Li, Y. Zhao, J. Li, Z. Jin, and H. Mei, \"Improving code generation by dynamic temperature sampling,\" arXiv, 9 2023.",
|
| 1711 |
+
"[13] M. Lee, “A mathematical investigation of hallucination and creativity in GPT models,” Mathematics, vol. 11, p. 2320, 5 2023.",
|
| 1712 |
+
"[14] V. Pursnani, Y. Sermet, and I. Demir, \"Performance of ChatGPT on the US fundamentals of engineering exam: Comprehensive assessment of proficiency and potential implications for professional environmental engineering practice,\" arXiv, 4 2023.",
|
| 1713 |
+
"[15] OpenAI, \"Api temperature change from 0 to 1 to 0 to 2 in 'playground',\" 2023. [Online]. Available: https://community.openuai.com/t/api-temperature-change-from-0-to-1-to-0-to-2-in-playground/217755",
|
| 1714 |
+
"[16] ——, “GPT-4 technical report,” arXiv, 3 2023. [Online]. Available: https://arxiv.org/abs/2303.08774",
|
| 1715 |
+
"[17] ——, “Introducing ChatGPT,” 11 2022. [Online]. Available: https://openai.com/blog/chatgpt",
|
| 1716 |
+
"[18] ——, “GPT-4,” 3 2023. [Online]. Available: https://openai.com/research/gpt-4",
|
| 1717 |
+
"[19] Anthropic, “Introducing the next generation of claude anthropic,” 2024. [Online]. Available: https://www.anthropic.com/news/claude-3-family",
|
| 1718 |
+
"[20] ——, “The claude 3 model family: Opus, sonnet, haiku,” 2024. [Online]. Available: https://www.anthropic.com/claude-3-model-card",
|
| 1719 |
+
"[21] Cohere, \"Command r+,\" 2024. [Online]. Available: https://docs.cohere.com/docs/command-r-plus",
|
| 1720 |
+
"[22] ——, \"Model card for c4ai command r+,\" 2024. [Online]. Available: https://huggingface.co/CohereForAI/c4ai-command-r-plus",
|
| 1721 |
+
"[23] S. Pichai and D. Hassabis, \"Introducing gemini: Google's most capable ai model yet,\" 2023. [Online]. Available: https://blog.google/technology/ai/google-gemini-ai/",
|
| 1722 |
+
"[24] Gemini-Team, \"Gemini: A family of highly capable multimodal models,\" arXiv, 12 2023.",
|
| 1723 |
+
"[25] S. Pichai and D. Hassabis, \"Introducing gemini 1.5, google's next-generation ai model,\" 2024. [Online]. Available: https://blog.google/technology/ai/google-gemini- next-generation-model-february-2024/",
|
| 1724 |
+
"[26] Gemini-Team, \"Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context,\" 2024. [Online]. Available: https://arxiv.org/abs/2403.05530",
|
| 1725 |
+
"[27] OpenAI, \"Models - openerai api.\" [Online]. Available: https://platform.openerai.com/docs/models/gpt-3-5-turbo",
|
| 1726 |
+
"[28] Meta, “Meta and microsoft introduce the next generation of llama | meta,” 2023. [Online]. Available: https://about.meta.com/news/2023/07/llama-2/",
|
| 1727 |
+
"[29] Mistral-AI-Team, “Au large | mistral ai | frontier ai in your hands,” 2024. [Online]. Available: https://mistral.ai/news/mistral-large/",
|
| 1728 |
+
"[30] Z. Sun, X. Wang, Y. Tay, Y. Yang, and D. Zhou, \"Recitation-augmented language models,\" in The Eleventh International Conference on Learning Representations, 10 2023.",
|
| 1729 |
+
"[31] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, \"Large language models are zero-shot reasoners,\" in Advances in Neural Information Processing Systems, vol. 35, 5 2022, pp. 22 199-22 213.",
|
| 1730 |
+
"[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, \"Chain-of-thought prompting elicits reasoning in large language models,\" arXiv, 1 2022.",
|
| 1731 |
+
"[33] S. Huo, N. Arabzadeh, and C. L. A. Clarke, “Retrieving supporting evidence for generative question answering,” arXiv, 9 2023."
|
| 1732 |
+
],
|
| 1733 |
+
"bbox": [
|
| 1734 |
+
116,
|
| 1735 |
+
90,
|
| 1736 |
+
883,
|
| 1737 |
+
910
|
| 1738 |
+
],
|
| 1739 |
+
"page_idx": 9
|
| 1740 |
+
},
|
| 1741 |
+
{
|
| 1742 |
+
"type": "header",
|
| 1743 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1744 |
+
"bbox": [
|
| 1745 |
+
218,
|
| 1746 |
+
42,
|
| 1747 |
+
777,
|
| 1748 |
+
56
|
| 1749 |
+
],
|
| 1750 |
+
"page_idx": 9
|
| 1751 |
+
},
|
| 1752 |
+
{
|
| 1753 |
+
"type": "page_number",
|
| 1754 |
+
"text": "10",
|
| 1755 |
+
"bbox": [
|
| 1756 |
+
488,
|
| 1757 |
+
935,
|
| 1758 |
+
506,
|
| 1759 |
+
946
|
| 1760 |
+
],
|
| 1761 |
+
"page_idx": 9
|
| 1762 |
+
},
|
| 1763 |
+
{
|
| 1764 |
+
"type": "list",
|
| 1765 |
+
"sub_type": "ref_text",
|
| 1766 |
+
"list_items": [
|
| 1767 |
+
"[34] R. Wang, H. Wang, F. Mi, Y. Chen, R. Xu, and K.-F. Wong, \"Self-critique prompting with large language models for inductive instructions,\" arXiv, 5 2023.",
|
| 1768 |
+
"[35] P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord, \"Think you have solved question answering? Try ARC, the AI2 reasoning challenge,\" ArXiv, 3 2018.",
|
| 1769 |
+
"[36] W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan, \"AGIEval: A human-centric benchmark for evaluating foundation models,\" ArXiv, 4 2023.",
|
| 1770 |
+
"[37] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, \"HellaSwag: Can a machine really finish your sentence?\" in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.",
|
| 1771 |
+
"[38] J. Liu, L. Cui, H. Liu, D. Huang, Y. Wang, and Y. Zhang, \"Logiqa: A challenge dataset for machine reading comprehension with logical reasoning,\" in International Joint Conference on Artificial Intelligence, 2020.",
|
| 1772 |
+
"[39] S. Wang, Z. Liu, W. Zhong, M. Zhou, Z. Wei, Z. Chen, and N. Duan, \"From lsat: The progress and challenges of complex reasoning,\" IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 30, pp. 2201-2216, 8 2021.",
|
| 1773 |
+
"[40] A. Pal, L. K. Umapathi, and M. Sankarasubbu, \"MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering,\" in Proceedings of the Conference on Health, Inference, and Learning. PMLR, 2022, pp. 248-260.",
|
| 1774 |
+
"[41] I. Ward, \"JSON lines,\" 2014. [Online]. Available: https://jsonlines.org/",
|
| 1775 |
+
"[42] P. Jaccard, “The distribution of flora in the alpine zone,” New Phytologist, vol. 11, pp. 37–50, 2 1912.",
|
| 1776 |
+
"[43] Z. S. Harris, \"Distributional structure,\" WORD, vol. 10, pp. 146-162, 8 1954.",
|
| 1777 |
+
"[44] K. S. Jones, “A statistical interpretation of term specificity and its application in retrieval,” Journal of Documentation, vol. 28, pp. 11–21, 1 1972.",
|
| 1778 |
+
"[45] V. Levenshtein, \"Binary codes capable of correcting deletions, insertions and reversals,\" Soviet Physics Doklady, vol. 10, pp. 707-710, 1966.",
|
| 1779 |
+
"[46] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “BLEU,” in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL '02. Association for Computational Linguistics, 2001, p. 311.",
|
| 1780 |
+
"[47] N. Reimers and I. Gurevych, \"Sentence-BERT: Sentence embeddings using Siamese BERT-networks,\" in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 8 2019.",
|
| 1781 |
+
"[48] W. H. Kruskal and W. A. Wallis, \"Use of ranks in one-criterion variance analysis,\" Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/01621459.1952.10483441",
|
| 1782 |
+
"[49] O. J. Dunn, “Multiple comparisons using rank sums,” Technometrics, vol. 6, no. 3, pp. 241–252, 1964. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/00401706.1964.10490181"
|
| 1783 |
+
],
|
| 1784 |
+
"bbox": [
|
| 1785 |
+
114,
|
| 1786 |
+
90,
|
| 1787 |
+
883,
|
| 1788 |
+
584
|
| 1789 |
+
],
|
| 1790 |
+
"page_idx": 10
|
| 1791 |
+
},
|
| 1792 |
+
{
|
| 1793 |
+
"type": "header",
|
| 1794 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1795 |
+
"bbox": [
|
| 1796 |
+
218,
|
| 1797 |
+
42,
|
| 1798 |
+
774,
|
| 1799 |
+
56
|
| 1800 |
+
],
|
| 1801 |
+
"page_idx": 10
|
| 1802 |
+
},
|
| 1803 |
+
{
|
| 1804 |
+
"type": "page_number",
|
| 1805 |
+
"text": "11",
|
| 1806 |
+
"bbox": [
|
| 1807 |
+
488,
|
| 1808 |
+
935,
|
| 1809 |
+
506,
|
| 1810 |
+
946
|
| 1811 |
+
],
|
| 1812 |
+
"page_idx": 10
|
| 1813 |
+
},
|
| 1814 |
+
{
|
| 1815 |
+
"type": "text",
|
| 1816 |
+
"text": "A Appendix",
|
| 1817 |
+
"text_level": 1,
|
| 1818 |
+
"bbox": [
|
| 1819 |
+
112,
|
| 1820 |
+
89,
|
| 1821 |
+
233,
|
| 1822 |
+
108
|
| 1823 |
+
],
|
| 1824 |
+
"page_idx": 11
|
| 1825 |
+
},
|
| 1826 |
+
{
|
| 1827 |
+
"type": "code",
|
| 1828 |
+
"sub_type": "code",
|
| 1829 |
+
"code_caption": [
|
| 1830 |
+
"Figure 10: Sample of the composite system prompt with a one-shot example (i.e., problem-and-solution pair)."
|
| 1831 |
+
],
|
| 1832 |
+
"code_body": "[System Prompt] \nYou are an expert in {\\{expertise\\}}. \nYour task is to answer the following multiple-choice questions.. \nFirst, you should recite all of the relevant knowledge you have about the question and each option. \nNext, you should think step-by-step through the problem to ensure you have the correct answer. \nThen, you should critically evaluate your thoughts to identify any flaws in your facts, logic, and reasoning. Finally, you MUST answer the question using the following format 'Action: Answer(\"[choice]\") The parameter [choice] is the letter or number of the answer you want to select (e.g. \"A\", \"B\", \"C\", or \"D\") For example, 'Answer(\"C')' will select choice \"C\" as the best answer. The answer MUST ALWAYS be one of the available choices; it CANNOT be \"None of the Above\". If you think the answer is \"none of the above\", then you MUST select the most likely answer. \n[Example Problem] \nQuestion: What is the capital of the state where Johns Hopkins University is located? \nChoices: A: Baltimore B: Annapolis C: Des Moines D: Las Vegas \n[Example Solution] \nKnowledge: Johns Hopkins University is located in Baltimore, Maryland. A: Baltimore is a city located in the State of Maryland, but it is not the capital of Maryland. B: Annapolis is the capital of the State of Maryland. C: Des Moines is a city located in the State of Iowa and is the capital of Iowa. D: Las Vegas is located in the State of Nevada, but it is not the capital of Nevada. Thought: Johns Hopkins University is located in Baltimore. Baltimore is a city located in the state of Maryland. The capital of Maryland is Baltimore. Therefore, the capital of the state where Johns Hopkins University is located is Baltimore. The answer is A: Baltimore. Criticism: You are correct that Johns Hopkins is located in the State of Maryland. However, the capital of Maryland is Annapolis, not Baltimore. So, the correct answer is actually B: Annapolis. Action: Answer(\"B\")",
|
| 1833 |
+
"guess_lang": "txt",
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
112,
|
| 1836 |
+
127,
|
| 1837 |
+
869,
|
| 1838 |
+
510
|
| 1839 |
+
],
|
| 1840 |
+
"page_idx": 11
|
| 1841 |
+
},
|
| 1842 |
+
{
|
| 1843 |
+
"type": "code",
|
| 1844 |
+
"sub_type": "code",
|
| 1845 |
+
"code_caption": [
|
| 1846 |
+
"Figure 11: Sample of an MCQA problem in JSON-L format - with whitespace added for readability."
|
| 1847 |
+
],
|
| 1848 |
+
"code_body": "{\n\"source\": \"arc/arc-challenge-test\",\n\"source_id\": 1,\n\"topic\": \"Science\",\n\"context\":.,\n\"question\": \"An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?\", \"choices\": {\n\"A\": \"Planetary density will decrease.\", \"B\": \"Planetary years will become longer.\", \"C\": \"Planetary days will become shorter.\", \"D\": \"Planetary gravity will become stronger.\"\n},\n\"answer\": \"C\",\n\"solution\": \"\"\n}",
|
| 1849 |
+
"guess_lang": "json",
|
| 1850 |
+
"bbox": [
|
| 1851 |
+
112,
|
| 1852 |
+
568,
|
| 1853 |
+
614,
|
| 1854 |
+
731
|
| 1855 |
+
],
|
| 1856 |
+
"page_idx": 11
|
| 1857 |
+
},
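A minimal loader for problems stored in this JSON Lines layout [41]; the function name and path handling are illustrative, not taken from the authors' code.

```python
import json

def load_mcqa_problems(path: str):
    """Read one MCQA problem per line from a JSON Lines file [41]."""
    problems = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():                 # skip blank lines
                problems.append(json.loads(line))
    return problems
```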
|
| 1858 |
+
{
|
| 1859 |
+
"type": "header",
|
| 1860 |
+
"text": "The Effect of Sampling Temperature on Problem Solving in Large Language Models",
|
| 1861 |
+
"bbox": [
|
| 1862 |
+
218,
|
| 1863 |
+
42,
|
| 1864 |
+
774,
|
| 1865 |
+
56
|
| 1866 |
+
],
|
| 1867 |
+
"page_idx": 11
|
| 1868 |
+
},
|
| 1869 |
+
{
|
| 1870 |
+
"type": "page_number",
|
| 1871 |
+
"text": "12",
|
| 1872 |
+
"bbox": [
|
| 1873 |
+
488,
|
| 1874 |
+
935,
|
| 1875 |
+
508,
|
| 1876 |
+
946
|
| 1877 |
+
],
|
| 1878 |
+
"page_idx": 11
|
| 1879 |
+
}
|
| 1880 |
+
]
|
2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_model.json
ADDED
|
@@ -0,0 +1,2556 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
  [
    {"type": "title", "bbox": [0.214, 0.121, 0.787, 0.164], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "text", "bbox": [0.286, 0.227, 0.402, 0.24], "angle": 0, "content": "Matthew Renze"},
    {"type": "text", "bbox": [0.255, 0.242, 0.426, 0.256], "angle": 0, "content": "Johns Hopkins University"},
    {"type": "text", "bbox": [0.276, 0.257, 0.408, 0.27], "angle": 0, "content": "mrenze1@jhu.edu"},
    {"type": "text", "bbox": [0.611, 0.228, 0.71, 0.241], "angle": 0, "content": "Erhan Guven"},
    {"type": "text", "bbox": [0.572, 0.242, 0.743, 0.256], "angle": 0, "content": "Johns Hopkins University"},
    {"type": "text", "bbox": [0.595, 0.257, 0.726, 0.271], "angle": 0, "content": "eguven2@jhu.edu"},
    {"type": "title", "bbox": [0.46, 0.319, 0.538, 0.334], "angle": 0, "content": "Abstract"},
    {"type": "text", "bbox": [0.17, 0.341, 0.828, 0.482], "angle": 0, "content": "In this research study, we empirically investigate the effect of sampling temperature on the performance of Large Language Models (LLMs) on various problem-solving tasks. We created a multiple-choice question-and-answer (MCQA) exam by randomly sampling problems from standard LLM benchmarks. Then, we used nine popular LLMs with five prompt-engineering techniques to solve the MCQA problems while increasing the sampling temperature from 0.0 to 1.6. Despite anecdotal reports to the contrary, our empirical results indicate that changes in temperature from 0.0 to 1.0 do not have a statistically significant impact on LLM performance for problem-solving tasks. In addition, these results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. All code, data, and supplemental materials are available on GitHub at: https://github.com/matthewrenze/jhu-llm-temperature."},
    {"type": "title", "bbox": [0.114, 0.5, 0.255, 0.515], "angle": 0, "content": "1 Introduction"},
    {"type": "title", "bbox": [0.114, 0.532, 0.241, 0.547], "angle": 0, "content": "1.1 Background"},
    {"type": "text", "bbox": [0.112, 0.557, 0.885, 0.601], "angle": 0, "content": "In recent years, Large Language Models (LLMs) have revolutionized the field of artificial intelligence. The availability of open-source LLMs and pay-per-use APIs has allowed engineers to incorporate LLMs in their AI systems. However, prompt engineering and hyperparameter tuning are required to work effectively with LLMs."},
    {"type": "text", "bbox": [0.112, 0.606, 0.884, 0.649], "angle": 0, "content": "Prompt-engineering techniques help LLMs solve complex problems, avoid hallucinations, and provide more accurate responses. For example, we can use techniques like chain-of-thought, tree-of-thought, self-criticism, and self-consistency to improve LLM performance [1, 2]."},
    {"type": "text", "bbox": [0.112, 0.654, 0.884, 0.697], "angle": 0, "content": "In addition, several inference hyperparameters can be adjusted to modify the LLM's output at runtime. For example, hyperparameters like sampling temperature, top-k sampling, repetition penalty, and maximum token length all affect the LLM's output and performance [3-5]."},
    {"type": "text", "bbox": [0.112, 0.702, 0.883, 0.746], "angle": 0, "content": "Despite significant interest in LLMs and progress in LLM best practices, many open questions remain about optimal prompt-engineering techniques and inference hyperparameters for LLMs. To complicate matters, various local optima may exist for LLMs, prompt types, and problem domains [5]."},
    {"type": "text", "bbox": [0.112, 0.751, 0.884, 0.793], "angle": 0, "content": "The prompt-engineering community has an abundance of opinions and anecdotal evidence regarding optimal prompt-engineering techniques and inference hyperparameter settings. However, we currently lack systematic studies and empirical evidence to support many of these claims."},
    {"type": "text", "bbox": [0.112, 0.799, 0.884, 0.842], "angle": 0, "content": "As a result, this paper aims to address the open question of the optimal LLM sampling temperature for problem-solving tasks. In addition, we aim to provide a systematic study with empirical results to add to the growing body of knowledge used to create LLM and prompt-engineering best practices."},
    {"type": "title", "bbox": [0.114, 0.858, 0.317, 0.873], "angle": 0, "content": "1.2 Sampling Temperature"},
    {"type": "text", "bbox": [0.112, 0.884, 0.884, 0.913], "angle": 0, "content": "Sampling temperature is a hyperparameter of an LLM used in a temperature-based sampling process. It controls the randomness of the model's output at inference time [5-8]."},
    {"type": "aside_text", "bbox": [0.023, 0.273, 0.059, 0.701], "angle": 270, "content": "arXiv:2402.05201v3 [cs.CL] 2 Oct 2024"}
  ],
  [
    {"type": "header", "bbox": [0.219, 0.043, 0.778, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "text", "bbox": [0.111, 0.092, 0.888, 0.121], "angle": 0, "content": "During each step of an LLM's decoding process, the LLM uses the previous tokens to choose the next output token. The final layer of the LLM uses a softmax function to convert raw scores (logits) into probabilities."},
    {"type": "text", "bbox": [0.111, 0.127, 0.884, 0.157], "angle": 0, "content": "In greedy sampling, the model will always choose the most likely next token. However, for probabilistic sampling, the next token is selected from a probability distribution."},
    {"type": "text", "bbox": [0.111, 0.161, 0.886, 0.205], "angle": 0, "content": "Temperature sampling is a modification to the softmax function, which adjusts the resulting probability mass functions. In this modified softmax function, \\( v_{k} \\) is the \\( k \\)-th vocabulary token, \\( l_{k} \\) is the token's logit, and \\( \\tau \\) is a constant temperature. See equation 1."},
    {"type": "equation", "bbox": [0.431, 0.22, 0.887, 0.257], "angle": 0, "content": "\\[\n\\Pr \\left(v _ {k}\\right) = \\frac {e ^ {l _ {k} / \\tau}}{\\sum_ {i} e ^ {l _ {i} / \\tau}} \\tag {1}\n\\]"},
    {"type": "text", "bbox": [0.111, 0.267, 0.884, 0.31], "angle": 0, "content": "A lower temperature makes the output of the LLM more deterministic, thus favoring the most likely predictions. This conservativeness is captured by the model's tendency to produce more repetitive, focused, and less diverse output based on the patterns most commonly seen in the training data [5, 7, 8]."},
    {"type": "text", "bbox": [0.111, 0.315, 0.884, 0.358], "angle": 0, "content": "A higher temperature increases the randomness of the output, thus favoring more \"creative\" predictions. This creativity is captured by the model's willingness to explore more unconventional and less likely outputs. Higher temperatures can lead to novel text, diverse ideas, and creative solutions to problems [5, 7, 8]."},
    {"type": "text", "bbox": [0.111, 0.363, 0.884, 0.406], "angle": 0, "content": "In the context of problem-solving, temperature can be seen as a trade-off between exploring and exploiting possible solutions within the solution space. Lower temperatures tend to exploit more probable solutions; higher temperatures explore the solution space more broadly."},
    {"type": "title", "bbox": [0.114, 0.422, 0.398, 0.437], "angle": 0, "content": "1.3 Choosing a Sampling Temperature"},
    {"type": "text", "bbox": [0.111, 0.448, 0.884, 0.478], "angle": 0, "content": "Within the prompt-engineering community, there are a variety of opinions and best practices regarding the ideal sampling temperature for various problem-solving tasks [9, 10]."},
    {"type": "text", "bbox": [0.111, 0.482, 0.884, 0.526], "angle": 0, "content": "Low sampling temperatures are recommended for tasks requiring precision and factual accuracy, such as technical writing, code generation, or question-answering [11, 12]. However, higher temperatures are recommended for tasks requiring creativity, such as writing poetry, creating stories, or brainstorming."},
    {"type": "text", "bbox": [0.111, 0.53, 0.884, 0.573], "angle": 0, "content": "Higher temperatures also increase the probability of model hallucination. Hallucination is a phenomenon where an LLM produces statistically probable responses that are factually incorrect or nonsensical. As a result, optimal temperature selection is also a balance between creativity and hallucination [13]."},
    {"type": "text", "bbox": [0.111, 0.579, 0.884, 0.622], "angle": 0, "content": "Practical guidelines for choosing a sampling temperature for a specific task or problem domain are often vague or anecdotal. Prompt-engineering guides often provide hypothetical examples of optimal sampling temperatures for various tasks. However, they rarely cite any sources or provide empirical evidence.<sup>1</sup>"},
    {"type": "text", "bbox": [0.111, 0.627, 0.884, 0.657], "angle": 0, "content": "As a result, the current state of choosing the optimal sampling temperature for specific problems is largely based on guesswork, gut instinct, non-systematic experimentation, and iterative refinement.[2,3]"},
    {"type": "title", "bbox": [0.113, 0.675, 0.221, 0.691], "angle": 0, "content": "2 Methods"},
    {"type": "title", "bbox": [0.113, 0.707, 0.207, 0.721], "angle": 0, "content": "2.1 Models"},
    {"type": "text", "bbox": [0.111, 0.732, 0.884, 0.763], "angle": 0, "content": "The models used in this research project comprise nine widely-used foundational LLMs. To complement our analysis, we also conducted experiments using five prompts created using commonly used prompt-engineering techniques."},
    {"type": "text", "bbox": [0.111, 0.767, 0.885, 0.81], "angle": 0, "content": "First, we reviewed the prior literature to identify candidate LLMs commonly used for problem-solving tasks. We limited our candidate models to those that allowed the model's sampling temperature to be specified via their API [4, 16-18]. See Table 1 for a list of LLMs used in the experiment."},
    {"type": "page_footnote", "bbox": [0.112, 0.82, 0.887, 0.847], "angle": 0, "content": "<sup>1</sup>A few empirical studies exist that indicate sampling temperature does have an effect on LLM performance on some types of problem-solving tasks (e.g., code generation, engineering exams, etc.) [11, 12, 14]."},
    {"type": "page_footnote", "bbox": [0.112, 0.847, 0.884, 0.885], "angle": 0, "content": "For example, OpenAI's GPT-3.5 API allowed users to set the sampling temperature from 0.0 to 1.0 with a default of 0.7. GPT-4's API expanded this range from 0.0 to 2.0 with a default of 1.0. No explanation from OpenAI has been provided for these default values or their change from GPT-3.5 to GPT-4 [15]."},
    {"type": "page_footnote", "bbox": [0.112, 0.886, 0.884, 0.913], "angle": 0, "content": "<sup>3</sup>Even the GPT-4 Technical Report explains that the authors used their \"best-guess\" when choosing sampling temperatures while evaluating GPT-4 on various benchmarks. See Appendix A in the GPT-4 Technical Report [16]."},
    {"type": "list", "bbox": [0.112, 0.82, 0.887, 0.913], "angle": 0, "content": null},
    {"type": "page_number", "bbox": [0.494, 0.936, 0.505, 0.948], "angle": 0, "content": "2"}
  ],
  [
    {"type": "header", "bbox": [0.22, 0.043, 0.778, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "table", "bbox": [0.263, 0.089, 0.737, 0.231], "angle": 0, "content": "<table><tr><td>Name</td><td>Vendor</td><td>Released</td><td>License</td><td>Source</td></tr><tr><td>Claude 3 Opus</td><td>Anthropic</td><td>2024-03-04</td><td>Closed</td><td>[19,20]</td></tr><tr><td>Command R+</td><td>Cohere</td><td>2024-04-04</td><td>Open</td><td>[21,22]</td></tr><tr><td>Gemini 1.0 Pro</td><td>Google</td><td>2023-12-06</td><td>Closed</td><td>[23,24]</td></tr><tr><td>Gemini 1.5 Pro (Preview)</td><td>Google</td><td>2024-02-15</td><td>Closed</td><td>[25,26]</td></tr><tr><td>GPT-3.5 Turbo</td><td>OpenAI</td><td>2022-11-30</td><td>Closed</td><td>[17,27]</td></tr><tr><td>GPT-4</td><td>OpenAI</td><td>2023-03-14</td><td>Closed</td><td>[16,18]</td></tr><tr><td>Llama 2 7B Chat</td><td>Meta</td><td>2023-07-18</td><td>Open</td><td>[4,28]</td></tr><tr><td>Llama 2 70B Chat</td><td>Meta</td><td>2023-07-18</td><td>Open</td><td>[4,28]</td></tr><tr><td>Mistral Large</td><td>Mistral AI</td><td>2024-02-26</td><td>Closed</td><td>[29]</td></tr></table>"},
    {"type": "table_caption", "bbox": [0.368, 0.244, 0.627, 0.259], "angle": 0, "content": "Table 1: LLMs used in the experiment."},
    {"type": "text", "bbox": [0.111, 0.282, 0.884, 0.325], "angle": 0, "content": "Next, we reviewed the existing literature for commonly used prompt-engineering techniques. We limited our candidate prompts to those that could be performed in a single request-and-response cycle with one-shot in-context learning. We excluded multi-step agents, few-shot learning, and model fine-tuning."},
    {"type": "text", "bbox": [0.111, 0.331, 0.724, 0.345], "angle": 0, "content": "As a result, we selected five prompt-engineering techniques to construct our system prompts:"},
    {"type": "text", "bbox": [0.158, 0.358, 0.88, 0.386], "angle": 0, "content": "- Baseline - no prompt engineering; the LLM is instructed to return only a single multiple-choice answer as its output (e.g., 'Answer(\"C\")')."},
    {"type": "text", "bbox": [0.157, 0.391, 0.881, 0.419], "angle": 0, "content": "- Domain Expertise – the system prompt specifies that the LLM is an expert in the problem domain of the exam (e.g., “medicine”) or the topic of the problem (e.g., “anatomy”) [2]."},
    {"type": "text", "bbox": [0.158, 0.424, 0.881, 0.451], "angle": 0, "content": "- Self-recitation – the system prompt instructs the LLM to recite its own internal knowledge about the problem before answering the question [2, 30]."},
    {"type": "text", "bbox": [0.157, 0.456, 0.881, 0.484], "angle": 0, "content": "- Chain-of-Thought (CoT) – the system prompt instructs the LLM to “think step-by-step” to encourage it to reason through the problem procedurally [31, 32]."},
    {"type": "text", "bbox": [0.158, 0.489, 0.881, 0.516], "angle": 0, "content": "- Composite – the system prompt combines domain expertise, self-recitation, chain-of-thought, and adds self-criticism [33, 34]."},
    {"type": "list", "bbox": [0.157, 0.358, 0.881, 0.516], "angle": 0, "content": null},
    {"type": "text", "bbox": [0.111, 0.53, 0.884, 0.575], "angle": 0, "content": "Finally, we provided the LLM with a single example problem-and-solution pair for one-shot in-context learning. The example solution was adapted for each prompt based on the prompt-engineering technique used. For example, the CoT prompt included a chain of thought in its solution. See Figure 10 in the Appendix for a sample prompt."},
    {"type": "title", "bbox": [0.113, 0.595, 0.189, 0.608], "angle": 0, "content": "2.2 Data"},
    {"type": "text", "bbox": [0.111, 0.622, 0.884, 0.651], "angle": 0, "content": "The test dataset used in this research study consists of a series of Multiple-Choice Question-and-Answer (MCQA) exams derived from widely used LLM performance benchmarks."},
    {"type": "text", "bbox": [0.111, 0.657, 0.884, 0.699], "angle": 0, "content": "First, we reviewed the prior literature to identify benchmarks frequently used to evaluate LLMs. We limited our candidate benchmarks to those containing MCQA problems so that we could use correct-answer accuracy as our primary performance metric."},
    {"type": "text", "bbox": [0.111, 0.705, 0.882, 0.734], "angle": 0, "content": "Next, we selected a set of problems that covered a range of problem domains (e.g., math, science, law, etc.) and difficulty levels (e.g., secondary school, university, etc.) These problem sets can be seen in Table 2."},
    {"type": "text", "bbox": [0.111, 0.739, 0.882, 0.782], "angle": 0, "content": "Then, we converted the benchmark problems from their original data format into a standardized data structure using the JSON Lines (JSON-L) format [41]. Our standardized set of exams allowed us to use the exams interchangeably without modifying the code in the test harness. See Figure 11 in the Appendix for a sample of an MCQA problem."},
    {"type": "text", "bbox": [0.111, 0.788, 0.882, 0.83], "angle": 0, "content": "Finally, we created two MCQA exams of different sizes. We created a large exam with 1,000 questions by randomly sampling 100 problems from each of the ten problem sets. This 1,000-question (large) exam was used with GPT-3.5 to perform a detailed analysis of temperature across problem domains."},
    {"type": "text", "bbox": [0.111, 0.836, 0.882, 0.879], "angle": 0, "content": "Additionally, we created a smaller exam of 100 questions by randomly sampling ten questions from each of the ten domain-specific problem sets. This 100-question (small) exam was used for our high-level analysis of sampling temperature across all nine models, all five prompt-engineering techniques, and extended temperature range (0.0-1.6).<sup>4</sup>"},
    {"type": "page_footnote", "bbox": [0.134, 0.898, 0.61, 0.912], "angle": 0, "content": "<sup>4</sup>We used the smaller 100-question exam due to cost and runtime considerations."},
    {"type": "page_number", "bbox": [0.494, 0.936, 0.504, 0.948], "angle": 0, "content": "3"}
  ],
  [
    {"type": "header", "bbox": [0.22, 0.043, 0.776, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "table", "bbox": [0.159, 0.089, 0.842, 0.246], "angle": 0, "content": "<table><tr><td>Problem Set</td><td>Benchmark</td><td>Domain</td><td>Questions</td><td>License</td><td>Source</td></tr><tr><td>ARC Challenge Test</td><td>ARC</td><td>Science</td><td>1,173</td><td>CC BY-SA</td><td>[35]</td></tr><tr><td>AQUA-RAT</td><td>AGI Eval</td><td>Math</td><td>254</td><td>Apache v2.0</td><td>[36]</td></tr><tr><td>Hellaswag Val</td><td>Hellaswag</td><td>Common Sense Reasoning</td><td>10,042</td><td>MIT</td><td>[37]</td></tr><tr><td>LogiQA (English)</td><td>AGI Eval</td><td>Logic</td><td>651</td><td>GitHub</td><td>[36, 38]</td></tr><tr><td>LSAT-AR</td><td>AGI Eval</td><td>Law (Analytic Reasoning)</td><td>230</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>LSAT-LR</td><td>AGI Eval</td><td>Law (Logical Reasoning)</td><td>510</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>LSAT-RC</td><td>AGI Eval</td><td>Law (Reading Comprehension)</td><td>260</td><td>MIT</td><td>[36, 39]</td></tr><tr><td>MedMCQA Valid</td><td>MedMCQA</td><td>Medicine</td><td>6,150</td><td>MIT</td><td>[40]</td></tr><tr><td>SAT-English</td><td>AGI Eval</td><td>English</td><td>206</td><td>MIT</td><td>[36]</td></tr><tr><td>SAT-Math</td><td>AGI Eval</td><td>Math</td><td>220</td><td>MIT</td><td>[36]</td></tr></table>"},
    {"type": "table_caption", "bbox": [0.271, 0.257, 0.724, 0.272], "angle": 0, "content": "Table 2: Problem sets used to create the multi-domain MCQA exam."},
    {"type": "text", "bbox": [0.112, 0.28, 0.884, 0.308], "angle": 0, "content": "Note: The GitHub repository for LogiQA does not include a license file. However, both the paper and readme.md file states that \"The dataset is freely available.\""},
    {"type": "title", "bbox": [0.114, 0.335, 0.208, 0.349], "angle": 0, "content": "2.3 Process"},
    {"type": "text", "bbox": [0.111, 0.361, 0.885, 0.418], "angle": 0, "content": "Our experiment was designed to test the problem-solving performance of LLMs across ten models, five prompt-engineering techniques, ten problem domains, 100 problems within each problem domain, and all viable sampling temperatures. For each combination of model, prompt, exam, and temperature, we instructed the LLM to answer each question ten times so we could assess the average correct-answer accuracy."},
    {"type": "text", "bbox": [0.112, 0.423, 0.884, 0.453], "angle": 0, "content": "The full experiment setup can be seen in Figure 1 and Algorithm 1. However, due to cost and runtime considerations, we conducted a subset of the full experiment designed to capture the most valuable information as efficiently as possible."},
    {"type": "image", "bbox": [0.134, 0.497, 0.469, 0.677], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.161, 0.722, 0.436, 0.737], "angle": 0, "content": "Figure 1: Diagram of the full experiment."},
    {"type": "table", "bbox": [0.513, 0.482, 0.884, 0.758], "angle": 0, "content": "<table><tr><td colspan=\"2\">Algorithm 1 Full LLM Temperature Experiment</td></tr><tr><td>1:</td><td>for each model m in M do ▷ 10 models</td></tr><tr><td>2:</td><td>for each prompt p in P do ▷ 5 prompts</td></tr><tr><td>3:</td><td>for each exam e in E do ▷ 10 exams</td></tr><tr><td>4:</td><td>for each temperature τ in T do ▷ 16 temps</td></tr><tr><td>5:</td><td>for each problem q in Q do ▷ 100 prob</td></tr><tr><td>6:</td><td>for each attempt a in A do ▷ 10 attempts</td></tr><tr><td>7:</td><td>Create the prompt</td></tr><tr><td>8:</td><td>Answer the question</td></tr><tr><td>9:</td><td>Record the answer</td></tr><tr><td>10:</td><td>end for</td></tr><tr><td>11:</td><td>end for</td></tr><tr><td>12:</td><td>end for</td></tr><tr><td>13:</td><td>Save the results</td></tr><tr><td>14:</td><td>end for</td></tr><tr><td>15:</td><td>end for</td></tr><tr><td>16:</td><td>end for</td></tr><tr><td>17:</td><td>Process the results</td></tr><tr><td>18:</td><td>Analyze the results</td></tr></table>"},
    {"type": "text", "bbox": [0.112, 0.773, 0.884, 0.816], "angle": 0, "content": "First, we instructed GPT-3.5 to complete the 100-question (small) exam using the CoT prompt with temperatures ranging from 0.0 to 2.0 in increments of 0.1. This allowed us to determine the range of viable sampling temperatures to explore."},
    {"type": "text", "bbox": [0.112, 0.822, 0.884, 0.851], "angle": 0, "content": "Performance began to drop rapidly after a temperature of 1.0 until the generated text became incoherent at 1.6. As a result, we stopped the initial temperature sweep at 1.6 and limited the rest of our sweeps from 0.0 to 1.0."},
    {"type": "text", "bbox": [0.112, 0.856, 0.884, 0.885], "angle": 0, "content": "Next, we instructed the other eight LLMs to complete the 100-question (small) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize to other LLMs."},
    {"type": "page_footnote", "bbox": [0.133, 0.897, 0.841, 0.913], "angle": 0, "content": "For this experiment, we fixed all other sampling parameters (e.g., top-k, top-p, etc.) to isolate the effect of temperature."},
    {"type": "page_number", "bbox": [0.494, 0.936, 0.504, 0.947], "angle": 0, "content": "4"}
  ],
  [
    {"type": "header", "bbox": [0.219, 0.043, 0.778, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "text", "bbox": [0.111, 0.092, 0.884, 0.121], "angle": 0, "content": "Then, we instructed GPT-3.5 to complete the 100-question (small) exam using each of the five prompts over temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize over various prompt-engineering techniques."},
    {"type": "text", "bbox": [0.111, 0.127, 0.884, 0.157], "angle": 0, "content": "Finally, we instructed GPT-3.5 to complete the 1,000-question (large) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results were domain-specific or generalized across problem domains."},
    {"type": "title", "bbox": [0.113, 0.174, 0.21, 0.188], "angle": 0, "content": "2.4 Metrics"},
    {"type": "text", "bbox": [0.111, 0.202, 0.884, 0.245], "angle": 0, "content": "To test our hypothesis, we measured the LLM's correct-answer accuracy as our primary performance metric. For each combination of model, prompt, exam, and temperature, we calculated the accuracy as the number of correct answers from ten attempts at each problem. Then, we computed the average (mean) accuracy across all problems."},
    {"type": "text", "bbox": [0.111, 0.25, 0.884, 0.279], "angle": 0, "content": "To further support our findings, we also measured the similarity of the LLM's responses using a series of text-similarity metrics. These metrics are defined as follows:"},
    {"type": "text", "bbox": [0.157, 0.291, 0.809, 0.305], "angle": 0, "content": "- Jaccard similarity - the ratio of the intersection to the union of word sets in the output text [42]."},
    {"type": "text", "bbox": [0.157, 0.309, 0.825, 0.323], "angle": 0, "content": "- Bag-of-Words (BoW) similarity - comparison of the frequency of each word, ignoring order [43]."},
    {"type": "text", "bbox": [0.157, 0.327, 0.825, 0.342], "angle": 0, "content": "- TF-IDF similarity - comparison of word frequency weighted by inverse document frequency [44]."},
    {"type": "text", "bbox": [0.157, 0.345, 0.831, 0.36], "angle": 0, "content": "Levenshtein similarity - the number of edits needed to change one string of text into the other [45]."},
    {"type": "text", "bbox": [0.158, 0.363, 0.639, 0.377], "angle": 0, "content": "- BLEU score - comparison of similarity based on n-gram overlap [46]."},
    {"type": "text", "bbox": [0.157, 0.381, 0.774, 0.395], "angle": 0, "content": "- SBERT similarity - semantic similarity computed using Sentence-BERT embeddings [47]."},
    {"type": "list", "bbox": [0.157, 0.291, 0.831, 0.395], "angle": 0, "content": null},
    {"type": "title", "bbox": [0.113, 0.415, 0.216, 0.43], "angle": 0, "content": "2.5 Analysis"},
    {"type": "text", "bbox": [0.111, 0.441, 0.884, 0.485], "angle": 0, "content": "We used the Kruskal-Wallis test to evaluate the statistical significance of any changes in accuracy as a function of temperature [48]. We chose the Kruskal-Wallis test because the data (i.e., correct-answer accuracy by question) were not normally distributed. Rather, they were bimodally distributed with centers at 0.0 and 1.0."},
    {"type": "title", "bbox": [0.113, 0.507, 0.21, 0.523], "angle": 0, "content": "3 Results"},
    {"type": "title", "bbox": [0.113, 0.54, 0.34, 0.556], "angle": 0, "content": "3.1 Accuracy vs. Temperature"},
    {"type": "text", "bbox": [0.111, 0.567, 0.884, 0.61], "angle": 0, "content": "Our analysis revealed that the problem-solving performance of LLMs remained relatively stable across sampling temperatures from 0.0 to 1.0 for all LLMs, prompt-engineering techniques, and problem domains. Using GPT-3.5 with a CoT prompt on the 1,000-question exam from 0.0 to 1.0, the Kruskal-Wallis test yielded \\( H(10) = 10.439 \\), \\( p = 0.403 \\)."},
    {"type": "text", "bbox": [0.111, 0.615, 0.884, 0.658], "angle": 0, "content": "First, we analyzed the performance of GPT-3.5 using the CoT prompt on the 100-question exam. Accuracy remained stable over temperatures from 0.0 to 1.0. However, after a temperature of 1.0, the text rapidly became incoherent, and the accuracy began to drop until it reached zero around a temperature of 1.6. See Figure 2."},
    {"type": "image", "bbox": [0.322, 0.685, 0.677, 0.875], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.127, 0.893, 0.868, 0.91], "angle": 0, "content": "Figure 2: Accuracy by temperature from 0.0 to 1.6 for GPT-3.5 using the CoT prompt on the 100-question exam."},
    {"type": "page_number", "bbox": [0.494, 0.936, 0.504, 0.948], "angle": 0, "content": "5"}
  ],
  [
    {"type": "header", "bbox": [0.22, 0.043, 0.776, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "text", "bbox": [0.111, 0.092, 0.885, 0.134], "angle": 0, "content": "Second, we analyzed the performance of all nine LLMs using the CoT prompt on the 100-question exam. Accuracy also remained stable across all of the LLMs, except for Llama 2 7B. The performance of most LLMs showed a gradual (non-significant) decrease in performance as a function of temperature. See Figure 3 and Table 3."},
    {"type": "image", "bbox": [0.132, 0.162, 0.552, 0.384], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.112, 0.402, 0.484, 0.431], "angle": 0, "content": "Figure 3: Accuracy by temperature and model using the CoT prompt on the 100-question exam."},
    {"type": "table", "bbox": [0.606, 0.169, 0.868, 0.324], "angle": 0, "content": "<table><tr><td>Model</td><td>H(10)</td><td>p-value</td></tr><tr><td>Claude 3 Opus</td><td>1.735</td><td>0.998</td></tr><tr><td>Command R+</td><td>1.771</td><td>0.998</td></tr><tr><td>Gemini Pro 1.0</td><td>7.379</td><td>0.689</td></tr><tr><td>Gemini Pro 1.5</td><td>3.119</td><td>0.978</td></tr><tr><td>GPT-3.5 Turbo</td><td>2.042</td><td>0.996</td></tr><tr><td>GPT-4</td><td>3.789</td><td>0.956</td></tr><tr><td>Llama 2 70B</td><td>3.677</td><td>0.961</td></tr><tr><td>Llama 2 7B</td><td>17.086</td><td>0.072</td></tr><tr><td>Mistral Large</td><td>3.069</td><td>0.980</td></tr></table>"},
    {"type": "table_caption", "bbox": [0.586, 0.329, 0.884, 0.371], "angle": 0, "content": "Table 3: Kruskal-Wallis test results by model using the CoT prompt on the 100-question exam."},
    {"type": "text", "bbox": [0.111, 0.448, 0.885, 0.491], "angle": 0, "content": "Llama 2 7B did not perform better than statistically random guesses. Its poor performance was due to generating incorrectly formatted answers (39%) and correctly formatted but incorrect answers (36%). Its all-or-nothing behavior at a temperature of 0.0 versus more random behavior from 0.1 to 1.0 led to a much lower, yet still non-significant, p-value."},
    {"type": "text", "bbox": [0.111, 0.495, 0.884, 0.539], "angle": 0, "content": "Third, we analyzed the performance of GPT-3.5 using each of the five prompts on the 100-question exam. Accuracy remained stable for all temperatures across all prompt-engineering techniques. The CoT prompt outperformed the other four prompts. As a result, we used the CoT prompt for all single-prompt experiments. See Figure 4 and Table 4."},
    {"type": "image", "bbox": [0.131, 0.564, 0.471, 0.78], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.112, 0.797, 0.45, 0.826], "angle": 0, "content": "Figure 4: Accuracy by temperature and prompt for GPT-3.5 on the 100-question exam."},
    {"type": "table", "bbox": [0.564, 0.568, 0.835, 0.668], "angle": 0, "content": "<table><tr><td>Prompt</td><td>H(10)</td><td>p-value</td></tr><tr><td>Baseline</td><td>0.420</td><td>1.000</td></tr><tr><td>Domain Expert</td><td>0.548</td><td>1.000</td></tr><tr><td>Self-recitation</td><td>1.403</td><td>0.999</td></tr><tr><td>Chain of Thought</td><td>2.042</td><td>0.996</td></tr><tr><td>Composite</td><td>1.000</td><td>1.000</td></tr></table>"},
    {"type": "table_caption", "bbox": [0.51, 0.674, 0.84, 0.704], "angle": 0, "content": "Table 4: Kruskal-Wallis test results by prompt for GPT-3.5 on the 100-question exam."},
    {"type": "text", "bbox": [0.111, 0.843, 0.884, 0.887], "angle": 0, "content": "Finally, we analyzed the performance of GPT-3.5 using the CoT prompt on all ten exams. Accuracy remained stable for all temperatures across all problem domains based on visual analysis. However, the LSAT-AR and SAT-Math exams showed statistically significant differences in the Kruskal-Wallis p-values. See Figure 5 and Table 5."},
    {"type": "page_footnote", "bbox": [0.133, 0.898, 0.882, 0.912], "angle": 0, "content": "<sup>6</sup>We considered the ARC Challenge results to be non-significant since they were greater than the significance threshold of 0.05."},
    {"type": "page_number", "bbox": [0.494, 0.937, 0.504, 0.948], "angle": 0, "content": "6"}
  ],
  [
    {"type": "header", "bbox": [0.22, 0.043, 0.776, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "image", "bbox": [0.132, 0.101, 0.553, 0.325], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.112, 0.342, 0.497, 0.371], "angle": 0, "content": "Figure 5: Accuracy by temperature and exam for GPT-3.5 using the CoT prompt."},
    {"type": "table", "bbox": [0.606, 0.103, 0.868, 0.272], "angle": 0, "content": "<table><tr><td>Exam</td><td>H(10)</td><td>p-value</td></tr><tr><td>AQUA-RAT</td><td>10.320</td><td>0.413</td></tr><tr><td>ARC Challenge</td><td>16.390</td><td>0.089</td></tr><tr><td>Hellaswag</td><td>4.473</td><td>0.924</td></tr><tr><td>LogiQA</td><td>3.208</td><td>0.976</td></tr><tr><td>LSAT-AR</td><td>37.874</td><td>< 0.001</td></tr><tr><td>LSAT-LR</td><td>7.816</td><td>0.647</td></tr><tr><td>LSAT-RC</td><td>4.037</td><td>0.946</td></tr><tr><td>MedMCQA</td><td>2.334</td><td>0.993</td></tr><tr><td>SAT-English</td><td>3.937</td><td>0.950</td></tr><tr><td>SAT-Math</td><td>21.276</td><td>0.019</td></tr></table>"},
    {"type": "table_caption", "bbox": [0.586, 0.278, 0.882, 0.307], "angle": 0, "content": "Table 5: Kruskal-Wallis test results by exam for GPT-3.5 using the CoT prompt."},
    {"type": "text", "bbox": [0.111, 0.404, 0.884, 0.462], "angle": 0, "content": "We performed the Dunn-Bonferroni test on the LSAT-AR and SAT-Math results [49]. It revealed that the all-or-nothing behavior of responses generated at a temperature of 0.0 versus the more random responses from 0.1 to 1.0 caused the anomaly. The correct-answer accuracy for each individual problem varied significantly when evaluated pairwise across temperatures. However, the average accuracy for all problems remained similar across temperatures."},
    {"type": "title", "bbox": [0.113, 0.483, 0.383, 0.499], "angle": 0, "content": "3.2 Text Variability vs. Temperature"},
    {"type": "text", "bbox": [0.111, 0.512, 0.884, 0.556], "angle": 0, "content": "To further support our results, we analyzed text variability as a function of temperature. Our findings show a clear trend of decreasing text similarity (thus increasing text variability) as temperature increases. Text similarity decreases rapidly after a temperature of 1.0, corresponding to the rapid decrease in accuracy observed above \\(\\tau = 1.0\\). See Figure 6."},
    {"type": "text", "bbox": [0.111, 0.56, 0.884, 0.603], "angle": 0, "content": "These results are consistent with our understanding of sampling temperature, indicating that higher temperatures produce more widely varied outputs. Furthermore, these results hold regardless of the LLM, prompt-engineering technique, or problem domain. See Figures 7, 8, and 9."},
    {"type": "image", "bbox": [0.137, 0.631, 0.49, 0.833], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.112, 0.848, 0.51, 0.892], "angle": 0, "content": "Figure 6: Text similarity by temperature and metric for GPT-3.5 using CoT prompting on the 100-question exam over sampling temperatures from 0.0 to 1.6."},
    {"type": "image", "bbox": [0.55, 0.628, 0.873, 0.837], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.532, 0.851, 0.883, 0.894], "angle": 0, "content": "Figure 7: TF-IDF text similarity by temperature and model using the CoT prompt on the 100-question exam over sampling temperatures from 0.0 to 1.0"},
    {"type": "page_number", "bbox": [0.494, 0.936, 0.504, 0.947], "angle": 0, "content": "7"}
  ],
  [
    {"type": "header", "bbox": [0.22, 0.043, 0.776, 0.058], "angle": 0, "content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"},
    {"type": "image", "bbox": [0.131, 0.096, 0.476, 0.318], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.112, 0.334, 0.49, 0.378], "angle": 0, "content": "Figure 8: TF-IDF text similarity by temperature and prompt for GPT-3.5 on the 100-question exam over sampling temperatures from 0.0 to 1.0."},
    {"type": "image", "bbox": [0.527, 0.097, 0.875, 0.319], "angle": 0, "content": null},
    {"type": "image_caption", "bbox": [0.51, 0.334, 0.886, 0.376], "angle": 0, "content": "Figure 9: TF-IDF text similarity by temperature and exam for GPT-3.5 using the CoT prompt over sampling temperatures from 0.0 to 1.0"},
    {"type": "title", "bbox": [0.113, 0.414, 0.236, 0.43], "angle": 0, "content": "4 Discussion"},
    {"type": "title", "bbox": [0.113, 0.453, 0.254, 0.469], "angle": 0, "content": "4.1 Interpretation"},
    {"type": "text", "bbox": [0.112, 0.484, 0.886, 0.528], "angle": 0, "content": "Based on these results, changes in temperature from 0.0 to 1.0 do not have a statistically significant effect on the problem-solving performance of LLMs. These results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. However, there are a few potential exceptions to these general findings."},
    {"type": "text", "bbox": [0.112, 0.532, 0.884, 0.589], "angle": 0, "content": "Therefore, we recommend setting an LLM's sampling temperature to 0.0 for problem-solving tasks. This temperature maximizes reproducibility without compromising accuracy. In addition, it helps avoid the inevitable drop-off in performance that occurs beyond a temperature of 1.0. However, exceptions to this guidance should be taken into consideration."},
    {"type": "title", "bbox": [0.113, 0.618, 0.236, 0.633], "angle": 0,
| 1555 |
+
"content": "4.2 Limitations"
|
| 1556 |
+
},
|
| 1557 |
+
{
|
| 1558 |
+
"type": "text",
|
| 1559 |
+
"bbox": [
|
| 1560 |
+
0.112,
|
| 1561 |
+
0.649,
|
| 1562 |
+
0.714,
|
| 1563 |
+
0.664
|
| 1564 |
+
],
|
| 1565 |
+
"angle": 0,
|
| 1566 |
+
"content": "There were several limitations to our research study due to cost and runtime considerations:"
|
| 1567 |
+
},
|
| 1568 |
+
{
|
| 1569 |
+
"type": "text",
|
| 1570 |
+
"bbox": [
|
| 1571 |
+
0.112,
|
| 1572 |
+
0.67,
|
| 1573 |
+
0.884,
|
| 1574 |
+
0.699
|
| 1575 |
+
],
|
| 1576 |
+
"angle": 0,
|
| 1577 |
+
"content": "First, our study was limited to a subset of popular LLMs. As a result, our findings may not hold for other LLMs that were excluded from our experiment."
|
| 1578 |
+
},
|
| 1579 |
+
{
|
| 1580 |
+
"type": "text",
|
| 1581 |
+
"bbox": [
|
| 1582 |
+
0.112,
|
| 1583 |
+
0.704,
|
| 1584 |
+
0.884,
|
| 1585 |
+
0.748
|
| 1586 |
+
],
|
| 1587 |
+
"angle": 0,
|
| 1588 |
+
"content": "Second, we only explored a subset of prompt-engineering techniques using a single prompt-and-response cycle with one-shot in-context learning. As a result, there may be more complex prompts or agent architectures that better leverage sampling temperature for creativity in their problem-solving capabilities."
|
| 1589 |
+
},
|
| 1590 |
+
{
|
| 1591 |
+
"type": "text",
|
| 1592 |
+
"bbox": [
|
| 1593 |
+
0.112,
|
| 1594 |
+
0.753,
|
| 1595 |
+
0.883,
|
| 1596 |
+
0.782
|
| 1597 |
+
],
|
| 1598 |
+
"angle": 0,
|
| 1599 |
+
"content": "Third, our study was limited to a subset of problems, problem domains, and problem-solving tasks. As a result, our findings may not hold for larger data sets, different problem domains, or other types of problem-solving tasks."
|
| 1600 |
+
},
|
| 1601 |
+
{
|
| 1602 |
+
"type": "text",
|
| 1603 |
+
"bbox": [
|
| 1604 |
+
0.112,
|
| 1605 |
+
0.787,
|
| 1606 |
+
0.884,
|
| 1607 |
+
0.83
|
| 1608 |
+
],
|
| 1609 |
+
"angle": 0,
|
| 1610 |
+
"content": "Fourth, due to time and cost constraints, we limited our study to two test sets of 1,000 and 100 randomly selected questions from standard benchmarks. These limited sample sizes may have introduced bias into the results. Utilizing a larger and more diverse test set would enhance the statistical reliability of our findings."
|
| 1611 |
+
},
|
| 1612 |
+
{
|
| 1613 |
+
"type": "text",
|
| 1614 |
+
"bbox": [
|
| 1615 |
+
0.112,
|
| 1616 |
+
0.836,
|
| 1617 |
+
0.884,
|
| 1618 |
+
0.879
|
| 1619 |
+
],
|
| 1620 |
+
"angle": 0,
|
| 1621 |
+
"content": "Fifth, we had to limit the sampling temperature range we explored from 0.0 to 1.0 for all combinations of models, prompts, and exams, except for GPT-3.5 using CoT prompting on the 100-question exam. As a result, the temperature hyperparameter of other LLMs may operate differently at temperatures above 1.0."
|
| 1622 |
+
},
|
| 1623 |
+
{
|
| 1624 |
+
"type": "text",
|
| 1625 |
+
"bbox": [
|
| 1626 |
+
0.112,
|
| 1627 |
+
0.884,
|
| 1628 |
+
0.884,
|
| 1629 |
+
0.913
|
| 1630 |
+
],
|
| 1631 |
+
"angle": 0,
|
| 1632 |
+
"content": "Sixth, we fixed all other sampling parameters (e.g., top-p, top-k, repetition penalty, etc.) to isolate the effect of sampling temperature. As a result, there may be combinations of sampling parameters that result in different outcomes."
|
| 1633 |
+
},
|
| 1634 |
+
{
|
| 1635 |
+
"type": "page_number",
|
| 1636 |
+
"bbox": [
|
| 1637 |
+
0.494,
|
| 1638 |
+
0.936,
|
| 1639 |
+
0.505,
|
| 1640 |
+
0.948
|
| 1641 |
+
],
|
| 1642 |
+
"angle": 0,
|
| 1643 |
+
"content": "8"
|
| 1644 |
+
}
|
| 1645 |
+
],
|
| 1646 |
+
[
|
| 1647 |
+
{
|
| 1648 |
+
"type": "header",
|
| 1649 |
+
"bbox": [
|
| 1650 |
+
0.22,
|
| 1651 |
+
0.044,
|
| 1652 |
+
0.777,
|
| 1653 |
+
0.058
|
| 1654 |
+
],
|
| 1655 |
+
"angle": 0,
|
| 1656 |
+
"content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"
|
| 1657 |
+
},
|
| 1658 |
+
{
|
| 1659 |
+
"type": "text",
|
| 1660 |
+
"bbox": [
|
| 1661 |
+
0.112,
|
| 1662 |
+
0.092,
|
| 1663 |
+
0.885,
|
| 1664 |
+
0.135
|
| 1665 |
+
],
|
| 1666 |
+
"angle": 0,
|
| 1667 |
+
"content": "Finally, we could only explore a subset of the various combinations of models, prompts, exams, and temperatures. As a result, other combinations of LLMs, prompt-engineering techniques, and problem domains may exist where temperature plays a more important role in problem-solving performance."
|
| 1668 |
+
},
|
| 1669 |
+
{
|
| 1670 |
+
"type": "title",
|
| 1671 |
+
"bbox": [
|
| 1672 |
+
0.113,
|
| 1673 |
+
0.15,
|
| 1674 |
+
0.245,
|
| 1675 |
+
0.166
|
| 1676 |
+
],
|
| 1677 |
+
"angle": 0,
|
| 1678 |
+
"content": "4.3 Implications"
|
| 1679 |
+
},
|
| 1680 |
+
{
|
| 1681 |
+
"type": "text",
|
| 1682 |
+
"bbox": [
|
| 1683 |
+
0.112,
|
| 1684 |
+
0.176,
|
| 1685 |
+
0.884,
|
| 1686 |
+
0.204
|
| 1687 |
+
],
|
| 1688 |
+
"angle": 0,
|
| 1689 |
+
"content": "This research study provides empirical evidence that changes in sampling temperature in the range of 0.0 to 1.0 do not significantly impact the problem-solving capabilities of LLMs on MCQA problems."
|
| 1690 |
+
},
|
| 1691 |
+
{
|
| 1692 |
+
"type": "text",
|
| 1693 |
+
"bbox": [
|
| 1694 |
+
0.112,
|
| 1695 |
+
0.21,
|
| 1696 |
+
0.884,
|
| 1697 |
+
0.253
|
| 1698 |
+
],
|
| 1699 |
+
"angle": 0,
|
| 1700 |
+
"content": "Answering this question may save AI engineers significant time and resources evaluating various sampling temperatures for their LLM agents and applications. In addition, it may reduce unproductive debates in the prompt-engineering community regarding the optimal sampling temperatures for various problem-solving tasks."
|
| 1701 |
+
},
|
| 1702 |
+
{
|
| 1703 |
+
"type": "text",
|
| 1704 |
+
"bbox": [
|
| 1705 |
+
0.112,
|
| 1706 |
+
0.258,
|
| 1707 |
+
0.884,
|
| 1708 |
+
0.315
|
| 1709 |
+
],
|
| 1710 |
+
"angle": 0,
|
| 1711 |
+
"content": "This research also provides broader insights for AI researchers studying model hallucination and problem-solution state-space search with LLMs. Our results show that increasing LLM temperature up to 1.0 does not cause the LLM to hallucinate in ways that lead to incorrect MCQA solutions. In addition, higher temperatures do not appear to improve MCQA solution-space search in ways that lead to correct solutions more often than lower temperatures."
|
| 1712 |
+
},
|
| 1713 |
+
{
|
| 1714 |
+
"type": "title",
|
| 1715 |
+
"bbox": [
|
| 1716 |
+
0.113,
|
| 1717 |
+
0.331,
|
| 1718 |
+
0.272,
|
| 1719 |
+
0.345
|
| 1720 |
+
],
|
| 1721 |
+
"angle": 0,
|
| 1722 |
+
"content": "4.4 Future Research"
|
| 1723 |
+
},
|
| 1724 |
+
{
|
| 1725 |
+
"type": "text",
|
| 1726 |
+
"bbox": [
|
| 1727 |
+
0.112,
|
| 1728 |
+
0.356,
|
| 1729 |
+
0.646,
|
| 1730 |
+
0.372
|
| 1731 |
+
],
|
| 1732 |
+
"angle": 0,
|
| 1733 |
+
"content": "To improve upon this research, we propose the following follow-up experiments:"
|
| 1734 |
+
},
|
| 1735 |
+
{
|
| 1736 |
+
"type": "text",
|
| 1737 |
+
"bbox": [
|
| 1738 |
+
0.112,
|
| 1739 |
+
0.377,
|
| 1740 |
+
0.884,
|
| 1741 |
+
0.405
|
| 1742 |
+
],
|
| 1743 |
+
"angle": 0,
|
| 1744 |
+
"content": "First, we recommend conducting this experiment with additional LLMs. Other proprietary and open-source LLMs may utilize temperature in ways that benefit their specific models but did not benefit the LLMs we tested."
|
| 1745 |
+
},
|
| 1746 |
+
{
|
| 1747 |
+
"type": "text",
|
| 1748 |
+
"bbox": [
|
| 1749 |
+
0.112,
|
| 1750 |
+
0.411,
|
| 1751 |
+
0.884,
|
| 1752 |
+
0.453
|
| 1753 |
+
],
|
| 1754 |
+
"angle": 0,
|
| 1755 |
+
"content": "Second, we recommend expanding beyond MCQA problems to other types of problem-solving tasks whose correct answers are more open-ended. The limited effects of sampling temperature in our experiments may have simply resulted from the constraints imposed by the structure of MCQA problems."
|
| 1756 |
+
},
|
| 1757 |
+
{
|
| 1758 |
+
"type": "text",
|
| 1759 |
+
"bbox": [
|
| 1760 |
+
0.112,
|
| 1761 |
+
0.459,
|
| 1762 |
+
0.884,
|
| 1763 |
+
0.502
|
| 1764 |
+
],
|
| 1765 |
+
"angle": 0,
|
| 1766 |
+
"content": "Third, we recommend conducting additional experiments with more MCQA problems and problem domains. We recommend specifically targeting tasks and problem domains that require more creative solutions or lateral \"out-of-the-box\" thinking."
|
| 1767 |
+
},
|
| 1768 |
+
{
|
| 1769 |
+
"type": "text",
|
| 1770 |
+
"bbox": [
|
| 1771 |
+
0.112,
|
| 1772 |
+
0.508,
|
| 1773 |
+
0.884,
|
| 1774 |
+
0.562
|
| 1775 |
+
],
|
| 1776 |
+
"angle": 0,
|
| 1777 |
+
"content": "Fourth, we recommend extending the sampling temperature range until accuracy drops to zero for each LLM, prompt, and exam. However, it should be noted that as the generated text becomes more random, the number of tokens in each response increases significantly, leading to a considerable increase in runtime and cost to explore temperatures above 1.0."
|
| 1778 |
+
},
|
| 1779 |
+
{
|
| 1780 |
+
"type": "text",
|
| 1781 |
+
"bbox": [
|
| 1782 |
+
0.112,
|
| 1783 |
+
0.57,
|
| 1784 |
+
0.884,
|
| 1785 |
+
0.613
|
| 1786 |
+
],
|
| 1787 |
+
"angle": 0,
|
| 1788 |
+
"content": "Finally, we recommend a more in-depth error analysis to determine if any sub-types of problems within these problem domains benefit from changes to sampling temperature. It is possible that statistical noise or averaging may have hidden individual problems that were sensitive to changes in sampling temperature."
|
| 1789 |
+
},
|
| 1790 |
+
{
|
| 1791 |
+
"type": "title",
|
| 1792 |
+
"bbox": [
|
| 1793 |
+
0.113,
|
| 1794 |
+
0.631,
|
| 1795 |
+
0.242,
|
| 1796 |
+
0.647
|
| 1797 |
+
],
|
| 1798 |
+
"angle": 0,
|
| 1799 |
+
"content": "5 Conclusion"
|
| 1800 |
+
},
|
| 1801 |
+
{
|
| 1802 |
+
"type": "text",
|
| 1803 |
+
"bbox": [
|
| 1804 |
+
0.112,
|
| 1805 |
+
0.662,
|
| 1806 |
+
0.884,
|
| 1807 |
+
0.691
|
| 1808 |
+
],
|
| 1809 |
+
"angle": 0,
|
| 1810 |
+
"content": "This research study empirically investigated the effect of sampling temperature on the problem-solving performance of LLMs across multiple problem domains."
|
| 1811 |
+
},
|
| 1812 |
+
{
|
| 1813 |
+
"type": "text",
|
| 1814 |
+
"bbox": [
|
| 1815 |
+
0.112,
|
| 1816 |
+
0.697,
|
| 1817 |
+
0.884,
|
| 1818 |
+
0.739
|
| 1819 |
+
],
|
| 1820 |
+
"angle": 0,
|
| 1821 |
+
"content": "We demonstrated that changes in sampling temperature from 0.0 to 1.0 do not produce statistically significant differences in problem-solving performance on MCQA problems across multiple LLMs, prompt-engineering techniques, and problem domains."
|
| 1822 |
+
},
|
| 1823 |
+
{
|
| 1824 |
+
"type": "text",
|
| 1825 |
+
"bbox": [
|
| 1826 |
+
0.112,
|
| 1827 |
+
0.745,
|
| 1828 |
+
0.884,
|
| 1829 |
+
0.773
|
| 1830 |
+
],
|
| 1831 |
+
"angle": 0,
|
| 1832 |
+
"content": "These results have practical implications for AI engineers using LLMs to develop new AI systems. Additionally, they have theoretical implications for AI researchers studying model hallucination and solution-space search with LLMs."
|
| 1833 |
+
},
|
| 1834 |
+
{
|
| 1835 |
+
"type": "title",
|
| 1836 |
+
"bbox": [
|
| 1837 |
+
0.114,
|
| 1838 |
+
0.792,
|
| 1839 |
+
0.21,
|
| 1840 |
+
0.809
|
| 1841 |
+
],
|
| 1842 |
+
"angle": 0,
|
| 1843 |
+
"content": "References"
|
| 1844 |
+
},
|
| 1845 |
+
{
|
| 1846 |
+
"type": "ref_text",
|
| 1847 |
+
"bbox": [
|
| 1848 |
+
0.12,
|
| 1849 |
+
0.823,
|
| 1850 |
+
0.884,
|
| 1851 |
+
0.852
|
| 1852 |
+
],
|
| 1853 |
+
"angle": 0,
|
| 1854 |
+
"content": "[1] G. Mialon, R. Dessi, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Roziere, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, E. Grave, Y. LeCun, and T. Scialom, \"Augmented language models: a survey,\" arXiv, 2 2023."
|
| 1855 |
+
},
|
| 1856 |
+
{
|
| 1857 |
+
"type": "ref_text",
|
| 1858 |
+
"bbox": [
|
| 1859 |
+
0.121,
|
| 1860 |
+
0.853,
|
| 1861 |
+
0.884,
|
| 1862 |
+
0.883
|
| 1863 |
+
],
|
| 1864 |
+
"angle": 0,
|
| 1865 |
+
"content": "[2] J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, and D. C. Schmidt, “A prompt pattern catalog to enhance prompt engineering with ChatGPT,” arXiv, 2 2023."
|
| 1866 |
+
},
|
| 1867 |
+
{
|
| 1868 |
+
"type": "ref_text",
|
| 1869 |
+
"bbox": [
|
| 1870 |
+
0.122,
|
| 1871 |
+
0.884,
|
| 1872 |
+
0.885,
|
| 1873 |
+
0.911
|
| 1874 |
+
],
|
| 1875 |
+
"angle": 0,
|
| 1876 |
+
"content": "[3] OpenAI, \"OpenAI - API reference,\" 2023. [Online]. Available: https://platform.openai.com/docs/api-reference/chat/create"
|
| 1877 |
+
},
|
| 1878 |
+
{
|
| 1879 |
+
"type": "list",
|
| 1880 |
+
"bbox": [
|
| 1881 |
+
0.12,
|
| 1882 |
+
0.823,
|
| 1883 |
+
0.885,
|
| 1884 |
+
0.911
|
| 1885 |
+
],
|
| 1886 |
+
"angle": 0,
|
| 1887 |
+
"content": null
|
| 1888 |
+
},
|
| 1889 |
+
{
|
| 1890 |
+
"type": "page_number",
|
| 1891 |
+
"bbox": [
|
| 1892 |
+
0.494,
|
| 1893 |
+
0.936,
|
| 1894 |
+
0.505,
|
| 1895 |
+
0.948
|
| 1896 |
+
],
|
| 1897 |
+
"angle": 0,
|
| 1898 |
+
"content": "9"
|
| 1899 |
+
}
|
| 1900 |
+
],
|
| 1901 |
+
[
|
| 1902 |
+
{
|
| 1903 |
+
"type": "header",
|
| 1904 |
+
"bbox": [
|
| 1905 |
+
0.219,
|
| 1906 |
+
0.043,
|
| 1907 |
+
0.778,
|
| 1908 |
+
0.058
|
| 1909 |
+
],
|
| 1910 |
+
"angle": 0,
|
| 1911 |
+
"content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "ref_text",
|
| 1915 |
+
"bbox": [
|
| 1916 |
+
0.123,
|
| 1917 |
+
0.091,
|
| 1918 |
+
0.725,
|
| 1919 |
+
0.107
|
| 1920 |
+
],
|
| 1921 |
+
"angle": 0,
|
| 1922 |
+
"content": "[4] Llama-2-Team, \"Llama 2: Open foundation and fine-tuned chat models,\" arXiv, 7 2023."
|
| 1923 |
+
},
|
| 1924 |
+
{
|
| 1925 |
+
"type": "ref_text",
|
| 1926 |
+
"bbox": [
|
| 1927 |
+
0.123,
|
| 1928 |
+
0.109,
|
| 1929 |
+
0.882,
|
| 1930 |
+
0.137
|
| 1931 |
+
],
|
| 1932 |
+
"angle": 0,
|
| 1933 |
+
"content": "[5] C. Wang, S. X. Liu, and A. H. Awadallah, \"Cost-effective hyperparameter optimization for large language model generation inference,\" 2023."
|
| 1934 |
+
},
|
| 1935 |
+
{
|
| 1936 |
+
"type": "ref_text",
|
| 1937 |
+
"bbox": [
|
| 1938 |
+
0.123,
|
| 1939 |
+
0.139,
|
| 1940 |
+
0.882,
|
| 1941 |
+
0.168
|
| 1942 |
+
],
|
| 1943 |
+
"angle": 0,
|
| 1944 |
+
"content": "[6] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, “A learning algorithm for Boltzmann machines,” Cognitive Science, vol. 9, pp. 147–169, 1985."
|
| 1945 |
+
},
|
| 1946 |
+
{
|
| 1947 |
+
"type": "ref_text",
|
| 1948 |
+
"bbox": [
|
| 1949 |
+
0.123,
|
| 1950 |
+
0.17,
|
| 1951 |
+
0.794,
|
| 1952 |
+
0.183
|
| 1953 |
+
],
|
| 1954 |
+
"angle": 0,
|
| 1955 |
+
"content": "[7] G. Hinton, O. Vinyals, and J. Dean, \"Distilling the knowledge in a neural network,\" arXiv, 3 2015."
|
| 1956 |
+
},
|
| 1957 |
+
{
|
| 1958 |
+
"type": "ref_text",
|
| 1959 |
+
"bbox": [
|
| 1960 |
+
0.123,
|
| 1961 |
+
0.185,
|
| 1962 |
+
0.882,
|
| 1963 |
+
0.214
|
| 1964 |
+
],
|
| 1965 |
+
"angle": 0,
|
| 1966 |
+
"content": "[8] P.-H. Wang, S.-I. Hsieh, S.-C. Chang, Y.-T. Chen, J.-Y. Pan, W. Wei, and D.-C. Juan, “Contextual temperature for language modeling,” arXiv, 12 2020."
|
| 1967 |
+
},
|
| 1968 |
+
{
|
| 1969 |
+
"type": "ref_text",
|
| 1970 |
+
"bbox": [
|
| 1971 |
+
0.123,
|
| 1972 |
+
0.216,
|
| 1973 |
+
0.882,
|
| 1974 |
+
0.244
|
| 1975 |
+
],
|
| 1976 |
+
"angle": 0,
|
| 1977 |
+
"content": "[9] Microsoft, \"Completions - learn how to generate or manipulate text,\" 2023. [Online]. Available: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/completions"
|
| 1978 |
+
},
|
| 1979 |
+
{
|
| 1980 |
+
"type": "ref_text",
|
| 1981 |
+
"bbox": [
|
| 1982 |
+
0.117,
|
| 1983 |
+
0.246,
|
| 1984 |
+
0.884,
|
| 1985 |
+
0.275
|
| 1986 |
+
],
|
| 1987 |
+
"angle": 0,
|
| 1988 |
+
"content": "[10] J. Shieh, \"Best practices for prompt engineering with OpenAI API,\" 2024. [Online]. Available: https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api"
|
| 1989 |
+
},
|
| 1990 |
+
{
|
| 1991 |
+
"type": "ref_text",
|
| 1992 |
+
"bbox": [
|
| 1993 |
+
0.117,
|
| 1994 |
+
0.277,
|
| 1995 |
+
0.884,
|
| 1996 |
+
0.319
|
| 1997 |
+
],
|
| 1998 |
+
"angle": 0,
|
| 1999 |
+
"content": "[11] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, “A systematic evaluation of large language models of code,” in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. Association for Computing Machinery, 2022, pp. 1–10."
|
| 2000 |
+
},
|
| 2001 |
+
{
|
| 2002 |
+
"type": "ref_text",
|
| 2003 |
+
"bbox": [
|
| 2004 |
+
0.117,
|
| 2005 |
+
0.321,
|
| 2006 |
+
0.882,
|
| 2007 |
+
0.348
|
| 2008 |
+
],
|
| 2009 |
+
"angle": 0,
|
| 2010 |
+
"content": "[12] Y. Zhu, J. Li, G. Li, Y. Zhao, J. Li, Z. Jin, and H. Mei, \"Improving code generation by dynamic temperature sampling,\" arXiv, 9 2023."
|
| 2011 |
+
},
|
| 2012 |
+
{
|
| 2013 |
+
"type": "ref_text",
|
| 2014 |
+
"bbox": [
|
| 2015 |
+
0.117,
|
| 2016 |
+
0.35,
|
| 2017 |
+
0.884,
|
| 2018 |
+
0.378
|
| 2019 |
+
],
|
| 2020 |
+
"angle": 0,
|
| 2021 |
+
"content": "[13] M. Lee, “A mathematical investigation of hallucination and creativity in GPT models,” Mathematics, vol. 11, p. 2320, 5 2023."
|
| 2022 |
+
},
|
| 2023 |
+
{
|
| 2024 |
+
"type": "ref_text",
|
| 2025 |
+
"bbox": [
|
| 2026 |
+
0.117,
|
| 2027 |
+
0.38,
|
| 2028 |
+
0.884,
|
| 2029 |
+
0.422
|
| 2030 |
+
],
|
| 2031 |
+
"angle": 0,
|
| 2032 |
+
"content": "[14] V. Pursnani, Y. Sermet, and I. Demir, \"Performance of ChatGPT on the US fundamentals of engineering exam: Comprehensive assessment of proficiency and potential implications for professional environmental engineering practice,\" arXiv, 4 2023."
|
| 2033 |
+
},
|
| 2034 |
+
{
|
| 2035 |
+
"type": "ref_text",
|
| 2036 |
+
"bbox": [
|
| 2037 |
+
0.117,
|
| 2038 |
+
0.424,
|
| 2039 |
+
0.884,
|
| 2040 |
+
0.453
|
| 2041 |
+
],
|
| 2042 |
+
"angle": 0,
|
| 2043 |
+
"content": "[15] OpenAI, \"Api temperature change from 0 to 1 to 0 to 2 in 'playground',\" 2023. [Online]. Available: https://community.openuai.com/t/api-temperature-change-from-0-to-1-to-0-to-2-in-playground/217755"
|
| 2044 |
+
},
|
| 2045 |
+
{
|
| 2046 |
+
"type": "ref_text",
|
| 2047 |
+
"bbox": [
|
| 2048 |
+
0.117,
|
| 2049 |
+
0.455,
|
| 2050 |
+
0.808,
|
| 2051 |
+
0.469
|
| 2052 |
+
],
|
| 2053 |
+
"angle": 0,
|
| 2054 |
+
"content": "[16] ——, “GPT-4 technical report,” arXiv, 3 2023. [Online]. Available: https://arxiv.org/abs/2303.08774"
|
| 2055 |
+
},
|
| 2056 |
+
{
|
| 2057 |
+
"type": "ref_text",
|
| 2058 |
+
"bbox": [
|
| 2059 |
+
0.117,
|
| 2060 |
+
0.471,
|
| 2061 |
+
0.761,
|
| 2062 |
+
0.486
|
| 2063 |
+
],
|
| 2064 |
+
"angle": 0,
|
| 2065 |
+
"content": "[17] ——, “Introducing ChatGPT,” 11 2022. [Online]. Available: https://openai.com/blog/chatgpt"
|
| 2066 |
+
},
|
| 2067 |
+
{
|
| 2068 |
+
"type": "ref_text",
|
| 2069 |
+
"bbox": [
|
| 2070 |
+
0.117,
|
| 2071 |
+
0.488,
|
| 2072 |
+
0.665,
|
| 2073 |
+
0.502
|
| 2074 |
+
],
|
| 2075 |
+
"angle": 0,
|
| 2076 |
+
"content": "[18] ——, “GPT-4,” 3 2023. [Online]. Available: https://openai.com/research/gpt-4"
|
| 2077 |
+
},
|
| 2078 |
+
{
|
| 2079 |
+
"type": "ref_text",
|
| 2080 |
+
"bbox": [
|
| 2081 |
+
0.117,
|
| 2082 |
+
0.504,
|
| 2083 |
+
0.884,
|
| 2084 |
+
0.532
|
| 2085 |
+
],
|
| 2086 |
+
"angle": 0,
|
| 2087 |
+
"content": "[19] Anthropic, “Introducing the next generation of claude anthropic,” 2024. [Online]. Available: https://www.anthropic.com/news/claude-3-family"
|
| 2088 |
+
},
|
| 2089 |
+
{
|
| 2090 |
+
"type": "ref_text",
|
| 2091 |
+
"bbox": [
|
| 2092 |
+
0.117,
|
| 2093 |
+
0.534,
|
| 2094 |
+
0.884,
|
| 2095 |
+
0.562
|
| 2096 |
+
],
|
| 2097 |
+
"angle": 0,
|
| 2098 |
+
"content": "[20] ——, “The claude 3 model family: Opus, sonnet, haiku,” 2024. [Online]. Available: https://www.anthropic.com/claude-3-model-card"
|
| 2099 |
+
},
|
| 2100 |
+
{
|
| 2101 |
+
"type": "ref_text",
|
| 2102 |
+
"bbox": [
|
| 2103 |
+
0.117,
|
| 2104 |
+
0.564,
|
| 2105 |
+
0.794,
|
| 2106 |
+
0.579
|
| 2107 |
+
],
|
| 2108 |
+
"angle": 0,
|
| 2109 |
+
"content": "[21] Cohere, \"Command r+,\" 2024. [Online]. Available: https://docs.cohere.com/docs/command-r-plus"
|
| 2110 |
+
},
|
| 2111 |
+
{
|
| 2112 |
+
"type": "ref_text",
|
| 2113 |
+
"bbox": [
|
| 2114 |
+
0.117,
|
| 2115 |
+
0.581,
|
| 2116 |
+
0.884,
|
| 2117 |
+
0.609
|
| 2118 |
+
],
|
| 2119 |
+
"angle": 0,
|
| 2120 |
+
"content": "[22] ——, \"Model card for c4ai command r+,\" 2024. [Online]. Available: https://huggingface.co/CohereForAI/c4ai-command-r-plus"
|
| 2121 |
+
},
|
| 2122 |
+
{
|
| 2123 |
+
"type": "ref_text",
|
| 2124 |
+
"bbox": [
|
| 2125 |
+
0.117,
|
| 2126 |
+
0.611,
|
| 2127 |
+
0.884,
|
| 2128 |
+
0.639
|
| 2129 |
+
],
|
| 2130 |
+
"angle": 0,
|
| 2131 |
+
"content": "[23] S. Pichai and D. Hassabis, \"Introducing gemini: Google's most capable ai model yet,\" 2023. [Online]. Available: https://blog.google/technology/ai/google-gemini-ai/"
|
| 2132 |
+
},
|
| 2133 |
+
{
|
| 2134 |
+
"type": "ref_text",
|
| 2135 |
+
"bbox": [
|
| 2136 |
+
0.117,
|
| 2137 |
+
0.641,
|
| 2138 |
+
0.74,
|
| 2139 |
+
0.655
|
| 2140 |
+
],
|
| 2141 |
+
"angle": 0,
|
| 2142 |
+
"content": "[24] Gemini-Team, \"Gemini: A family of highly capable multimodal models,\" arXiv, 12 2023."
|
| 2143 |
+
},
|
| 2144 |
+
{
|
| 2145 |
+
"type": "ref_text",
|
| 2146 |
+
"bbox": [
|
| 2147 |
+
0.117,
|
| 2148 |
+
0.657,
|
| 2149 |
+
0.884,
|
| 2150 |
+
0.686
|
| 2151 |
+
],
|
| 2152 |
+
"angle": 0,
|
| 2153 |
+
"content": "[25] S. Pichai and D. Hassabis, \"Introducing gemini 1.5, google's next-generation ai model,\" 2024. [Online]. Available: https://blog.google/technology/ai/google-gemini- next-generation-model-february-2024/"
|
| 2154 |
+
},
|
| 2155 |
+
{
|
| 2156 |
+
"type": "ref_text",
|
| 2157 |
+
"bbox": [
|
| 2158 |
+
0.117,
|
| 2159 |
+
0.688,
|
| 2160 |
+
0.884,
|
| 2161 |
+
0.716
|
| 2162 |
+
],
|
| 2163 |
+
"angle": 0,
|
| 2164 |
+
"content": "[26] Gemini-Team, \"Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context,\" 2024. [Online]. Available: https://arxiv.org/abs/2403.05530"
|
| 2165 |
+
},
|
| 2166 |
+
{
|
| 2167 |
+
"type": "ref_text",
|
| 2168 |
+
"bbox": [
|
| 2169 |
+
0.117,
|
| 2170 |
+
0.718,
|
| 2171 |
+
0.861,
|
| 2172 |
+
0.732
|
| 2173 |
+
],
|
| 2174 |
+
"angle": 0,
|
| 2175 |
+
"content": "[27] OpenAI, \"Models - openerai api.\" [Online]. Available: https://platform.openerai.com/docs/models/gpt-3-5-turbo"
|
| 2176 |
+
},
|
| 2177 |
+
{
|
| 2178 |
+
"type": "ref_text",
|
| 2179 |
+
"bbox": [
|
| 2180 |
+
0.117,
|
| 2181 |
+
0.734,
|
| 2182 |
+
0.884,
|
| 2183 |
+
0.761
|
| 2184 |
+
],
|
| 2185 |
+
"angle": 0,
|
| 2186 |
+
"content": "[28] Meta, “Meta and microsoft introduce the next generation of llama | meta,” 2023. [Online]. Available: https://about.meta.com/news/2023/07/llama-2/"
|
| 2187 |
+
},
|
| 2188 |
+
{
|
| 2189 |
+
"type": "ref_text",
|
| 2190 |
+
"bbox": [
|
| 2191 |
+
0.117,
|
| 2192 |
+
0.764,
|
| 2193 |
+
0.884,
|
| 2194 |
+
0.792
|
| 2195 |
+
],
|
| 2196 |
+
"angle": 0,
|
| 2197 |
+
"content": "[29] Mistral-AI-Team, “Au large | mistral ai | frontier ai in your hands,” 2024. [Online]. Available: https://mistral.ai/news/mistral-large/"
|
| 2198 |
+
},
|
| 2199 |
+
{
|
| 2200 |
+
"type": "ref_text",
|
| 2201 |
+
"bbox": [
|
| 2202 |
+
0.117,
|
| 2203 |
+
0.794,
|
| 2204 |
+
0.884,
|
| 2205 |
+
0.822
|
| 2206 |
+
],
|
| 2207 |
+
"angle": 0,
|
| 2208 |
+
"content": "[30] Z. Sun, X. Wang, Y. Tay, Y. Yang, and D. Zhou, \"Recitation-augmented language models,\" in The Eleventh International Conference on Learning Representations, 10 2023."
|
| 2209 |
+
},
|
| 2210 |
+
{
|
| 2211 |
+
"type": "ref_text",
|
| 2212 |
+
"bbox": [
|
| 2213 |
+
0.117,
|
| 2214 |
+
0.824,
|
| 2215 |
+
0.884,
|
| 2216 |
+
0.852
|
| 2217 |
+
],
|
| 2218 |
+
"angle": 0,
|
| 2219 |
+
"content": "[31] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, \"Large language models are zero-shot reasoners,\" in Advances in Neural Information Processing Systems, vol. 35, 5 2022, pp. 22 199-22 213."
|
| 2220 |
+
},
|
| 2221 |
+
{
|
| 2222 |
+
"type": "ref_text",
|
| 2223 |
+
"bbox": [
|
| 2224 |
+
0.117,
|
| 2225 |
+
0.854,
|
| 2226 |
+
0.884,
|
| 2227 |
+
0.883
|
| 2228 |
+
],
|
| 2229 |
+
"angle": 0,
|
| 2230 |
+
"content": "[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, \"Chain-of-thought prompting elicits reasoning in large language models,\" arXiv, 1 2022."
|
| 2231 |
+
},
|
| 2232 |
+
{
|
| 2233 |
+
"type": "ref_text",
|
| 2234 |
+
"bbox": [
|
| 2235 |
+
0.117,
|
| 2236 |
+
0.885,
|
| 2237 |
+
0.884,
|
| 2238 |
+
0.911
|
| 2239 |
+
],
|
| 2240 |
+
"angle": 0,
|
| 2241 |
+
"content": "[33] S. Huo, N. Arabzadeh, and C. L. A. Clarke, “Retrieving supporting evidence for generative question answering,” arXiv, 9 2023."
|
| 2242 |
+
},
|
| 2243 |
+
{
|
| 2244 |
+
"type": "list",
|
| 2245 |
+
"bbox": [
|
| 2246 |
+
0.117,
|
| 2247 |
+
0.091,
|
| 2248 |
+
0.884,
|
| 2249 |
+
0.911
|
| 2250 |
+
],
|
| 2251 |
+
"angle": 0,
|
| 2252 |
+
"content": null
|
| 2253 |
+
},
|
| 2254 |
+
{
|
| 2255 |
+
"type": "page_number",
|
| 2256 |
+
"bbox": [
|
| 2257 |
+
0.49,
|
| 2258 |
+
0.936,
|
| 2259 |
+
0.508,
|
| 2260 |
+
0.948
|
| 2261 |
+
],
|
| 2262 |
+
"angle": 0,
|
| 2263 |
+
"content": "10"
|
| 2264 |
+
}
|
| 2265 |
+
],
|
| 2266 |
+
[
|
| 2267 |
+
{
|
| 2268 |
+
"type": "header",
|
| 2269 |
+
"bbox": [
|
| 2270 |
+
0.22,
|
| 2271 |
+
0.043,
|
| 2272 |
+
0.776,
|
| 2273 |
+
0.058
|
| 2274 |
+
],
|
| 2275 |
+
"angle": 0,
|
| 2276 |
+
"content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"
|
| 2277 |
+
},
|
| 2278 |
+
{
|
| 2279 |
+
"type": "ref_text",
|
| 2280 |
+
"bbox": [
|
| 2281 |
+
0.115,
|
| 2282 |
+
0.091,
|
| 2283 |
+
0.884,
|
| 2284 |
+
0.12
|
| 2285 |
+
],
|
| 2286 |
+
"angle": 0,
|
| 2287 |
+
"content": "[34] R. Wang, H. Wang, F. Mi, Y. Chen, R. Xu, and K.-F. Wong, \"Self-critique prompting with large language models for inductive instructions,\" arXiv, 5 2023."
|
| 2288 |
+
},
|
| 2289 |
+
{
|
| 2290 |
+
"type": "ref_text",
|
| 2291 |
+
"bbox": [
|
| 2292 |
+
0.115,
|
| 2293 |
+
0.122,
|
| 2294 |
+
0.885,
|
| 2295 |
+
0.151
|
| 2296 |
+
],
|
| 2297 |
+
"angle": 0,
|
| 2298 |
+
"content": "[35] P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord, \"Think you have solved question answering? Try ARC, the AI2 reasoning challenge,\" ArXiv, 3 2018."
|
| 2299 |
+
},
|
| 2300 |
+
{
|
| 2301 |
+
"type": "ref_text",
|
| 2302 |
+
"bbox": [
|
| 2303 |
+
0.116,
|
| 2304 |
+
0.152,
|
| 2305 |
+
0.884,
|
| 2306 |
+
0.181
|
| 2307 |
+
],
|
| 2308 |
+
"angle": 0,
|
| 2309 |
+
"content": "[36] W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan, \"AGIEval: A human-centric benchmark for evaluating foundation models,\" ArXiv, 4 2023."
|
| 2310 |
+
},
|
| 2311 |
+
{
|
| 2312 |
+
"type": "ref_text",
|
| 2313 |
+
"bbox": [
|
| 2314 |
+
0.116,
|
| 2315 |
+
0.182,
|
| 2316 |
+
0.885,
|
| 2317 |
+
0.211
|
| 2318 |
+
],
|
| 2319 |
+
"angle": 0,
|
| 2320 |
+
"content": "[37] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, \"HellaSwag: Can a machine really finish your sentence?\" in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019."
|
| 2321 |
+
},
|
| 2322 |
+
{
|
| 2323 |
+
"type": "ref_text",
|
| 2324 |
+
"bbox": [
|
| 2325 |
+
0.116,
|
| 2326 |
+
0.212,
|
| 2327 |
+
0.885,
|
| 2328 |
+
0.242
|
| 2329 |
+
],
|
| 2330 |
+
"angle": 0,
|
| 2331 |
+
"content": "[38] J. Liu, L. Cui, H. Liu, D. Huang, Y. Wang, and Y. Zhang, \"Logiqa: A challenge dataset for machine reading comprehension with logical reasoning,\" in International Joint Conference on Artificial Intelligence, 2020."
|
| 2332 |
+
},
|
| 2333 |
+
{
|
| 2334 |
+
"type": "ref_text",
|
| 2335 |
+
"bbox": [
|
| 2336 |
+
0.116,
|
| 2337 |
+
0.243,
|
| 2338 |
+
0.885,
|
| 2339 |
+
0.283
|
| 2340 |
+
],
|
| 2341 |
+
"angle": 0,
|
| 2342 |
+
"content": "[39] S. Wang, Z. Liu, W. Zhong, M. Zhou, Z. Wei, Z. Chen, and N. Duan, \"From lsat: The progress and challenges of complex reasoning,\" IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 30, pp. 2201-2216, 8 2021."
|
| 2343 |
+
},
|
| 2344 |
+
{
|
| 2345 |
+
"type": "ref_text",
|
| 2346 |
+
"bbox": [
|
| 2347 |
+
0.116,
|
| 2348 |
+
0.286,
|
| 2349 |
+
0.885,
|
| 2350 |
+
0.328
|
| 2351 |
+
],
|
| 2352 |
+
"angle": 0,
|
| 2353 |
+
"content": "[40] A. Pal, L. K. Umapathi, and M. Sankarasubbu, \"MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering,\" in Proceedings of the Conference on Health, Inference, and Learning. PMLR, 2022, pp. 248-260."
|
| 2354 |
+
},
|
| 2355 |
+
{
|
| 2356 |
+
"type": "ref_text",
|
| 2357 |
+
"bbox": [
|
| 2358 |
+
0.116,
|
| 2359 |
+
0.33,
|
| 2360 |
+
0.615,
|
| 2361 |
+
0.346
|
| 2362 |
+
],
|
| 2363 |
+
"angle": 0,
|
| 2364 |
+
"content": "[41] I. Ward, \"JSON lines,\" 2014. [Online]. Available: https://jsonlines.org/"
|
| 2365 |
+
},
|
| 2366 |
+
{
|
| 2367 |
+
"type": "ref_text",
|
| 2368 |
+
"bbox": [
|
| 2369 |
+
0.116,
|
| 2370 |
+
0.347,
|
| 2371 |
+
0.812,
|
| 2372 |
+
0.362
|
| 2373 |
+
],
|
| 2374 |
+
"angle": 0,
|
| 2375 |
+
"content": "[42] P. Jaccard, “The distribution of flora in the alpine zone,” New Phytologist, vol. 11, pp. 37–50, 2 1912."
|
| 2376 |
+
},
|
| 2377 |
+
{
|
| 2378 |
+
"type": "ref_text",
|
| 2379 |
+
"bbox": [
|
| 2380 |
+
0.116,
|
| 2381 |
+
0.363,
|
| 2382 |
+
0.657,
|
| 2383 |
+
0.377
|
| 2384 |
+
],
|
| 2385 |
+
"angle": 0,
|
| 2386 |
+
"content": "[43] Z. S. Harris, \"Distributional structure,\" WORD, vol. 10, pp. 146-162, 8 1954."
|
| 2387 |
+
},
|
| 2388 |
+
{
|
| 2389 |
+
"type": "ref_text",
|
| 2390 |
+
"bbox": [
|
| 2391 |
+
0.116,
|
| 2392 |
+
0.379,
|
| 2393 |
+
0.885,
|
| 2394 |
+
0.407
|
| 2395 |
+
],
|
| 2396 |
+
"angle": 0,
|
| 2397 |
+
"content": "[44] K. S. Jones, “A statistical interpretation of term specificity and its application in retrieval,” Journal of Documentation, vol. 28, pp. 11–21, 1 1972."
|
| 2398 |
+
},
|
| 2399 |
+
{
|
| 2400 |
+
"type": "ref_text",
|
| 2401 |
+
"bbox": [
|
| 2402 |
+
0.116,
|
| 2403 |
+
0.409,
|
| 2404 |
+
0.885,
|
| 2405 |
+
0.438
|
| 2406 |
+
],
|
| 2407 |
+
"angle": 0,
|
| 2408 |
+
"content": "[45] V. Levenshtein, \"Binary codes capable of correcting deletions, insertions and reversals,\" Soviet Physics Doklady, vol. 10, pp. 707-710, 1966."
|
| 2409 |
+
},
|
| 2410 |
+
{
|
| 2411 |
+
"type": "ref_text",
|
| 2412 |
+
"bbox": [
|
| 2413 |
+
0.116,
|
| 2414 |
+
0.44,
|
| 2415 |
+
0.885,
|
| 2416 |
+
0.468
|
| 2417 |
+
],
|
| 2418 |
+
"angle": 0,
|
| 2419 |
+
"content": "[46] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “BLEU,” in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL '02. Association for Computational Linguistics, 2001, p. 311."
|
| 2420 |
+
},
|
| 2421 |
+
{
|
| 2422 |
+
"type": "ref_text",
|
| 2423 |
+
"bbox": [
|
| 2424 |
+
0.116,
|
| 2425 |
+
0.47,
|
| 2426 |
+
0.885,
|
| 2427 |
+
0.512
|
| 2428 |
+
],
|
| 2429 |
+
"angle": 0,
|
| 2430 |
+
"content": "[47] N. Reimers and I. Gurevych, \"Sentence-BERT: Sentence embeddings using Siamese BERT-networks,\" in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 8 2019."
|
| 2431 |
+
},
|
| 2432 |
+
{
|
| 2433 |
+
"type": "ref_text",
|
| 2434 |
+
"bbox": [
|
| 2435 |
+
0.116,
|
| 2436 |
+
0.513,
|
| 2437 |
+
0.885,
|
| 2438 |
+
0.555
|
| 2439 |
+
],
|
| 2440 |
+
"angle": 0,
|
| 2441 |
+
"content": "[48] W. H. Kruskal and W. A. Wallis, \"Use of ranks in one-criterion variance analysis,\" Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/01621459.1952.10483441"
|
| 2442 |
+
},
|
| 2443 |
+
{
|
| 2444 |
+
"type": "ref_text",
|
| 2445 |
+
"bbox": [
|
| 2446 |
+
0.116,
|
| 2447 |
+
0.557,
|
| 2448 |
+
0.885,
|
| 2449 |
+
0.585
|
| 2450 |
+
],
|
| 2451 |
+
"angle": 0,
|
| 2452 |
+
"content": "[49] O. J. Dunn, “Multiple comparisons using rank sums,” Technometrics, vol. 6, no. 3, pp. 241–252, 1964. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/00401706.1964.10490181"
|
| 2453 |
+
},
|
| 2454 |
+
{
|
| 2455 |
+
"type": "list",
|
| 2456 |
+
"bbox": [
|
| 2457 |
+
0.115,
|
| 2458 |
+
0.091,
|
| 2459 |
+
0.885,
|
| 2460 |
+
0.585
|
| 2461 |
+
],
|
| 2462 |
+
"angle": 0,
|
| 2463 |
+
"content": null
|
| 2464 |
+
},
|
| 2465 |
+
{
|
| 2466 |
+
"type": "page_number",
|
| 2467 |
+
"bbox": [
|
| 2468 |
+
0.49,
|
| 2469 |
+
0.936,
|
| 2470 |
+
0.507,
|
| 2471 |
+
0.948
|
| 2472 |
+
],
|
| 2473 |
+
"angle": 0,
|
| 2474 |
+
"content": "11"
|
| 2475 |
+
}
|
| 2476 |
+
],
|
| 2477 |
+
[
|
| 2478 |
+
{
|
| 2479 |
+
"type": "header",
|
| 2480 |
+
"bbox": [
|
| 2481 |
+
0.22,
|
| 2482 |
+
0.044,
|
| 2483 |
+
0.776,
|
| 2484 |
+
0.058
|
| 2485 |
+
],
|
| 2486 |
+
"angle": 0,
|
| 2487 |
+
"content": "The Effect of Sampling Temperature on Problem Solving in Large Language Models"
|
| 2488 |
+
},
|
| 2489 |
+
{
|
| 2490 |
+
"type": "title",
|
| 2491 |
+
"bbox": [
|
| 2492 |
+
0.114,
|
| 2493 |
+
0.09,
|
| 2494 |
+
0.234,
|
| 2495 |
+
0.109
|
| 2496 |
+
],
|
| 2497 |
+
"angle": 0,
|
| 2498 |
+
"content": "A Appendix"
|
| 2499 |
+
},
|
| 2500 |
+
{
|
| 2501 |
+
"type": "code",
|
| 2502 |
+
"bbox": [
|
| 2503 |
+
0.114,
|
| 2504 |
+
0.128,
|
| 2505 |
+
0.87,
|
| 2506 |
+
0.511
|
| 2507 |
+
],
|
| 2508 |
+
"angle": 0,
|
| 2509 |
+
"content": "[System Prompt] \nYou are an expert in {\\{expertise\\}}. \nYour task is to answer the following multiple-choice questions.. \nFirst, you should recite all of the relevant knowledge you have about the question and each option. \nNext, you should think step-by-step through the problem to ensure you have the correct answer. \nThen, you should critically evaluate your thoughts to identify any flaws in your facts, logic, and reasoning. Finally, you MUST answer the question using the following format 'Action: Answer(\"[choice]\") The parameter [choice] is the letter or number of the answer you want to select (e.g. \"A\", \"B\", \"C\", or \"D\") For example, 'Answer(\"C')' will select choice \"C\" as the best answer. The answer MUST ALWAYS be one of the available choices; it CANNOT be \"None of the Above\". If you think the answer is \"none of the above\", then you MUST select the most likely answer. \n[Example Problem] \nQuestion: What is the capital of the state where Johns Hopkins University is located? \nChoices: A: Baltimore B: Annapolis C: Des Moines D: Las Vegas \n[Example Solution] \nKnowledge: Johns Hopkins University is located in Baltimore, Maryland. A: Baltimore is a city located in the State of Maryland, but it is not the capital of Maryland. B: Annapolis is the capital of the State of Maryland. C: Des Moines is a city located in the State of Iowa and is the capital of Iowa. D: Las Vegas is located in the State of Nevada, but it is not the capital of Nevada. Thought: Johns Hopkins University is located in Baltimore. Baltimore is a city located in the state of Maryland. The capital of Maryland is Baltimore. Therefore, the capital of the state where Johns Hopkins University is located is Baltimore. The answer is A: Baltimore. Criticism: You are correct that Johns Hopkins is located in the State of Maryland. However, the capital of Maryland is Annapolis, not Baltimore. So, the correct answer is actually B: Annapolis. Action: Answer(\"B\")"
|
| 2510 |
+
},
|
| 2511 |
+
{
|
| 2512 |
+
"type": "code_caption",
|
| 2513 |
+
"bbox": [
|
| 2514 |
+
0.138,
|
| 2515 |
+
0.533,
|
| 2516 |
+
0.856,
|
| 2517 |
+
0.548
|
| 2518 |
+
],
|
| 2519 |
+
"angle": 0,
|
| 2520 |
+
"content": "Figure 10: Sample of the composite system prompt with a one-shot example (i.e., problem-and-solution pair)."
|
| 2521 |
+
},
|
| 2522 |
+
{
|
| 2523 |
+
"type": "code",
|
| 2524 |
+
"bbox": [
|
| 2525 |
+
0.114,
|
| 2526 |
+
0.569,
|
| 2527 |
+
0.616,
|
| 2528 |
+
0.732
|
| 2529 |
+
],
|
| 2530 |
+
"angle": 0,
|
| 2531 |
+
"content": "{\n\"source\": \"arc/arc-challenge-test\",\n\"source_id\": 1,\n\"topic\": \"Science\",\n\"context\":.,\n\"question\": \"An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?\", \"choices\": {\n\"A\": \"Planetary density will decrease.\", \"B\": \"Planetary years will become longer.\", \"C\": \"Planetary days will become shorter.\", \"D\": \"Planetary gravity will become stronger.\"\n},\n\"answer\": \"C\",\n\"solution\": \"\"\n}"
|
| 2532 |
+
},
|
| 2533 |
+
{
|
| 2534 |
+
"type": "code_caption",
|
| 2535 |
+
"bbox": [
|
| 2536 |
+
0.167,
|
| 2537 |
+
0.743,
|
| 2538 |
+
0.828,
|
| 2539 |
+
0.759
|
| 2540 |
+
],
|
| 2541 |
+
"angle": 0,
|
| 2542 |
+
"content": "Figure 11: Sample of an MCQA problem in JSON-L format - with whitespace added for readability."
|
| 2543 |
+
},
|
| 2544 |
+
{
|
| 2545 |
+
"type": "page_number",
|
| 2546 |
+
"bbox": [
|
| 2547 |
+
0.49,
|
| 2548 |
+
0.936,
|
| 2549 |
+
0.509,
|
| 2550 |
+
0.948
|
| 2551 |
+
],
|
| 2552 |
+
"angle": 0,
|
| 2553 |
+
"content": "12"
|
| 2554 |
+
}
|
| 2555 |
+
]
|
| 2556 |
+
]
|
2402.05xxx/2402.05201/bb381492-9383-4094-8c87-ec36c058446d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2a01b7c6d8842eaee7daa249729419784caf26d041b46870145816faa1f34ae0
|
| 3 |
+
size 433370
|
2402.05xxx/2402.05201/full.md
ADDED
|
@@ -0,0 +1,349 @@
| 1 |
+
# The Effect of Sampling Temperature on Problem Solving in Large Language Models
|
| 2 |
+
|
| 3 |
+
Matthew Renze
|
| 4 |
+
|
| 5 |
+
Johns Hopkins University
|
| 6 |
+
|
| 7 |
+
mrenze1@jhu.edu
|
| 8 |
+
|
| 9 |
+
Erhan Guven
|
| 10 |
+
|
| 11 |
+
Johns Hopkins University
|
| 12 |
+
|
| 13 |
+
eguven2@jhu.edu
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
In this research study, we empirically investigate the effect of sampling temperature on the performance of Large Language Models (LLMs) on various problem-solving tasks. We created a multiple-choice question-and-answer (MCQA) exam by randomly sampling problems from standard LLM benchmarks. Then, we used nine popular LLMs with five prompt-engineering techniques to solve the MCQA problems while increasing the sampling temperature from 0.0 to 1.6. Despite anecdotal reports to the contrary, our empirical results indicate that changes in temperature from 0.0 to 1.0 do not have a statistically significant impact on LLM performance for problem-solving tasks. In addition, these results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. All code, data, and supplemental materials are available on GitHub at: https://github.com/matthewrenze/jhu-llm-temperature.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
# 1.1 Background
|
| 22 |
+
|
| 23 |
+
In recent years, Large Language Models (LLMs) have revolutionized the field of artificial intelligence. The availability of open-source LLMs and pay-per-use APIs has allowed engineers to incorporate LLMs in their AI systems. However, prompt engineering and hyperparameter tuning are required to work effectively with LLMs.
|
| 24 |
+
|
| 25 |
+
Prompt-engineering techniques help LLMs solve complex problems, avoid hallucinations, and provide more accurate responses. For example, we can use techniques like chain-of-thought, tree-of-thought, self-criticism, and self-consistency to improve LLM performance [1, 2].
|
| 26 |
+
|
| 27 |
+
In addition, several inference hyperparameters can be adjusted to modify the LLM's output at runtime. For example, hyperparameters like sampling temperature, top-k sampling, repetition penalty, and maximum token length all affect the LLM's output and performance [3-5].
|
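To make this concrete, here is a minimal sketch of setting a sampling temperature on a single chat-completion request. It assumes the `openai` Python client (v1.x) and an illustrative prompt; it is not the paper's test harness, and other vendors expose similar parameters through their own APIs.

```python
# Minimal sketch (not the paper's harness): setting inference hyperparameters
# on a chat-completion request with the openai Python client (v1.x).
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of Maryland?"}],
    temperature=0.0,  # sampling temperature; 0.0 favors the most likely tokens
    top_p=1.0,        # nucleus sampling left at its default
    max_tokens=256,   # cap on the response length
)
print(response.choices[0].message.content)
```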
| 28 |
+
|
| 29 |
+
Despite significant interest in LLMs and progress in LLM best practices, many open questions remain about optimal prompt-engineering techniques and inference hyperparameters for LLMs. To complicate matters, various local optima may exist for LLMs, prompt types, and problem domains [5].
|
| 30 |
+
|
| 31 |
+
The prompt-engineering community has an abundance of opinions and anecdotal evidence regarding optimal prompt-engineering techniques and inference hyperparameter settings. However, we currently lack systematic studies and empirical evidence to support many of these claims.
|
| 32 |
+
|
| 33 |
+
As a result, this paper aims to address the open question of the optimal LLM sampling temperature for problem-solving tasks. In addition, we aim to provide a systematic study with empirical results to add to the growing body of knowledge used to create LLM and prompt-engineering best practices.
|
| 34 |
+
|
| 35 |
+
# 1.2 Sampling Temperature
|
| 36 |
+
|
| 37 |
+
Sampling temperature is a hyperparameter of an LLM used in a temperature-based sampling process. It controls the randomness of the model's output at inference time [5-8].
|
| 38 |
+
|
| 39 |
+
During each step of an LLM's decoding process, the LLM uses the previous tokens to choose the next output token. The final layer of the LLM uses a softmax function to convert raw scores (logits) into probabilities.
|
| 40 |
+
|
| 41 |
+
In greedy sampling, the model will always choose the most likely next token. However, for probabilistic sampling, the next token is selected from a probability distribution.
|
| 42 |
+
|
| 43 |
+
Temperature sampling is a modification to the softmax function, which adjusts the resulting probability mass functions. In this modified softmax function, $v_{k}$ is the $k$ -th vocabulary token, $l_{k}$ is the token's logit, and $\tau$ is a constant temperature. See equation 1.
|
| 44 |
+
|
| 45 |
+
$$
|
| 46 |
+
\Pr(v_{k}) = \frac{e^{l_{k}/\tau}}{\sum_{i} e^{l_{i}/\tau}} \tag{1}
|
| 47 |
+
$$
|
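As a concrete illustration of Equation 1, the following minimal sketch (ours, with toy logits) applies temperature scaling to a logit vector and samples the next token; greedy sampling corresponds to the limit $\tau \to 0$.

```python
import numpy as np

def sample_token(logits: np.ndarray, tau: float, rng: np.random.Generator) -> int:
    """Sample a token index from the temperature-scaled softmax in Equation 1."""
    if tau == 0.0:
        return int(np.argmax(logits))  # greedy sampling: always the top token
    scaled = logits / tau              # divide each logit l_k by the temperature
    scaled -= scaled.max()             # shift for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])  # toy logits for a 4-token vocabulary
for tau in [0.0, 0.5, 1.0, 1.6]:
    print(tau, [sample_token(logits, tau, rng) for _ in range(5)])
```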
| 48 |
+
|
| 49 |
+
A lower temperature makes the output of the LLM more deterministic, thus favoring the most likely predictions. This conservativeness is captured by the model's tendency to produce more repetitive, focused, and less diverse output based on the patterns most commonly seen in the training data [5, 7, 8].
|
| 50 |
+
|
| 51 |
+
A higher temperature increases the randomness of the output, thus favoring more "creative" predictions. This creativity is captured by the model's willingness to explore more unconventional and less likely outputs. Higher temperatures can lead to novel text, diverse ideas, and creative solutions to problems [5, 7, 8].
|
| 52 |
+
|
| 53 |
+
In the context of problem-solving, temperature can be seen as a trade-off between exploring and exploiting possible solutions within the solution space. Lower temperatures tend to exploit more probable solutions; higher temperatures explore the solution space more broadly.
|
| 54 |
+
|
| 55 |
+
# 1.3 Choosing a Sampling Temperature
|
| 56 |
+
|
| 57 |
+
Within the prompt-engineering community, there are a variety of opinions and best practices regarding the ideal sampling temperature for various problem-solving tasks [9, 10].
|
| 58 |
+
|
| 59 |
+
Low sampling temperatures are recommended for tasks requiring precision and factual accuracy, such as technical writing, code generation, or question-answering [11, 12]. However, higher temperatures are recommended for tasks requiring creativity, such as writing poetry, creating stories, or brainstorming.
|
| 60 |
+
|
| 61 |
+
Higher temperatures also increase the probability of model hallucination. Hallucination is a phenomenon where an LLM produces statistically probable responses that are factually incorrect or nonsensical. As a result, optimal temperature selection is also a balance between creativity and hallucination [13].
|
| 62 |
+
|
| 63 |
+
Practical guidelines for choosing a sampling temperature for a specific task or problem domain are often vague or anecdotal. Prompt-engineering guides often provide hypothetical examples of optimal sampling temperatures for various tasks. However, they rarely cite any sources or provide empirical evidence.<sup>1</sup>
|
| 64 |
+
|
| 65 |
+
As a result, the current state of choosing the optimal sampling temperature for specific problems is largely based on guesswork, gut instinct, non-systematic experimentation, and iterative refinement.<sup>2,3</sup>
|
| 66 |
+
|
| 67 |
+
# 2 Methods
|
| 68 |
+
|
| 69 |
+
# 2.1 Models
|
| 70 |
+
|
| 71 |
+
The models used in this research project comprise nine widely used foundational LLMs. To complement our analysis, we also conducted experiments using five system prompts built with commonly used prompt-engineering techniques.
|
| 72 |
+
|
| 73 |
+
First, we reviewed the prior literature to identify candidate LLMs commonly used for problem-solving tasks. We limited our candidate models to those that allowed the model's sampling temperature to be specified via their API [4, 16-18]. See Table 1 for a list of LLMs used in the experiment.
|
| 74 |
+
|
| 75 |
+
| Name | Vendor | Released | License | Source |
|---|---|---|---|---|
| Claude 3 Opus | Anthropic | 2024-03-04 | Closed | [19, 20] |
| Command R+ | Cohere | 2024-04-04 | Open | [21, 22] |
| Gemini 1.0 Pro | Google | 2023-12-06 | Closed | [23, 24] |
| Gemini 1.5 Pro (Preview) | Google | 2024-02-15 | Closed | [25, 26] |
| GPT-3.5 Turbo | OpenAI | 2022-11-30 | Closed | [17, 27] |
| GPT-4 | OpenAI | 2023-03-14 | Closed | [16, 18] |
| Llama 2 7B Chat | Meta | 2023-07-18 | Open | [4, 28] |
| Llama 2 70B Chat | Meta | 2023-07-18 | Open | [4, 28] |
| Mistral Large | Mistral AI | 2024-02-26 | Closed | [29] |
|
| 76 |
+
|
| 77 |
+
Table 1: LLMs used in the experiment.
|
| 78 |
+
|
| 79 |
+
Next, we reviewed the existing literature for commonly used prompt-engineering techniques. We limited our candidate prompts to those that could be performed in a single request-and-response cycle with one-shot in-context learning. We excluded multi-step agents, few-shot learning, and model fine-tuning.
|
| 80 |
+
|
| 81 |
+
As a result, we selected five prompt-engineering techniques to construct our system prompts:
|
| 82 |
+
|
| 83 |
+
- Baseline – no prompt engineering; the LLM is instructed to return only a single multiple-choice answer as its output (e.g., 'Answer("C")').
|
| 84 |
+
- Domain Expertise – the system prompt specifies that the LLM is an expert in the problem domain of the exam (e.g., “medicine”) or the topic of the problem (e.g., “anatomy”) [2].
|
| 85 |
+
- Self-recitation – the system prompt instructs the LLM to recite its own internal knowledge about the problem before answering the question [2, 30].
|
| 86 |
+
- Chain-of-Thought (CoT) – the system prompt instructs the LLM to “think step-by-step” to encourage it to reason through the problem procedurally [31, 32].
|
| 87 |
+
- Composite – the system prompt combines domain expertise, self-recitation, chain-of-thought, and adds self-criticism [33, 34].
|
| 88 |
+
|
| 89 |
+
Finally, we provided the LLM with a single example problem-and-solution pair for one-shot in-context learning. The example solution was adapted for each prompt based on the prompt-engineering technique used. For example, the CoT prompt included a chain of thought in its solution. See Figure 10 in the Appendix for a sample prompt.
|
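The sketch below illustrates how such a system prompt might be assembled from a technique-specific instruction and a one-shot example. The template strings and helper function are hypothetical, not the paper's exact implementation; see Figure 10 in the Appendix for the real composite prompt.

```python
# Hypothetical sketch of assembling a one-shot system prompt; the instruction
# templates and helper function are illustrative, not the paper's exact code.
def build_system_prompt(technique: str, expertise: str,
                        example_problem: str, example_solution: str) -> str:
    instructions = {
        "baseline": 'Answer the question using the format \'Answer("[choice]")\'.',
        "cot": 'Think step-by-step through the problem, then answer using '
               'the format \'Answer("[choice]")\'.',
    }
    return (
        f"You are an expert in {expertise}.\n"
        f"{instructions[technique]}\n"
        f"[Example Problem]\n{example_problem}\n"
        f"[Example Solution]\n{example_solution}"
    )

prompt = build_system_prompt(
    technique="cot",
    expertise="science",
    example_problem="Question: ... Choices: A: ... B: ... C: ... D: ...",
    example_solution='Thought: ... Action: Answer("B")',
)
```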
| 90 |
+
|
| 91 |
+
# 2.2 Data
|
| 92 |
+
|
| 93 |
+
The test dataset used in this research study consists of a series of Multiple-Choice Question-and-Answer (MCQA) exams derived from widely used LLM performance benchmarks.
|
| 94 |
+
|
| 95 |
+
First, we reviewed the prior literature to identify benchmarks frequently used to evaluate LLMs. We limited our candidate benchmarks to those containing MCQA problems so that we could use correct-answer accuracy as our primary performance metric.
|
| 96 |
+
|
| 97 |
+
Next, we selected a set of problems that covered a range of problem domains (e.g., math, science, law, etc.) and difficulty levels (e.g., secondary school, university, etc.). These problem sets can be seen in Table 2.
|
| 98 |
+
|
| 99 |
+
Then, we converted the benchmark problems from their original data format into a standardized data structure using the JSON Lines (JSON-L) format [41]. Our standardized set of exams allowed us to use the exams interchangeably without modifying the code in the test harness. See Figure 11 in the Appendix for a sample of an MCQA problem.
|
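Since JSON Lines stores one JSON object per line, loading a standardized exam reduces to parsing each line independently. A minimal sketch follows; the file path is hypothetical, and the record fields match Figure 11.

```python
import json

# Minimal sketch: read one MCQA problem per line from a JSON Lines exam file.
# The file path is hypothetical; see Figure 11 for the record structure.
def load_exam(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

exam = load_exam("exams/small-100.jsonl")
print(exam[0]["question"], exam[0]["choices"], exam[0]["answer"])
```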
| 100 |
+
|
| 101 |
+
Finally, we created two MCQA exams of different sizes. We created a large exam with 1,000 questions by randomly sampling 100 problems from each of the ten problem sets. This 1,000-question (large) exam was used with GPT-3.5 to perform a detailed analysis of temperature across problem domains.
|
| 102 |
+
|
| 103 |
+
Additionally, we created a smaller exam of 100 questions by randomly sampling ten questions from each of the ten domain-specific problem sets. This 100-question (small) exam was used for our high-level analysis of sampling temperature across all nine models, all five prompt-engineering techniques, and extended temperature range (0.0-1.6).<sup>4</sup>
|
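A minimal sketch of this stratified random sampling is shown below; the problem-set names and contents are placeholders, and the fixed seed is our assumption, not the paper's.

```python
import random

# Sketch of building the 100-question (small) exam: randomly sample ten
# problems from each of ten domain-specific problem sets (placeholder data).
random.seed(42)  # a fixed seed is our assumption, used here for reproducibility
problem_sets = {
    f"set-{i}": [{"id": f"set-{i}-q{j}"} for j in range(200)] for i in range(10)
}
small_exam = [problem
              for problems in problem_sets.values()
              for problem in random.sample(problems, k=10)]
print(len(small_exam))  # 100 questions in total
```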
| 104 |
+
|
| 105 |
+
| Problem Set | Benchmark | Domain | Questions | License | Source |
|---|---|---|---|---|---|
| ARC Challenge Test | ARC | Science | 1,173 | CC BY-SA | [35] |
| AQUA-RAT | AGI Eval | Math | 254 | Apache v2.0 | [36] |
| Hellaswag Val | Hellaswag | Common Sense Reasoning | 10,042 | MIT | [37] |
| LogiQA (English) | AGI Eval | Logic | 651 | GitHub | [36, 38] |
| LSAT-AR | AGI Eval | Law (Analytic Reasoning) | 230 | MIT | [36, 39] |
| LSAT-LR | AGI Eval | Law (Logical Reasoning) | 510 | MIT | [36, 39] |
| LSAT-RC | AGI Eval | Law (Reading Comprehension) | 260 | MIT | [36, 39] |
| MedMCQA Valid | MedMCQA | Medicine | 6,150 | MIT | [40] |
| SAT-English | AGI Eval | English | 206 | MIT | [36] |
| SAT-Math | AGI Eval | Math | 220 | MIT | [36] |

Table 2: Problem sets used to create the multi-domain MCQA exam.
|
| 106 |
+
|
| 107 |
+
Note: The GitHub repository for LogiQA does not include a license file. However, both the paper and the readme.md file state that "The dataset is freely available."
|
| 108 |
+
|
| 109 |
+
# 2.3 Process
|
| 110 |
+
|
| 111 |
+
Our experiment was designed to test the problem-solving performance of LLMs across ten models, five prompt-engineering techniques, ten problem domains, 100 problems within each problem domain, and all viable sampling temperatures. For each combination of model, prompt, exam, and temperature, we instructed the LLM to answer each question ten times so we could assess the average correct-answer accuracy.
The full experiment setup can be seen in Figure 1 and Algorithm 1. However, due to cost and runtime considerations, we conducted a subset of the full experiment designed to capture the most valuable information as efficiently as possible.

Figure 1: Diagram of the full experiment.
<table><tr><td colspan="2">Algorithm 1 Full LLM Temperature Experiment</td></tr><tr><td>1:</td><td>for each model m in M do ▷ 10 models</td></tr><tr><td>2:</td><td>for each prompt p in P do ▷ 5 prompts</td></tr><tr><td>3:</td><td>for each exam e in E do ▷ 10 exams</td></tr><tr><td>4:</td><td>for each temperature τ in T do ▷ 16 temps</td></tr><tr><td>5:</td><td>for each problem q in Q do ▷ 100 prob</td></tr><tr><td>6:</td><td>for each attempt a in A do ▷ 10 attempts</td></tr><tr><td>7:</td><td>Create the prompt</td></tr><tr><td>8:</td><td>Answer the question</td></tr><tr><td>9:</td><td>Record the answer</td></tr><tr><td>10:</td><td>end for</td></tr><tr><td>11:</td><td>end for</td></tr><tr><td>12:</td><td>end for</td></tr><tr><td>13:</td><td>Save the results</td></tr><tr><td>14:</td><td>end for</td></tr><tr><td>15:</td><td>end for</td></tr><tr><td>16:</td><td>end for</td></tr><tr><td>17:</td><td>Process the results</td></tr><tr><td>18:</td><td>Analyze the results</td></tr></table>
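Algorithm 1 maps directly onto nested loops around a chat-completion call. The sketch below shows one way the inner steps could look with an OpenAI-style Python client; `cot_prompt`, `small_exam` (from the earlier sketch), and the record layout are hypothetical stand-ins, not the authors' harness.

```python
# Minimal sketch of the inner loops of Algorithm 1, assuming an
# OpenAI-style Python client; names and record layout are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
TEMPERATURES = [round(0.1 * i, 1) for i in range(11)]  # 0.0, 0.1, ..., 1.0

# Stand-in for the real CoT system prompt (see the Appendix for the
# composite prompt the study actually used).
cot_prompt = 'Answer the question. Think step-by-step, then reply with Action: Answer("X").'

def ask(model: str, system_prompt: str, question: str, temperature: float) -> str:
    """One prompt-and-response cycle at a given sampling temperature."""
    response = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

results = []
for temperature in TEMPERATURES:
    for problem in small_exam:          # exam built in the earlier sketch
        for attempt in range(10):       # ten attempts per question
            answer = ask("gpt-3.5-turbo", cot_prompt,
                         problem["question"], temperature)
            results.append({"temperature": temperature,
                            "id": problem["source_id"],
                            "attempt": attempt,
                            "response": answer})
```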
First, we instructed GPT-3.5 to complete the 100-question (small) exam using the CoT prompt with temperatures ranging from 0.0 to 2.0 in increments of 0.1. This allowed us to determine the range of viable sampling temperatures to explore.
Performance began to drop rapidly after a temperature of 1.0 until the generated text became incoherent at 1.6. As a result, we stopped the initial temperature sweep at 1.6 and limited all remaining sweeps to temperatures from 0.0 to 1.0.
Next, we instructed the other eight LLMs to complete the 100-question (small) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize to other LLMs.
Then, we instructed GPT-3.5 to complete the 100-question (small) exam using each of the five prompts over temperatures from 0.0 to 1.0. This allowed us to determine if the results generalize over various prompt-engineering techniques.
Finally, we instructed GPT-3.5 to complete the 1,000-question (large) exam using the CoT prompt with temperatures from 0.0 to 1.0. This allowed us to determine if the results were domain-specific or generalized across problem domains.
# 2.4 Metrics
To test our hypothesis, we measured the LLM's correct-answer accuracy as our primary performance metric. For each combination of model, prompt, exam, and temperature, we calculated each question's accuracy as the proportion of correct answers across its ten attempts. Then, we computed the average (mean) accuracy across all problems.
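As a minimal sketch (assuming each raw response has already been parsed down to a single choice letter stored in a hypothetical "choice" field, and that `answer_key` maps question ids to the correct letters):

```python
from collections import defaultdict

def mean_accuracy(results, answer_key):
    """Per-question accuracy = fraction of its ten attempts answered
    correctly; the exam score is the mean over all questions."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in results:
        total[r["id"]] += 1
        correct[r["id"]] += int(r["choice"] == answer_key[r["id"]])
    per_question = [correct[q] / total[q] for q in total]
    return sum(per_question) / len(per_question)
```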
To further support our findings, we also measured the similarity of the LLM's responses using a series of text-similarity metrics, defined as follows (example implementations for two of these are sketched after the list):
- Jaccard similarity - the ratio of the intersection to the union of the word sets of two output texts [42].
- Bag-of-Words (BoW) similarity - comparison of the frequency of each word, ignoring word order [43].
- TF-IDF similarity - comparison of word frequencies weighted by inverse document frequency [44].
- Levenshtein similarity - a similarity score derived from the edit distance, i.e., the number of edits needed to change one string of text into the other [45].
- BLEU score - comparison of similarity based on n-gram overlap [46].
- SBERT similarity - semantic similarity computed using Sentence-BERT embeddings [47].
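For illustration, here is a small sketch of two of these metrics: Jaccard similarity computed by hand, and a mean pairwise TF-IDF cosine similarity via scikit-learn. The study's exact implementations are not specified here, so treat these as assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard_similarity(a: str, b: str) -> float:
    """Ratio of the intersection to the union of the two word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def tfidf_similarity(texts: list[str]) -> float:
    """Mean pairwise cosine similarity of TF-IDF vectors for a group of
    responses (e.g., the ten attempts at one question)."""
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf)
    n = len(texts)
    return (sims.sum() - n) / (n * (n - 1))  # exclude self-similarities
```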
# 2.5 Analysis
We used the Kruskal-Wallis test to evaluate the statistical significance of any changes in accuracy as a function of temperature [48]. We chose the Kruskal-Wallis test because the data (i.e., correct-answer accuracy by question) were not normally distributed. Rather, they were bimodally distributed with centers at 0.0 and 1.0.
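SciPy provides this test directly. A sketch, where each group holds the per-question accuracies observed at one temperature setting (the toy values below are not the study's data):

```python
from scipy.stats import kruskal

# groups[i] = per-question accuracies at the i-th temperature setting
# (eleven groups for temperatures 0.0, 0.1, ..., 1.0); toy values here.
groups = [
    [1.0, 0.9, 0.0, 0.8, 1.0],   # tau = 0.0
    [0.9, 1.0, 0.1, 0.7, 0.9],   # tau = 0.1
    [1.0, 0.8, 0.0, 0.9, 1.0],   # tau = 0.2, and so on
]
statistic, p_value = kruskal(*groups)
print(f"H = {statistic:.3f}, p = {p_value:.3f}")
```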
# 3 Results
# 3.1 Accuracy vs. Temperature
Our analysis revealed that the problem-solving performance of LLMs remained relatively stable across sampling temperatures from 0.0 to 1.0 for all LLMs, prompt-engineering techniques, and problem domains. Using GPT-3.5 with a CoT prompt on the 1,000-question exam from 0.0 to 1.0, the Kruskal-Wallis test yielded $H(10) = 10.439$ , $p = 0.403$ .
First, we analyzed the performance of GPT-3.5 using the CoT prompt on the 100-question exam. Accuracy remained stable over temperatures from 0.0 to 1.0. However, after a temperature of 1.0, the text rapidly became incoherent, and the accuracy began to drop until it reached zero around a temperature of 1.6. See Figure 2.

Figure 2: Accuracy by temperature from 0.0 to 1.6 for GPT-3.5 using the CoT prompt on the 100-question exam.
Second, we analyzed the performance of all nine LLMs using the CoT prompt on the 100-question exam. Accuracy also remained stable across all of the LLMs, except for Llama 2 7B. Most LLMs showed a gradual (non-significant) decrease in accuracy as a function of temperature. See Figure 3 and Table 3.

Figure 3: Accuracy by temperature and model using the CoT prompt on the 100-question exam.
<table><tr><td>Model</td><td>H(10)</td><td>p-value</td></tr><tr><td>Claude 3 Opus</td><td>1.735</td><td>0.998</td></tr><tr><td>Command R+</td><td>1.771</td><td>0.998</td></tr><tr><td>Gemini Pro 1.0</td><td>7.379</td><td>0.689</td></tr><tr><td>Gemini Pro 1.5</td><td>3.119</td><td>0.978</td></tr><tr><td>GPT-3.5 Turbo</td><td>2.042</td><td>0.996</td></tr><tr><td>GPT-4</td><td>3.789</td><td>0.956</td></tr><tr><td>Llama 2 70B</td><td>3.677</td><td>0.961</td></tr><tr><td>Llama 2 7B</td><td>17.086</td><td>0.072</td></tr><tr><td>Mistral Large</td><td>3.069</td><td>0.980</td></tr></table>

Table 3: Kruskal-Wallis test results by model using the CoT prompt on the 100-question exam.
Llama 2 7B performed no better than random guessing. Its poor performance was due to generating incorrectly formatted answers (39%) and correctly formatted but incorrect answers (36%). Its all-or-nothing behavior at a temperature of 0.0, versus its more random behavior from 0.1 to 1.0, led to a much lower, yet still non-significant, p-value.
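The chance-level comparison can be made precise with a one-sided binomial test against the guessing rate (e.g., 1/4 for four-choice questions). The counts below are invented purely for illustration:

```python
from scipy.stats import binomtest

# Illustrative check: with four answer choices, random guessing succeeds
# with p = 0.25. Test whether k correct answers out of n trials beats chance.
k, n = 270, 1000          # hypothetical counts, not the paper's data
result = binomtest(k, n, p=0.25, alternative="greater")
print(result.pvalue)      # large p-value => not better than chance
```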
Third, we analyzed the performance of GPT-3.5 using each of the five prompts on the 100-question exam. Accuracy remained stable for all temperatures across all prompt-engineering techniques. The CoT prompt outperformed the other four prompts. As a result, we used the CoT prompt for all single-prompt experiments. See Figure 4 and Table 4.

Figure 4: Accuracy by temperature and prompt for GPT-3.5 on the 100-question exam.
<table><tr><td>Prompt</td><td>H(10)</td><td>p-value</td></tr><tr><td>Baseline</td><td>0.420</td><td>1.000</td></tr><tr><td>Domain Expert</td><td>0.548</td><td>1.000</td></tr><tr><td>Self-recitation</td><td>1.403</td><td>0.999</td></tr><tr><td>Chain of Thought</td><td>2.042</td><td>0.996</td></tr><tr><td>Composite</td><td>1.000</td><td>1.000</td></tr></table>
Table 4: Kruskal-Wallis test results by prompt for GPT-3.5 on the 100-question exam.
Finally, we analyzed the performance of GPT-3.5 using the CoT prompt on all ten exams. Based on visual analysis, accuracy remained stable across all temperatures and problem domains. However, the Kruskal-Wallis tests for the LSAT-AR and SAT-Math exams indicated statistically significant differences across temperatures. See Figure 5 and Table 5.

Figure 5: Accuracy by temperature and exam for GPT-3.5 using the CoT prompt.
<table><tr><td>Exam</td><td>H(10)</td><td>p-value</td></tr><tr><td>AQUA-RAT</td><td>10.320</td><td>0.413</td></tr><tr><td>ARC Challenge</td><td>16.390</td><td>0.089</td></tr><tr><td>Hellaswag</td><td>4.473</td><td>0.924</td></tr><tr><td>LogiQA</td><td>3.208</td><td>0.976</td></tr><tr><td>LSAT-AR</td><td>37.874</td><td>< 0.001</td></tr><tr><td>LSAT-LR</td><td>7.816</td><td>0.647</td></tr><tr><td>LSAT-RC</td><td>4.037</td><td>0.946</td></tr><tr><td>MedMCQA</td><td>2.334</td><td>0.993</td></tr><tr><td>SAT-English</td><td>3.937</td><td>0.950</td></tr><tr><td>SAT-Math</td><td>21.276</td><td>0.019</td></tr></table>
Table 5: Kruskal-Wallis test results by exam for GPT-3.5 using the CoT prompt.
We performed the Dunn-Bonferroni test on the LSAT-AR and SAT-Math results [49]. It revealed that the all-or-nothing behavior of responses generated at a temperature of 0.0 versus the more random responses from 0.1 to 1.0 caused the anomaly. The correct-answer accuracy for each individual problem varied significantly when evaluated pairwise across temperatures. However, the average accuracy for all problems remained similar across temperatures.
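One readily available implementation of this post-hoc test is `posthoc_dunn` from the scikit-posthocs package; the data frame below is a toy example, not the study's data:

```python
import pandas as pd
import scikit_posthocs as sp

# Toy frame with one row per (question, temperature) pair; the accuracy
# values here are made up purely to make the sketch runnable.
df = pd.DataFrame({
    "temperature": [0.0] * 5 + [0.5] * 5 + [1.0] * 5,
    "accuracy":    [1.0, 1.0, 0.0, 1.0, 0.0,
                    0.9, 0.4, 0.6, 0.8, 0.3,
                    0.7, 0.5, 0.6, 0.2, 0.9],
})
pvals = sp.posthoc_dunn(df, val_col="accuracy", group_col="temperature",
                        p_adjust="bonferroni")
print(pvals)  # pairwise Bonferroni-adjusted p-values across temperatures
```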
# 3.2 Text Variability vs. Temperature
To further support our results, we analyzed text variability as a function of temperature. Our findings show a clear trend of decreasing text similarity (thus increasing text variability) as temperature increases. Text similarity decreases rapidly after a temperature of 1.0, corresponding to the rapid decrease in accuracy observed above $\tau = 1.0$ . See Figure 6.
These results are consistent with our understanding of sampling temperature, indicating that higher temperatures produce more widely varied outputs. Furthermore, these results hold regardless of the LLM, prompt-engineering technique, or problem domain. See Figures 7, 8, and 9.

Figure 6: Text similarity by temperature and metric for GPT-3.5 using CoT prompting on the 100-question exam over sampling temperatures from 0.0 to 1.6.

Figure 7: TF-IDF text similarity by temperature and model using the CoT prompt on the 100-question exam over sampling temperatures from 0.0 to 1.0.

Figure 8: TF-IDF text similarity by temperature and prompt for GPT-3.5 on the 100-question exam over sampling temperatures from 0.0 to 1.0.

Figure 9: TF-IDF text similarity by temperature and exam for GPT-3.5 using the CoT prompt over sampling temperatures from 0.0 to 1.0.
# 4 Discussion
# 4.1 Interpretation
Based on these results, changes in temperature from 0.0 to 1.0 do not have a statistically significant effect on the problem-solving performance of LLMs. These results appear to generalize across LLMs, prompt-engineering techniques, and problem domains. However, there are a few potential exceptions to these general findings.
Therefore, we recommend setting an LLM's sampling temperature to 0.0 for problem-solving tasks. This temperature maximizes reproducibility without compromising accuracy. In addition, it avoids the inevitable drop-off in performance that occurs beyond a temperature of 1.0. However, the potential exceptions noted above should still be taken into consideration.
# 4.2 Limitations
There were several limitations to our research study due to cost and runtime considerations:
First, our study was limited to a subset of popular LLMs. As a result, our findings may not hold for other LLMs that were excluded from our experiment.
Second, we only explored a subset of prompt-engineering techniques using a single prompt-and-response cycle with one-shot in-context learning. As a result, there may be more complex prompts or agent architectures that better leverage sampling temperature for creativity in their problem-solving capabilities.
Third, our study was limited to a subset of problems, problem domains, and problem-solving tasks. As a result, our findings may not hold for larger data sets, different problem domains, or other types of problem-solving tasks.
Fourth, due to time and cost constraints, we limited our study to two test sets of 1,000 and 100 randomly selected questions from standard benchmarks. These limited sample sizes may have introduced bias into the results. Utilizing a larger and more diverse test set would enhance the statistical reliability of our findings.
Fifth, we had to limit the sampling temperature range we explored from 0.0 to 1.0 for all combinations of models, prompts, and exams, except for GPT-3.5 using CoT prompting on the 100-question exam. As a result, the temperature hyperparameter of other LLMs may operate differently at temperatures above 1.0.
Sixth, we fixed all other sampling parameters (e.g., top-p, top-k, repetition penalty, etc.) to isolate the effect of sampling temperature. As a result, there may be combinations of sampling parameters that result in different outcomes.
Finally, we could only explore a subset of the various combinations of models, prompts, exams, and temperatures. As a result, other combinations of LLMs, prompt-engineering techniques, and problem domains may exist where temperature plays a more important role in problem-solving performance.
# 4.3 Implications
This research study provides empirical evidence that changes in sampling temperature in the range of 0.0 to 1.0 do not significantly impact the problem-solving capabilities of LLMs on MCQA problems.
This finding may save AI engineers significant time and resources when evaluating various sampling temperatures for their LLM agents and applications. In addition, it may reduce unproductive debates in the prompt-engineering community regarding the optimal sampling temperatures for various problem-solving tasks.
This research also provides broader insights for AI researchers studying model hallucination and problem-solution state-space search with LLMs. Our results show that increasing LLM temperature up to 1.0 does not cause the LLM to hallucinate in ways that lead to incorrect MCQA solutions. In addition, higher temperatures do not appear to improve MCQA solution-space search in ways that lead to correct solutions more often than lower temperatures.
# 4.4 Future Research
To improve upon this research, we propose the following follow-up experiments:
First, we recommend conducting this experiment with additional LLMs. Other proprietary and open-source LLMs may utilize temperature in ways that benefit their specific models but did not benefit the LLMs we tested.
Second, we recommend expanding beyond MCQA problems to other types of problem-solving tasks whose correct answers are more open-ended. The limited effects of sampling temperature in our experiments may have simply resulted from the constraints imposed by the structure of MCQA problems.
Third, we recommend conducting additional experiments with more MCQA problems and problem domains. We recommend specifically targeting tasks and problem domains that require more creative solutions or lateral "out-of-the-box" thinking.
Fourth, we recommend extending the sampling temperature range until accuracy drops to zero for each LLM, prompt, and exam. However, it should be noted that as the generated text becomes more random, the number of tokens in each response increases significantly, leading to a considerable increase in runtime and cost to explore temperatures above 1.0.
Finally, we recommend a more in-depth error analysis to determine if any sub-types of problems within these problem domains benefit from changes to sampling temperature. It is possible that statistical noise or averaging may have hidden individual problems that were sensitive to changes in sampling temperature.
# 5 Conclusion
This research study empirically investigated the effect of sampling temperature on the problem-solving performance of LLMs across multiple problem domains.
We demonstrated that changes in sampling temperature from 0.0 to 1.0 do not produce statistically significant differences in problem-solving performance on MCQA problems across multiple LLMs, prompt-engineering techniques, and problem domains.
These results have practical implications for AI engineers using LLMs to develop new AI systems. Additionally, they have theoretical implications for AI researchers studying model hallucination and solution-space search with LLMs.
# References
[1] G. Mialon, R. Dessi, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Roziere, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, E. Grave, Y. LeCun, and T. Scialom, "Augmented language models: a survey," arXiv, 2 2023.
[2] J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, H. Gilbert, A. Elnashar, J. Spencer-Smith, and D. C. Schmidt, "A prompt pattern catalog to enhance prompt engineering with ChatGPT," arXiv, 2 2023.
[3] OpenAI, "OpenAI - API reference," 2023. [Online]. Available: https://platform.openai.com/docs/api-reference/chat/create
[4] Llama-2-Team, "Llama 2: Open foundation and fine-tuned chat models," arXiv, 7 2023.
[5] C. Wang, S. X. Liu, and A. H. Awadallah, "Cost-effective hyperparameter optimization for large language model generation inference," 2023.
[6] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, "A learning algorithm for Boltzmann machines," Cognitive Science, vol. 9, pp. 147–169, 1985.
[7] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv, 3 2015.
[8] P.-H. Wang, S.-I. Hsieh, S.-C. Chang, Y.-T. Chen, J.-Y. Pan, W. Wei, and D.-C. Juan, "Contextual temperature for language modeling," arXiv, 12 2020.
[9] Microsoft, "Completions - learn how to generate or manipulate text," 2023. [Online]. Available: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/completions
[10] J. Shieh, "Best practices for prompt engineering with OpenAI API," 2024. [Online]. Available: https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api
[11] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, "A systematic evaluation of large language models of code," in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. Association for Computing Machinery, 2022, pp. 1–10.
[12] Y. Zhu, J. Li, G. Li, Y. Zhao, J. Li, Z. Jin, and H. Mei, "Improving code generation by dynamic temperature sampling," arXiv, 9 2023.
[13] M. Lee, "A mathematical investigation of hallucination and creativity in GPT models," Mathematics, vol. 11, p. 2320, 5 2023.
[14] V. Pursnani, Y. Sermet, and I. Demir, "Performance of ChatGPT on the US fundamentals of engineering exam: Comprehensive assessment of proficiency and potential implications for professional environmental engineering practice," arXiv, 4 2023.
[15] OpenAI, "API temperature change from 0 to 1 to 0 to 2 in 'Playground'," 2023. [Online]. Available: https://community.openai.com/t/api-temperature-change-from-0-to-1-to-0-to-2-in-playground/217755
[16] ——, "GPT-4 technical report," arXiv, 3 2023. [Online]. Available: https://arxiv.org/abs/2303.08774
[17] ——, "Introducing ChatGPT," 11 2022. [Online]. Available: https://openai.com/blog/chatgpt
[18] ——, "GPT-4," 3 2023. [Online]. Available: https://openai.com/research/gpt-4
[19] Anthropic, "Introducing the next generation of Claude," 2024. [Online]. Available: https://www.anthropic.com/news/claude-3-family
[20] ——, "The Claude 3 model family: Opus, Sonnet, Haiku," 2024. [Online]. Available: https://www.anthropic.com/claude-3-model-card
[21] Cohere, "Command R+," 2024. [Online]. Available: https://docs.cohere.com/docs/command-r-plus
[22] ——, "Model card for C4AI Command R+," 2024. [Online]. Available: https://huggingface.co/CohereForAI/c4ai-command-r-plus
[23] S. Pichai and D. Hassabis, "Introducing Gemini: Google's most capable AI model yet," 2023. [Online]. Available: https://blog.google/technology/ai/google-gemini-ai/
[24] Gemini-Team, "Gemini: A family of highly capable multimodal models," arXiv, 12 2023.
[25] S. Pichai and D. Hassabis, "Introducing Gemini 1.5, Google's next-generation AI model," 2024. [Online]. Available: https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/
[26] Gemini-Team, "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context," 2024. [Online]. Available: https://arxiv.org/abs/2403.05530
[27] OpenAI, "Models - OpenAI API." [Online]. Available: https://platform.openai.com/docs/models/gpt-3-5-turbo
[28] Meta, "Meta and Microsoft introduce the next generation of Llama," 2023. [Online]. Available: https://about.meta.com/news/2023/07/llama-2/
[29] Mistral-AI-Team, "Au Large | Mistral AI | Frontier AI in your hands," 2024. [Online]. Available: https://mistral.ai/news/mistral-large/
[30] Z. Sun, X. Wang, Y. Tay, Y. Yang, and D. Zhou, "Recitation-augmented language models," in The Eleventh International Conference on Learning Representations, 10 2023.
[31] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, "Large language models are zero-shot reasoners," in Advances in Neural Information Processing Systems, vol. 35, 5 2022, pp. 22199–22213.
[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," arXiv, 1 2022.
[33] S. Huo, N. Arabzadeh, and C. L. A. Clarke, "Retrieving supporting evidence for generative question answering," arXiv, 9 2023.
[34] R. Wang, H. Wang, F. Mi, Y. Chen, R. Xu, and K.-F. Wong, "Self-critique prompting with large language models for inductive instructions," arXiv, 5 2023.
[35] P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord, "Think you have solved question answering? Try ARC, the AI2 reasoning challenge," arXiv, 3 2018.
[36] W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan, "AGIEval: A human-centric benchmark for evaluating foundation models," arXiv, 4 2023.
[37] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, "HellaSwag: Can a machine really finish your sentence?" in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
[38] J. Liu, L. Cui, H. Liu, D. Huang, Y. Wang, and Y. Zhang, "LogiQA: A challenge dataset for machine reading comprehension with logical reasoning," in International Joint Conference on Artificial Intelligence, 2020.
[39] S. Wang, Z. Liu, W. Zhong, M. Zhou, Z. Wei, Z. Chen, and N. Duan, "From LSAT: The progress and challenges of complex reasoning," IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 30, pp. 2201–2216, 8 2021.
[40] A. Pal, L. K. Umapathi, and M. Sankarasubbu, "MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering," in Proceedings of the Conference on Health, Inference, and Learning. PMLR, 2022, pp. 248–260.
[41] I. Ward, "JSON Lines," 2014. [Online]. Available: https://jsonlines.org/
[42] P. Jaccard, "The distribution of flora in the alpine zone," New Phytologist, vol. 11, pp. 37–50, 2 1912.
[43] Z. S. Harris, "Distributional structure," WORD, vol. 10, pp. 146–162, 8 1954.
[44] K. S. Jones, "A statistical interpretation of term specificity and its application in retrieval," Journal of Documentation, vol. 28, pp. 11–21, 1 1972.
[45] V. Levenshtein, "Binary codes capable of correcting deletions, insertions and reversals," Soviet Physics Doklady, vol. 10, pp. 707–710, 1966.
[46] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU," in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL '02. Association for Computational Linguistics, 2001, p. 311.
[47] N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using Siamese BERT-networks," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 8 2019.
[48] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583–621, 1952. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/01621459.1952.10483441
[49] O. J. Dunn, "Multiple comparisons using rank sums," Technometrics, vol. 6, no. 3, pp. 241–252, 1964. [Online]. Available: https://www.tandfonline.com/doi/abs/10.1080/00401706.1964.10490181
# A Appendix
Figure 10: Sample of the composite system prompt with a one-shot example (i.e., problem-and-solution pair).
```txt
[System Prompt]
You are an expert in {{expertise}}.
Your task is to answer the following multiple-choice questions.
First, you should recite all of the relevant knowledge you have about the question and each option.
Next, you should think step-by-step through the problem to ensure you have the correct answer.
Then, you should critically evaluate your thoughts to identify any flaws in your facts, logic, and reasoning.
Finally, you MUST answer the question using the following format: 'Action: Answer("[choice]")'. The parameter [choice] is the letter or number of the answer you want to select (e.g., "A", "B", "C", or "D"). For example, 'Answer("C")' will select choice "C" as the best answer. The answer MUST ALWAYS be one of the available choices; it CANNOT be "None of the Above". If you think the answer is "none of the above", then you MUST select the most likely answer.
[Example Problem]
Question: What is the capital of the state where Johns Hopkins University is located?
Choices: A: Baltimore B: Annapolis C: Des Moines D: Las Vegas
[Example Solution]
Knowledge: Johns Hopkins University is located in Baltimore, Maryland. A: Baltimore is a city located in the State of Maryland, but it is not the capital of Maryland. B: Annapolis is the capital of the State of Maryland. C: Des Moines is a city located in the State of Iowa and is the capital of Iowa. D: Las Vegas is located in the State of Nevada, but it is not the capital of Nevada. Thought: Johns Hopkins University is located in Baltimore. Baltimore is a city located in the state of Maryland. The capital of Maryland is Baltimore. Therefore, the capital of the state where Johns Hopkins University is located is Baltimore. The answer is A: Baltimore. Criticism: You are correct that Johns Hopkins is located in the State of Maryland. However, the capital of Maryland is Annapolis, not Baltimore. So, the correct answer is actually B: Annapolis. Action: Answer("B")
```
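Because the system prompt mandates the `Action: Answer("[choice]")` format, scoring requires extracting the quoted choice letter from each raw response. A simple regex-based sketch (our illustration, not the authors' parser):

```python
import re

# The system prompt requires answers of the form: Action: Answer("C").
# This helper extracts the quoted choice, or returns None for a malformed
# response (which the accuracy sketch above counts as incorrect).
ANSWER_RE = re.compile(r'Action:\s*Answer\("([A-Za-z0-9])"\)')

def parse_choice(response: str):
    match = ANSWER_RE.search(response)
    return match.group(1).upper() if match else None
```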
Figure 11: Sample of an MCQA problem in JSON-L format - with whitespace added for readability.
```json
{
  "source": "arc/arc-challenge-test",
  "source_id": 1,
  "topic": "Science",
  "context": "",
  "question": "An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?",
  "choices": {
    "A": "Planetary density will decrease.",
    "B": "Planetary years will become longer.",
    "C": "Planetary days will become shorter.",
    "D": "Planetary gravity will become stronger."
  },
  "answer": "C",
  "solution": ""
}
```
2402.05xxx/2402.05201/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e518812721dc290654173aff0b27d59f37ace56e045cb982d0f388e8104712c
size 485330
2402.05xxx/2402.05201/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_content_list.json
ADDED
@@ -0,0 +1,1209 @@
[
{
"type": "text",
"text": "Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models",
"text_level": 1,
"bbox": [227, 140, 774, 205],
"page_idx": 0
},
{
"type": "text",
"text": "Nicholas Konz<sup>1*</sup>, Yuwen Chen<sup>1</sup>, Haoyu Dong<sup>1</sup>, and Maciej A. Mazurowski<sup>1,2,3,4</sup>",
"bbox": [215, 233, 785, 251],
"page_idx": 0
},
{
"type": "text",
"text": "$^{1}$ Department of Electrical and Computer Engineering, Duke University, NC, USA",
"bbox": [225, 261, 774, 277],
"page_idx": 0
},
{
"type": "text",
"text": "$^{2}$ Department of Radiology, Duke University, NC, USA",
"bbox": [287, 277, 686, 290],
"page_idx": 0
},
{
"type": "text",
"text": "$^{3}$ Department of Computer Science, Duke University, NC, USA",
"bbox": [287, 290, 712, 304],
"page_idx": 0
},
{
"type": "text",
"text": "$^{4}$ Department of Biostatistics & Bioinformatics, Duke University, NC, USA",
"bbox": [250, 304, 751, 318],
"page_idx": 0
},
{
"type": "text",
"text": "*corresponding author nicholas.konz@duke.edu",
"bbox": [338, 318, 661, 330],
"page_idx": 0
},
{
"type": "text",
"text": "Abstract. Diffusion models have enabled remarkably high-quality medical image generation, yet it is challenging to enforce anatomical constraints in generated images. To this end, we propose a diffusion model-based method that supports anatomically-controllable medical image generation, by following a multi-class anatomical segmentation mask at each sampling step. We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints while allowing flexibility in other anatomical areas. We compare our method (\"SegGuidedDiff\") to existing methods on breast MRI and abdominal/neck-to-pelvis CT datasets with a wide range of anatomical objects. Results show that our method reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks on both datasets, and is on par for general anatomical realism. Finally, our model also enjoys the extra benefit of being able to adjust the anatomical similarity of generated images to real images of choice through interpolation in its latent space. SegGuidedDiff has many applications, including cross-modality translation, and the generation of paired or counterfactual data. Our code is available at https://github.com/mazurowski-lab/segmentation-guided-diffusion.",
"bbox": [259, 369, 740, 633],
"page_idx": 0
},
{
"type": "text",
"text": "Keywords: diffusion models $\\cdot$ image generation $\\cdot$ semantic synthesis",
"bbox": [261, 646, 723, 661],
"page_idx": 0
},
{
"type": "text",
"text": "Introduction",
"text_level": 1,
"bbox": [215, 686, 339, 702],
"page_idx": 0
},
{
"type": "text",
"text": "Denoising diffusion probabilistic models [9] (DDPMs, or just \"diffusion models\") have shown extensive applications in medical image analysis [11] due to their ability to generate high-quality, high-resolution images, such as for direct image generation [21,12], image segmentation [29], anomaly detection [20,28], cross-modality image translation [17], and image denoising [6]. However, standard generative models like DDPMs can still fail to create anatomically plausible tissue (Fig 1), and such anatomy is not precisely customizable. Our proposed solution is to incorporate anatomical information as a prior for image generation",
"bbox": [212, 718, 787, 840],
"page_idx": 0
},
{
"type": "aside_text",
"text": "arXiv:2402.05210v4 [eess.IV] 19 Jun 2024",
"bbox": [22, 265, 57, 717],
"page_idx": 0
},
{
"type": "text",
"text": "via a segmentation mask for different types of tissue, organs, etc., providing the network with a more direct learning signal for anatomical realism.",
"bbox": [212, 146, 782, 176],
"page_idx": 1
},
{
"type": "text",
"text": "Generating an image from a mask (a.k.a. semantic synthesis) is a type of image-to-image translation task. Existing translation works include GAN-based [30,3,2] and diffusion-based models [28,23,31], yet these do not directly enforce precise pixel-wise anatomical constraints. Recent works introduced fine-tuning large pre-trained text-to-image latent diffusion models (LDMs) for segmentation-conditioning on natural images [31,27], but we found that these adapt poorly to medical images (Sec. 3.1). Instead, we propose the first diffusion model for generating medical images from anatomical segmentations, which is assisted by it being an image-space diffusion model trained from scratch. Image-space-based diffusion models are especially amenable for strict segmentation guidance because a conditioning mask can be used as-is for every small step of the denoising process, with no conversion to some abstract latent space as in certain LDMs [31,27,23] (or GANs) where precise spatial guidance may be lost.",
"bbox": [212, 176, 787, 375],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/2c1060b8bd1a368342f88b837945ac8c642034c7626dacedd096ec57549b5aff.jpg",
"image_caption": [
"Fig. 1. Standard diffusion models (right) can fail to create realistic tissue even if the overall image appears high-quality, motivating our segmentation-guided model (center)."
],
"image_footnote": [],
"bbox": [328, 400, 669, 554],
"page_idx": 1
},
{
"type": "text",
"text": "Segmentation-guided generation would be even more flexible if only certain object classes could be constrained in an input mask, while others are free to be inferred by the model. This opens up further applications such as generating images from incomplete masks ([1]), the generation of anatomically paired/registered data, \"counterfactual\" analysis of existing annotated data, and others. To solve this, we propose a mask-ablated training strategy to provide the model with all possible combinations of missing classes in masks during training, teaching it to make such inferences when generating new images. Notably, we also demonstrate how interpolating within the latent space of our mask ablated-trained model enables generating images with adjustable anatomical similarity to some real image (Sec. 3.2).",
"bbox": [212, 627, 787, 792],
"page_idx": 1
},
{
"type": "text",
"text": "In summary, we introduce a diffusion model, \"SegGuidedDiff\" that can flexibly and precisely generate medical images according to anatomical masks. We evaluate our model's ability to generate images conditioned on a range of",
"bbox": [212, 794, 787, 840],
"page_idx": 1
},
{
"type": "page_number",
"text": "2",
"bbox": [217, 114, 228, 126],
"page_idx": 1
},
{
"type": "header",
"text": "N. Konz et al.",
"bbox": [271, 114, 367, 127],
"page_idx": 1
},
{
"type": "text",
"text": "anatomical objects of interest for breast MRI and neck-to-pelvis CT, where it outperforms state-of-the-art mask-conditional generative models in its faithfulness to input anatomical masks, and is on par for general anatomical realism (Sec. 3.1). Our code is publicly released at https://github.com/mazurowski-lab/segmentation-guided-diffusion with a focus on usability on any dataset, along with a dataset of paired \"pre-registered\" generated breast MRIs.",
"bbox": [212, 146, 782, 237],
"page_idx": 2
},
{
"type": "text",
"text": "1 Method",
"text_level": 1,
"bbox": [215, 260, 330, 275],
"page_idx": 2
},
{
"type": "text",
"text": "1.1 A Brief Review of Diffusion Models",
"text_level": 1,
"bbox": [215, 291, 555, 305],
"page_idx": 2
},
{
"type": "text",
"text": "Denoising diffusion probabilistic models [9] (DDPMs, or diffusion models for short) are a type of generative latent variable model that learns to sample from some data distribution $p(x_0)$ ( $x_0 \\in \\mathbb{R}^n$ ) by defining a stochastic process that gradually converts the data to noise—the forward process $q(x_t | x_{t-1})$ and learning to reverse this process via a learned denoising process $p_\\theta(x_{t-1} | x_t)$ , where $\\theta$ is the model parameters. Data is generated by iteratively sampling from $p_\\theta(x_{t-1} | x_t)$ , beginning with a Gaussian noise sample $x_T \\sim p(x_T)$ , for $t = T - 1, \\ldots, 0$ (we use $T = 1000$ ) until an image $x_0$ is recovered.",
"bbox": [212, 316, 784, 435],
"page_idx": 2
},
{
"type": "text",
"text": "Any forward process step can be written explicitly as $x_{t} = \\sqrt{\\overline{\\alpha}_{t}} x_{0} + \\sqrt{1 - \\overline{\\alpha}_{t}}\\epsilon$ where $\\epsilon \\sim \\mathcal{N}(0,I_n)$ , and $\\alpha_{t} \\coloneqq 1 - \\beta_{t}$ given the variance of the additive prescheduled noise $\\beta_{t}$ , and $\\overline{\\alpha}_t \\coloneqq \\prod_{s = 1}^t\\alpha_s$ . DDPMs can be trained by the usual evidence lower bound (ELBO) maximization, which can be approximately optimized in a relatively simple form by training a network $\\epsilon_{\\theta}(x_t,t)$ to predict the noise $\\epsilon$ added to each datapoint $x_0$ for various time steps $t$ , with the loss $L = \\mathbb{E}_{x_0,t,\\epsilon}\\left[||\\epsilon -\\epsilon_\\theta (x_t,t)||^2\\right] = \\mathbb{E}_{x_0,t,\\epsilon}\\left[||\\epsilon -\\epsilon_\\theta (\\sqrt{\\overline{\\alpha}_t} x_0 + \\sqrt{1 - \\overline{\\alpha}_t}\\epsilon ,t)||^2\\right]$ , which has proven to be the typically superior DDPM loss in practice [18].",
"bbox": [212, 436, 785, 559],
"page_idx": 2
},
{
"type": "text",
"text": "1.2 Adding Segmentation Guidance to Diffusion Models",
"text_level": 1,
"bbox": [215, 580, 689, 594],
"page_idx": 2
},
{
"type": "text",
"text": "Rather than sampling from the unconditional distribution $p(x_0)$ , our goal is to condition the generation of some $c$ -channel image $x_0 \\in \\mathbb{R}^{c \\times h \\times w}$ to follow some multi-class anatomical mask $m \\in \\{0, \\dots, C - 1\\}^{h \\times w}$ , where $C$ is the number of classes (including background), or in other words, sample from $p(x_0|m)$ . While modifying the data likelihood $p(x_0|m)$ to be mask-conditional does not alter the noisng process $q(x_t|x_{t-1})$ , it does modify the reverse process $p_\\theta(x_{t-1}|x_t, m)$ and noise-predicting network $\\epsilon_\\theta$ . Propagating this to the ELBO results in a loss of",
"bbox": [212, 604, 784, 709],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\nL _ {m} = \\mathbb {E} _ {\\left(x _ {0}, m\\right), t, \\epsilon} \\left[ | | \\epsilon - \\epsilon_ {\\theta} \\left(\\sqrt {\\bar {\\alpha} _ {t}} x _ {0} + \\sqrt {1 - \\bar {\\alpha} _ {t}} \\epsilon , t \\mid m\\right) | | ^ {2} \\right] \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [316, 720, 784, 739],
"page_idx": 2
},
{
"type": "text",
"text": "for training our model, where each training image $x_0$ has some paired mask $m$ . We propose to implement this simply by concatenating $m$ channel-wise to the network input at every denoising step, i.e., modifying the network to have an additional input channel as $\\epsilon_{\\theta}(x_t,t|m):\\mathbb{R}^{(c + 1)\\times h\\times w}\\to \\mathbb{R}^{c\\times h\\times w}$ , which can be any image-to-image model (see Sec. 3). We use the DDIM algorithm [26] for fast, yet high-quality sampling.",
"bbox": [212, 750, 782, 839],
"page_idx": 2
},
{
"type": "header",
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
"bbox": [290, 114, 732, 128],
"page_idx": 2
},
{
"type": "page_number",
"text": "3",
"bbox": [774, 116, 784, 126],
"page_idx": 2
},
{
"type": "text",
"text": "This simple method generates images that are very faithful to input masks (Fig. 2, Table 1), because the denoising process is conditioned on the mask at each of its many gradual steps, allowing the network to follow the masks because they provide helpful spatial information that is directly correlated with the optimal denoised model output that minimizes the loss.",
"bbox": [212, 146, 787, 220],
"page_idx": 3
},
{
"type": "text",
"text": "1.3 Mask-Ablated Training and Sampling",
"text_level": 1,
"bbox": [215, 244, 571, 262],
"page_idx": 3
},
{
"type": "text",
"text": "Given that our model is mask-guided, the quality of these masks is important; a generated image may be misleading if the input mask is not fully annotated, known as the partial label problem [1] in medical image analysis. This is because the model may assume that un-annotated objects should not be present in the output image whatsoever (associating the missing/zero pixel labels as background or some other object), when in reality we may desire for the model to simply \"fill in\"/infer the unprovided objects.",
"bbox": [212, 271, 787, 378],
"page_idx": 3
},
{
"type": "text",
"text": "To alleviate this problem, we propose a mask-ablated training (MAT) strategy (Algorithm 1), which provides examples of masks with various numbers and combinations of classes removed for the model to learn to generate images from during training. This can be thought of as a form of self-supervised learning of anatomical object representations (somewhat analogous to MAE [7]). We set all $2^{C-1}$ of these possible combinations of classes being removed from a given mask in training to occur with equal probability so that the model can handle each equally, although any other balancing of these probabilities for different object classes could be used. Finally, we note that our MAT algorithm is immediately applicable to any mask-conditional generative model.",
"bbox": [212, 378, 787, 529],
"page_idx": 3
},
{
"type": "code",
"sub_type": "algorithm",
"code_caption": [
"Algorithm 1: Segmentation-guided model training with mask ablation."
],
"code_body": "Input: number of mask classes $C$ dataset $p(x_0,m)$ \nrepeat $\\begin{array}{l}x_0,m\\sim p(x_0,m)\\\\ \\text{for} c = 1,\\ldots ,C - 1\\text{do}\\\\ \\delta \\sim \\mathrm{Bernoulli}(0.5)\\\\ \\text{if}\\delta = 1\\text{then}\\\\ |m[m = c] = 0\\\\ \\text{end}\\\\ \\epsilon \\sim \\mathcal{N}(0,I_n);t\\sim \\mathrm{Uniform}(\\{1,\\dots,T\\})\\\\ x_t = \\sqrt{\\bar{\\alpha}_t} x_0 + \\sqrt{1 - \\bar{\\alpha}_t}\\epsilon \\\\ \\mathrm{Update}\\theta \\mathrm{with}\\nabla_\\theta \\| \\epsilon -\\epsilon_\\theta (x_t,t|m)\\| ^2\\\\ \\mathrm{until}\\mathrm{converged}; \\end{array}$",
"bbox": [218, 550, 785, 736],
"page_idx": 3
},
{
"type": "text",
"text": "2 Datasets",
"text_level": 1,
"bbox": [215, 761, 336, 777],
"page_idx": 3
},
{
"type": "text",
"text": "Breast MRI: Our first dataset is a 100-patient subset of the Duke Breast Cancer MRI dataset [25]. We use all 2D image slices from the fat-saturated gradient echo T1-weighted pre-contrast sequence, with a train/test split of 70/15 patients,",
"bbox": [212, 794, 785, 842],
"page_idx": 3
},
{
"type": "page_number",
"text": "4",
"bbox": [217, 114, 228, 126],
"page_idx": 3
},
{
"type": "header",
"text": "N. Konz et al.",
"bbox": [271, 114, 367, 127],
"page_idx": 3
},
{
"type": "text",
"text": "resulting in $\\sim 12000 / 2500$ slice images per split. We also keep a held-out training set of 15 patients for additional experiments. All images have full segmentation annotations for (1) breast, (2) blood vessels (BV), and (3) fibroglandular/dense tissue (FGT) provided at [14]. Notably, the FGT and BV have very high variability in shape, size, and other morphological characteristics, posing a challenge for generative models to realistically capture. CT Organ: Our second dataset is a 40-patient subset of neck-to-pelvis and abdominal CT scans from [22], with segmentation annotations for liver, bladder, lungs, kidney, and bone. This results in a train/test split of $\\sim 11000 / 2100$ 2D slice images, given a patient-wise split of $24 / 8$ , as well as a held-out training set of 8 patients. All generative models are trained on the training sets, and the auxiliary segmentation network, introduced next, is trained on the held-out training sets.",
"bbox": [212, 146, 787, 328],
"page_idx": 4
},
{
"type": "text",
"text": "3 Experiments",
"text_level": 1,
"bbox": [215, 353, 375, 369],
"page_idx": 4
},
{
"type": "text",
"text": "Training, Architecture and Implementational Details. All images are resized to $256 \\times 256$ and normalized to [0, 255]. We use a UNet architecture [24] for the denoising model $\\epsilon_{\\theta}$ , modified to take two channels (image and mask) as input; see Appendix A for additional training and architecture details.",
"bbox": [212, 386, 787, 446],
"page_idx": 4
},
{
"type": "text",
"text": "3.1 Comparison to Existing Image Generation Models",
"text_level": 1,
"bbox": [214, 470, 676, 486],
"page_idx": 4
},
{
"type": "text",
"text": "We next compare our segmentation-guided diffusion model (\"SegGuidedDiff\" for short) to state-of-the-art segmentation-conditional image generation models. These are SPADE [19], a GAN-based model that uses spatially-adaptive normalization layers, and ControlNet [31], a recent method for adding spatial conditioning to large pretrained text-to-image diffusion models. Training/implementation details for each are in Appendix A.1; note that we use the standard implementation of SPADE rather than the recent brain MRI SPADE model [5,4] because its modifications to SPADE do not apply to our setting (see Appendix A.1), and that we follow ControlNet's guidelines for adapting it to medical images. We show example generated images from all models in Fig. 2 (using masks randomly sampled from the test set); more are provided in Appendix B.",
"bbox": [212, 498, 787, 666],
"page_idx": 4
},
{
"type": "text",
"text": "Evaluating Faithfulness of Generated Images to Input Masks. To measure how well our model follows an input mask for image generation, we use an auxiliary segmentation network trained on the real training set (a standard UNet; training details in Appendix A.2), to predict segmentations $m_{gen}^{pred}$ on images that were generated from the masks $m$ in the test set, and measure their overlap with (a) $m$ and (b) the model's predicted segmentations $m_{real}^{pred}$ for the input masks' original corresponding real images, similar to metrics used in [19]. Our model's generated images have high overlap for both metrics ( $> 0.85$ Dice coeff., Table 1), showing that our model closely followed the input masks when generating the images, and also outperformed the competing methods.",
|
| 507 |
+
"bbox": [
|
| 508 |
+
212,
|
| 509 |
+
688,
|
| 510 |
+
787,
|
| 511 |
+
840
|
| 512 |
+
],
|
| 513 |
+
"page_idx": 4
|
| 514 |
+
},
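The Dice overlap used above can be computed straightforwardly from integer label maps; the sketch below is a common multi-class formulation given for illustration (the paper's exact averaging scheme, e.g., per image vs. per class, is not specified here).

```python
import numpy as np

def multiclass_dice(pred, target, num_classes):
    """Mean Dice coefficient over foreground classes of integer label maps."""
    scores = []
    for c in range(1, num_classes):  # skip background (label 0)
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom > 0:  # ignore classes absent from both maps
            scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores)) if scores else float("nan")
```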
|
| 515 |
+
{
|
| 516 |
+
"type": "header",
|
| 517 |
+
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
|
| 518 |
+
"bbox": [
|
| 519 |
+
290,
|
| 520 |
+
114,
|
| 521 |
+
730,
|
| 522 |
+
128
|
| 523 |
+
],
|
| 524 |
+
"page_idx": 4
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "page_number",
|
| 528 |
+
"text": "5",
|
| 529 |
+
"bbox": [
|
| 530 |
+
774,
|
| 531 |
+
116,
|
| 532 |
+
784,
|
| 533 |
+
126
|
| 534 |
+
],
|
| 535 |
+
"page_idx": 4
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "image",
|
| 539 |
+
"img_path": "images/ee845974c257eff0e1e093a9643b1319175719746a92ecbba87c4c790de532ae.jpg",
|
| 540 |
+
"image_caption": [
|
| 541 |
+
"Fig.2. Visual comparison of our model (SegGuidedDiff, or \"Seg-Diff\" for short) to existing segmentation-conditional image generation models. For breast MRI, the breast, BV, and FGT segmentations are shown as white, red, and blue, respectively, while for CT, the liver, bladder, lungs, kidneys, and bone are in maroon, orange, pink, red, and white, respectively. \"MAT\" = max ablated training, \"STD\" = our standard method."
|
| 542 |
+
],
|
| 543 |
+
"image_footnote": [],
|
| 544 |
+
"bbox": [
|
| 545 |
+
251,
|
| 546 |
+
147,
|
| 547 |
+
746,
|
| 548 |
+
390
|
| 549 |
+
],
|
| 550 |
+
"page_idx": 5
|
| 551 |
+
},
|
| 552 |
+
{
|
| 553 |
+
"type": "text",
|
| 554 |
+
"text": "Evaluating Generated Image Quality. We first attempted to use the common Fréchet Inception Distance (FID) [8] as a metric for quality/realism of generated image features compared to real data, via a CNN image encoder trained on the corresponding dataset. We observed that samples generated from both our segmentation-guided and standard unconditional diffusion models achieved potentially promising results, (e.g., breast MRI feature $\\mathrm{FID} \\simeq 0.5$ ), yet CNN feature-based metrics like FID fail to capture the global feature of anatomical realism that can differ in images generated by these models (e.g., fibroglandular tissue as shown in Fig. 1), so we caution using such metrics.",
|
| 555 |
+
"bbox": [
|
| 556 |
+
212,
|
| 557 |
+
521,
|
| 558 |
+
787,
|
| 559 |
+
656
|
| 560 |
+
],
|
| 561 |
+
"page_idx": 5
|
| 562 |
+
},
|
| 563 |
+
{
|
| 564 |
+
"type": "text",
|
| 565 |
+
"text": "Instead, we propose to more precisely measure anatomical realism by determining how well the aforementioned auxiliary segmentation models for the objects of interest can be trained solely on these synthetic images to be able to generalize to real data, using the input masks as targets. We compare the performance of the segmentation models trained on (a) the real held-out training set (Sec. 2) and (b) the set of images generated from all masks corresponding to these images. We split the real test set in half (by patient) into a validation set and a test set to use for these models. The results for this are in Table 2; we see that for both datasets, the segmentation network trained on our model's synthetic data barely performs worse (by only $\\leq 0.04$ Dice) than the network trained on real data, implying that our generated images are both highly realistic and faithful to the input masks (especially considering certain objects'",
|
| 566 |
+
"bbox": [
|
| 567 |
+
212,
|
| 568 |
+
659,
|
| 569 |
+
787,
|
| 570 |
+
840
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 5
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "page_number",
|
| 576 |
+
"text": "6",
|
| 577 |
+
"bbox": [
|
| 578 |
+
217,
|
| 579 |
+
114,
|
| 580 |
+
228,
|
| 581 |
+
126
|
| 582 |
+
],
|
| 583 |
+
"page_idx": 5
|
| 584 |
+
},
|
| 585 |
+
{
|
| 586 |
+
"type": "header",
|
| 587 |
+
"text": "N. Konz et al.",
|
| 588 |
+
"bbox": [
|
| 589 |
+
271,
|
| 590 |
+
114,
|
| 591 |
+
367,
|
| 592 |
+
127
|
| 593 |
+
],
|
| 594 |
+
"page_idx": 5
|
| 595 |
+
},
|
| 596 |
+
{
|
| 597 |
+
"type": "table",
|
| 598 |
+
"img_path": "images/13b89d370f6879056de24e176e1e11daabe6cfd16f27714b5954eee4f498b2cb.jpg",
|
| 599 |
+
"table_caption": [
|
| 600 |
+
"Table 1. Faithfulness of generated images to input masks. $m$ denotes input masks, and ${m}_{\\text{gen }}^{\\text{pred }}$ and ${m}_{\\text{real }}^{\\text{pred }}$ denote the masks predicted for (a) the generated images and (b) the real images corresponding to the input masks, respectively, by an auxiliary segmentation model. Best-performing is shown in bold, and second best is underlined."
|
| 601 |
+
],
|
| 602 |
+
"table_footnote": [],
|
| 603 |
+
"table_body": "<table><tr><td></td><td colspan=\"2\">Breast MRI</td><td colspan=\"2\">CT Organ</td></tr><tr><td>Model</td><td>Dice(mpredgen,m)</td><td>Dice(mpredgen,mreal)</td><td>Dice(mpredgen,m)</td><td>Dice(mpredgen,mreal)</td></tr><tr><td>ControlNet</td><td>0.3636</td><td>0.3604</td><td>0.1132</td><td>0.1126</td></tr><tr><td>SPADE</td><td>0.8473</td><td>0.8477</td><td>0.8771</td><td>0.8603</td></tr><tr><td>Ours</td><td>0.9027</td><td>0.8593</td><td>0.8980</td><td>0.8797</td></tr></table>",
|
| 604 |
+
"bbox": [
|
| 605 |
+
222,
|
| 606 |
+
209,
|
| 607 |
+
774,
|
| 608 |
+
300
|
| 609 |
+
],
|
| 610 |
+
"page_idx": 6
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "text",
|
| 614 |
+
"text": "segmentation difficulty). Our method is on par with SPADE for CT Organ, and slightly worse for breast MRI, while outperforming ControlNet in both cases.",
|
| 615 |
+
"bbox": [
|
| 616 |
+
212,
|
| 617 |
+
328,
|
| 618 |
+
782,
|
| 619 |
+
358
|
| 620 |
+
],
|
| 621 |
+
"page_idx": 6
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "table",
|
| 625 |
+
"img_path": "images/73a1d5fd042f6b9f366328ca101646abb29add3b0b80455c1594bbd98128c807.jpg",
|
| 626 |
+
"table_caption": [
|
| 627 |
+
"Table 2. Quality of generated images. Real test set performance (Dice coeff.) of a segmentation network for the objects of interest, trained on real data vs. synthetic data generated by different models."
|
| 628 |
+
],
|
| 629 |
+
"table_footnote": [],
|
| 630 |
+
"table_body": "<table><tr><td rowspan=\"2\"></td><td rowspan=\"2\">Real training set</td><td colspan=\"3\">Synthetic training set:</td></tr><tr><td>ControlNet</td><td>SPADE</td><td>Ours</td></tr><tr><td>Breast MRI</td><td>0.8376</td><td>0.7570</td><td>0.8333</td><td>0.7934</td></tr><tr><td>CT Organ</td><td>0.9075</td><td>0.0000</td><td>0.8932</td><td>0.8981</td></tr></table>",
|
| 631 |
+
"bbox": [
|
| 632 |
+
305,
|
| 633 |
+
435,
|
| 634 |
+
692,
|
| 635 |
+
508
|
| 636 |
+
],
|
| 637 |
+
"page_idx": 6
|
| 638 |
+
},
|
| 639 |
+
{
|
| 640 |
+
"type": "text",
|
| 641 |
+
"text": "ControlNet performed poorly for all metrics because it failed to follow input masks closely (if at all) due to its limitations and untested nature of adapting to medical images; more details are in Appendix A.1. We also see that our mask-ablated-trained model (Sec 1.3) follows input masks less strictly than our standard model (Fig. 2), especially for the dataset with more object classes (CT Organ): for breast MRI the model obtained faithfulness metrics (Table 1) of $\\mathrm{Dice}(m_{gen}^{pred}, m) = 0.6589$ and $\\mathrm{Dice}(m_{gen}^{pred}, m_{real}^{pred}) = 0.6684$ , and quality metric (Table 2) of 0.7478, and 0.5952, 0.5963 and 0.7564 for CT Organ, respectively. This is likely because the diversity of object class combinations seen in training scales exponentially with the number of classes in order to prepare the model for all possible combinations (Sec. 1.3), so the model is \"spread thin\". However, this relaxation of the mask constraint has its own benefits, detailed next.",
|
| 642 |
+
"bbox": [
|
| 643 |
+
212,
|
| 644 |
+
535,
|
| 645 |
+
787,
|
| 646 |
+
718
|
| 647 |
+
],
|
| 648 |
+
"page_idx": 6
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"type": "text",
|
| 652 |
+
"text": "3.2 Advantages of Mask-Ablated-Training",
|
| 653 |
+
"text_level": 1,
|
| 654 |
+
"bbox": [
|
| 655 |
+
214,
|
| 656 |
+
739,
|
| 657 |
+
576,
|
| 658 |
+
757
|
| 659 |
+
],
|
| 660 |
+
"page_idx": 6
|
| 661 |
+
},
|
| 662 |
+
{
|
| 663 |
+
"type": "text",
|
| 664 |
+
"text": "Sampling from Ablated Masks. The direct benefit of mask ablated training (MAT) is its ability to generate images from masks with classes missing. In Fig. 3 we demonstrate the effect on generated images of ablating certain classes from an input mask for breast MRI (more examples, including for CT Organ, are provided in Appendix B). For example, we see that constraining the BV+FGT",
|
| 665 |
+
"bbox": [
|
| 666 |
+
212,
|
| 667 |
+
763,
|
| 668 |
+
787,
|
| 669 |
+
842
|
| 670 |
+
],
|
| 671 |
+
"page_idx": 6
|
| 672 |
+
},
|
| 673 |
+
{
|
| 674 |
+
"type": "header",
|
| 675 |
+
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
|
| 676 |
+
"bbox": [
|
| 677 |
+
290,
|
| 678 |
+
114,
|
| 679 |
+
732,
|
| 680 |
+
128
|
| 681 |
+
],
|
| 682 |
+
"page_idx": 6
|
| 683 |
+
},
|
| 684 |
+
{
|
| 685 |
+
"type": "page_number",
|
| 686 |
+
"text": "7",
|
| 687 |
+
"bbox": [
|
| 688 |
+
774,
|
| 689 |
+
116,
|
| 690 |
+
784,
|
| 691 |
+
126
|
| 692 |
+
],
|
| 693 |
+
"page_idx": 6
|
| 694 |
+
},
|
| 695 |
+
{
|
| 696 |
+
"type": "text",
|
| 697 |
+
"text": "in breast MRI, yet keeping the breast free, results in images that have the latter two classes pre-registered while the breast shape varies, and vice-versa.",
|
| 698 |
+
"bbox": [
|
| 699 |
+
215,
|
| 700 |
+
146,
|
| 701 |
+
784,
|
| 702 |
+
176
|
| 703 |
+
],
|
| 704 |
+
"page_idx": 7
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"type": "image",
|
| 708 |
+
"img_path": "images/c947ed45d4c568b868fe0ef71b94d247220cc84bc8b34f777bf205aafc42d917.jpg",
|
| 709 |
+
"image_caption": [
|
| 710 |
+
"Fig. 3. Generating images (even rows) Fig.4. Using our model to generate imfrom masks with classes removed (odd ages that are anatomically similar to real rows), shown for breast MRI. images."
|
| 711 |
+
],
|
| 712 |
+
"image_footnote": [],
|
| 713 |
+
"bbox": [
|
| 714 |
+
240,
|
| 715 |
+
215,
|
| 716 |
+
444,
|
| 717 |
+
391
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 7
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "image",
|
| 723 |
+
"img_path": "images/b0bb9e0d7eeeeca525fee351a0aa441c04c3b0c5dbe673150e8cff3c3ce42881.jpg",
|
| 724 |
+
"image_caption": [],
|
| 725 |
+
"image_footnote": [],
|
| 726 |
+
"bbox": [
|
| 727 |
+
486,
|
| 728 |
+
215,
|
| 729 |
+
746,
|
| 730 |
+
393
|
| 731 |
+
],
|
| 732 |
+
"page_idx": 7
|
| 733 |
+
},
|
| 734 |
+
{
|
| 735 |
+
"type": "text",
|
| 736 |
+
"text": "Adjustable Anatomical Similarity of Generated Images to Real Images. One application of our model is the adjustable generation of images that are anatomically similar to some real image, which is not immediately possible for existing state-of-the-art GAN-based methods like SPADE. Consider some real image $x_0$ with anatomical mask $m$ . We can adjust the anatomical similarity to $x_0$ of an image generated from $m$ with our model by interpolating between the synthetic image and the real image in the model's latent space, as follows.",
|
| 737 |
+
"bbox": [
|
| 738 |
+
212,
|
| 739 |
+
541,
|
| 740 |
+
787,
|
| 741 |
+
651
|
| 742 |
+
],
|
| 743 |
+
"page_idx": 7
|
| 744 |
+
},
|
| 745 |
+
{
|
| 746 |
+
"type": "text",
|
| 747 |
+
"text": "If the generation/de-noising process given $m$ (starting at timestep $t = T$ ) is halted at some intermediate step $t = \\tilde{t}$ (we use $\\tilde{t} = 240$ ), we obtain a latent representation $x_{\\tilde{t}}'$ of the generating image. We can then convert the real image $x_0$ to the same latent space by applying the noisiging process to $x_0$ from $t = 0$ to $t = \\tilde{t}$ to obtain $x_{\\tilde{t}}$ . Next, the features of the two images can be mixed via the interpolated latent $x_{\\tilde{t}}^{\\lambda} := (1 - \\lambda)x_{\\tilde{t}} + \\lambda x_{\\tilde{t}}'$ , where $\\lambda \\in (0,1]$ controls the similarity of the mixed features to those of the real image. From here, $x_{\\tilde{t}}^{\\lambda}$ can be denoised back to image space to obtain the interpolated image $x_0^{\\lambda}$ . An advantage of using our mask-ablated-trained model to generate $x_0^{\\lambda}$ is that certain objects can be constrained while others are free to vary, resulting in higher, yet adjustable, semantic diversity. We demonstrate this with various $\\lambda$ in Figure 4, with only FGT+BV constrained for breast MRI, and only bone constrained for CT Organ.",
|
| 748 |
+
"bbox": [
|
| 749 |
+
212,
|
| 750 |
+
655,
|
| 751 |
+
787,
|
| 752 |
+
843
|
| 753 |
+
],
|
| 754 |
+
"page_idx": 7
|
| 755 |
+
},
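The interpolation procedure above can be sketched in code as follows, assuming a diffusers-style scheduler (with `timesteps`, `step`, and `add_noise`) and a mask-conditioned noise predictor; these names are illustrative, not the paper's released implementation.

```python
import torch

@torch.no_grad()
def interpolate_with_real(model, scheduler, x0_real, mask, lam, t_tilde=240):
    """Generate an image with adjustable anatomical similarity to x0_real."""
    # 1) Denoise pure noise down to the intermediate step t_tilde,
    #    yielding the synthetic latent x'_{t~}.
    x_synth = torch.randn_like(x0_real)
    for t in scheduler.timesteps:  # T-1, ..., 0
        if t < t_tilde:
            break
        eps = model(torch.cat([x_synth, mask], dim=1), t)
        x_synth = scheduler.step(eps, t, x_synth).prev_sample
    # 2) Noise the real image forward to the same latent step t_tilde.
    x_real_t = scheduler.add_noise(
        x0_real, torch.randn_like(x0_real), torch.tensor([t_tilde])
    )
    # 3) Mix the latents: x^lam = (1 - lam) * x_real + lam * x_synth.
    x = (1.0 - lam) * x_real_t + lam * x_synth
    # 4) Denoise the interpolated latent back to image space.
    for t in scheduler.timesteps:
        if t >= t_tilde:
            continue
        eps = model(torch.cat([x, mask], dim=1), t)
        x = scheduler.step(eps, t, x).prev_sample
    return x
```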
|
| 756 |
+
{
|
| 757 |
+
"type": "page_number",
|
| 758 |
+
"text": "8",
|
| 759 |
+
"bbox": [
|
| 760 |
+
217,
|
| 761 |
+
114,
|
| 762 |
+
228,
|
| 763 |
+
126
|
| 764 |
+
],
|
| 765 |
+
"page_idx": 7
|
| 766 |
+
},
|
| 767 |
+
{
|
| 768 |
+
"type": "header",
|
| 769 |
+
"text": "N. Konz et al.",
|
| 770 |
+
"bbox": [
|
| 771 |
+
271,
|
| 772 |
+
114,
|
| 773 |
+
367,
|
| 774 |
+
127
|
| 775 |
+
],
|
| 776 |
+
"page_idx": 7
|
| 777 |
+
},
|
| 778 |
+
{
|
| 779 |
+
"type": "text",
|
| 780 |
+
"text": "Conclusion",
|
| 781 |
+
"text_level": 1,
|
| 782 |
+
"bbox": [
|
| 783 |
+
217,
|
| 784 |
+
143,
|
| 785 |
+
326,
|
| 786 |
+
160
|
| 787 |
+
],
|
| 788 |
+
"page_idx": 8
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "text",
|
| 792 |
+
"text": "Our segmentation-guided diffusion model enables superior anatomically-controllable medical image generation, which has many potential applications, including (1) the generation of anatomically rare cases to augment some imbalanced dataset, (2) the synthesis of anatomically-paired/pre-registered data, and (3) cross-modality anatomy translation, where our model could be trained on images and masks from one sequence (e.g., T2 MRI), and then supplied with masks from another sequence (e.g., T1 MRI) to create new T2 images from the T1 masks.",
|
| 793 |
+
"bbox": [
|
| 794 |
+
212,
|
| 795 |
+
176,
|
| 796 |
+
807,
|
| 797 |
+
280
|
| 798 |
+
],
|
| 799 |
+
"page_idx": 8
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "text",
|
| 803 |
+
"text": "However, some limitations of this study are that we did not consider full 3D generation, and we did not compare to ControlNet-like latent diffusion models [23] trained completely from scratch. For future work, we are interested in further improving generated image quality, incorporating image-level class guidance [10] either for pathological or domain-related variables [13], and extending our model to segmentation-guided image translation.",
|
| 804 |
+
"bbox": [
|
| 805 |
+
212,
|
| 806 |
+
281,
|
| 807 |
+
785,
|
| 808 |
+
371
|
| 809 |
+
],
|
| 810 |
+
"page_idx": 8
|
| 811 |
+
},
|
| 812 |
+
{
|
| 813 |
+
"type": "text",
|
| 814 |
+
"text": "Acknowledgments. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R01EB031575. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.",
|
| 815 |
+
"bbox": [
|
| 816 |
+
212,
|
| 817 |
+
386,
|
| 818 |
+
785,
|
| 819 |
+
455
|
| 820 |
+
],
|
| 821 |
+
"page_idx": 8
|
| 822 |
+
},
|
| 823 |
+
{
|
| 824 |
+
"type": "text",
|
| 825 |
+
"text": "Disclosure of Interests. The authors have no competing interests.",
|
| 826 |
+
"bbox": [
|
| 827 |
+
215,
|
| 828 |
+
468,
|
| 829 |
+
674,
|
| 830 |
+
483
|
| 831 |
+
],
|
| 832 |
+
"page_idx": 8
|
| 833 |
+
},
|
| 834 |
+
{
|
| 835 |
+
"type": "text",
|
| 836 |
+
"text": "References",
|
| 837 |
+
"text_level": 1,
|
| 838 |
+
"bbox": [
|
| 839 |
+
215,
|
| 840 |
+
506,
|
| 841 |
+
325,
|
| 842 |
+
522
|
| 843 |
+
],
|
| 844 |
+
"page_idx": 8
|
| 845 |
+
},
|
| 846 |
+
{
|
| 847 |
+
"type": "list",
|
| 848 |
+
"sub_type": "ref_text",
|
| 849 |
+
"list_items": [
|
| 850 |
+
"1. Partial label learning: Taxonomy, analysis and outlook. Neural Networks (2023)",
|
| 851 |
+
"2. Cao, S., Konz, N., Duncan, J., Mazurowski, M.A.: Deep learning for breast mri style transfer with limited training data. Journal of Digital Imaging 36(2), 666-678 (2023)",
|
| 852 |
+
"3. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8789-8797 (2018)",
|
| 853 |
+
"4. Fernandez, V., Pinaya, W.H.L., Borges, P., Graham, M.S., Vercauteren, T., Cardoso, M.J.: A 3D Generative Model of Pathological Multi-modal MR Images and Segmentations. In: Deep Generative Models. pp. 132-142. Springer Nature Switzerland, Cham (2024)",
|
| 854 |
+
"5. Fernandez, V., Pinaya, W.H.L., Borges, P., Tudosiu, P.D., Graham, M.S., Vercauteren, T., Cardoso, M.J.: Can segmentation models be trained with fully synthetically generated data? In: International Workshop on Simulation and Synthesis in Medical Imaging. pp. 79-90. Springer (2022)",
|
| 855 |
+
"6. Gong, K., Johnson, K., El Fakhri, G., Li, Q., Pan, T.: Pet image denoising based on denoising diffusion probabilistic model. European Journal of Nuclear Medicine and Molecular Imaging pp. 1-11 (2023)",
|
| 856 |
+
"7. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 16000-16009 (2022)"
|
| 857 |
+
],
|
| 858 |
+
"bbox": [
|
| 859 |
+
222,
|
| 860 |
+
536,
|
| 861 |
+
785,
|
| 862 |
+
839
|
| 863 |
+
],
|
| 864 |
+
"page_idx": 8
|
| 865 |
+
},
|
| 866 |
+
{
|
| 867 |
+
"type": "header",
|
| 868 |
+
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
|
| 869 |
+
"bbox": [
|
| 870 |
+
290,
|
| 871 |
+
114,
|
| 872 |
+
730,
|
| 873 |
+
128
|
| 874 |
+
],
|
| 875 |
+
"page_idx": 8
|
| 876 |
+
},
|
| 877 |
+
{
|
| 878 |
+
"type": "page_number",
|
| 879 |
+
"text": "9",
|
| 880 |
+
"bbox": [
|
| 881 |
+
774,
|
| 882 |
+
116,
|
| 883 |
+
784,
|
| 884 |
+
126
|
| 885 |
+
],
|
| 886 |
+
"page_idx": 8
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "list",
|
| 890 |
+
"sub_type": "ref_text",
|
| 891 |
+
"list_items": [
|
| 892 |
+
"8. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)",
|
| 893 |
+
"9. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)",
|
| 894 |
+
"10. Ho, J., Salimans, T.: Classifier-free diffusion guidance. In: NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications (2021)",
|
| 895 |
+
"11. Kazerouni, A., Aghdam, E.K., Heidari, M., Azad, R., Fayyaz, M., Hacihaliloglu, I., Merhof, D.: Diffusion models in medical imaging: A comprehensive survey. Medical Image Analysis p. 102846 (2023)",
|
| 896 |
+
"12. Khader, F., Müller-Franzes, G., Tayebi Arasteh, S., Han, T., Haarburger, C., Schulze-Hagen, M., Schad, P., Engelhardt, S., Baeßler, B., Foersch, S., et al.: Denoising diffusion probabilistic models for 3d medical image generation. Scientific Reports 13(1), 7303 (2023)",
|
| 897 |
+
"13. Konz, N., Mazurowski, M.A.: Reverse engineering breast mris: Predicting acquisition parameters directly from images. In: Medical Imaging with Deep Learning (2023)",
|
| 898 |
+
"14. Lew, C.O., Harouni, M., Kirksey, E.R., Kang, E.J., Dong, H., Gu, H., Grimm, L.J., Walsh, R., Lowell, D.A., Mazurowski, M.A.: A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast mri. Scientific reports 14 1, 5383 (2024), https://api-semanticscholar.org/CorpusID:268251677",
|
| 899 |
+
"15. Loshchilov, I., Hutter, F.: Sgdr: Stochastic gradient descent with warm restarts. In: International Conference on Learning Representations (2016)",
|
| 900 |
+
"16. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2018)",
|
| 901 |
+
"17. Lyu, Q., Wang, G.: Conversion between ct and mri images using diffusion and score-matching models. arXiv preprint arXiv:2209.12104 (2022)",
|
| 902 |
+
"18. Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: International Conference on Machine Learning. pp. 8162-8171. PMLR (2021)",
|
| 903 |
+
"19. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)",
|
| 904 |
+
"20. Pinaya, W.H., Graham, M.S., Gray, R., Da Costa, P.F., Tudosiu, P.D., Wright, P., Mah, Y.H., MacKinnon, A.D., Teo, J.T., Jager, R., et al.: Fast unsupervised brain anomaly detection and segmentation with diffusion models. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 705-714. Springer (2022)",
|
| 905 |
+
"21. Pinaya, W.H., Tudosiu, P.D., Dafflon, J., Da Costa, P.F., Fernandez, V., Nachev, P., Ourselin, S., Cardoso, M.J.: Brain imaging generation with latent diffusion models. In: MICCAI Workshop on Deep Generative Models. pp. 117-126. Springer (2022)",
|
| 906 |
+
"22. Rister, B., Yi, D., Shivakumar, K., Nobashi, T., Rubin, D.L.: Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data 7(1), 381 (2020)",
|
| 907 |
+
"23. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10684-10695 (June 2022)"
|
| 908 |
+
],
|
| 909 |
+
"bbox": [
|
| 910 |
+
217,
|
| 911 |
+
146,
|
| 912 |
+
785,
|
| 913 |
+
839
|
| 914 |
+
],
|
| 915 |
+
"page_idx": 9
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "page_number",
|
| 919 |
+
"text": "10",
|
| 920 |
+
"bbox": [
|
| 921 |
+
217,
|
| 922 |
+
114,
|
| 923 |
+
235,
|
| 924 |
+
126
|
| 925 |
+
],
|
| 926 |
+
"page_idx": 9
|
| 927 |
+
},
|
| 928 |
+
{
|
| 929 |
+
"type": "header",
|
| 930 |
+
"text": "N. Konz et al.",
|
| 931 |
+
"bbox": [
|
| 932 |
+
271,
|
| 933 |
+
114,
|
| 934 |
+
366,
|
| 935 |
+
126
|
| 936 |
+
],
|
| 937 |
+
"page_idx": 9
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "list",
|
| 941 |
+
"sub_type": "ref_text",
|
| 942 |
+
"list_items": [
|
| 943 |
+
"24. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234-241. Springer (2015)",
|
| 944 |
+
"25. Saha, A., Harowicz, M.R., Grimm, L.J., Kim, C.E., Ghate, S.V., Walsh, R., Mazurowski, M.A.: A machine learning approach to radiogenomics of breast cancer: a study of 922 subjects and 529 dce-mri features. British journal of cancer 119(4), 508-516 (2018)",
|
| 945 |
+
"26. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=St1giarCHLP",
|
| 946 |
+
"27. Wang, T., Zhang, T., Zhang, B., Ouyang, H., Chen, D., Chen, Q., Wen, F.: Pretraining is all you need for image-to-image translation. arXiv preprint arXiv:2205.12952 (2022)",
|
| 947 |
+
"28. Wolleb, J., Bieder, F., Sandkühler, R., Cattin, P.C.: Diffusion models for medical anomaly detection. In: International Conference on Medical image computing and computer-assisted intervention. pp. 35-45. Springer (2022)",
|
| 948 |
+
"29. Wolleb, J., Sandkuhler, R., Bieder, F., Valmaggia, P., Cattin, P.C.: Diffusion models for implicit image segmentation ensembles. In: International Conference on Medical Imaging with Deep Learning. pp. 1336-1348. PMLR (2022)",
|
| 949 |
+
"30. Yang, J., Dvornek, N.C., Zhang, F., Shapiro, J., Lin, M., Duncan, J.S.: Unsupervised domain adaptation via disentangled representations: Application to cross-modality liver segmentation. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part II 22. pp. 255-263. Springer (2019)",
|
| 950 |
+
"31. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023)"
|
| 951 |
+
],
|
| 952 |
+
"bbox": [
|
| 953 |
+
212,
|
| 954 |
+
146,
|
| 955 |
+
787,
|
| 956 |
+
534
|
| 957 |
+
],
|
| 958 |
+
"page_idx": 10
|
| 959 |
+
},
|
| 960 |
+
{
|
| 961 |
+
"type": "header",
|
| 962 |
+
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
|
| 963 |
+
"bbox": [
|
| 964 |
+
290,
|
| 965 |
+
114,
|
| 966 |
+
730,
|
| 967 |
+
128
|
| 968 |
+
],
|
| 969 |
+
"page_idx": 10
|
| 970 |
+
},
|
| 971 |
+
{
|
| 972 |
+
"type": "page_number",
|
| 973 |
+
"text": "11",
|
| 974 |
+
"bbox": [
|
| 975 |
+
767,
|
| 976 |
+
116,
|
| 977 |
+
782,
|
| 978 |
+
126
|
| 979 |
+
],
|
| 980 |
+
"page_idx": 10
|
| 981 |
+
},
|
| 982 |
+
{
|
| 983 |
+
"type": "text",
|
| 984 |
+
"text": "A Additional Training and Architectural Details",
|
| 985 |
+
"text_level": 1,
|
| 986 |
+
"bbox": [
|
| 987 |
+
215,
|
| 988 |
+
143,
|
| 989 |
+
699,
|
| 990 |
+
162
|
| 991 |
+
],
|
| 992 |
+
"page_idx": 11
|
| 993 |
+
},
|
| 994 |
+
{
|
| 995 |
+
"type": "text",
|
| 996 |
+
"text": "Our model (segmentation-guided diffusion). The denoising model (UNet)'s encoder is constructed from six standard ResNet down-sampling blocks, with the fifth block also having spatial self-attention, with (128, 128, 256, 256, 512, 512) output channels, respectively. The decoder is simply the up-sampling reverse of the encoder. We use a standard forward process variance schedule that linearly increases from $\\beta_{1} = 10^{-4}$ to $\\beta_{T} = 0.02$ [9]. For training, we use the AdamW optimizer [16] and a cosine learning rate scheduler [15] with an initial learning rate of $10^{-4}$ , with 500 linear warm-up steps. We train for 400 epochs with a batch size of 64 (about 26 hours), and we perform all training and evaluation on four 48 GB NVIDIA A6000 GPUs. We use the Diffusers library as a backbone (https://github.com/huggingface/diffusers).",
|
| 997 |
+
"bbox": [
|
| 998 |
+
212,
|
| 999 |
+
171,
|
| 1000 |
+
787,
|
| 1001 |
+
338
|
| 1002 |
+
],
|
| 1003 |
+
"page_idx": 11
|
| 1004 |
+
},
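Given that the paper uses the Diffusers library as a backbone, the described denoising UNet could be instantiated roughly as below; the exact block types and attention placement are assumptions inferred from the stated architecture, not the released config.

```python
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=256,
    in_channels=2,   # noisy image + segmentation mask channels
    out_channels=1,  # predicted noise for the image channel
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D", "DownBlock2D", "DownBlock2D",
        "DownBlock2D", "AttnDownBlock2D", "DownBlock2D",
    ),
    up_block_types=(  # the up path mirrors the encoder
        "UpBlock2D", "AttnUpBlock2D", "UpBlock2D",
        "UpBlock2D", "UpBlock2D", "UpBlock2D",
    ),
)
```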
|
| 1005 |
+
{
|
| 1006 |
+
"type": "text",
|
| 1007 |
+
"text": "A.1 Comparison models",
|
| 1008 |
+
"text_level": 1,
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
215,
|
| 1011 |
+
354,
|
| 1012 |
+
429,
|
| 1013 |
+
371
|
| 1014 |
+
],
|
| 1015 |
+
"page_idx": 11
|
| 1016 |
+
},
|
| 1017 |
+
{
|
| 1018 |
+
"type": "text",
|
| 1019 |
+
"text": "SPADE. We train SPADE [19] using the default settings, with a batch size of 128 for 50 epochs. We did not adopt the changes of the recent brain MRI SPADE model [5] because they are not applicable to our datasets/task, namely: (1) the contrast-based clustering is not applicable due to us using pre-contrast MRIs or CT, (2) we work with standard categorical segmentation maps, not partial volume/probabilistic segmentation maps, so changes using the latter are not applicable, and (3) we work with independent 2D slice images, rather than full 3D volumes, so the enforcement of style and content separation via using different slices from the same volume during training is not applicable.",
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
212,
|
| 1022 |
+
375,
|
| 1023 |
+
787,
|
| 1024 |
+
512
|
| 1025 |
+
],
|
| 1026 |
+
"page_idx": 11
|
| 1027 |
+
},
|
| 1028 |
+
{
|
| 1029 |
+
"type": "text",
|
| 1030 |
+
"text": "ControlNet. We adapted ControlNet [31] to each of our medical image datasets as was instructed at their official tutorial (https://github.com/lllyasviel/ ControlNet/blob/main/docs/train.md#sd.locked) for use with datasets that are out-of-distribution (e.g., medical images) from their model's very large natural image pre-training set, using empty prompts for text inputs. We note that despite this tutorial, none of this was tested in the ControlNet paper, which may explain ControlNet's poor performance on our medical datasets.",
|
| 1031 |
+
"bbox": [
|
| 1032 |
+
212,
|
| 1033 |
+
529,
|
| 1034 |
+
787,
|
| 1035 |
+
635
|
| 1036 |
+
],
|
| 1037 |
+
"page_idx": 11
|
| 1038 |
+
},
|
| 1039 |
+
{
|
| 1040 |
+
"type": "text",
|
| 1041 |
+
"text": "This involved first finetuning the VAE for 200 epochs, then finetuning the Stable Diffusion (SD) model for 400 epochs using the respective breast MRI or CT organ training set images. We then finetuned the ControlNet with the images and their corresponding masks for segmentation guidance for 200 epochs. The pretrained (pre-finetuning) models are from the SD v1.5 checkpoints available on Hugging Face at https://huggingface.co/runwayml/stable-diffusion-v1-5. For all training, we set the batch size to 128, the initial learning rate to $10^{-4}$ , and adopted cosine annealing learning rate schedulers rate with 500 steps of warm-up.",
|
| 1042 |
+
"bbox": [
|
| 1043 |
+
212,
|
| 1044 |
+
635,
|
| 1045 |
+
795,
|
| 1046 |
+
772
|
| 1047 |
+
],
|
| 1048 |
+
"page_idx": 11
|
| 1049 |
+
},
|
| 1050 |
+
{
|
| 1051 |
+
"type": "text",
|
| 1052 |
+
"text": "A.2 Auxiliary segmentation model",
|
| 1053 |
+
"text_level": 1,
|
| 1054 |
+
"bbox": [
|
| 1055 |
+
215,
|
| 1056 |
+
787,
|
| 1057 |
+
516,
|
| 1058 |
+
804
|
| 1059 |
+
],
|
| 1060 |
+
"page_idx": 11
|
| 1061 |
+
},
|
| 1062 |
+
{
|
| 1063 |
+
"type": "text",
|
| 1064 |
+
"text": "We used the MONAI UNet (https://docs.monai.io/en/stable/networks.html) with 1-channel input and (number of target object classes + 1)-channel",
|
| 1065 |
+
"bbox": [
|
| 1066 |
+
212,
|
| 1067 |
+
809,
|
| 1068 |
+
785,
|
| 1069 |
+
840
|
| 1070 |
+
],
|
| 1071 |
+
"page_idx": 11
|
| 1072 |
+
},
|
| 1073 |
+
{
|
| 1074 |
+
"type": "page_number",
|
| 1075 |
+
"text": "12",
|
| 1076 |
+
"bbox": [
|
| 1077 |
+
217,
|
| 1078 |
+
114,
|
| 1079 |
+
235,
|
| 1080 |
+
126
|
| 1081 |
+
],
|
| 1082 |
+
"page_idx": 11
|
| 1083 |
+
},
|
| 1084 |
+
{
|
| 1085 |
+
"type": "header",
|
| 1086 |
+
"text": "N. Konz et al.",
|
| 1087 |
+
"bbox": [
|
| 1088 |
+
271,
|
| 1089 |
+
114,
|
| 1090 |
+
367,
|
| 1091 |
+
127
|
| 1092 |
+
],
|
| 1093 |
+
"page_idx": 11
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "text",
|
| 1097 |
+
"text": "output. The sequence of intermediate UNet channels was set to (16, 32, 64, 128, 256). We trained each model for 100 epochs with a batch size of 8 and selected the models with the lowest validation loss, with an initial learning rate of $10^{-3}$ and a cosine annealing scheduler.",
|
| 1098 |
+
"bbox": [
|
| 1099 |
+
212,
|
| 1100 |
+
146,
|
| 1101 |
+
787,
|
| 1102 |
+
205
|
| 1103 |
+
],
|
| 1104 |
+
"page_idx": 12
|
| 1105 |
+
},
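A minimal instantiation of this auxiliary network with MONAI could look like the following; `strides` is our assumption (MONAI's UNet requires it), chosen as the usual 2x downsampling per level.

```python
from monai.networks.nets import UNet

num_classes = 3  # e.g., breast, blood vessels, FGT for the breast MRI set
seg_model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=num_classes + 1,  # target object classes + background
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
)
```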
|
| 1106 |
+
{
|
| 1107 |
+
"type": "text",
|
| 1108 |
+
"text": "B Additional Sampled Images",
|
| 1109 |
+
"text_level": 1,
|
| 1110 |
+
"bbox": [
|
| 1111 |
+
215,
|
| 1112 |
+
229,
|
| 1113 |
+
522,
|
| 1114 |
+
247
|
| 1115 |
+
],
|
| 1116 |
+
"page_idx": 12
|
| 1117 |
+
},
|
| 1118 |
+
{
|
| 1119 |
+
"type": "image",
|
| 1120 |
+
"img_path": "images/3187692269ffb475e288d37e7e4e76738b288fb7fae4dbbc16d0cb26df47c203.jpg",
|
| 1121 |
+
"image_caption": [
|
| 1122 |
+
"Fig. 5. Additional samples from all segmentation-conditional models; breast MRI on the left, CT organ on the right. Please see Fig. 2 caption for more details."
|
| 1123 |
+
],
|
| 1124 |
+
"image_footnote": [],
|
| 1125 |
+
"bbox": [
|
| 1126 |
+
279,
|
| 1127 |
+
287,
|
| 1128 |
+
718,
|
| 1129 |
+
569
|
| 1130 |
+
],
|
| 1131 |
+
"page_idx": 12
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "image",
|
| 1135 |
+
"img_path": "images/5f28b3674ee09fe63e8bba077dd102ff478fdcfc872d3bc47ea84e5e8f1db9ef.jpg",
|
| 1136 |
+
"image_caption": [],
|
| 1137 |
+
"image_footnote": [],
|
| 1138 |
+
"bbox": [
|
| 1139 |
+
272,
|
| 1140 |
+
672,
|
| 1141 |
+
496,
|
| 1142 |
+
715
|
| 1143 |
+
],
|
| 1144 |
+
"page_idx": 12
|
| 1145 |
+
},
|
| 1146 |
+
{
|
| 1147 |
+
"type": "image",
|
| 1148 |
+
"img_path": "images/73aca0f37506ffc44f582922768cd66df6c04b234390ae373bbe9d700bc25d89.jpg",
|
| 1149 |
+
"image_caption": [],
|
| 1150 |
+
"image_footnote": [],
|
| 1151 |
+
"bbox": [
|
| 1152 |
+
504,
|
| 1153 |
+
672,
|
| 1154 |
+
727,
|
| 1155 |
+
715
|
| 1156 |
+
],
|
| 1157 |
+
"page_idx": 12
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "image",
|
| 1161 |
+
"img_path": "images/a230f41b640fc9dd3f72901d54cfdff16bebd2fe07c2a19aa5945cc438d36e67.jpg",
|
| 1162 |
+
"image_caption": [
|
| 1163 |
+
"Fig. 6. Additional samples from our mask-ablated-trained model with various classes removed from given input segmentations for breast MRI (top) and CT Organ (bottom)."
|
| 1164 |
+
],
|
| 1165 |
+
"image_footnote": [],
|
| 1166 |
+
"bbox": [
|
| 1167 |
+
272,
|
| 1168 |
+
720,
|
| 1169 |
+
493,
|
| 1170 |
+
763
|
| 1171 |
+
],
|
| 1172 |
+
"page_idx": 12
|
| 1173 |
+
},
|
| 1174 |
+
{
|
| 1175 |
+
"type": "image",
|
| 1176 |
+
"img_path": "images/d46b0f6b136429441c04b3d78831cc654355850bc644d71032c8803ebefa2cfa.jpg",
|
| 1177 |
+
"image_caption": [],
|
| 1178 |
+
"image_footnote": [],
|
| 1179 |
+
"bbox": [
|
| 1180 |
+
503,
|
| 1181 |
+
720,
|
| 1182 |
+
723,
|
| 1183 |
+
763
|
| 1184 |
+
],
|
| 1185 |
+
"page_idx": 12
|
| 1186 |
+
},
|
| 1187 |
+
{
|
| 1188 |
+
"type": "header",
|
| 1189 |
+
"text": "Anatomically-Controllable Segmentation-Guided Diffusion Models",
|
| 1190 |
+
"bbox": [
|
| 1191 |
+
290,
|
| 1192 |
+
114,
|
| 1193 |
+
732,
|
| 1194 |
+
128
|
| 1195 |
+
],
|
| 1196 |
+
"page_idx": 12
|
| 1197 |
+
},
|
| 1198 |
+
{
|
| 1199 |
+
"type": "page_number",
|
| 1200 |
+
"text": "13",
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
767,
|
| 1203 |
+
114,
|
| 1204 |
+
785,
|
| 1205 |
+
126
|
| 1206 |
+
],
|
| 1207 |
+
"page_idx": 12
|
| 1208 |
+
}
|
| 1209 |
+
]
|
2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_model.json
ADDED
|
@@ -0,0 +1,1568 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.266,
|
| 8 |
+
0.058,
|
| 9 |
+
0.718
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2402.05210v4 [eess.IV] 19 Jun 2024"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.228,
|
| 18 |
+
0.141,
|
| 19 |
+
0.776,
|
| 20 |
+
0.207
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.217,
|
| 29 |
+
0.234,
|
| 30 |
+
0.786,
|
| 31 |
+
0.252
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Nicholas Konz<sup>1*</sup>, Yuwen Chen<sup>1</sup>, Haoyu Dong<sup>1</sup>, and Maciej A. Mazurowski<sup>1,2,3,4</sup>"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.226,
|
| 40 |
+
0.262,
|
| 41 |
+
0.776,
|
| 42 |
+
0.278
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "\\(^{1}\\) Department of Electrical and Computer Engineering, Duke University, NC, USA"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.289,
|
| 51 |
+
0.278,
|
| 52 |
+
0.687,
|
| 53 |
+
0.291
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "\\(^{2}\\) Department of Radiology, Duke University, NC, USA"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.289,
|
| 62 |
+
0.291,
|
| 63 |
+
0.713,
|
| 64 |
+
0.305
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "\\(^{3}\\) Department of Computer Science, Duke University, NC, USA"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.251,
|
| 73 |
+
0.305,
|
| 74 |
+
0.753,
|
| 75 |
+
0.319
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "\\(^{4}\\) Department of Biostatistics & Bioinformatics, Duke University, NC, USA"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.339,
|
| 84 |
+
0.319,
|
| 85 |
+
0.663,
|
| 86 |
+
0.332
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "*corresponding author nicholas.konz@duke.edu"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.261,
|
| 95 |
+
0.37,
|
| 96 |
+
0.741,
|
| 97 |
+
0.634
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "Abstract. Diffusion models have enabled remarkably high-quality medical image generation, yet it is challenging to enforce anatomical constraints in generated images. To this end, we propose a diffusion model-based method that supports anatomically-controllable medical image generation, by following a multi-class anatomical segmentation mask at each sampling step. We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints while allowing flexibility in other anatomical areas. We compare our method (\"SegGuidedDiff\") to existing methods on breast MRI and abdominal/neck-to-pelvis CT datasets with a wide range of anatomical objects. Results show that our method reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks on both datasets, and is on par for general anatomical realism. Finally, our model also enjoys the extra benefit of being able to adjust the anatomical similarity of generated images to real images of choice through interpolation in its latent space. SegGuidedDiff has many applications, including cross-modality translation, and the generation of paired or counterfactual data. Our code is available at https://github.com/mazurowski-lab/segmentation-guided-diffusion."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.262,
|
| 106 |
+
0.647,
|
| 107 |
+
0.724,
|
| 108 |
+
0.662
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Keywords: diffusion models \\(\\cdot\\) image generation \\(\\cdot\\) semantic synthesis"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "title",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.216,
|
| 117 |
+
0.687,
|
| 118 |
+
0.341,
|
| 119 |
+
0.703
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "Introduction"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.214,
|
| 128 |
+
0.719,
|
| 129 |
+
0.788,
|
| 130 |
+
0.841
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "Denoising diffusion probabilistic models [9] (DDPMs, or just \"diffusion models\") have shown extensive applications in medical image analysis [11] due to their ability to generate high-quality, high-resolution images, such as for direct image generation [21,12], image segmentation [29], anomaly detection [20,28], cross-modality image translation [17], and image denoising [6]. However, standard generative models like DDPMs can still fail to create anatomically plausible tissue (Fig 1), and such anatomy is not precisely customizable. Our proposed solution is to incorporate anatomical information as a prior for image generation"
|
| 134 |
+
}
|
| 135 |
+
],
|
| 136 |
+
[
|
| 137 |
+
{
|
| 138 |
+
"type": "page_number",
|
| 139 |
+
"bbox": [
|
| 140 |
+
0.218,
|
| 141 |
+
0.116,
|
| 142 |
+
0.23,
|
| 143 |
+
0.127
|
| 144 |
+
],
|
| 145 |
+
"angle": 0,
|
| 146 |
+
"content": "2"
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "header",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.272,
|
| 152 |
+
0.115,
|
| 153 |
+
0.368,
|
| 154 |
+
0.128
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": "N. Konz et al."
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.214,
|
| 163 |
+
0.147,
|
| 164 |
+
0.784,
|
| 165 |
+
0.177
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "via a segmentation mask for different types of tissue, organs, etc., providing the network with a more direct learning signal for anatomical realism."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.214,
|
| 174 |
+
0.178,
|
| 175 |
+
0.788,
|
| 176 |
+
0.375
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "Generating an image from a mask (a.k.a. semantic synthesis) is a type of image-to-image translation task. Existing translation works include GAN-based [30,3,2] and diffusion-based models [28,23,31], yet these do not directly enforce precise pixel-wise anatomical constraints. Recent works introduced fine-tuning large pre-trained text-to-image latent diffusion models (LDMs) for segmentation-conditioning on natural images [31,27], but we found that these adapt poorly to medical images (Sec. 3.1). Instead, we propose the first diffusion model for generating medical images from anatomical segmentations, which is assisted by it being an image-space diffusion model trained from scratch. Image-space-based diffusion models are especially amenable for strict segmentation guidance because a conditioning mask can be used as-is for every small step of the denoising process, with no conversion to some abstract latent space as in certain LDMs [31,27,23] (or GANs) where precise spatial guidance may be lost."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "image",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.329,
|
| 185 |
+
0.401,
|
| 186 |
+
0.671,
|
| 187 |
+
0.555
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": null
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "image_caption",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.214,
|
| 196 |
+
0.571,
|
| 197 |
+
0.788,
|
| 198 |
+
0.601
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "Fig. 1. Standard diffusion models (right) can fail to create realistic tissue even if the overall image appears high-quality, motivating our segmentation-guided model (center)."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.214,
|
| 207 |
+
0.628,
|
| 208 |
+
0.788,
|
| 209 |
+
0.794
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Segmentation-guided generation would be even more flexible if only certain object classes could be constrained in an input mask, while others are free to be inferred by the model. This opens up further applications such as generating images from incomplete masks ([1]), the generation of anatomically paired/registered data, \"counterfactual\" analysis of existing annotated data, and others. To solve this, we propose a mask-ablated training strategy to provide the model with all possible combinations of missing classes in masks during training, teaching it to make such inferences when generating new images. Notably, we also demonstrate how interpolating within the latent space of our mask ablated-trained model enables generating images with adjustable anatomical similarity to some real image (Sec. 3.2)."
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.214,
|
| 218 |
+
0.795,
|
| 219 |
+
0.788,
|
| 220 |
+
0.842
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "In summary, we introduce a diffusion model, \"SegGuidedDiff\" that can flexibly and precisely generate medical images according to anatomical masks. We evaluate our model's ability to generate images conditioned on a range of"
|
| 224 |
+
}
|
| 225 |
+
],
|
| 226 |
+
[
|
| 227 |
+
{
|
| 228 |
+
"type": "header",
|
| 229 |
+
"bbox": [
|
| 230 |
+
0.292,
|
| 231 |
+
0.115,
|
| 232 |
+
0.733,
|
| 233 |
+
0.129
|
| 234 |
+
],
|
| 235 |
+
"angle": 0,
|
| 236 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"type": "page_number",
|
| 240 |
+
"bbox": [
|
| 241 |
+
0.775,
|
| 242 |
+
0.117,
|
| 243 |
+
0.785,
|
| 244 |
+
0.127
|
| 245 |
+
],
|
| 246 |
+
"angle": 0,
|
| 247 |
+
"content": "3"
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"type": "text",
|
| 251 |
+
"bbox": [
|
| 252 |
+
0.214,
|
| 253 |
+
0.147,
|
| 254 |
+
0.784,
|
| 255 |
+
0.238
|
| 256 |
+
],
|
| 257 |
+
"angle": 0,
|
| 258 |
+
"content": "anatomical objects of interest for breast MRI and neck-to-pelvis CT, where it outperforms state-of-the-art mask-conditional generative models in its faithfulness to input anatomical masks, and is on par for general anatomical realism (Sec. 3.1). Our code is publicly released at https://github.com/mazurowski-lab/segmentation-guided-diffusion with a focus on usability on any dataset, along with a dataset of paired \"pre-registered\" generated breast MRIs."
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"type": "title",
|
| 262 |
+
"bbox": [
|
| 263 |
+
0.216,
|
| 264 |
+
0.261,
|
| 265 |
+
0.331,
|
| 266 |
+
0.276
|
| 267 |
+
],
|
| 268 |
+
"angle": 0,
|
| 269 |
+
"content": "1 Method"
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"type": "title",
|
| 273 |
+
"bbox": [
|
| 274 |
+
0.216,
|
| 275 |
+
0.292,
|
| 276 |
+
0.557,
|
| 277 |
+
0.306
|
| 278 |
+
],
|
| 279 |
+
"angle": 0,
|
| 280 |
+
"content": "1.1 A Brief Review of Diffusion Models"
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"type": "text",
|
| 284 |
+
"bbox": [
|
| 285 |
+
0.214,
|
| 286 |
+
0.317,
|
| 287 |
+
0.785,
|
| 288 |
+
0.436
|
| 289 |
+
],
|
| 290 |
+
"angle": 0,
|
| 291 |
+
"content": "Denoising diffusion probabilistic models [9] (DDPMs, or diffusion models for short) are a type of generative latent variable model that learns to sample from some data distribution \\( p(x_0) \\) (\\( x_0 \\in \\mathbb{R}^n \\)) by defining a stochastic process that gradually converts the data to noise—the forward process \\( q(x_t | x_{t-1}) \\) and learning to reverse this process via a learned denoising process \\( p_\\theta(x_{t-1} | x_t) \\), where \\( \\theta \\) is the model parameters. Data is generated by iteratively sampling from \\( p_\\theta(x_{t-1} | x_t) \\), beginning with a Gaussian noise sample \\( x_T \\sim p(x_T) \\), for \\( t = T - 1, \\ldots, 0 \\) (we use \\( T = 1000 \\)) until an image \\( x_0 \\) is recovered."
|
| 292 |
+
},
|
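To make the reverse process concrete, here is a minimal, hypothetical PyTorch sketch of DDPM ancestral sampling (not the paper's released code); it assumes a trained noise-prediction network `eps_model` and precomputed 1-D schedule tensors `alphas`, `alphas_bar`, and `betas`.

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, shape, alphas, alphas_bar, betas, T=1000, device="cuda"):
    """Start from x_T ~ N(0, I) and iteratively sample x_{t-1} ~ p_theta(x_{t-1} | x_t)."""
    x = torch.randn(shape, device=device)  # x_T
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)  # predicted noise eps_theta(x_t, t)
        # posterior mean of p_theta(x_{t-1} | x_t) under the eps-parameterization
        mean = (x - betas[t] / torch.sqrt(1.0 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean  # recovered image x_0
    return x
```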
| 293 |
+
{
|
| 294 |
+
"type": "text",
|
| 295 |
+
"bbox": [
|
| 296 |
+
0.214,
|
| 297 |
+
0.438,
|
| 298 |
+
0.787,
|
| 299 |
+
0.56
|
| 300 |
+
],
|
| 301 |
+
"angle": 0,
|
| 302 |
+
"content": "Any forward process step can be written explicitly as \\( x_{t} = \\sqrt{\\overline{\\alpha}_{t}} x_{0} + \\sqrt{1 - \\overline{\\alpha}_{t}}\\epsilon \\) where \\( \\epsilon \\sim \\mathcal{N}(0,I_n) \\), and \\( \\alpha_{t} \\coloneqq 1 - \\beta_{t} \\) given the variance of the additive prescheduled noise \\( \\beta_{t} \\), and \\( \\overline{\\alpha}_t \\coloneqq \\prod_{s = 1}^t\\alpha_s \\). DDPMs can be trained by the usual evidence lower bound (ELBO) maximization, which can be approximately optimized in a relatively simple form by training a network \\( \\epsilon_{\\theta}(x_t,t) \\) to predict the noise \\( \\epsilon \\) added to each datapoint \\( x_0 \\) for various time steps \\( t \\), with the loss \\( L = \\mathbb{E}_{x_0,t,\\epsilon}\\left[||\\epsilon -\\epsilon_\\theta (x_t,t)||^2\\right] = \\mathbb{E}_{x_0,t,\\epsilon}\\left[||\\epsilon -\\epsilon_\\theta (\\sqrt{\\overline{\\alpha}_t} x_0 + \\sqrt{1 - \\overline{\\alpha}_t}\\epsilon ,t)||^2\\right] \\), which has proven to be the typically superior DDPM loss in practice [18]."
|
| 303 |
+
},
|
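The simplified noise-prediction loss above translates directly into a one-step training objective; a hedged sketch (the names `eps_model`, `alphas_bar`, and `x0` are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def ddpm_loss(eps_model, x0, alphas_bar, T=1000):
    """L = E_{x0, t, eps} || eps - eps_theta(sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, t) ||^2"""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)      # t ~ Uniform over timesteps
    eps = torch.randn_like(x0)                            # eps ~ N(0, I_n)
    abar = alphas_bar[t].view(b, 1, 1, 1)
    x_t = abar.sqrt() * x0 + (1.0 - abar).sqrt() * eps    # closed-form forward step
    return F.mse_loss(eps_model(x_t, t), eps)             # noise-prediction MSE
```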
| 304 |
+
{
|
| 305 |
+
"type": "title",
|
| 306 |
+
"bbox": [
|
| 307 |
+
0.216,
|
| 308 |
+
0.581,
|
| 309 |
+
0.691,
|
| 310 |
+
0.595
|
| 311 |
+
],
|
| 312 |
+
"angle": 0,
|
| 313 |
+
"content": "1.2 Adding Segmentation Guidance to Diffusion Models"
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"type": "text",
|
| 317 |
+
"bbox": [
|
| 318 |
+
0.214,
|
| 319 |
+
0.605,
|
| 320 |
+
0.785,
|
| 321 |
+
0.71
|
| 322 |
+
],
|
| 323 |
+
"angle": 0,
|
| 324 |
+
"content": "Rather than sampling from the unconditional distribution \\( p(x_0) \\), our goal is to condition the generation of some \\( c \\)-channel image \\( x_0 \\in \\mathbb{R}^{c \\times h \\times w} \\) to follow some multi-class anatomical mask \\( m \\in \\{0, \\dots, C - 1\\}^{h \\times w} \\), where \\( C \\) is the number of classes (including background), or in other words, sample from \\( p(x_0|m) \\). While modifying the data likelihood \\( p(x_0|m) \\) to be mask-conditional does not alter the noisng process \\( q(x_t|x_{t-1}) \\), it does modify the reverse process \\( p_\\theta(x_{t-1}|x_t, m) \\) and noise-predicting network \\( \\epsilon_\\theta \\). Propagating this to the ELBO results in a loss of"
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "equation",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.317,
|
| 330 |
+
0.722,
|
| 331 |
+
0.785,
|
| 332 |
+
0.74
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "\\[\nL _ {m} = \\mathbb {E} _ {\\left(x _ {0}, m\\right), t, \\epsilon} \\left[ | | \\epsilon - \\epsilon_ {\\theta} \\left(\\sqrt {\\bar {\\alpha} _ {t}} x _ {0} + \\sqrt {1 - \\bar {\\alpha} _ {t}} \\epsilon , t \\mid m\\right) | | ^ {2} \\right] \\tag {1}\n\\]"
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "text",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.214,
|
| 341 |
+
0.75,
|
| 342 |
+
0.784,
|
| 343 |
+
0.84
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "for training our model, where each training image \\( x_0 \\) has some paired mask \\( m \\). We propose to implement this simply by concatenating \\( m \\) channel-wise to the network input at every denoising step, i.e., modifying the network to have an additional input channel as \\( \\epsilon_{\\theta}(x_t,t|m):\\mathbb{R}^{(c + 1)\\times h\\times w}\\to \\mathbb{R}^{c\\times h\\times w} \\), which can be any image-to-image model (see Sec. 3). We use the DDIM algorithm [26] for fast, yet high-quality sampling."
|
| 347 |
+
}
|
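The proposed conditioning amounts to one extra input channel; a minimal sketch of this wrapper (class name ours; it assumes a UNet that already accepts a (c+1)-channel input):

```python
import torch

class SegGuidedEps(torch.nn.Module):
    """Computes eps_theta(x_t, t | m) by concatenating the mask m channel-wise."""
    def __init__(self, unet):
        super().__init__()
        self.unet = unet  # expects (c+1)-channel input, returns c-channel noise prediction

    def forward(self, x_t, t, mask):
        # the mask is supplied as-is at every denoising step
        return self.unet(torch.cat([x_t, mask.float()], dim=1), t)
```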
| 348 |
+
],
|
| 349 |
+
[
|
| 350 |
+
{
|
| 351 |
+
"type": "page_number",
|
| 352 |
+
"bbox": [
|
| 353 |
+
0.218,
|
| 354 |
+
0.116,
|
| 355 |
+
0.23,
|
| 356 |
+
0.127
|
| 357 |
+
],
|
| 358 |
+
"angle": 0,
|
| 359 |
+
"content": "4"
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"type": "header",
|
| 363 |
+
"bbox": [
|
| 364 |
+
0.272,
|
| 365 |
+
0.115,
|
| 366 |
+
0.368,
|
| 367 |
+
0.128
|
| 368 |
+
],
|
| 369 |
+
"angle": 0,
|
| 370 |
+
"content": "N. Konz et al."
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"type": "text",
|
| 374 |
+
"bbox": [
|
| 375 |
+
0.214,
|
| 376 |
+
0.147,
|
| 377 |
+
0.788,
|
| 378 |
+
0.222
|
| 379 |
+
],
|
| 380 |
+
"angle": 0,
|
| 381 |
+
"content": "This simple method generates images that are very faithful to input masks (Fig. 2, Table 1), because the denoising process is conditioned on the mask at each of its many gradual steps, allowing the network to follow the masks because they provide helpful spatial information that is directly correlated with the optimal denoised model output that minimizes the loss."
|
| 382 |
+
},
|
| 383 |
+
{
|
| 384 |
+
"type": "title",
|
| 385 |
+
"bbox": [
|
| 386 |
+
0.216,
|
| 387 |
+
0.246,
|
| 388 |
+
0.573,
|
| 389 |
+
0.263
|
| 390 |
+
],
|
| 391 |
+
"angle": 0,
|
| 392 |
+
"content": "1.3 Mask-Ablated Training and Sampling"
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"type": "text",
|
| 396 |
+
"bbox": [
|
| 397 |
+
0.214,
|
| 398 |
+
0.272,
|
| 399 |
+
0.788,
|
| 400 |
+
0.379
|
| 401 |
+
],
|
| 402 |
+
"angle": 0,
|
| 403 |
+
"content": "Given that our model is mask-guided, the quality of these masks is important; a generated image may be misleading if the input mask is not fully annotated, known as the partial label problem [1] in medical image analysis. This is because the model may assume that un-annotated objects should not be present in the output image whatsoever (associating the missing/zero pixel labels as background or some other object), when in reality we may desire for the model to simply \"fill in\"/infer the unprovided objects."
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"type": "text",
|
| 407 |
+
"bbox": [
|
| 408 |
+
0.214,
|
| 409 |
+
0.379,
|
| 410 |
+
0.788,
|
| 411 |
+
0.53
|
| 412 |
+
],
|
| 413 |
+
"angle": 0,
|
| 414 |
+
"content": "To alleviate this problem, we propose a mask-ablated training (MAT) strategy (Algorithm 1), which provides examples of masks with various numbers and combinations of classes removed for the model to learn to generate images from during training. This can be thought of as a form of self-supervised learning of anatomical object representations (somewhat analogous to MAE [7]). We set all \\(2^{C-1}\\) of these possible combinations of classes being removed from a given mask in training to occur with equal probability so that the model can handle each equally, although any other balancing of these probabilities for different object classes could be used. Finally, we note that our MAT algorithm is immediately applicable to any mask-conditional generative model."
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"type": "code_caption",
|
| 418 |
+
"bbox": [
|
| 419 |
+
0.229,
|
| 420 |
+
0.535,
|
| 421 |
+
0.736,
|
| 422 |
+
0.549
|
| 423 |
+
],
|
| 424 |
+
"angle": 0,
|
| 425 |
+
"content": "Algorithm 1: Segmentation-guided model training with mask ablation."
|
| 426 |
+
},
|
| 427 |
+
{
|
| 428 |
+
"type": "algorithm",
|
| 429 |
+
"bbox": [
|
| 430 |
+
0.22,
|
| 431 |
+
0.551,
|
| 432 |
+
0.787,
|
| 433 |
+
0.737
|
| 434 |
+
],
|
| 435 |
+
"angle": 0,
|
| 436 |
+
"content": "Input: number of mask classes \\(C\\) dataset \\(p(x_0,m)\\) \nrepeat \\(\\begin{array}{l}x_0,m\\sim p(x_0,m)\\\\ \\text{for} c = 1,\\ldots ,C - 1\\text{do}\\\\ \\delta \\sim \\mathrm{Bernoulli}(0.5)\\\\ \\text{if}\\delta = 1\\text{then}\\\\ |m[m = c] = 0\\\\ \\text{end}\\\\ \\epsilon \\sim \\mathcal{N}(0,I_n);t\\sim \\mathrm{Uniform}(\\{1,\\dots,T\\})\\\\ x_t = \\sqrt{\\bar{\\alpha}_t} x_0 + \\sqrt{1 - \\bar{\\alpha}_t}\\epsilon \\\\ \\mathrm{Update}\\theta \\mathrm{with}\\nabla_\\theta \\| \\epsilon -\\epsilon_\\theta (x_t,t|m)\\| ^2\\\\ \\mathrm{until}\\mathrm{converged}; \\end{array}\\)"
|
| 437 |
+
},
|
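A minimal sketch of the mask-ablation step of Algorithm 1 (function name ours): each non-background class is independently removed with probability 0.5, so every one of the \(2^{C-1}\) class subsets occurs with equal probability.

```python
import torch

def ablate_mask(m, num_classes):
    """Randomly zero out each non-background class of integer masks m (B x 1 x H x W)."""
    m = m.clone()
    for c in range(1, num_classes):                            # class 0 is background
        delta = torch.rand(m.shape[0], device=m.device) < 0.5  # Bernoulli(0.5) per sample
        for i in torch.nonzero(delta).flatten():
            m[i][m[i] == c] = 0                                # remove class c from sample i
    return m
```

During training, such a function would be applied to each sampled batch of masks before computing the conditional noise-prediction loss.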
| 438 |
+
{
|
| 439 |
+
"type": "title",
|
| 440 |
+
"bbox": [
|
| 441 |
+
0.216,
|
| 442 |
+
0.762,
|
| 443 |
+
0.338,
|
| 444 |
+
0.779
|
| 445 |
+
],
|
| 446 |
+
"angle": 0,
|
| 447 |
+
"content": "2 Datasets"
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"type": "text",
|
| 451 |
+
"bbox": [
|
| 452 |
+
0.214,
|
| 453 |
+
0.795,
|
| 454 |
+
0.787,
|
| 455 |
+
0.843
|
| 456 |
+
],
|
| 457 |
+
"angle": 0,
|
| 458 |
+
"content": "Breast MRI: Our first dataset is a 100-patient subset of the Duke Breast Cancer MRI dataset [25]. We use all 2D image slices from the fat-saturated gradient echo T1-weighted pre-contrast sequence, with a train/test split of 70/15 patients,"
|
| 459 |
+
}
|
| 460 |
+
],
|
| 461 |
+
[
|
| 462 |
+
{
|
| 463 |
+
"type": "header",
|
| 464 |
+
"bbox": [
|
| 465 |
+
0.292,
|
| 466 |
+
0.115,
|
| 467 |
+
0.731,
|
| 468 |
+
0.129
|
| 469 |
+
],
|
| 470 |
+
"angle": 0,
|
| 471 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"type": "page_number",
|
| 475 |
+
"bbox": [
|
| 476 |
+
0.775,
|
| 477 |
+
0.117,
|
| 478 |
+
0.785,
|
| 479 |
+
0.127
|
| 480 |
+
],
|
| 481 |
+
"angle": 0,
|
| 482 |
+
"content": "5"
|
| 483 |
+
},
|
| 484 |
+
{
|
| 485 |
+
"type": "text",
|
| 486 |
+
"bbox": [
|
| 487 |
+
0.214,
|
| 488 |
+
0.147,
|
| 489 |
+
0.788,
|
| 490 |
+
0.329
|
| 491 |
+
],
|
| 492 |
+
"angle": 0,
|
| 493 |
+
"content": "resulting in \\(\\sim 12000 / 2500\\) slice images per split. We also keep a held-out training set of 15 patients for additional experiments. All images have full segmentation annotations for (1) breast, (2) blood vessels (BV), and (3) fibroglandular/dense tissue (FGT) provided at [14]. Notably, the FGT and BV have very high variability in shape, size, and other morphological characteristics, posing a challenge for generative models to realistically capture. CT Organ: Our second dataset is a 40-patient subset of neck-to-pelvis and abdominal CT scans from [22], with segmentation annotations for liver, bladder, lungs, kidney, and bone. This results in a train/test split of \\(\\sim 11000 / 2100\\) 2D slice images, given a patient-wise split of \\(24 / 8\\), as well as a held-out training set of 8 patients. All generative models are trained on the training sets, and the auxiliary segmentation network, introduced next, is trained on the held-out training sets."
|
| 494 |
+
},
|
| 495 |
+
{
|
| 496 |
+
"type": "title",
|
| 497 |
+
"bbox": [
|
| 498 |
+
0.216,
|
| 499 |
+
0.354,
|
| 500 |
+
0.377,
|
| 501 |
+
0.371
|
| 502 |
+
],
|
| 503 |
+
"angle": 0,
|
| 504 |
+
"content": "3 Experiments"
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "text",
|
| 508 |
+
"bbox": [
|
| 509 |
+
0.214,
|
| 510 |
+
0.387,
|
| 511 |
+
0.788,
|
| 512 |
+
0.448
|
| 513 |
+
],
|
| 514 |
+
"angle": 0,
|
| 515 |
+
"content": "Training, Architecture and Implementational Details. All images are resized to \\(256 \\times 256\\) and normalized to [0, 255]. We use a UNet architecture [24] for the denoising model \\(\\epsilon_{\\theta}\\), modified to take two channels (image and mask) as input; see Appendix A for additional training and architecture details."
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "title",
|
| 519 |
+
"bbox": [
|
| 520 |
+
0.215,
|
| 521 |
+
0.472,
|
| 522 |
+
0.677,
|
| 523 |
+
0.487
|
| 524 |
+
],
|
| 525 |
+
"angle": 0,
|
| 526 |
+
"content": "3.1 Comparison to Existing Image Generation Models"
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"type": "text",
|
| 530 |
+
"bbox": [
|
| 531 |
+
0.214,
|
| 532 |
+
0.499,
|
| 533 |
+
0.788,
|
| 534 |
+
0.667
|
| 535 |
+
],
|
| 536 |
+
"angle": 0,
|
| 537 |
+
"content": "We next compare our segmentation-guided diffusion model (\"SegGuidedDiff\" for short) to state-of-the-art segmentation-conditional image generation models. These are SPADE [19], a GAN-based model that uses spatially-adaptive normalization layers, and ControlNet [31], a recent method for adding spatial conditioning to large pretrained text-to-image diffusion models. Training/implementation details for each are in Appendix A.1; note that we use the standard implementation of SPADE rather than the recent brain MRI SPADE model [5,4] because its modifications to SPADE do not apply to our setting (see Appendix A.1), and that we follow ControlNet's guidelines for adapting it to medical images. We show example generated images from all models in Fig. 2 (using masks randomly sampled from the test set); more are provided in Appendix B."
|
| 538 |
+
},
|
| 539 |
+
{
|
| 540 |
+
"type": "text",
|
| 541 |
+
"bbox": [
|
| 542 |
+
0.214,
|
| 543 |
+
0.689,
|
| 544 |
+
0.788,
|
| 545 |
+
0.841
|
| 546 |
+
],
|
| 547 |
+
"angle": 0,
|
| 548 |
+
"content": "Evaluating Faithfulness of Generated Images to Input Masks. To measure how well our model follows an input mask for image generation, we use an auxiliary segmentation network trained on the real training set (a standard UNet; training details in Appendix A.2), to predict segmentations \\( m_{gen}^{pred} \\) on images that were generated from the masks \\( m \\) in the test set, and measure their overlap with (a) \\( m \\) and (b) the model's predicted segmentations \\( m_{real}^{pred} \\) for the input masks' original corresponding real images, similar to metrics used in [19]. Our model's generated images have high overlap for both metrics (\\( > 0.85 \\) Dice coeff., Table 1), showing that our model closely followed the input masks when generating the images, and also outperformed the competing methods."
|
| 549 |
+
}
|
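For reference, the Dice overlap behind both faithfulness metrics can be computed per class and averaged; a small hedged sketch (function name ours):

```python
import torch

def multiclass_dice(pred, target, num_classes, eps=1e-8):
    """Mean Dice coefficient over non-background classes for integer label maps."""
    dices = []
    for c in range(1, num_classes):
        p, t = (pred == c).float(), (target == c).float()
        inter = (p * t).sum()
        dices.append((2.0 * inter + eps) / (p.sum() + t.sum() + eps))
    return torch.stack(dices).mean()
```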
| 550 |
+
],
|
| 551 |
+
[
|
| 552 |
+
{
|
| 553 |
+
"type": "page_number",
|
| 554 |
+
"bbox": [
|
| 555 |
+
0.218,
|
| 556 |
+
0.116,
|
| 557 |
+
0.23,
|
| 558 |
+
0.127
|
| 559 |
+
],
|
| 560 |
+
"angle": 0,
|
| 561 |
+
"content": "6"
|
| 562 |
+
},
|
| 563 |
+
{
|
| 564 |
+
"type": "header",
|
| 565 |
+
"bbox": [
|
| 566 |
+
0.272,
|
| 567 |
+
0.115,
|
| 568 |
+
0.368,
|
| 569 |
+
0.128
|
| 570 |
+
],
|
| 571 |
+
"angle": 0,
|
| 572 |
+
"content": "N. Konz et al."
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "image",
|
| 576 |
+
"bbox": [
|
| 577 |
+
0.253,
|
| 578 |
+
0.148,
|
| 579 |
+
0.747,
|
| 580 |
+
0.391
|
| 581 |
+
],
|
| 582 |
+
"angle": 0,
|
| 583 |
+
"content": null
|
| 584 |
+
},
|
| 585 |
+
{
|
| 586 |
+
"type": "image_caption",
|
| 587 |
+
"bbox": [
|
| 588 |
+
0.214,
|
| 589 |
+
0.409,
|
| 590 |
+
0.788,
|
| 591 |
+
0.493
|
| 592 |
+
],
|
| 593 |
+
"angle": 0,
|
| 594 |
+
"content": "Fig.2. Visual comparison of our model (SegGuidedDiff, or \"Seg-Diff\" for short) to existing segmentation-conditional image generation models. For breast MRI, the breast, BV, and FGT segmentations are shown as white, red, and blue, respectively, while for CT, the liver, bladder, lungs, kidneys, and bone are in maroon, orange, pink, red, and white, respectively. \"MAT\" = max ablated training, \"STD\" = our standard method."
|
| 595 |
+
},
|
| 596 |
+
{
|
| 597 |
+
"type": "text",
|
| 598 |
+
"bbox": [
|
| 599 |
+
0.214,
|
| 600 |
+
0.522,
|
| 601 |
+
0.788,
|
| 602 |
+
0.657
|
| 603 |
+
],
|
| 604 |
+
"angle": 0,
|
| 605 |
+
"content": "Evaluating Generated Image Quality. We first attempted to use the common Fréchet Inception Distance (FID) [8] as a metric for quality/realism of generated image features compared to real data, via a CNN image encoder trained on the corresponding dataset. We observed that samples generated from both our segmentation-guided and standard unconditional diffusion models achieved potentially promising results, (e.g., breast MRI feature \\(\\mathrm{FID} \\simeq 0.5\\)), yet CNN feature-based metrics like FID fail to capture the global feature of anatomical realism that can differ in images generated by these models (e.g., fibroglandular tissue as shown in Fig. 1), so we caution using such metrics."
|
| 606 |
+
},
|
| 607 |
+
{
|
| 608 |
+
"type": "text",
|
| 609 |
+
"bbox": [
|
| 610 |
+
0.214,
|
| 611 |
+
0.66,
|
| 612 |
+
0.788,
|
| 613 |
+
0.841
|
| 614 |
+
],
|
| 615 |
+
"angle": 0,
|
| 616 |
+
"content": "Instead, we propose to more precisely measure anatomical realism by determining how well the aforementioned auxiliary segmentation models for the objects of interest can be trained solely on these synthetic images to be able to generalize to real data, using the input masks as targets. We compare the performance of the segmentation models trained on (a) the real held-out training set (Sec. 2) and (b) the set of images generated from all masks corresponding to these images. We split the real test set in half (by patient) into a validation set and a test set to use for these models. The results for this are in Table 2; we see that for both datasets, the segmentation network trained on our model's synthetic data barely performs worse (by only \\(\\leq 0.04\\) Dice) than the network trained on real data, implying that our generated images are both highly realistic and faithful to the input masks (especially considering certain objects'"
|
| 617 |
+
}
|
| 618 |
+
],
|
| 619 |
+
[
|
| 620 |
+
{
|
| 621 |
+
"type": "header",
|
| 622 |
+
"bbox": [
|
| 623 |
+
0.292,
|
| 624 |
+
0.115,
|
| 625 |
+
0.733,
|
| 626 |
+
0.129
|
| 627 |
+
],
|
| 628 |
+
"angle": 0,
|
| 629 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 630 |
+
},
|
| 631 |
+
{
|
| 632 |
+
"type": "page_number",
|
| 633 |
+
"bbox": [
|
| 634 |
+
0.775,
|
| 635 |
+
0.117,
|
| 636 |
+
0.785,
|
| 637 |
+
0.127
|
| 638 |
+
],
|
| 639 |
+
"angle": 0,
|
| 640 |
+
"content": "7"
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "table_caption",
|
| 644 |
+
"bbox": [
|
| 645 |
+
0.214,
|
| 646 |
+
0.143,
|
| 647 |
+
0.788,
|
| 648 |
+
0.2
|
| 649 |
+
],
|
| 650 |
+
"angle": 0,
|
| 651 |
+
"content": "Table 1. Faithfulness of generated images to input masks. \\( m \\) denotes input masks, and \\( {m}_{\\text{gen }}^{\\text{pred }} \\) and \\( {m}_{\\text{real }}^{\\text{pred }} \\) denote the masks predicted for (a) the generated images and (b) the real images corresponding to the input masks, respectively, by an auxiliary segmentation model. Best-performing is shown in bold, and second best is underlined."
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "table",
|
| 655 |
+
"bbox": [
|
| 656 |
+
0.223,
|
| 657 |
+
0.21,
|
| 658 |
+
0.776,
|
| 659 |
+
0.301
|
| 660 |
+
],
|
| 661 |
+
"angle": 0,
|
| 662 |
+
"content": "<table><tr><td></td><td colspan=\"2\">Breast MRI</td><td colspan=\"2\">CT Organ</td></tr><tr><td>Model</td><td>Dice(mpredgen,m)</td><td>Dice(mpredgen,mreal)</td><td>Dice(mpredgen,m)</td><td>Dice(mpredgen,mreal)</td></tr><tr><td>ControlNet</td><td>0.3636</td><td>0.3604</td><td>0.1132</td><td>0.1126</td></tr><tr><td>SPADE</td><td>0.8473</td><td>0.8477</td><td>0.8771</td><td>0.8603</td></tr><tr><td>Ours</td><td>0.9027</td><td>0.8593</td><td>0.8980</td><td>0.8797</td></tr></table>"
|
| 663 |
+
},
|
| 664 |
+
{
|
| 665 |
+
"type": "text",
|
| 666 |
+
"bbox": [
|
| 667 |
+
0.214,
|
| 668 |
+
0.329,
|
| 669 |
+
0.784,
|
| 670 |
+
0.359
|
| 671 |
+
],
|
| 672 |
+
"angle": 0,
|
| 673 |
+
"content": "segmentation difficulty). Our method is on par with SPADE for CT Organ, and slightly worse for breast MRI, while outperforming ControlNet in both cases."
|
| 674 |
+
},
|
| 675 |
+
{
|
| 676 |
+
"type": "table_caption",
|
| 677 |
+
"bbox": [
|
| 678 |
+
0.214,
|
| 679 |
+
0.383,
|
| 680 |
+
0.788,
|
| 681 |
+
0.426
|
| 682 |
+
],
|
| 683 |
+
"angle": 0,
|
| 684 |
+
"content": "Table 2. Quality of generated images. Real test set performance (Dice coeff.) of a segmentation network for the objects of interest, trained on real data vs. synthetic data generated by different models."
|
| 685 |
+
},
|
| 686 |
+
{
|
| 687 |
+
"type": "table",
|
| 688 |
+
"bbox": [
|
| 689 |
+
0.306,
|
| 690 |
+
0.436,
|
| 691 |
+
0.694,
|
| 692 |
+
0.509
|
| 693 |
+
],
|
| 694 |
+
"angle": 0,
|
| 695 |
+
"content": "<table><tr><td rowspan=\"2\"></td><td rowspan=\"2\">Real training set</td><td colspan=\"3\">Synthetic training set:</td></tr><tr><td>ControlNet</td><td>SPADE</td><td>Ours</td></tr><tr><td>Breast MRI</td><td>0.8376</td><td>0.7570</td><td>0.8333</td><td>0.7934</td></tr><tr><td>CT Organ</td><td>0.9075</td><td>0.0000</td><td>0.8932</td><td>0.8981</td></tr></table>"
|
| 696 |
+
},
|
| 697 |
+
{
|
| 698 |
+
"type": "text",
|
| 699 |
+
"bbox": [
|
| 700 |
+
0.214,
|
| 701 |
+
0.536,
|
| 702 |
+
0.788,
|
| 703 |
+
0.719
|
| 704 |
+
],
|
| 705 |
+
"angle": 0,
|
| 706 |
+
"content": "ControlNet performed poorly for all metrics because it failed to follow input masks closely (if at all) due to its limitations and untested nature of adapting to medical images; more details are in Appendix A.1. We also see that our mask-ablated-trained model (Sec 1.3) follows input masks less strictly than our standard model (Fig. 2), especially for the dataset with more object classes (CT Organ): for breast MRI the model obtained faithfulness metrics (Table 1) of \\(\\mathrm{Dice}(m_{gen}^{pred}, m) = 0.6589\\) and \\(\\mathrm{Dice}(m_{gen}^{pred}, m_{real}^{pred}) = 0.6684\\), and quality metric (Table 2) of 0.7478, and 0.5952, 0.5963 and 0.7564 for CT Organ, respectively. This is likely because the diversity of object class combinations seen in training scales exponentially with the number of classes in order to prepare the model for all possible combinations (Sec. 1.3), so the model is \"spread thin\". However, this relaxation of the mask constraint has its own benefits, detailed next."
|
| 707 |
+
},
|
| 708 |
+
{
|
| 709 |
+
"type": "title",
|
| 710 |
+
"bbox": [
|
| 711 |
+
0.215,
|
| 712 |
+
0.741,
|
| 713 |
+
0.577,
|
| 714 |
+
0.758
|
| 715 |
+
],
|
| 716 |
+
"angle": 0,
|
| 717 |
+
"content": "3.2 Advantages of Mask-Ablated-Training"
|
| 718 |
+
},
|
| 719 |
+
{
|
| 720 |
+
"type": "text",
|
| 721 |
+
"bbox": [
|
| 722 |
+
0.214,
|
| 723 |
+
0.765,
|
| 724 |
+
0.788,
|
| 725 |
+
0.843
|
| 726 |
+
],
|
| 727 |
+
"angle": 0,
|
| 728 |
+
"content": "Sampling from Ablated Masks. The direct benefit of mask ablated training (MAT) is its ability to generate images from masks with classes missing. In Fig. 3 we demonstrate the effect on generated images of ablating certain classes from an input mask for breast MRI (more examples, including for CT Organ, are provided in Appendix B). For example, we see that constraining the BV+FGT"
|
| 729 |
+
}
|
| 730 |
+
],
|
| 731 |
+
[
|
| 732 |
+
{
|
| 733 |
+
"type": "page_number",
|
| 734 |
+
"bbox": [
|
| 735 |
+
0.218,
|
| 736 |
+
0.116,
|
| 737 |
+
0.23,
|
| 738 |
+
0.127
|
| 739 |
+
],
|
| 740 |
+
"angle": 0,
|
| 741 |
+
"content": "8"
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "header",
|
| 745 |
+
"bbox": [
|
| 746 |
+
0.272,
|
| 747 |
+
0.115,
|
| 748 |
+
0.368,
|
| 749 |
+
0.128
|
| 750 |
+
],
|
| 751 |
+
"angle": 0,
|
| 752 |
+
"content": "N. Konz et al."
|
| 753 |
+
},
|
| 754 |
+
{
|
| 755 |
+
"type": "text",
|
| 756 |
+
"bbox": [
|
| 757 |
+
0.216,
|
| 758 |
+
0.147,
|
| 759 |
+
0.785,
|
| 760 |
+
0.178
|
| 761 |
+
],
|
| 762 |
+
"angle": 0,
|
| 763 |
+
"content": "in breast MRI, yet keeping the breast free, results in images that have the latter two classes pre-registered while the breast shape varies, and vice-versa."
|
| 764 |
+
},
|
| 765 |
+
{
|
| 766 |
+
"type": "image",
|
| 767 |
+
"bbox": [
|
| 768 |
+
0.241,
|
| 769 |
+
0.217,
|
| 770 |
+
0.446,
|
| 771 |
+
0.392
|
| 772 |
+
],
|
| 773 |
+
"angle": 0,
|
| 774 |
+
"content": null
|
| 775 |
+
},
|
| 776 |
+
{
|
| 777 |
+
"type": "image",
|
| 778 |
+
"bbox": [
|
| 779 |
+
0.487,
|
| 780 |
+
0.217,
|
| 781 |
+
0.747,
|
| 782 |
+
0.394
|
| 783 |
+
],
|
| 784 |
+
"angle": 0,
|
| 785 |
+
"content": null
|
| 786 |
+
},
|
| 787 |
+
{
|
| 788 |
+
"type": "image_caption",
|
| 789 |
+
"bbox": [
|
| 790 |
+
0.214,
|
| 791 |
+
0.407,
|
| 792 |
+
0.763,
|
| 793 |
+
0.449
|
| 794 |
+
],
|
| 795 |
+
"angle": 0,
|
| 796 |
+
"content": "Fig. 3. Generating images (even rows) Fig.4. Using our model to generate imfrom masks with classes removed (odd ages that are anatomically similar to real rows), shown for breast MRI. images."
|
| 797 |
+
},
|
| 798 |
+
{
|
| 799 |
+
"type": "text",
|
| 800 |
+
"bbox": [
|
| 801 |
+
0.214,
|
| 802 |
+
0.542,
|
| 803 |
+
0.788,
|
| 804 |
+
0.652
|
| 805 |
+
],
|
| 806 |
+
"angle": 0,
|
| 807 |
+
"content": "Adjustable Anatomical Similarity of Generated Images to Real Images. One application of our model is the adjustable generation of images that are anatomically similar to some real image, which is not immediately possible for existing state-of-the-art GAN-based methods like SPADE. Consider some real image \\( x_0 \\) with anatomical mask \\( m \\). We can adjust the anatomical similarity to \\( x_0 \\) of an image generated from \\( m \\) with our model by interpolating between the synthetic image and the real image in the model's latent space, as follows."
|
| 808 |
+
},
|
| 809 |
+
{
|
| 810 |
+
"type": "text",
|
| 811 |
+
"bbox": [
|
| 812 |
+
0.214,
|
| 813 |
+
0.656,
|
| 814 |
+
0.788,
|
| 815 |
+
0.844
|
| 816 |
+
],
|
| 817 |
+
"angle": 0,
|
| 818 |
+
"content": "If the generation/de-noising process given \\( m \\) (starting at timestep \\( t = T \\)) is halted at some intermediate step \\( t = \\tilde{t} \\) (we use \\( \\tilde{t} = 240 \\)), we obtain a latent representation \\( x_{\\tilde{t}}' \\) of the generating image. We can then convert the real image \\( x_0 \\) to the same latent space by applying the noisiging process to \\( x_0 \\) from \\( t = 0 \\) to \\( t = \\tilde{t} \\) to obtain \\( x_{\\tilde{t}} \\). Next, the features of the two images can be mixed via the interpolated latent \\( x_{\\tilde{t}}^{\\lambda} := (1 - \\lambda)x_{\\tilde{t}} + \\lambda x_{\\tilde{t}}' \\), where \\( \\lambda \\in (0,1] \\) controls the similarity of the mixed features to those of the real image. From here, \\( x_{\\tilde{t}}^{\\lambda} \\) can be denoised back to image space to obtain the interpolated image \\( x_0^{\\lambda} \\). An advantage of using our mask-ablated-trained model to generate \\( x_0^{\\lambda} \\) is that certain objects can be constrained while others are free to vary, resulting in higher, yet adjustable, semantic diversity. We demonstrate this with various \\( \\lambda \\) in Figure 4, with only FGT+BV constrained for breast MRI, and only bone constrained for CT Organ."
|
| 819 |
+
}
|
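The three steps of this interpolation can be sketched as follows; a conceptual sketch only, where `denoise_from` (a helper that runs the mask-conditional reverse process between two timesteps) and the other names are assumptions rather than the released API:

```python
import torch

def interpolate_generation(eps_model, x0_real, mask, alphas_bar,
                           denoise_from, lam=0.5, t_tilde=240, T=1000):
    """Generate an image whose anatomical similarity to x0_real is set by
    lam in (0, 1]: smaller lam -> closer to the real image."""
    # 1) halt a fresh mask-conditional generation at t = t_tilde
    x_gen = denoise_from(eps_model, torch.randn_like(x0_real), mask,
                         t_start=T, t_end=t_tilde)
    # 2) noise the real image to the same latent space (forward process to t_tilde)
    abar = alphas_bar[t_tilde]
    x_real = abar.sqrt() * x0_real + (1.0 - abar).sqrt() * torch.randn_like(x0_real)
    # 3) mix the latents and denoise back to image space
    x_mix = (1.0 - lam) * x_real + lam * x_gen
    return denoise_from(eps_model, x_mix, mask, t_start=t_tilde, t_end=0)
```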
| 820 |
+
],
|
| 821 |
+
[
|
| 822 |
+
{
|
| 823 |
+
"type": "header",
|
| 824 |
+
"bbox": [
|
| 825 |
+
0.292,
|
| 826 |
+
0.115,
|
| 827 |
+
0.731,
|
| 828 |
+
0.129
|
| 829 |
+
],
|
| 830 |
+
"angle": 0,
|
| 831 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 832 |
+
},
|
| 833 |
+
{
|
| 834 |
+
"type": "page_number",
|
| 835 |
+
"bbox": [
|
| 836 |
+
0.775,
|
| 837 |
+
0.117,
|
| 838 |
+
0.785,
|
| 839 |
+
0.127
|
| 840 |
+
],
|
| 841 |
+
"angle": 0,
|
| 842 |
+
"content": "9"
|
| 843 |
+
},
|
| 844 |
+
{
|
| 845 |
+
"type": "title",
|
| 846 |
+
"bbox": [
|
| 847 |
+
0.218,
|
| 848 |
+
0.145,
|
| 849 |
+
0.327,
|
| 850 |
+
0.161
|
| 851 |
+
],
|
| 852 |
+
"angle": 0,
|
| 853 |
+
"content": "Conclusion"
|
| 854 |
+
},
|
| 855 |
+
{
|
| 856 |
+
"type": "text",
|
| 857 |
+
"bbox": [
|
| 858 |
+
0.214,
|
| 859 |
+
0.177,
|
| 860 |
+
0.808,
|
| 861 |
+
0.281
|
| 862 |
+
],
|
| 863 |
+
"angle": 0,
|
| 864 |
+
"content": "Our segmentation-guided diffusion model enables superior anatomically-controllable medical image generation, which has many potential applications, including (1) the generation of anatomically rare cases to augment some imbalanced dataset, (2) the synthesis of anatomically-paired/pre-registered data, and (3) cross-modality anatomy translation, where our model could be trained on images and masks from one sequence (e.g., T2 MRI), and then supplied with masks from another sequence (e.g., T1 MRI) to create new T2 images from the T1 masks."
|
| 865 |
+
},
|
| 866 |
+
{
|
| 867 |
+
"type": "text",
|
| 868 |
+
"bbox": [
|
| 869 |
+
0.214,
|
| 870 |
+
0.282,
|
| 871 |
+
0.787,
|
| 872 |
+
0.372
|
| 873 |
+
],
|
| 874 |
+
"angle": 0,
|
| 875 |
+
"content": "However, some limitations of this study are that we did not consider full 3D generation, and we did not compare to ControlNet-like latent diffusion models [23] trained completely from scratch. For future work, we are interested in further improving generated image quality, incorporating image-level class guidance [10] either for pathological or domain-related variables [13], and extending our model to segmentation-guided image translation."
|
| 876 |
+
},
|
| 877 |
+
{
|
| 878 |
+
"type": "text",
|
| 879 |
+
"bbox": [
|
| 880 |
+
0.214,
|
| 881 |
+
0.387,
|
| 882 |
+
0.787,
|
| 883 |
+
0.456
|
| 884 |
+
],
|
| 885 |
+
"angle": 0,
|
| 886 |
+
"content": "Acknowledgments. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R01EB031575. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health."
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "text",
|
| 890 |
+
"bbox": [
|
| 891 |
+
0.216,
|
| 892 |
+
0.469,
|
| 893 |
+
0.676,
|
| 894 |
+
0.484
|
| 895 |
+
],
|
| 896 |
+
"angle": 0,
|
| 897 |
+
"content": "Disclosure of Interests. The authors have no competing interests."
|
| 898 |
+
},
|
| 899 |
+
{
|
| 900 |
+
"type": "title",
|
| 901 |
+
"bbox": [
|
| 902 |
+
0.216,
|
| 903 |
+
0.507,
|
| 904 |
+
0.326,
|
| 905 |
+
0.523
|
| 906 |
+
],
|
| 907 |
+
"angle": 0,
|
| 908 |
+
"content": "References"
|
| 909 |
+
},
|
| 910 |
+
{
|
| 911 |
+
"type": "ref_text",
|
| 912 |
+
"bbox": [
|
| 913 |
+
0.223,
|
| 914 |
+
0.537,
|
| 915 |
+
0.775,
|
| 916 |
+
0.551
|
| 917 |
+
],
|
| 918 |
+
"angle": 0,
|
| 919 |
+
"content": "1. Partial label learning: Taxonomy, analysis and outlook. Neural Networks (2023)"
|
| 920 |
+
},
|
| 921 |
+
{
|
| 922 |
+
"type": "ref_text",
|
| 923 |
+
"bbox": [
|
| 924 |
+
0.223,
|
| 925 |
+
0.551,
|
| 926 |
+
0.786,
|
| 927 |
+
0.592
|
| 928 |
+
],
|
| 929 |
+
"angle": 0,
|
| 930 |
+
"content": "2. Cao, S., Konz, N., Duncan, J., Mazurowski, M.A.: Deep learning for breast mri style transfer with limited training data. Journal of Digital Imaging 36(2), 666-678 (2023)"
|
| 931 |
+
},
|
| 932 |
+
{
|
| 933 |
+
"type": "ref_text",
|
| 934 |
+
"bbox": [
|
| 935 |
+
0.223,
|
| 936 |
+
0.592,
|
| 937 |
+
0.787,
|
| 938 |
+
0.647
|
| 939 |
+
],
|
| 940 |
+
"angle": 0,
|
| 941 |
+
"content": "3. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8789-8797 (2018)"
|
| 942 |
+
},
|
| 943 |
+
{
|
| 944 |
+
"type": "ref_text",
|
| 945 |
+
"bbox": [
|
| 946 |
+
0.223,
|
| 947 |
+
0.647,
|
| 948 |
+
0.787,
|
| 949 |
+
0.702
|
| 950 |
+
],
|
| 951 |
+
"angle": 0,
|
| 952 |
+
"content": "4. Fernandez, V., Pinaya, W.H.L., Borges, P., Graham, M.S., Vercauteren, T., Cardoso, M.J.: A 3D Generative Model of Pathological Multi-modal MR Images and Segmentations. In: Deep Generative Models. pp. 132-142. Springer Nature Switzerland, Cham (2024)"
|
| 953 |
+
},
|
| 954 |
+
{
|
| 955 |
+
"type": "ref_text",
|
| 956 |
+
"bbox": [
|
| 957 |
+
0.223,
|
| 958 |
+
0.702,
|
| 959 |
+
0.787,
|
| 960 |
+
0.758
|
| 961 |
+
],
|
| 962 |
+
"angle": 0,
|
| 963 |
+
"content": "5. Fernandez, V., Pinaya, W.H.L., Borges, P., Tudosiu, P.D., Graham, M.S., Vercauteren, T., Cardoso, M.J.: Can segmentation models be trained with fully synthetically generated data? In: International Workshop on Simulation and Synthesis in Medical Imaging. pp. 79-90. Springer (2022)"
|
| 964 |
+
},
|
| 965 |
+
{
|
| 966 |
+
"type": "ref_text",
|
| 967 |
+
"bbox": [
|
| 968 |
+
0.223,
|
| 969 |
+
0.758,
|
| 970 |
+
0.787,
|
| 971 |
+
0.799
|
| 972 |
+
],
|
| 973 |
+
"angle": 0,
|
| 974 |
+
"content": "6. Gong, K., Johnson, K., El Fakhri, G., Li, Q., Pan, T.: Pet image denoising based on denoising diffusion probabilistic model. European Journal of Nuclear Medicine and Molecular Imaging pp. 1-11 (2023)"
|
| 975 |
+
},
|
| 976 |
+
{
|
| 977 |
+
"type": "ref_text",
|
| 978 |
+
"bbox": [
|
| 979 |
+
0.223,
|
| 980 |
+
0.799,
|
| 981 |
+
0.787,
|
| 982 |
+
0.84
|
| 983 |
+
],
|
| 984 |
+
"angle": 0,
|
| 985 |
+
"content": "7. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 16000-16009 (2022)"
|
| 986 |
+
},
|
| 987 |
+
{
|
| 988 |
+
"type": "list",
|
| 989 |
+
"bbox": [
|
| 990 |
+
0.223,
|
| 991 |
+
0.537,
|
| 992 |
+
0.787,
|
| 993 |
+
0.84
|
| 994 |
+
],
|
| 995 |
+
"angle": 0,
|
| 996 |
+
"content": null
|
| 997 |
+
}
|
| 998 |
+
],
|
| 999 |
+
[
|
| 1000 |
+
{
|
| 1001 |
+
"type": "page_number",
|
| 1002 |
+
"bbox": [
|
| 1003 |
+
0.218,
|
| 1004 |
+
0.116,
|
| 1005 |
+
0.236,
|
| 1006 |
+
0.127
|
| 1007 |
+
],
|
| 1008 |
+
"angle": 0,
|
| 1009 |
+
"content": "10"
|
| 1010 |
+
},
|
| 1011 |
+
{
|
| 1012 |
+
"type": "header",
|
| 1013 |
+
"bbox": [
|
| 1014 |
+
0.272,
|
| 1015 |
+
0.115,
|
| 1016 |
+
0.367,
|
| 1017 |
+
0.127
|
| 1018 |
+
],
|
| 1019 |
+
"angle": 0,
|
| 1020 |
+
"content": "N. Konz et al."
|
| 1021 |
+
},
|
| 1022 |
+
{
|
| 1023 |
+
"type": "ref_text",
|
| 1024 |
+
"bbox": [
|
| 1025 |
+
0.223,
|
| 1026 |
+
0.147,
|
| 1027 |
+
0.787,
|
| 1028 |
+
0.189
|
| 1029 |
+
],
|
| 1030 |
+
"angle": 0,
|
| 1031 |
+
"content": "8. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)"
|
| 1032 |
+
},
|
| 1033 |
+
{
|
| 1034 |
+
"type": "ref_text",
|
| 1035 |
+
"bbox": [
|
| 1036 |
+
0.225,
|
| 1037 |
+
0.191,
|
| 1038 |
+
0.786,
|
| 1039 |
+
0.218
|
| 1040 |
+
],
|
| 1041 |
+
"angle": 0,
|
| 1042 |
+
"content": "9. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)"
|
| 1043 |
+
},
|
| 1044 |
+
{
|
| 1045 |
+
"type": "ref_text",
|
| 1046 |
+
"bbox": [
|
| 1047 |
+
0.218,
|
| 1048 |
+
0.22,
|
| 1049 |
+
0.786,
|
| 1050 |
+
0.247
|
| 1051 |
+
],
|
| 1052 |
+
"angle": 0,
|
| 1053 |
+
"content": "10. Ho, J., Salimans, T.: Classifier-free diffusion guidance. In: NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications (2021)"
|
| 1054 |
+
},
|
| 1055 |
+
{
|
| 1056 |
+
"type": "ref_text",
|
| 1057 |
+
"bbox": [
|
| 1058 |
+
0.218,
|
| 1059 |
+
0.249,
|
| 1060 |
+
0.787,
|
| 1061 |
+
0.289
|
| 1062 |
+
],
|
| 1063 |
+
"angle": 0,
|
| 1064 |
+
"content": "11. Kazerouni, A., Aghdam, E.K., Heidari, M., Azad, R., Fayyaz, M., Hacihaliloglu, I., Merhof, D.: Diffusion models in medical imaging: A comprehensive survey. Medical Image Analysis p. 102846 (2023)"
|
| 1065 |
+
},
|
| 1066 |
+
{
|
| 1067 |
+
"type": "ref_text",
|
| 1068 |
+
"bbox": [
|
| 1069 |
+
0.219,
|
| 1070 |
+
0.291,
|
| 1071 |
+
0.787,
|
| 1072 |
+
0.346
|
| 1073 |
+
],
|
| 1074 |
+
"angle": 0,
|
| 1075 |
+
"content": "12. Khader, F., Müller-Franzes, G., Tayebi Arasteh, S., Han, T., Haarburger, C., Schulze-Hagen, M., Schad, P., Engelhardt, S., Baeßler, B., Foersch, S., et al.: Denoising diffusion probabilistic models for 3d medical image generation. Scientific Reports 13(1), 7303 (2023)"
|
| 1076 |
+
},
|
| 1077 |
+
{
|
| 1078 |
+
"type": "ref_text",
|
| 1079 |
+
"bbox": [
|
| 1080 |
+
0.219,
|
| 1081 |
+
0.348,
|
| 1082 |
+
0.787,
|
| 1083 |
+
0.388
|
| 1084 |
+
],
|
| 1085 |
+
"angle": 0,
|
| 1086 |
+
"content": "13. Konz, N., Mazurowski, M.A.: Reverse engineering breast mris: Predicting acquisition parameters directly from images. In: Medical Imaging with Deep Learning (2023)"
|
| 1087 |
+
},
|
| 1088 |
+
{
|
| 1089 |
+
"type": "ref_text",
|
| 1090 |
+
"bbox": [
|
| 1091 |
+
0.219,
|
| 1092 |
+
0.39,
|
| 1093 |
+
0.787,
|
| 1094 |
+
0.457
|
| 1095 |
+
],
|
| 1096 |
+
"angle": 0,
|
| 1097 |
+
"content": "14. Lew, C.O., Harouni, M., Kirksey, E.R., Kang, E.J., Dong, H., Gu, H., Grimm, L.J., Walsh, R., Lowell, D.A., Mazurowski, M.A.: A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast mri. Scientific reports 14 1, 5383 (2024), https://api-semanticscholar.org/CorpusID:268251677"
|
| 1098 |
+
},
|
| 1099 |
+
{
|
| 1100 |
+
"type": "ref_text",
|
| 1101 |
+
"bbox": [
|
| 1102 |
+
0.219,
|
| 1103 |
+
0.46,
|
| 1104 |
+
0.786,
|
| 1105 |
+
0.487
|
| 1106 |
+
],
|
| 1107 |
+
"angle": 0,
|
| 1108 |
+
"content": "15. Loshchilov, I., Hutter, F.: Sgdr: Stochastic gradient descent with warm restarts. In: International Conference on Learning Representations (2016)"
|
| 1109 |
+
},
|
| 1110 |
+
{
|
| 1111 |
+
"type": "ref_text",
|
| 1112 |
+
"bbox": [
|
| 1113 |
+
0.219,
|
| 1114 |
+
0.489,
|
| 1115 |
+
0.787,
|
| 1116 |
+
0.515
|
| 1117 |
+
],
|
| 1118 |
+
"angle": 0,
|
| 1119 |
+
"content": "16. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2018)"
|
| 1120 |
+
},
|
| 1121 |
+
{
|
| 1122 |
+
"type": "ref_text",
|
| 1123 |
+
"bbox": [
|
| 1124 |
+
0.219,
|
| 1125 |
+
0.517,
|
| 1126 |
+
0.787,
|
| 1127 |
+
0.544
|
| 1128 |
+
],
|
| 1129 |
+
"angle": 0,
|
| 1130 |
+
"content": "17. Lyu, Q., Wang, G.: Conversion between ct and mri images using diffusion and score-matching models. arXiv preprint arXiv:2209.12104 (2022)"
|
| 1131 |
+
},
|
| 1132 |
+
{
|
| 1133 |
+
"type": "ref_text",
|
| 1134 |
+
"bbox": [
|
| 1135 |
+
0.219,
|
| 1136 |
+
0.546,
|
| 1137 |
+
0.787,
|
| 1138 |
+
0.573
|
| 1139 |
+
],
|
| 1140 |
+
"angle": 0,
|
| 1141 |
+
"content": "18. Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: International Conference on Machine Learning. pp. 8162-8171. PMLR (2021)"
|
| 1142 |
+
},
|
| 1143 |
+
{
|
| 1144 |
+
"type": "ref_text",
|
| 1145 |
+
"bbox": [
|
| 1146 |
+
0.219,
|
| 1147 |
+
0.575,
|
| 1148 |
+
0.787,
|
| 1149 |
+
0.615
|
| 1150 |
+
],
|
| 1151 |
+
"angle": 0,
|
| 1152 |
+
"content": "19. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)"
|
| 1153 |
+
},
|
| 1154 |
+
{
|
| 1155 |
+
"type": "ref_text",
|
| 1156 |
+
"bbox": [
|
| 1157 |
+
0.219,
|
| 1158 |
+
0.617,
|
| 1159 |
+
0.787,
|
| 1160 |
+
0.685
|
| 1161 |
+
],
|
| 1162 |
+
"angle": 0,
|
| 1163 |
+
"content": "20. Pinaya, W.H., Graham, M.S., Gray, R., Da Costa, P.F., Tudosiu, P.D., Wright, P., Mah, Y.H., MacKinnon, A.D., Teo, J.T., Jager, R., et al.: Fast unsupervised brain anomaly detection and segmentation with diffusion models. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 705-714. Springer (2022)"
|
| 1164 |
+
},
|
| 1165 |
+
{
|
| 1166 |
+
"type": "ref_text",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
0.219,
|
| 1169 |
+
0.687,
|
| 1170 |
+
0.787,
|
| 1171 |
+
0.741
|
| 1172 |
+
],
|
| 1173 |
+
"angle": 0,
|
| 1174 |
+
"content": "21. Pinaya, W.H., Tudosiu, P.D., Dafflon, J., Da Costa, P.F., Fernandez, V., Nachev, P., Ourselin, S., Cardoso, M.J.: Brain imaging generation with latent diffusion models. In: MICCAI Workshop on Deep Generative Models. pp. 117-126. Springer (2022)"
|
| 1175 |
+
},
|
| 1176 |
+
{
|
| 1177 |
+
"type": "ref_text",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
0.219,
|
| 1180 |
+
0.743,
|
| 1181 |
+
0.787,
|
| 1182 |
+
0.784
|
| 1183 |
+
],
|
| 1184 |
+
"angle": 0,
|
| 1185 |
+
"content": "22. Rister, B., Yi, D., Shivakumar, K., Nobashi, T., Rubin, D.L.: Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data 7(1), 381 (2020)"
|
| 1186 |
+
},
|
| 1187 |
+
{
|
| 1188 |
+
"type": "ref_text",
|
| 1189 |
+
"bbox": [
|
| 1190 |
+
0.219,
|
| 1191 |
+
0.786,
|
| 1192 |
+
0.787,
|
| 1193 |
+
0.84
|
| 1194 |
+
],
|
| 1195 |
+
"angle": 0,
|
| 1196 |
+
"content": "23. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10684-10695 (June 2022)"
|
| 1197 |
+
},
|
| 1198 |
+
{
|
| 1199 |
+
"type": "list",
|
| 1200 |
+
"bbox": [
|
| 1201 |
+
0.218,
|
| 1202 |
+
0.147,
|
| 1203 |
+
0.787,
|
| 1204 |
+
0.84
|
| 1205 |
+
],
|
| 1206 |
+
"angle": 0,
|
| 1207 |
+
"content": null
|
| 1208 |
+
}
|
| 1209 |
+
],
|
| 1210 |
+
[
|
| 1211 |
+
{
|
| 1212 |
+
"type": "header",
|
| 1213 |
+
"bbox": [
|
| 1214 |
+
0.292,
|
| 1215 |
+
0.115,
|
| 1216 |
+
0.732,
|
| 1217 |
+
0.129
|
| 1218 |
+
],
|
| 1219 |
+
"angle": 0,
|
| 1220 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 1221 |
+
},
|
| 1222 |
+
{
|
| 1223 |
+
"type": "page_number",
|
| 1224 |
+
"bbox": [
|
| 1225 |
+
0.768,
|
| 1226 |
+
0.117,
|
| 1227 |
+
0.784,
|
| 1228 |
+
0.127
|
| 1229 |
+
],
|
| 1230 |
+
"angle": 0,
|
| 1231 |
+
"content": "11"
|
| 1232 |
+
},
|
| 1233 |
+
{
|
| 1234 |
+
"type": "ref_text",
|
| 1235 |
+
"bbox": [
|
| 1236 |
+
0.214,
|
| 1237 |
+
0.147,
|
| 1238 |
+
0.788,
|
| 1239 |
+
0.203
|
| 1240 |
+
],
|
| 1241 |
+
"angle": 0,
|
| 1242 |
+
"content": "24. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234-241. Springer (2015)"
|
| 1243 |
+
},
|
| 1244 |
+
{
|
| 1245 |
+
"type": "ref_text",
|
| 1246 |
+
"bbox": [
|
| 1247 |
+
0.214,
|
| 1248 |
+
0.204,
|
| 1249 |
+
0.788,
|
| 1250 |
+
0.258
|
| 1251 |
+
],
|
| 1252 |
+
"angle": 0,
|
| 1253 |
+
"content": "25. Saha, A., Harowicz, M.R., Grimm, L.J., Kim, C.E., Ghate, S.V., Walsh, R., Mazurowski, M.A.: A machine learning approach to radiogenomics of breast cancer: a study of 922 subjects and 529 dce-mri features. British journal of cancer 119(4), 508-516 (2018)"
|
| 1254 |
+
},
|
| 1255 |
+
{
|
| 1256 |
+
"type": "ref_text",
|
| 1257 |
+
"bbox": [
|
| 1258 |
+
0.214,
|
| 1259 |
+
0.259,
|
| 1260 |
+
0.788,
|
| 1261 |
+
0.299
|
| 1262 |
+
],
|
| 1263 |
+
"angle": 0,
|
| 1264 |
+
"content": "26. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=St1giarCHLP"
|
| 1265 |
+
},
|
| 1266 |
+
{
|
| 1267 |
+
"type": "ref_text",
|
| 1268 |
+
"bbox": [
|
| 1269 |
+
0.214,
|
| 1270 |
+
0.3,
|
| 1271 |
+
0.788,
|
| 1272 |
+
0.34
|
| 1273 |
+
],
|
| 1274 |
+
"angle": 0,
|
| 1275 |
+
"content": "27. Wang, T., Zhang, T., Zhang, B., Ouyang, H., Chen, D., Chen, Q., Wen, F.: Pretraining is all you need for image-to-image translation. arXiv preprint arXiv:2205.12952 (2022)"
|
| 1276 |
+
},
|
| 1277 |
+
{
|
| 1278 |
+
"type": "ref_text",
|
| 1279 |
+
"bbox": [
|
| 1280 |
+
0.214,
|
| 1281 |
+
0.342,
|
| 1282 |
+
0.788,
|
| 1283 |
+
0.383
|
| 1284 |
+
],
|
| 1285 |
+
"angle": 0,
|
| 1286 |
+
"content": "28. Wolleb, J., Bieder, F., Sandkühler, R., Cattin, P.C.: Diffusion models for medical anomaly detection. In: International Conference on Medical image computing and computer-assisted intervention. pp. 35-45. Springer (2022)"
|
| 1287 |
+
},
|
| 1288 |
+
{
|
| 1289 |
+
"type": "ref_text",
|
| 1290 |
+
"bbox": [
|
| 1291 |
+
0.214,
|
| 1292 |
+
0.384,
|
| 1293 |
+
0.788,
|
| 1294 |
+
0.424
|
| 1295 |
+
],
|
| 1296 |
+
"angle": 0,
|
| 1297 |
+
"content": "29. Wolleb, J., Sandkuhler, R., Bieder, F., Valmaggia, P., Cattin, P.C.: Diffusion models for implicit image segmentation ensembles. In: International Conference on Medical Imaging with Deep Learning. pp. 1336-1348. PMLR (2022)"
|
| 1298 |
+
},
|
| 1299 |
+
{
|
| 1300 |
+
"type": "ref_text",
|
| 1301 |
+
"bbox": [
|
| 1302 |
+
0.214,
|
| 1303 |
+
0.425,
|
| 1304 |
+
0.788,
|
| 1305 |
+
0.493
|
| 1306 |
+
],
|
| 1307 |
+
"angle": 0,
|
| 1308 |
+
"content": "30. Yang, J., Dvornek, N.C., Zhang, F., Shapiro, J., Lin, M., Duncan, J.S.: Unsupervised domain adaptation via disentangled representations: Application to cross-modality liver segmentation. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part II 22. pp. 255-263. Springer (2019)"
|
| 1309 |
+
},
|
| 1310 |
+
{
|
| 1311 |
+
"type": "ref_text",
|
| 1312 |
+
"bbox": [
|
| 1313 |
+
0.214,
|
| 1314 |
+
0.494,
|
| 1315 |
+
0.788,
|
| 1316 |
+
0.535
|
| 1317 |
+
],
|
| 1318 |
+
"angle": 0,
|
| 1319 |
+
"content": "31. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023)"
|
| 1320 |
+
},
|
| 1321 |
+
{
|
| 1322 |
+
"type": "list",
|
| 1323 |
+
"bbox": [
|
| 1324 |
+
0.214,
|
| 1325 |
+
0.147,
|
| 1326 |
+
0.788,
|
| 1327 |
+
0.535
|
| 1328 |
+
],
|
| 1329 |
+
"angle": 0,
|
| 1330 |
+
"content": null
|
| 1331 |
+
}
|
| 1332 |
+
],
|
| 1333 |
+
[
|
| 1334 |
+
{
|
| 1335 |
+
"type": "page_number",
|
| 1336 |
+
"bbox": [
|
| 1337 |
+
0.218,
|
| 1338 |
+
0.116,
|
| 1339 |
+
0.236,
|
| 1340 |
+
0.127
|
| 1341 |
+
],
|
| 1342 |
+
"angle": 0,
|
| 1343 |
+
"content": "12"
|
| 1344 |
+
},
|
| 1345 |
+
{
|
| 1346 |
+
"type": "header",
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
0.272,
|
| 1349 |
+
0.115,
|
| 1350 |
+
0.368,
|
| 1351 |
+
0.128
|
| 1352 |
+
],
|
| 1353 |
+
"angle": 0,
|
| 1354 |
+
"content": "N. Konz et al."
|
| 1355 |
+
},
|
| 1356 |
+
{
|
| 1357 |
+
"type": "title",
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
0.217,
|
| 1360 |
+
0.145,
|
| 1361 |
+
0.7,
|
| 1362 |
+
0.163
|
| 1363 |
+
],
|
| 1364 |
+
"angle": 0,
|
| 1365 |
+
"content": "A Additional Training and Architectural Details"
|
| 1366 |
+
},
|
| 1367 |
+
{
|
| 1368 |
+
"type": "text",
|
| 1369 |
+
"bbox": [
|
| 1370 |
+
0.214,
|
| 1371 |
+
0.172,
|
| 1372 |
+
0.788,
|
| 1373 |
+
0.339
|
| 1374 |
+
],
|
| 1375 |
+
"angle": 0,
|
| 1376 |
+
"content": "Our model (segmentation-guided diffusion). The denoising model (UNet)'s encoder is constructed from six standard ResNet down-sampling blocks, with the fifth block also having spatial self-attention, with (128, 128, 256, 256, 512, 512) output channels, respectively. The decoder is simply the up-sampling reverse of the encoder. We use a standard forward process variance schedule that linearly increases from \\(\\beta_{1} = 10^{-4}\\) to \\(\\beta_{T} = 0.02\\) [9]. For training, we use the AdamW optimizer [16] and a cosine learning rate scheduler [15] with an initial learning rate of \\(10^{-4}\\), with 500 linear warm-up steps. We train for 400 epochs with a batch size of 64 (about 26 hours), and we perform all training and evaluation on four 48 GB NVIDIA A6000 GPUs. We use the Diffusers library as a backbone (https://github.com/huggingface/diffusers)."
|
| 1377 |
+
},
|
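Given the stated Diffusers backbone, the described architecture and noise schedule could plausibly be configured as below; a hedged sketch using the public `UNet2DModel` and `DDPMScheduler` APIs, with the exact attention-block placement being our reading of the text rather than the released config:

```python
from diffusers import UNet2DModel, DDPMScheduler

# six down-sampling blocks, self-attention on the fifth, mirrored decoder
model = UNet2DModel(
    sample_size=256,
    in_channels=2,    # image channel + concatenated segmentation mask
    out_channels=1,   # predicted noise for the image channel
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=("DownBlock2D",) * 4 + ("AttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "AttnUpBlock2D") + ("UpBlock2D",) * 4,
)

# linear beta schedule from 1e-4 to 0.02 over T = 1000 steps
scheduler = DDPMScheduler(num_train_timesteps=1000, beta_start=1e-4,
                          beta_end=0.02, beta_schedule="linear")
```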
| 1378 |
+
{
|
| 1379 |
+
"type": "title",
|
| 1380 |
+
"bbox": [
|
| 1381 |
+
0.216,
|
| 1382 |
+
0.356,
|
| 1383 |
+
0.431,
|
| 1384 |
+
0.372
|
| 1385 |
+
],
|
| 1386 |
+
"angle": 0,
|
| 1387 |
+
"content": "A.1 Comparison models"
|
| 1388 |
+
},
|
| 1389 |
+
{
|
| 1390 |
+
"type": "text",
|
| 1391 |
+
"bbox": [
|
| 1392 |
+
0.214,
|
| 1393 |
+
0.376,
|
| 1394 |
+
0.788,
|
| 1395 |
+
0.513
|
| 1396 |
+
],
|
| 1397 |
+
"angle": 0,
|
| 1398 |
+
"content": "SPADE. We train SPADE [19] using the default settings, with a batch size of 128 for 50 epochs. We did not adopt the changes of the recent brain MRI SPADE model [5] because they are not applicable to our datasets/task, namely: (1) the contrast-based clustering is not applicable due to us using pre-contrast MRIs or CT, (2) we work with standard categorical segmentation maps, not partial volume/probabilistic segmentation maps, so changes using the latter are not applicable, and (3) we work with independent 2D slice images, rather than full 3D volumes, so the enforcement of style and content separation via using different slices from the same volume during training is not applicable."
|
| 1399 |
+
},
|
| 1400 |
+
{
|
| 1401 |
+
"type": "text",
|
| 1402 |
+
"bbox": [
|
| 1403 |
+
0.214,
|
| 1404 |
+
0.53,
|
| 1405 |
+
0.788,
|
| 1406 |
+
0.636
|
| 1407 |
+
],
|
| 1408 |
+
"angle": 0,
|
| 1409 |
+
"content": "ControlNet. We adapted ControlNet [31] to each of our medical image datasets as was instructed at their official tutorial (https://github.com/lllyasviel/ ControlNet/blob/main/docs/train.md#sd.locked) for use with datasets that are out-of-distribution (e.g., medical images) from their model's very large natural image pre-training set, using empty prompts for text inputs. We note that despite this tutorial, none of this was tested in the ControlNet paper, which may explain ControlNet's poor performance on our medical datasets."
|
| 1410 |
+
},
|
| 1411 |
+
{
|
| 1412 |
+
"type": "text",
|
| 1413 |
+
"bbox": [
|
| 1414 |
+
0.214,
|
| 1415 |
+
0.636,
|
| 1416 |
+
0.797,
|
| 1417 |
+
0.773
|
| 1418 |
+
],
|
| 1419 |
+
"angle": 0,
|
| 1420 |
+
"content": "This involved first finetuning the VAE for 200 epochs, then finetuning the Stable Diffusion (SD) model for 400 epochs using the respective breast MRI or CT organ training set images. We then finetuned the ControlNet with the images and their corresponding masks for segmentation guidance for 200 epochs. The pretrained (pre-finetuning) models are from the SD v1.5 checkpoints available on Hugging Face at https://huggingface.co/runwayml/stable-diffusion-v1-5. For all training, we set the batch size to 128, the initial learning rate to \\(10^{-4}\\), and adopted cosine annealing learning rate schedulers rate with 500 steps of warm-up."
|
| 1421 |
+
},
|
| 1422 |
+
{
|
| 1423 |
+
"type": "title",
|
| 1424 |
+
"bbox": [
|
| 1425 |
+
0.216,
|
| 1426 |
+
0.789,
|
| 1427 |
+
0.517,
|
| 1428 |
+
0.805
|
| 1429 |
+
],
|
| 1430 |
+
"angle": 0,
|
| 1431 |
+
"content": "A.2 Auxiliary segmentation model"
|
| 1432 |
+
},
|
| 1433 |
+
{
|
| 1434 |
+
"type": "text",
|
| 1435 |
+
"bbox": [
|
| 1436 |
+
0.214,
|
| 1437 |
+
0.81,
|
| 1438 |
+
0.787,
|
| 1439 |
+
0.842
|
| 1440 |
+
],
|
| 1441 |
+
"angle": 0,
|
| 1442 |
+
"content": "We used the MONAI UNet (https://docs.monai.io/en/stable/networks.html) with 1-channel input and (number of target object classes + 1)-channel"
|
| 1443 |
+
}
|
| 1444 |
+
],
|
| 1445 |
+
[
|
| 1446 |
+
{
|
| 1447 |
+
"type": "header",
|
| 1448 |
+
"bbox": [
|
| 1449 |
+
0.292,
|
| 1450 |
+
0.115,
|
| 1451 |
+
0.733,
|
| 1452 |
+
0.129
|
| 1453 |
+
],
|
| 1454 |
+
"angle": 0,
|
| 1455 |
+
"content": "Anatomically-Controllable Segmentation-Guided Diffusion Models"
|
| 1456 |
+
},
|
| 1457 |
+
{
|
| 1458 |
+
"type": "page_number",
|
| 1459 |
+
"bbox": [
|
| 1460 |
+
0.768,
|
| 1461 |
+
0.116,
|
| 1462 |
+
0.786,
|
| 1463 |
+
0.127
|
| 1464 |
+
],
|
| 1465 |
+
"angle": 0,
|
| 1466 |
+
"content": "13"
|
| 1467 |
+
},
|
| 1468 |
+
{
|
| 1469 |
+
"type": "text",
|
| 1470 |
+
"bbox": [
|
| 1471 |
+
0.214,
|
| 1472 |
+
0.147,
|
| 1473 |
+
0.788,
|
| 1474 |
+
0.207
|
| 1475 |
+
],
|
| 1476 |
+
"angle": 0,
|
| 1477 |
+
"content": "output. The sequence of intermediate UNet channels was set to (16, 32, 64, 128, 256). We trained each model for 100 epochs with a batch size of 8 and selected the models with the lowest validation loss, with an initial learning rate of \\(10^{-3}\\) and a cosine annealing scheduler."
|
| 1478 |
+
},
|
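A hedged sketch of this auxiliary segmentation model using MONAI's public `UNet` class; the `strides` argument is our assumption (the text specifies only the channel sequence), and `num_classes` is illustrative:

```python
from monai.networks.nets import UNet

num_classes = 3  # e.g., breast, blood vessels (BV), and FGT for breast MRI
seg_model = UNet(
    spatial_dims=2,
    in_channels=1,                    # 1-channel grayscale input
    out_channels=num_classes + 1,     # target object classes + background
    channels=(16, 32, 64, 128, 256),  # intermediate UNet channels
    strides=(2, 2, 2, 2),             # assumption: one 2x down-sampling per level
)
```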
| 1479 |
+
{
|
| 1480 |
+
"type": "title",
|
| 1481 |
+
"bbox": [
|
| 1482 |
+
0.216,
|
| 1483 |
+
0.23,
|
| 1484 |
+
0.524,
|
| 1485 |
+
0.248
|
| 1486 |
+
],
|
| 1487 |
+
"angle": 0,
|
| 1488 |
+
"content": "B Additional Sampled Images"
|
| 1489 |
+
},
|
| 1490 |
+
{
|
| 1491 |
+
"type": "image",
|
| 1492 |
+
"bbox": [
|
| 1493 |
+
0.28,
|
| 1494 |
+
0.288,
|
| 1495 |
+
0.72,
|
| 1496 |
+
0.57
|
| 1497 |
+
],
|
| 1498 |
+
"angle": 0,
|
| 1499 |
+
"content": null
|
| 1500 |
+
},
|
| 1501 |
+
{
|
| 1502 |
+
"type": "image_caption",
|
| 1503 |
+
"bbox": [
|
| 1504 |
+
0.215,
|
| 1505 |
+
0.589,
|
| 1506 |
+
0.788,
|
| 1507 |
+
0.618
|
| 1508 |
+
],
|
| 1509 |
+
"angle": 0,
|
| 1510 |
+
"content": "Fig. 5. Additional samples from all segmentation-conditional models; breast MRI on the left, CT organ on the right. Please see Fig. 2 caption for more details."
|
| 1511 |
+
},
|
| 1512 |
+
{
|
| 1513 |
+
"type": "image",
|
| 1514 |
+
"bbox": [
|
| 1515 |
+
0.274,
|
| 1516 |
+
0.673,
|
| 1517 |
+
0.497,
|
| 1518 |
+
0.716
|
| 1519 |
+
],
|
| 1520 |
+
"angle": 0,
|
| 1521 |
+
"content": null
|
| 1522 |
+
},
|
| 1523 |
+
{
|
| 1524 |
+
"type": "image",
|
| 1525 |
+
"bbox": [
|
| 1526 |
+
0.505,
|
| 1527 |
+
0.673,
|
| 1528 |
+
0.728,
|
| 1529 |
+
0.717
|
| 1530 |
+
],
|
| 1531 |
+
"angle": 0,
|
| 1532 |
+
"content": null
|
| 1533 |
+
},
|
| 1534 |
+
{
|
| 1535 |
+
"type": "image",
|
| 1536 |
+
"bbox": [
|
| 1537 |
+
0.273,
|
| 1538 |
+
0.722,
|
| 1539 |
+
0.495,
|
| 1540 |
+
0.765
|
| 1541 |
+
],
|
| 1542 |
+
"angle": 0,
|
| 1543 |
+
"content": null
|
| 1544 |
+
},
|
| 1545 |
+
{
|
| 1546 |
+
"type": "image",
|
| 1547 |
+
"bbox": [
|
| 1548 |
+
0.504,
|
| 1549 |
+
0.722,
|
| 1550 |
+
0.725,
|
| 1551 |
+
0.765
|
| 1552 |
+
],
|
| 1553 |
+
"angle": 0,
|
| 1554 |
+
"content": null
|
| 1555 |
+
},
|
| 1556 |
+
{
|
| 1557 |
+
"type": "image_caption",
|
| 1558 |
+
"bbox": [
|
| 1559 |
+
0.214,
|
| 1560 |
+
0.784,
|
| 1561 |
+
0.788,
|
| 1562 |
+
0.813
|
| 1563 |
+
],
|
| 1564 |
+
"angle": 0,
|
| 1565 |
+
"content": "Fig. 6. Additional samples from our mask-ablated-trained model with various classes removed from given input segmentations for breast MRI (top) and CT Organ (bottom)."
|
| 1566 |
+
}
|
| 1567 |
+
]
|
| 1568 |
+
]
|
2402.05xxx/2402.05210/3b636105-e254-44bd-b00d-d827c35173d9_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:14098c2b344a891b19f1e8a0549857eaf336581e2231aa1312323273692a82d0
|
| 3 |
+
size 17781369
|
2402.05xxx/2402.05210/full.md
ADDED
|
@@ -0,0 +1,192 @@
|
| 1 |
+
# Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models
|
| 2 |
+
|
| 3 |
+
Nicholas Konz<sup>1*</sup>, Yuwen Chen<sup>1</sup>, Haoyu Dong<sup>1</sup>, and Maciej A. Mazurowski<sup>1,2,3,4</sup>
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Electrical and Computer Engineering, Duke University, NC, USA
|
| 6 |
+
|
| 7 |
+
$^{2}$ Department of Radiology, Duke University, NC, USA
|
| 8 |
+
|
| 9 |
+
$^{3}$ Department of Computer Science, Duke University, NC, USA
|
| 10 |
+
|
| 11 |
+
$^{4}$ Department of Biostatistics & Bioinformatics, Duke University, NC, USA
|
| 12 |
+
|
| 13 |
+
*corresponding author nicholas.konz@duke.edu
|
| 14 |
+
|
| 15 |
+
Abstract. Diffusion models have enabled remarkably high-quality medical image generation, yet it is challenging to enforce anatomical constraints in generated images. To this end, we propose a diffusion model-based method that supports anatomically-controllable medical image generation, by following a multi-class anatomical segmentation mask at each sampling step. We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints while allowing flexibility in other anatomical areas. We compare our method ("SegGuidedDiff") to existing methods on breast MRI and abdominal/neck-to-pelvis CT datasets with a wide range of anatomical objects. Results show that our method reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks on both datasets, and is on par for general anatomical realism. Finally, our model also enjoys the extra benefit of being able to adjust the anatomical similarity of generated images to real images of choice through interpolation in its latent space. SegGuidedDiff has many applications, including cross-modality translation, and the generation of paired or counterfactual data. Our code is available at https://github.com/mazurowski-lab/segmentation-guided-diffusion.
|
| 16 |
+
|
| 17 |
+
Keywords: diffusion models $\cdot$ image generation $\cdot$ semantic synthesis
|
| 18 |
+
|
| 19 |
+
# Introduction
|
| 20 |
+
|
| 21 |
+
Denoising diffusion probabilistic models [9] (DDPMs, or just "diffusion models") have shown extensive applications in medical image analysis [11] due to their ability to generate high-quality, high-resolution images, such as for direct image generation [21,12], image segmentation [29], anomaly detection [20,28], cross-modality image translation [17], and image denoising [6]. However, standard generative models like DDPMs can still fail to create anatomically plausible tissue (Fig. 1), and such anatomy is not precisely customizable. Our proposed solution is to incorporate anatomical information as a prior for image generation
|
| 22 |
+
|
| 23 |
+
via a segmentation mask for different types of tissue, organs, etc., providing the network with a more direct learning signal for anatomical realism.
|
| 24 |
+
|
| 25 |
+
Generating an image from a mask (a.k.a. semantic synthesis) is a type of image-to-image translation task. Existing translation works include GAN-based [30,3,2] and diffusion-based models [28,23,31], yet these do not directly enforce precise pixel-wise anatomical constraints. Recent works introduced fine-tuning large pre-trained text-to-image latent diffusion models (LDMs) for segmentation-conditioning on natural images [31,27], but we found that these adapt poorly to medical images (Sec. 3.1). Instead, we propose the first diffusion model for generating medical images from anatomical segmentations, aided by the fact that it is an image-space diffusion model trained from scratch. Image-space diffusion models are especially amenable to strict segmentation guidance because the conditioning mask can be used as-is at every small step of the denoising process, with no conversion to an abstract latent space as in certain LDMs [31,27,23] (or GANs), where precise spatial guidance may be lost.
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Fig. 1. Standard diffusion models (right) can fail to create realistic tissue even if the overall image appears high-quality, motivating our segmentation-guided model (center).
|
| 29 |
+
|
| 30 |
+
Segmentation-guided generation would be even more flexible if only certain object classes could be constrained in an input mask, while others are free to be inferred by the model. This opens up further applications such as generating images from incomplete masks [1], the generation of anatomically paired/registered data, "counterfactual" analysis of existing annotated data, and others. To solve this, we propose a mask-ablated training strategy that provides the model with all possible combinations of missing classes in masks during training, teaching it to make such inferences when generating new images. Notably, we also demonstrate how interpolating within the latent space of our mask-ablated-trained model enables generating images with adjustable anatomical similarity to some real image (Sec. 3.2).
|
| 31 |
+
|
| 32 |
+
In summary, we introduce a diffusion model, "SegGuidedDiff" that can flexibly and precisely generate medical images according to anatomical masks. We evaluate our model's ability to generate images conditioned on a range of
|
| 33 |
+
|
| 34 |
+
anatomical objects of interest for breast MRI and neck-to-pelvis CT, where it outperforms state-of-the-art mask-conditional generative models in its faithfulness to input anatomical masks, and is on par for general anatomical realism (Sec. 3.1). Our code is publicly released at https://github.com/mazurowski-lab/segmentation-guided-diffusion with a focus on usability on any dataset, along with a dataset of paired "pre-registered" generated breast MRIs.
|
| 35 |
+
|
| 36 |
+
# 1 Method
|
| 37 |
+
|
| 38 |
+
# 1.1 A Brief Review of Diffusion Models
|
| 39 |
+
|
| 40 |
+
Denoising diffusion probabilistic models [9] (DDPMs, or diffusion models for short) are a type of generative latent variable model that learns to sample from some data distribution $p(x_0)$ ($x_0 \in \mathbb{R}^n$) by defining a stochastic process that gradually converts the data to noise (the forward process $q(x_t | x_{t-1})$) and learning to reverse this process via a learned denoising process $p_\theta(x_{t-1} | x_t)$, where $\theta$ denotes the model parameters. Data is generated by iteratively sampling from $p_\theta(x_{t-1} | x_t)$, beginning with a Gaussian noise sample $x_T \sim p(x_T)$, for $t = T - 1, \ldots, 0$ (we use $T = 1000$) until an image $x_0$ is recovered.
|
| 41 |
+
|
| 42 |
+
Any forward process step can be written explicitly as $x_{t} = \sqrt{\overline{\alpha}_{t}} x_{0} + \sqrt{1 - \overline{\alpha}_{t}}\epsilon$, where $\epsilon \sim \mathcal{N}(0,I_n)$, $\alpha_{t} \coloneqq 1 - \beta_{t}$ given the pre-scheduled variance $\beta_{t}$ of the additive noise, and $\overline{\alpha}_t \coloneqq \prod_{s = 1}^t\alpha_s$. DDPMs can be trained by the usual evidence lower bound (ELBO) maximization, which can be approximately optimized in a relatively simple form by training a network $\epsilon_{\theta}(x_t,t)$ to predict the noise $\epsilon$ added to each datapoint $x_0$ for various time steps $t$, with the loss $L = \mathbb{E}_{x_0,t,\epsilon}\left[||\epsilon -\epsilon_\theta (x_t,t)||^2\right] = \mathbb{E}_{x_0,t,\epsilon}\left[||\epsilon -\epsilon_\theta (\sqrt{\overline{\alpha}_t} x_0 + \sqrt{1 - \overline{\alpha}_t}\epsilon ,t)||^2\right]$, which has proven to be the typically superior DDPM loss in practice [18].
|
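As a concrete illustration, here is a minimal PyTorch-style sketch of one training step under this noise-prediction objective; `model` and the precomputed `alpha_bar` schedule are hypothetical stand-ins, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(model, x0, alpha_bar, T=1000):
    """One step of the standard epsilon-prediction DDPM objective.

    model:     a network eps_theta(x_t, t) that predicts the added noise
    x0:        a batch of clean images, shape (B, C, H, W)
    alpha_bar: cumulative noise-schedule products, shape (T,)
    """
    B = x0.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)   # random timesteps per sample
    eps = torch.randn_like(x0)                        # Gaussian noise
    ab = alpha_bar[t].view(B, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps      # closed-form forward step
    return F.mse_loss(model(x_t, t), eps)             # ||eps - eps_theta(x_t, t)||^2
```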
| 43 |
+
|
| 44 |
+
# 1.2 Adding Segmentation Guidance to Diffusion Models
|
| 45 |
+
|
| 46 |
+
Rather than sampling from the unconditional distribution $p(x_0)$, our goal is to condition the generation of some $c$-channel image $x_0 \in \mathbb{R}^{c \times h \times w}$ to follow some multi-class anatomical mask $m \in \{0, \dots, C - 1\}^{h \times w}$, where $C$ is the number of classes (including background); in other words, to sample from $p(x_0|m)$. While modifying the data likelihood $p(x_0|m)$ to be mask-conditional does not alter the noising process $q(x_t|x_{t-1})$, it does modify the reverse process $p_\theta(x_{t-1}|x_t, m)$ and the noise-predicting network $\epsilon_\theta$. Propagating this to the ELBO results in a loss of
|
| 47 |
+
|
| 48 |
+
$$
|
| 49 |
+
L_{m} = \mathbb{E}_{(x_{0}, m), t, \epsilon} \left[ \left\| \epsilon - \epsilon_{\theta}\left(\sqrt{\bar{\alpha}_{t}} x_{0} + \sqrt{1 - \bar{\alpha}_{t}}\, \epsilon,\, t \mid m\right) \right\|^{2} \right] \tag{1}
|
| 50 |
+
$$
|
| 51 |
+
|
| 52 |
+
for training our model, where each training image $x_0$ has some paired mask $m$ . We propose to implement this simply by concatenating $m$ channel-wise to the network input at every denoising step, i.e., modifying the network to have an additional input channel as $\epsilon_{\theta}(x_t,t|m):\mathbb{R}^{(c + 1)\times h\times w}\to \mathbb{R}^{c\times h\times w}$ , which can be any image-to-image model (see Sec. 3). We use the DDIM algorithm [26] for fast, yet high-quality sampling.
|
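The conditioning mechanism is simple enough to show directly; below is a hedged sketch of the channel-wise mask concatenation described above, where `unet` is a stand-in for any image-to-image network with $(c+1)$ input channels.

```python
import torch

def eps_theta_conditioned(unet, x_t, t, mask):
    """Mask-conditional noise prediction via channel-wise concatenation.

    x_t:  noisy image, shape (B, c, H, W)
    mask: multi-class integer mask, shape (B, 1, H, W)
    """
    net_in = torch.cat([x_t, mask.float()], dim=1)  # mask becomes an extra input channel
    return unet(net_in, t)                          # same conditioning at every denoising step
```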
| 53 |
+
|
| 54 |
+
This simple method generates images that are very faithful to input masks (Fig. 2, Table 1), because the denoising process is conditioned on the mask at each of its many gradual steps, allowing the network to follow the masks because they provide helpful spatial information that is directly correlated with the optimal denoised model output that minimizes the loss.
|
| 55 |
+
|
| 56 |
+
# 1.3 Mask-Ablated Training and Sampling
|
| 57 |
+
|
| 58 |
+
Given that our model is mask-guided, the quality of these masks is important; a generated image may be misleading if the input mask is not fully annotated, known as the partial label problem [1] in medical image analysis. This is because the model may assume that un-annotated objects should not be present in the output image whatsoever (interpreting the missing/zero pixel labels as background or some other object), when in reality we may desire for the model to simply "fill in"/infer the unprovided objects.
|
| 59 |
+
|
| 60 |
+
To alleviate this problem, we propose a mask-ablated training (MAT) strategy (Algorithm 1), which provides the model with examples of masks with various numbers and combinations of classes removed to learn to generate images from during training. This can be thought of as a form of self-supervised learning of anatomical object representations (somewhat analogous to MAE [7]). We set all $2^{C-1}$ possible combinations of classes being removed from a given mask to occur with equal probability during training so that the model can handle each equally, although any other balancing of these probabilities for different object classes could be used. Finally, we note that our MAT algorithm is immediately applicable to any mask-conditional generative model.
|
| 61 |
+
|
| 62 |
+
Algorithm 1: Segmentation-guided model training with mask ablation.
|
| 63 |
+
Input: number of mask classes $C$ dataset $p(x_0,m)$
|
| 64 |
+
$$\begin{array}{l} \textbf{repeat} \\ \quad x_0, m \sim p(x_0, m) \\ \quad \textbf{for } c = 1, \ldots, C-1 \textbf{ do} \\ \qquad \delta \sim \mathrm{Bernoulli}(0.5) \\ \qquad \textbf{if } \delta = 1 \textbf{ then } m[m = c] = 0 \\ \quad \textbf{end} \\ \quad \epsilon \sim \mathcal{N}(0, I_n); \quad t \sim \mathrm{Uniform}(\{1, \ldots, T\}) \\ \quad x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon \\ \quad \text{Update } \theta \text{ with } \nabla_\theta \|\epsilon - \epsilon_\theta(x_t, t \mid m)\|^2 \\ \textbf{until converged} \end{array}$$
|
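A minimal sketch of the mask-ablation step in Algorithm 1, assuming masks are integer label maps with class 0 as background (the function name and tensor layout are illustrative):

```python
import torch

def ablate_mask(mask, num_classes):
    """Randomly remove each non-background class from a mask.

    Because each class is dropped independently with probability 0.5,
    every one of the 2^(C-1) removal combinations is equally likely.
    """
    mask = mask.clone()
    for c in range(1, num_classes):  # class 0 is background
        if torch.rand(()) < 0.5:
            mask[mask == c] = 0      # relabel this class as background
    return mask
```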
| 65 |
+
|
| 66 |
+
# 2 Datasets
|
| 67 |
+
|
| 68 |
+
Breast MRI: Our first dataset is a 100-patient subset of the Duke Breast Cancer MRI dataset [25]. We use all 2D image slices from the fat-saturated gradient echo T1-weighted pre-contrast sequence, with a train/test split of 70/15 patients,
|
| 69 |
+
|
| 70 |
+
resulting in $\sim 12000 / 2500$ slice images per split. We also keep a held-out training set of 15 patients for additional experiments. All images have full segmentation annotations for (1) breast, (2) blood vessels (BV), and (3) fibroglandular/dense tissue (FGT) provided at [14]. Notably, the FGT and BV have very high variability in shape, size, and other morphological characteristics, posing a challenge for generative models to realistically capture. CT Organ: Our second dataset is a 40-patient subset of neck-to-pelvis and abdominal CT scans from [22], with segmentation annotations for liver, bladder, lungs, kidney, and bone. This results in a train/test split of $\sim 11000 / 2100$ 2D slice images, given a patient-wise split of $24 / 8$ , as well as a held-out training set of 8 patients. All generative models are trained on the training sets, and the auxiliary segmentation network, introduced next, is trained on the held-out training sets.
|
| 71 |
+
|
| 72 |
+
# 3 Experiments
|
| 73 |
+
|
| 74 |
+
Training, Architecture, and Implementation Details. All images are resized to $256 \times 256$ and normalized to [0, 255]. We use a UNet architecture [24] for the denoising model $\epsilon_{\theta}$, modified to take two channels (image and mask) as input; see Appendix A for additional training and architecture details.
|
| 75 |
+
|
| 76 |
+
# 3.1 Comparison to Existing Image Generation Models
|
| 77 |
+
|
| 78 |
+
We next compare our segmentation-guided diffusion model ("SegGuidedDiff" for short) to state-of-the-art segmentation-conditional image generation models. These are SPADE [19], a GAN-based model that uses spatially-adaptive normalization layers, and ControlNet [31], a recent method for adding spatial conditioning to large pretrained text-to-image diffusion models. Training/implementation details for each are in Appendix A.1; note that we use the standard implementation of SPADE rather than the recent brain MRI SPADE model [5,4] because its modifications to SPADE do not apply to our setting (see Appendix A.1), and that we follow ControlNet's guidelines for adapting it to medical images. We show example generated images from all models in Fig. 2 (using masks randomly sampled from the test set); more are provided in Appendix B.
|
| 79 |
+
|
| 80 |
+
Evaluating Faithfulness of Generated Images to Input Masks. To measure how well our model follows an input mask for image generation, we use an auxiliary segmentation network trained on the real training set (a standard UNet; training details in Appendix A.2), to predict segmentations $m_{gen}^{pred}$ on images that were generated from the masks $m$ in the test set, and measure their overlap with (a) $m$ and (b) the model's predicted segmentations $m_{real}^{pred}$ for the input masks' original corresponding real images, similar to metrics used in [19]. Our model's generated images have high overlap for both metrics ( $> 0.85$ Dice coeff., Table 1), showing that our model closely followed the input masks when generating the images, and also outperformed the competing methods.
|
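For reference, the overlap metric reported here could be computed as in the following sketch (a standard multi-class Dice coefficient, not necessarily the authors' exact implementation):

```python
import torch

def multiclass_dice(pred, target, num_classes, eps=1e-8):
    """Mean Dice coefficient over non-background classes.

    pred, target: integer label maps of identical shape.
    """
    scores = []
    for c in range(1, num_classes):  # skip background
        p, t = (pred == c), (target == c)
        inter = (p & t).sum().float()
        denom = p.sum().float() + t.sum().float()
        scores.append((2 * inter + eps) / (denom + eps))
    return torch.stack(scores).mean()
```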
| 81 |
+
|
| 82 |
+

|
| 83 |
+
Fig. 2. Visual comparison of our model (SegGuidedDiff, or "Seg-Diff" for short) to existing segmentation-conditional image generation models. For breast MRI, the breast, BV, and FGT segmentations are shown as white, red, and blue, respectively, while for CT, the liver, bladder, lungs, kidneys, and bone are in maroon, orange, pink, red, and white, respectively. "MAT" = mask-ablated training, "STD" = our standard method.
|
| 84 |
+
|
| 85 |
+
Evaluating Generated Image Quality. We first attempted to use the common Fréchet Inception Distance (FID) [8] as a metric for the quality/realism of generated image features compared to real data, via a CNN image encoder trained on the corresponding dataset. We observed that samples generated from both our segmentation-guided and standard unconditional diffusion models achieved potentially promising results (e.g., breast MRI feature $\mathrm{FID} \simeq 0.5$), yet CNN feature-based metrics like FID fail to capture the global property of anatomical realism that can differ between images generated by these models (e.g., the fibroglandular tissue shown in Fig. 1), so we caution against relying on such metrics.
|
| 86 |
+
|
| 87 |
+
Instead, we propose to more precisely measure anatomical realism by determining how well the aforementioned auxiliary segmentation models for the objects of interest can be trained solely on these synthetic images to be able to generalize to real data, using the input masks as targets. We compare the performance of the segmentation models trained on (a) the real held-out training set (Sec. 2) and (b) the set of images generated from all masks corresponding to these images. We split the real test set in half (by patient) into a validation set and a test set to use for these models. The results for this are in Table 2; we see that for both datasets, the segmentation network trained on our model's synthetic data barely performs worse (by only $\leq 0.04$ Dice) than the network trained on real data, implying that our generated images are both highly realistic and faithful to the input masks (especially considering certain objects'
|
| 88 |
+
|
| 89 |
+
Table 1. Faithfulness of generated images to input masks. $m$ denotes input masks, and $m_{\text{gen}}^{\text{pred}}$ and $m_{\text{real}}^{\text{pred}}$ denote the masks predicted for (a) the generated images and (b) the real images corresponding to the input masks, respectively, by an auxiliary segmentation model. Best-performing is shown in bold, and second best is underlined.
|
| 90 |
+
|
| 91 |
+
| Model | Breast MRI: Dice$(m_{\text{gen}}^{\text{pred}}, m)$ | Breast MRI: Dice$(m_{\text{gen}}^{\text{pred}}, m_{\text{real}}^{\text{pred}})$ | CT Organ: Dice$(m_{\text{gen}}^{\text{pred}}, m)$ | CT Organ: Dice$(m_{\text{gen}}^{\text{pred}}, m_{\text{real}}^{\text{pred}})$ |
| --- | --- | --- | --- | --- |
| ControlNet | 0.3636 | 0.3604 | 0.1132 | 0.1126 |
| SPADE | <u>0.8473</u> | <u>0.8477</u> | <u>0.8771</u> | <u>0.8603</u> |
| Ours | **0.9027** | **0.8593** | **0.8980** | **0.8797** |
|
| 92 |
+
|
| 93 |
+
segmentation difficulty). Our method is on par with SPADE for CT Organ, and slightly worse for breast MRI, while outperforming ControlNet in both cases.
|
| 94 |
+
|
| 95 |
+
Table 2. Quality of generated images. Real test set performance (Dice coeff.) of a segmentation network for the objects of interest, trained on real data vs. synthetic data generated by different models.
|
| 96 |
+
|
| 97 |
+
| | Real training set | Synthetic (ControlNet) | Synthetic (SPADE) | Synthetic (Ours) |
| --- | --- | --- | --- | --- |
| Breast MRI | 0.8376 | 0.7570 | 0.8333 | 0.7934 |
| CT Organ | 0.9075 | 0.0000 | 0.8932 | 0.8981 |
|
| 98 |
+
|
| 99 |
+
ControlNet performed poorly on all metrics because it failed to follow input masks closely (if at all), due to the limitations and untested nature of adapting it to medical images; more details are in Appendix A.1. We also see that our mask-ablated-trained model (Sec. 1.3) follows input masks less strictly than our standard model (Fig. 2), especially for the dataset with more object classes (CT Organ): for breast MRI, the model obtained faithfulness metrics (Table 1) of $\mathrm{Dice}(m_{gen}^{pred}, m) = 0.6589$ and $\mathrm{Dice}(m_{gen}^{pred}, m_{real}^{pred}) = 0.6684$ and a quality metric (Table 2) of 0.7478; for CT Organ, the corresponding values were 0.5952, 0.5963, and 0.7564. This is likely because the diversity of object class combinations seen in training scales exponentially with the number of classes in order to prepare the model for all possible combinations (Sec. 1.3), so the model is "spread thin". However, this relaxation of the mask constraint has its own benefits, detailed next.
|
| 100 |
+
|
| 101 |
+
# 3.2 Advantages of Mask-Ablated-Training
|
| 102 |
+
|
| 103 |
+
Sampling from Ablated Masks. The direct benefit of mask ablated training (MAT) is its ability to generate images from masks with classes missing. In Fig. 3 we demonstrate the effect on generated images of ablating certain classes from an input mask for breast MRI (more examples, including for CT Organ, are provided in Appendix B). For example, we see that constraining the BV+FGT
|
| 104 |
+
|
| 105 |
+
in breast MRI, yet keeping the breast free, results in images that have the latter two classes pre-registered while the breast shape varies, and vice-versa.
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
Fig. 3. Generating images (even rows) from masks with classes removed (odd rows), shown for breast MRI. Fig. 4. Using our model to generate images that are anatomically similar to real images.
|
| 109 |
+
|
| 110 |
+

|
| 111 |
+
|
| 112 |
+
Adjustable Anatomical Similarity of Generated Images to Real Images. One application of our model is the adjustable generation of images that are anatomically similar to some real image, which is not immediately possible for existing state-of-the-art GAN-based methods like SPADE. Consider some real image $x_0$ with anatomical mask $m$ . We can adjust the anatomical similarity to $x_0$ of an image generated from $m$ with our model by interpolating between the synthetic image and the real image in the model's latent space, as follows.
|
| 113 |
+
|
| 114 |
+
If the generation/de-noising process given $m$ (starting at timestep $t = T$) is halted at some intermediate step $t = \tilde{t}$ (we use $\tilde{t} = 240$), we obtain a latent representation $x_{\tilde{t}}'$ of the generating image. We can then convert the real image $x_0$ to the same latent space by applying the noising process to $x_0$ from $t = 0$ to $t = \tilde{t}$ to obtain $x_{\tilde{t}}$. Next, the features of the two images can be mixed via the interpolated latent $x_{\tilde{t}}^{\lambda} := (1 - \lambda)x_{\tilde{t}} + \lambda x_{\tilde{t}}'$, where $\lambda \in (0,1]$ controls the similarity of the mixed features to those of the real image. From here, $x_{\tilde{t}}^{\lambda}$ can be denoised back to image space to obtain the interpolated image $x_0^{\lambda}$. An advantage of using our mask-ablated-trained model to generate $x_0^{\lambda}$ is that certain objects can be constrained while others are free to vary, resulting in higher, yet adjustable, semantic diversity. We demonstrate this with various $\lambda$ in Figure 4, with only FGT+BV constrained for breast MRI, and only bone constrained for CT Organ.
|
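To make the interpolation procedure concrete, here is a hedged sketch; `model` (a mask-conditional denoiser) and `scheduler` (with hypothetical `T`, `alpha_bar`, and DDIM-style `step` members) are stand-ins, not the released code.

```python
import torch

def interpolate_with_real(model, scheduler, x0_real, mask, t_tilde=240, lam=0.5):
    """Mix a partially generated latent with a noised real image, then denoise.

    lam -> 0 keeps the real image's features; lam -> 1 approaches free generation.
    """
    # Partial generation: denoise pure noise from t = T down to t_tilde given the mask.
    x_gen = torch.randn_like(x0_real)
    for t in range(scheduler.T - 1, t_tilde - 1, -1):
        x_gen = scheduler.step(model(x_gen, t, mask), t, x_gen)
    # Noise the real image forward to the same latent level t_tilde.
    ab = scheduler.alpha_bar[t_tilde]
    x_real_t = ab.sqrt() * x0_real + (1 - ab).sqrt() * torch.randn_like(x0_real)
    # Interpolated latent (Sec. 3.2): (1 - lam) * real + lam * generated.
    x_mix = (1 - lam) * x_real_t + lam * x_gen
    # Denoise the mixture back to image space.
    for t in range(t_tilde - 1, -1, -1):
        x_mix = scheduler.step(model(x_mix, t, mask), t, x_mix)
    return x_mix
```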
| 115 |
+
|
| 116 |
+
# Conclusion
|
| 117 |
+
|
| 118 |
+
Our segmentation-guided diffusion model enables superior anatomically-controllable medical image generation, which has many potential applications, including (1) the generation of anatomically rare cases to augment some imbalanced dataset, (2) the synthesis of anatomically-paired/pre-registered data, and (3) cross-modality anatomy translation, where our model could be trained on images and masks from one sequence (e.g., T2 MRI), and then supplied with masks from another sequence (e.g., T1 MRI) to create new T2 images from the T1 masks.
|
| 119 |
+
|
| 120 |
+
However, some limitations of this study are that we did not consider full 3D generation, and we did not compare to ControlNet-like latent diffusion models [23] trained completely from scratch. For future work, we are interested in further improving generated image quality, incorporating image-level class guidance [10] either for pathological or domain-related variables [13], and extending our model to segmentation-guided image translation.
|
| 121 |
+
|
| 122 |
+
Acknowledgments. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R01EB031575. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
| 123 |
+
|
| 124 |
+
Disclosure of Interests. The authors have no competing interests.
|
| 125 |
+
|
| 126 |
+
# References
|
| 127 |
+
|
| 128 |
+
1. Partial label learning: Taxonomy, analysis and outlook. Neural Networks (2023)
|
| 129 |
+
2. Cao, S., Konz, N., Duncan, J., Mazurowski, M.A.: Deep learning for breast mri style transfer with limited training data. Journal of Digital Imaging 36(2), 666-678 (2023)
|
| 130 |
+
3. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8789-8797 (2018)
|
| 131 |
+
4. Fernandez, V., Pinaya, W.H.L., Borges, P., Graham, M.S., Vercauteren, T., Cardoso, M.J.: A 3D Generative Model of Pathological Multi-modal MR Images and Segmentations. In: Deep Generative Models. pp. 132-142. Springer Nature Switzerland, Cham (2024)
|
| 132 |
+
5. Fernandez, V., Pinaya, W.H.L., Borges, P., Tudosiu, P.D., Graham, M.S., Vercauteren, T., Cardoso, M.J.: Can segmentation models be trained with fully synthetically generated data? In: International Workshop on Simulation and Synthesis in Medical Imaging. pp. 79-90. Springer (2022)
|
| 133 |
+
6. Gong, K., Johnson, K., El Fakhri, G., Li, Q., Pan, T.: Pet image denoising based on denoising diffusion probabilistic model. European Journal of Nuclear Medicine and Molecular Imaging pp. 1-11 (2023)
|
| 134 |
+
7. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 16000-16009 (2022)
|
| 135 |
+
|
| 136 |
+
8. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
|
| 137 |
+
9. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)
|
| 138 |
+
10. Ho, J., Salimans, T.: Classifier-free diffusion guidance. In: NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications (2021)
|
| 139 |
+
11. Kazerouni, A., Aghdam, E.K., Heidari, M., Azad, R., Fayyaz, M., Hacihaliloglu, I., Merhof, D.: Diffusion models in medical imaging: A comprehensive survey. Medical Image Analysis p. 102846 (2023)
|
| 140 |
+
12. Khader, F., Müller-Franzes, G., Tayebi Arasteh, S., Han, T., Haarburger, C., Schulze-Hagen, M., Schad, P., Engelhardt, S., Baeßler, B., Foersch, S., et al.: Denoising diffusion probabilistic models for 3d medical image generation. Scientific Reports 13(1), 7303 (2023)
|
| 141 |
+
13. Konz, N., Mazurowski, M.A.: Reverse engineering breast mris: Predicting acquisition parameters directly from images. In: Medical Imaging with Deep Learning (2023)
|
| 142 |
+
14. Lew, C.O., Harouni, M., Kirksey, E.R., Kang, E.J., Dong, H., Gu, H., Grimm, L.J., Walsh, R., Lowell, D.A., Mazurowski, M.A.: A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast mri. Scientific Reports 14(1), 5383 (2024), https://api.semanticscholar.org/CorpusID:268251677
|
| 143 |
+
15. Loshchilov, I., Hutter, F.: Sgdr: Stochastic gradient descent with warm restarts. In: International Conference on Learning Representations (2016)
|
| 144 |
+
16. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2018)
|
| 145 |
+
17. Lyu, Q., Wang, G.: Conversion between ct and mri images using diffusion and score-matching models. arXiv preprint arXiv:2209.12104 (2022)
|
| 146 |
+
18. Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: International Conference on Machine Learning. pp. 8162-8171. PMLR (2021)
|
| 147 |
+
19. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)
|
| 148 |
+
20. Pinaya, W.H., Graham, M.S., Gray, R., Da Costa, P.F., Tudosiu, P.D., Wright, P., Mah, Y.H., MacKinnon, A.D., Teo, J.T., Jager, R., et al.: Fast unsupervised brain anomaly detection and segmentation with diffusion models. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 705-714. Springer (2022)
|
| 149 |
+
21. Pinaya, W.H., Tudosiu, P.D., Dafflon, J., Da Costa, P.F., Fernandez, V., Nachev, P., Ourselin, S., Cardoso, M.J.: Brain imaging generation with latent diffusion models. In: MICCAI Workshop on Deep Generative Models. pp. 117-126. Springer (2022)
|
| 150 |
+
22. Rister, B., Yi, D., Shivakumar, K., Nobashi, T., Rubin, D.L.: Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data 7(1), 381 (2020)
|
| 151 |
+
23. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10684-10695 (June 2022)
|
| 152 |
+
|
| 153 |
+
24. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234-241. Springer (2015)
|
| 154 |
+
25. Saha, A., Harowicz, M.R., Grimm, L.J., Kim, C.E., Ghate, S.V., Walsh, R., Mazurowski, M.A.: A machine learning approach to radiogenomics of breast cancer: a study of 922 subjects and 529 dce-mri features. British journal of cancer 119(4), 508-516 (2018)
|
| 155 |
+
26. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=St1giarCHLP
|
| 156 |
+
27. Wang, T., Zhang, T., Zhang, B., Ouyang, H., Chen, D., Chen, Q., Wen, F.: Pretraining is all you need for image-to-image translation. arXiv preprint arXiv:2205.12952 (2022)
|
| 157 |
+
28. Wolleb, J., Bieder, F., Sandkühler, R., Cattin, P.C.: Diffusion models for medical anomaly detection. In: International Conference on Medical image computing and computer-assisted intervention. pp. 35-45. Springer (2022)
|
| 158 |
+
29. Wolleb, J., Sandkühler, R., Bieder, F., Valmaggia, P., Cattin, P.C.: Diffusion models for implicit image segmentation ensembles. In: International Conference on Medical Imaging with Deep Learning. pp. 1336-1348. PMLR (2022)
|
| 159 |
+
30. Yang, J., Dvornek, N.C., Zhang, F., Shapiro, J., Lin, M., Duncan, J.S.: Unsupervised domain adaptation via disentangled representations: Application to cross-modality liver segmentation. In: Medical Image Computing and Computer Assisted Intervention-MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part II 22. pp. 255-263. Springer (2019)
|
| 160 |
+
31. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023)
|
| 161 |
+
|
| 162 |
+
# A Additional Training and Architectural Details
|
| 163 |
+
|
| 164 |
+
Our model (segmentation-guided diffusion). The encoder of the denoising model (a UNet) is constructed from six standard ResNet down-sampling blocks, with the fifth block also having spatial self-attention, with (128, 128, 256, 256, 512, 512) output channels, respectively. The decoder is simply the up-sampling reverse of the encoder. We use a standard forward process variance schedule that linearly increases from $\beta_{1} = 10^{-4}$ to $\beta_{T} = 0.02$ [9]. For training, we use the AdamW optimizer [16] and a cosine learning rate scheduler [15] with an initial learning rate of $10^{-4}$, with 500 linear warm-up steps. We train for 400 epochs with a batch size of 64 (about 26 hours), and we perform all training and evaluation on four 48 GB NVIDIA A6000 GPUs. We use the Diffusers library as a backbone (https://github.com/huggingface/diffusers).
|
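A plausible instantiation of this architecture with the Diffusers library is sketched below; the exact block types and arguments in the authors' code may differ.

```python
from diffusers import UNet2DModel

# Six down-sampling stages with channels (128, 128, 256, 256, 512, 512);
# the fifth stage adds spatial self-attention (mirrored in the decoder).
model = UNet2DModel(
    sample_size=256,
    in_channels=2,   # noisy image + segmentation mask channel
    out_channels=1,  # predicted noise for the image channel
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D", "DownBlock2D", "DownBlock2D",
        "DownBlock2D", "AttnDownBlock2D", "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D", "AttnUpBlock2D", "UpBlock2D",
        "UpBlock2D", "UpBlock2D", "UpBlock2D",
    ),
)
```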
| 165 |
+
|
| 166 |
+
# A.1 Comparison models
|
| 167 |
+
|
| 168 |
+
SPADE. We train SPADE [19] using the default settings, with a batch size of 128 for 50 epochs. We did not adopt the changes of the recent brain MRI SPADE model [5] because they are not applicable to our datasets/task, namely: (1) the contrast-based clustering is not applicable because we use pre-contrast MRIs and CT, (2) we work with standard categorical segmentation maps, not partial volume/probabilistic segmentation maps, so changes using the latter are not applicable, and (3) we work with independent 2D slice images, rather than full 3D volumes, so the enforcement of style and content separation via different slices from the same volume during training is not applicable.
|
| 169 |
+
|
| 170 |
+
ControlNet. We adapted ControlNet [31] to each of our medical image datasets as instructed in their official tutorial (https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md#sd.locked) for use with datasets that are out-of-distribution (e.g., medical images) relative to their model's very large natural-image pre-training set, using empty prompts for text inputs. We note that despite this tutorial, none of this was tested in the ControlNet paper, which may explain ControlNet's poor performance on our medical datasets.
|
| 171 |
+
|
| 172 |
+
This involved first finetuning the VAE for 200 epochs, then finetuning the Stable Diffusion (SD) model for 400 epochs using the respective breast MRI or CT Organ training set images. We then finetuned the ControlNet with the images and their corresponding masks for segmentation guidance for 200 epochs. The pretrained (pre-finetuning) models are from the SD v1.5 checkpoints available on Hugging Face at https://huggingface.co/runwayml/stable-diffusion-v1-5. For all training, we set the batch size to 128 and the initial learning rate to $10^{-4}$, and adopted cosine annealing learning rate schedulers with 500 steps of warm-up.
|
| 173 |
+
|
| 174 |
+
# A.2 Auxiliary segmentation model
|
| 175 |
+
|
| 176 |
+
We used the MONAI UNet (https://docs.monai.io/en/stable/networks.html) with 1-channel input and (number of target object classes + 1)-channel
|
| 177 |
+
|
| 178 |
+
output. The sequence of intermediate UNet channels was set to (16, 32, 64, 128, 256). We trained each model for 100 epochs with a batch size of 8 and selected the models with the lowest validation loss, with an initial learning rate of $10^{-3}$ and a cosine annealing scheduler.
|
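For concreteness, a plausible MONAI instantiation matching this description; the `strides` argument is an assumption, as it is not specified above.

```python
from monai.networks.nets import UNet

num_classes = 3  # e.g., breast, BV, and FGT for the breast MRI dataset
seg_model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=num_classes + 1,     # target object classes + background
    channels=(16, 32, 64, 128, 256),  # intermediate UNet channels, as above
    strides=(2, 2, 2, 2),             # assumed: one fewer stride than channel stages
)
```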
| 179 |
+
|
| 180 |
+
# B Additional Sampled Images
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
Fig. 5. Additional samples from all segmentation-conditional models; breast MRI on the left, CT organ on the right. Please see Fig. 2 caption for more details.
|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
|
| 189 |
+

|
| 190 |
+
Fig. 6. Additional samples from our mask-ablated-trained model with various classes removed from given input segmentations for breast MRI (top) and CT Organ (bottom).
|
| 191 |
+
|
| 192 |
+

|
2402.05xxx/2402.05210/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d0cdbf8b4496802c5f46241dd65e3d97cde92750fe0a5ca60e14663615ae841d
|
| 3 |
+
size 321294
|
2402.05xxx/2402.05210/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05235/08fbf57a-7080-44a3-8aea-9b65c6c26803_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c4063c0f36758c2fc14094fe2bc6742a27a06b45035cf79b20d7183403c137da
|
| 3 |
+
size 3899395
|
2402.05xxx/2402.05235/full.md
ADDED
|
@@ -0,0 +1,656 @@
|
| 1 |
+
# SPAD: Spatially Aware Multi-View Diffusers
|
| 2 |
+
|
| 3 |
+
Yash Kant<sup>1,2,4</sup>, Ziyi Wu<sup>1,4</sup>, Michael Vasilkovsky<sup>2</sup>, Guocheng Qian<sup>2,3</sup>, Jian Ren<sup>2</sup>, Riza Alp Guler<sup>2</sup>, Bernard Ghanem<sup>3</sup>, Sergey Tulyakov<sup>2,*</sup>, Igor Gilitschenski<sup>1,4,*</sup>, Aliaksandr Siarohin<sup>2,*</sup>. <sup>1</sup>University of Toronto, <sup>2</sup>Snap Research, <sup>3</sup>KAUST, <sup>4</sup>Vector Institute. https://yashkant.github.io/spad/
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
A DSLR photo of a pair of tan cowboy boots, studio lighting, product photography
|
| 7 |
+
|
| 8 |
+

|
| 9 |
+
|
| 10 |
+

|
| 11 |
+
A cute steampunk elephant
|
| 12 |
+
|
| 13 |
+

|
| 14 |
+
A knight's armored metal helmet with gold trim and holes
|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
A small robot with a glass container on its head, metal legs, and a glass top
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
F-15 Eagle, F-16 Fighter Jet, and F/A-18F Super Hornet aircraft
|
| 21 |
+
Figure 1. Consistent multi-view generation from text with SPAD. Given a text prompt, SPAD is capable of synthesizing many 3D consistent images of the same object, ranging from daily objects to highly complex machines. SPAD can generate many images from arbitrary camera viewpoints, while being trained only on four views. Here, we generate eight views sampled uniformly at a fixed elevation.
|
| 22 |
+
|
| 23 |
+
# Abstract
|
| 24 |
+
|
| 25 |
+
We present SPAD, a novel approach for creating consistent multi-view images from text prompts or single images. To enable multi-view generation, we repurpose a pretrained 2D diffusion model by extending its self-attention layers with cross-view interactions, and fine-tune it on a high-quality subset of Objaverse. We find that a naive extension of the self-attention proposed in prior work (e.g., MVDream [73]) leads to content copying between views. Therefore, we explicitly constrain the cross-view attention based on epipolar geometry. To further enhance 3D consistency, we utilize Plücker coordinates derived from camera rays and inject them as positional encoding. This enables SPAD to reason well over spatial proximity in 3D. In contrast to recent works that can only generate views at fixed azimuth and elevation, SPAD offers full camera control and achieves state-of-the-art results in novel view synthesis on unseen objects from the Objaverse and Google Scanned Objects datasets. Finally, we demonstrate that text-to-3D generation using SPAD prevents the multi-face Janus issue.
|
| 26 |
+
|
| 27 |
+
# 1. Introduction
|
| 28 |
+
|
| 29 |
+
3D content generation holds great importance in a wide range of applications, including gaming, virtual reality, manufacturing, etc. Yet, the creation of high-quality 3D assets remains a time-consuming endeavor even for seasoned 3D artists. In recent years we have witnessed the emergence of generative models capable of creating 3D objects from a single or several 2D images, or just text inputs. Early methods in this field directly train models such as Variational Auto Encoders (VAEs) [41] and Generative Adversarial Networks (GANs) [27] on 3D shapes [1, 49, 96, 98]. These methods produce lower-resolution results to manage computational demands and have limited diversity due to the small scale of training datasets. Later approaches explored differentiable rendering to learn 3D GANs from monocular images [7, 8, 14, 26, 51, 55, 76, 77]. These methods improved resolution, but only show impressive results on relatively few categories (e.g., ShapeNet [10] furniture).
|
| 30 |
+
|
| 31 |
+
Recent advances in Diffusion Models (DMs) have revolutionized the field of 2D image generation [21, 34, 54]. Trained on billions of image-text pairs, state-of-the-art models [64, 66] learn generic object priors that enable high-quality and generalizable text-guided image generation. Recent works thus seek to leverage such 2D priors to generate 3D objects. One line of research proposes to optimize a NeRF [50] model by distilling a pre-trained text-to-image DM via Score Distillation Sampling (SDS) [59, 90], which enables single-view 3D reconstruction [19, 48, 60] and
|
| 32 |
+
|
| 33 |
+
direct text-to-3D synthesis [36, 44, 93]. However, these methods lack understanding of the underlying object structures. The 2D prior provided by pre-trained DMs only considers one view at each optimization step, ignoring the geometric relationship across views. Even with hand-crafted prompts specifying explicit viewpoints [59], these methods continue to exhibit 3D inconsistencies, exemplified by issues such as the multi-faced Janus problem.
|
| 34 |
+
|
| 35 |
+
One natural solution is to equip 2D diffusion models with some form of 3D understanding. Recent work Zero-1-to-3 [45] proposes to condition Stable Diffusion [64] with one view and generate another one given the relative camera pose. However, conditioning in Zero-1-to-3 is performed by simply concatenating the input view, while disregarding any geometric priors. An alternate approach based on depth warping was proposed in iNVS [39]. It shows that, provided with an accurate depth map, one can establish dense correspondences between two views. This allows DMs to reconstruct high-quality novel views. Unfortunately, the generation quality of iNVS heavily relies on the precision of depth maps, while monocular depth estimation in itself is an unsolved problem.
|
| 36 |
+
|
| 37 |
+
Recent works [83, 102] have observed that Stable Diffusion can be utilized to obtain accurate image correspondences. Self-attention layers of text-to-image DMs can be directly used for establishing correspondences within the same image [87]. An interesting question to consider is whether the same layers can also find correspondences between different views, which can enable 3D geometric understanding. For this, we can modify the original self-attention into multi-view self-attention by running it over the concatenated feature maps across views. This approach trained on orthogonal multi-view images with known camera parameters can generate multiple novel views of the same object simultaneously, as shown in previous works such as MVDream [73]. However, we find such a model lacks precise camera control across views, and cannot generate arbitrarily free novel views. When tasked to generate two views that are close to each other (with significant overlap), such a model suffers from content copying problem – where the content of one view is just copied from another view without modification (see Fig. 5).
|
| 38 |
+
|
| 39 |
+
Inspired by [79], we design an Epipolar Attention layer, where feature map positions in one view can only attend to positions along the epipolar lines in other views. By restricting the cross-view attention maps, these layers enable better camera control and produce distinct views at viewpoints close to each other. While Epipolar Attention alone significantly improves 3D consistency, since epipolar lines do not have a direction, it still remains difficult for this model to disambiguate the direction of the camera ray. This ambiguity leads to flipping in predicted views, as observed in iNVS [39], which also used epipolar lines. Motivated by
|
| 40 |
+
|
| 41 |
+
recent works on Light Field Networks [6, 89], we propose to represent rays passing through each pixel in Plücker coordinates, which assign unique homogeneous coordinates to each ray. We use these coordinates as positional embeddings inside the Epipolar Attention layers. These embeddings for rays hitting opposite sides of the object provide a high negative bias for self-attention, essentially preventing it from utilizing information from the wrong side. Additionally, Plücker embeddings encourage pixels whose rays are close to each other to have similar representations, thus promoting self-attention to pick features from nearby positions.
|
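The Plücker embedding of a ray is straightforward to compute; a minimal sketch, assuming rays are given as origins and directions:

```python
import torch

def plucker_coordinates(origins, dirs):
    """Plücker embedding (d, o x d) of camera rays with unit direction.

    origins, dirs: ray origins and directions, shape (..., 3).
    Rays hitting opposite sides of the object receive distinct coordinates,
    which disambiguates the camera ray direction.
    """
    d = dirs / dirs.norm(dim=-1, keepdim=True)  # normalize the direction
    m = torch.cross(origins, d, dim=-1)         # moment of the ray
    return torch.cat([d, m], dim=-1)            # 6D homogeneous ray coordinates
```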
| 42 |
+
|
| 43 |
+
Our method can operate in two modes: text-conditioned and image-conditioned. In text-conditioned mode, SPAD simultaneously denoises several views given a text prompt, while in image-conditioned mode, it denoises several other views given an input image. In both cases, the architecture of our method stays the same, and only the input and output changes. We evaluate SPAD on the task of text-conditioned multi-view generation and image-conditioned novel view synthesis on Google Scanned Objects (GSO) [22] and an unseen subset of the Objaverse [63] dataset. The results show that SPAD is able to synthesize high-quality and 3D consistent images of objects. Finally, we enable high-quality text-to-3D generation using SPAD via a) a feed-forward multi-view to 3D triplane generator, and b) multi-view Score Distillation Sampling similar to [73].
|
| 44 |
+
|
| 45 |
+
# 2. Related Works
|
| 46 |
+
|
| 47 |
+
3D Generative Models. 3D generative modeling is a longstanding problem in computer vision and graphics. Earlier works directly train generative models such as Variational Auto Encoders (VAEs) [41] on ground-truth 3D shapes [1, 25, 49, 81, 96, 98]. However, due to the small scale of 3D shape datasets, these methods produce less realistic and diverse results compared to their 2D counterparts. With the rapid development of Generative Adversarial Networks (GANs) [27] and differentiable rendering, later works focus on learning 3D GANs from monocular images, showing impressive generation of multi-view images [51, 52], radiance fields [7, 8, 20, 28, 55, 100, 105], and meshes [14, 15, 26, 56, 57]. Nevertheless, GANs still suffer from poor generalizability and training stability, preventing them from scaling to unconstrained objects and scenes. Recently, Diffusion Models (DMs) [34, 78] have achieved great success in general 2D image synthesis, and are also applied to 3D [23, 24, 38, 40, 42, 53, 74, 91]. Yet, these methods train 3D DMs from scratch on specific objects such as human faces or vehicles, limiting their generalization. Closer to ours are methods that adapt large-scale pre-trained 2D DMs [64] for 3D generation, which we will detail next. Novel View Synthesis (NVS) with 2D Diffusion Models. Instead of reconstructing the entire 3D shape, NVS aims to
|
| 48 |
+
|
| 49 |
+
generate 3D consistent images conditioned on a few input views [67, 95]. Early methods leverage the knowledge of epipolar geometry to perform interpolation between different input views [13, 17, 69, 108]. Since NVS is a 2D image-to-image translation task, recent works have re-purposed 2D DMs for it [9, 29, 80, 86, 94, 101]. To achieve 3D consistency, SparseFusion [106] builds a view-conditioned DM on the latent space of Stable Diffusion [64], and utilizes Epipolar Feature Transformer (EFT) [79] to fuse features from input views. Zero-1-to-3 [45] directly fine-tunes Stable Diffusion on multi-view images rendered from Objaverse [18].
|
| 50 |
+
|
| 51 |
+
The concurrent work MVDream [73] proposes to denoise four views jointly with multi-view self-attention layers. However, camera pose information is fed in as 1D features to these models, discarding the explicit constraints of 3D geometry. Thus, this method does not allow accurate camera control. To cope with this issue, MVDream [73] generates views at fixed camera positions spaced 90 degrees apart. However, this approach limits the maximum number of views that can be generated to only four. Moreover, it limits the training data to synthetic 3D model datasets such as Objaverse [18], since it requires rendering the same fixed camera views for each object.
|
| 52 |
+
|
| 53 |
+
Other works thus study more explicit pose conditioning. MVDiffusion [84] derives inter-view dense correspondence from homography transformation, which is used to guide the attention module in Stable Diffusion. iNVS [39] applies image warping based on depth maps to re-use pixels from the source view, and thus only needs to inpaint occluded regions in novel-view images. While it can produce precise reconstructions when good depth maps are available, the quality of this method degrades drastically when depth maps are noisy or inaccurate. In addition, the depth ambiguity caused by the epipolar lines used in iNVS results in the flipped-prediction issue, where the model cannot differentiate two views from opposite directions. SyncDreamer [46] instead builds a 3D feature volume by up-projecting features from each view, and then re-projects it to ensure 3D consistency among views. However, the up-projection operation requires the network to explicitly understand the depth of each pixel, sharing the same issue as iNVS [39].
|
| 54 |
+
|
| 55 |
+
Different from prior works, we exploit the internal properties of large-scale pre-trained text-to-image diffusion models and enrich self-attention maps with the cross-view interactions derived from epipolar geometry. In addition, we use Plücker coordinates [37] as positional encodings to inject 3D priors of the scene into the diffusion model, further improving camera conditioning and disambiguating different sides of the object.
|
| 56 |
+
|
| 57 |
+
Lifting 2D Diffusion Models to 3D Generation. Instead of training a model on 3D data, several works adopt a per-instance optimization paradigm where pre-trained 2D DMs provide image priors [59, 90]. Some of them apply it for
|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
Figure 2. Model pipeline. (a) We initialize our multi-view diffusion model from pre-trained text-to-image model, and fine-tune it on multi-view renders of 3D objects. (b) Our model performs joint denoising on noisy multi-view images $\{\pmb{x}_t^i\}_{i=1}^N$ conditioned on text $\pmb{y}$ and relative camera poses $\Delta E$ . Here, we illustrate the pipeline using $N = 2$ , which can be easily extended to more views. To enable cross-view interaction, we apply 3D self-attention by concatenating all views, and enforce epipolar constraints on the attention map. We further add (c) Plücker Embedding $\{\pmb{P}^i\}_{i=1}^N$ to the attention layers as positional encodings, to enable precise camera control and prevent object flipping artefacts (as shown in Fig. 5).
|
| 61 |
+
|
| 62 |
+
single-view 3D reconstruction [19, 48, 60, 62, 72, 82, 99]. More relevant to ours are text-to-3D methods that optimize a NeRF model [50] by distilling the pre-trained text-to-image DM. Follow-up works have improved this text-to-3D distillation process in many directions, including more efficient 3D representations [12, 44, 71, 85, 97], better diffusion process [16, 36, 70], new loss functions [93, 107], and prompt design [2]. However, these methods still suffer from low visual quality and view consistency issues such as multi-face Janus and content drifting. SPAD generates multi-view images from a text prompt or a single input view with better 3D consistency and visual quality, which can mitigate these issues with multi-view distillation [73].
|
| 63 |
+
|
| 64 |
+
# 3. Method
|
| 65 |
+
|
| 66 |
+
Task Formulation. Our goal is to generate many consistent novel views of the same object given a text prompt or an image, along with relative camera poses as input. Toward this goal, we train a multi-view diffusion model that is made spatially aware by explicitly encoding 3D knowledge of the scene.
We build upon a state-of-the-art 2D text-to-image diffusion model (Sec. 3.1). Our specific adaptations enable 3D-aware interactions between views (Sec. 3.2), which include 3D self-attention (Sec. 3.2.1), Epipolar Attention (Sec. 3.2.2), and Plücker Embeddings (Sec. 3.2.3).

# 3.1. Preliminary: Text-to-Image Diffusion Models

Diffusion models (DMs) [34, 78] are generative models that learn a target data distribution $p_{\theta}(\pmb{x}_0)$ by gradually denoising a standard Gaussian distribution, denoted as $p_{\theta}(\pmb{x}_0) = \int p_{\theta}(\pmb{x}_{0:T}) d\pmb{x}_{1:T}$, where $\pmb{x}_{1:T}$ are intermediate noisy samples. DMs leverage a forward process that iteratively adds Gaussian noise $\epsilon$ to the clean data $\pmb{x}_0$, which is controlled by a pre-defined variance schedule $\{\bar{\alpha}_t\}_{t=1}^T$. During training, we manually construct noisy samples $\pmb{x}_t = \sqrt{\bar{\alpha}_t} \pmb{x}_0 + \sqrt{1 - \bar{\alpha}_t} \pmb{\epsilon}_t$, and train a denoiser model $\epsilon_{\theta}(\pmb{x}_t, t)$ to predict the added noise conditioned on the denoising time step $t$:

$$
\mathcal{L}_{\mathrm{DM}} = \left\| \boldsymbol{\epsilon}_t - \epsilon_{\theta}(\boldsymbol{x}_t, t) \right\|^2, \quad \text{where } \boldsymbol{\epsilon}_t \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I}). \tag{1}
$$
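To make Eq. (1) concrete, the training step can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the paper's training code: `eps_model` stands for any noise-prediction network and `alphas_bar` for the schedule $\{\bar{\alpha}_t\}_{t=1}^T$.

```python
import torch
import torch.nn.functional as F

def dm_loss(eps_model, x0, alphas_bar):
    """One DDPM training step (Eq. 1): corrupt x0 with noise, then predict it."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_bar), (b,), device=x0.device)
    eps = torch.randn_like(x0)                    # eps_t ~ N(0, I)
    ab = alphas_bar[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # forward (noising) process
    return F.mse_loss(eps_model(x_t, t), eps)     # || eps_t - eps_theta(x_t, t) ||^2
```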
Generally, the denoiser $\epsilon_{\theta}$ is parameterized as a U-Net [65], which comprises interleaved residual blocks [31] and self-attention layers [88]. Within this U-Net, we are primarily interested in the self-attention layers [88], and we refer the reader to the original paper [64] for an overview of the other blocks. A self-attention layer takes a feature map $\pmb{F}$ as input and computes the attention between the feature at location $s$ and the entire feature map:

$$
\tilde{\boldsymbol{F}}_s = \operatorname{SoftMax}\left(\frac{Q(\boldsymbol{F}_s) \cdot K(\boldsymbol{F})^{\top}}{\sqrt{d}}\right) \cdot V(\boldsymbol{F}), \tag{2}
$$

where $Q, K, V$ are linear projection layers, $\pmb{F} \in \mathbb{R}^{(hw) \times d}$ is a flattened feature map obtained from the 2D denoiser $\epsilon_{\theta}$, $d$ is the feature dimension, and $h, w$ are the intermediate spatial dimensions. $\pmb{F}_s$ and $\tilde{\pmb{F}}_s$ are the input and output features at location $s$, respectively. In practice, the self-attention operation occurs at multiple resolutions in $\epsilon_{\theta}$.
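A minimal single-head sketch of Eq. (2) over a flattened feature map; `Wq`, `Wk`, `Wv` stand in for the projection layers $Q, K, V$ and are assumptions for illustration:

```python
import torch

def self_attention(F, Wq, Wk, Wv):
    # F: (hw, d) flattened feature map; Wq/Wk/Wv: (d, d) projection weights.
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    attn = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)  # (hw, hw)
    return attn @ V  # every location attends to the entire feature map
```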
# 3.2. Multi-View Diffusion Models

Inspired by the success of text-to-image DMs, we propose to generate multi-view images by fine-tuning a pre-trained 2D DM on multi-view rendered images of 3D assets. Fig. 2 shows the overall pipeline of our framework, SPAD. In this section, we use $N = 2$ views to explain our method for brevity. However, note that SPAD easily extends to generate an arbitrary number of views.

# 3.2.1 Multi-View Self-attention

![]()

Figure 3. Epipolar Attention. For each point $s$ (red point) on a feature map $\pmb{F}^i$, we compute its epipolar lines $\{l^j\}_{j \neq i}$ on all other views $\{\pmb{F}^j\}_{j \neq i}$. Point $s$ will only attend to features along these lines, plus all the points on its own view (blue points).

The goal of our multi-view DM $\epsilon_{\theta}(\pmb{x}_t^1,\pmb{x}_t^2,t,\pmb{y},\Delta \pmb{E})$ is to generate 3D-consistent images $(\pmb{x}^{1},\pmb{x}^{2})\in \mathbb{R}^{H\times W\times 3}$ of an object, guided by a text input $\pmb{y}$ and the relative camera pose $\Delta E\in \mathbb{R}^{3\times 4}$. To enable cross-view interaction, we concatenate the feature maps of the two views side-by-side as input to the self-attention layers, denoted as $[\pmb{F}^1 | \pmb{F}^2]$. This allows each location $s$ on $\pmb{F}^1$ to attend to all locations on itself and on $\pmb{F}^2$, calculated as:
$$
\tilde{\boldsymbol{F}}_s^1 = \operatorname{SoftMax}\left(\frac{Q(\boldsymbol{F}_s^1) \cdot K([\boldsymbol{F}^1 | \boldsymbol{F}^2])^{\top}}{\sqrt{d}}\right) \cdot V([\boldsymbol{F}^1 | \boldsymbol{F}^2]). \tag{3}
$$
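Concretely, Eq. (3) takes queries from one view while keys and values come from the concatenation of both views. A minimal single-head sketch (no epipolar constraint yet; `Wq`, `Wk`, `Wv` are illustrative projection weights):

```python
import torch

def multiview_self_attention(F1, F2, Wq, Wk, Wv):
    # F1, F2: (hw, d) feature maps of the two views (Eq. 3).
    F = torch.cat([F1, F2], dim=0)   # [F1 | F2]: (2*hw, d)
    Q = F1 @ Wq                      # queries from view 1 only
    K, V = F @ Wk, F @ Wv            # keys/values from both views
    attn = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
    return attn @ V                  # (hw, d) updated view-1 features
```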
Camera conditioning. We embed the relative camera pose $\Delta E$ with an MLP and fuse it with the timestep embedding of the DM to condition the residual blocks, as shown in Fig. 4, similar to [73].
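One plausible way to implement this conditioning is sketched below; the layer sizes and the additive fusion are our assumptions, not the paper's exact architecture:

```python
import torch.nn as nn

class CameraConditioning(nn.Module):
    """Embed the flattened relative pose (3x4 -> 12 values) with an MLP and
    add it to the diffusion timestep embedding fed to the residual blocks."""
    def __init__(self, t_dim=1280):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(12, t_dim), nn.SiLU(), nn.Linear(t_dim, t_dim))

    def forward(self, t_emb, delta_E):
        # t_emb: (B, t_dim) timestep embedding; delta_E: (B, 3, 4) pose.
        return t_emb + self.mlp(delta_E.flatten(1))
```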
Issues with vanilla self-attention. We find empirically that such unconstrained multi-view self-attention leads to content copying between views (shown in Fig. 5): the model generates similar images when the camera pose difference $\Delta E$ is small, ignoring the underlying 3D geometry. We hypothesize that this could be the reason concurrent works such as MVDream [73] opt to generate images with $90^{\circ}$ view changes (along azimuths at a fixed elevation), as this diminishes the overlap between different views.
# 3.2.2 Multi-View Epipolar Attention

![]()

Figure 4. Illustration of one block in our multi-view diffusion model, which consists of a residual block, a self-attention layer, and a cross-attention layer. The residual block conditions the model on the denoising timestep $t$ and the relative camera pose $\Delta E$, while the cross-attention layer conditions on the text $y$. We add the Plücker Embedding $P$ to the feature maps $F$ in the self-attention layer by inflating the original QKV projection layers with zero projections.

To enable SPAD to synthesize views at arbitrary relative poses, and to address the above content-copying challenge, we propose to replace the vanilla self-attention operation with Epipolar Attention [79]. Epipolar Attention restricts the positions that a point in a feature map can attend to in other views by exploiting epipolar geometry; Fig. 3 presents this mechanism. Specifically, given a source point $s$ on a feature map $\pmb{F}^i$, we compute its epipolar lines (implemented as sets of points) $\{l^j\}_{j \neq i}$ on all the other views $\{\pmb{F}^j\}_{j \neq i}$. When computing the attention map between views, we ignore points that do not lie on these epipolar lines, so that the source point $s$ only has access to features that lie along the camera ray (in other views), as well as all points in its own view, for denoising:

$$
\tilde{\boldsymbol{F}}_s^i = \operatorname{SoftMax}\left(\frac{Q(\boldsymbol{F}_s^i) \cdot K([\boldsymbol{F}^i \mid \boldsymbol{F}_{l^j}^j])^{\top}}{\sqrt{d}}\right) \cdot V([\boldsymbol{F}^i \mid \boldsymbol{F}_{l^j}^j]). \tag{4}
$$

In practice, we dilate the epipolar lines with a $3 \times 3$ filter to include neighboring target points for better robustness. Overall, Epipolar Attention enhances the model's generalization to unseen viewpoint differences and objects.
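The epipolar constraint can be realized as a boolean attention mask. The sketch below builds such a mask from a fundamental matrix `Fmat` between views $i$ and $j$; the fundamental-matrix formulation and the distance threshold are our assumptions for illustration, while the $3 \times 3$ dilation follows the paper:

```python
import torch
import torch.nn.functional as F

def epipolar_mask(Fmat, h, w, thresh=1.5):
    """(hw, hw) mask: source pixel s in view i may attend to target pixels
    in view j that lie near its epipolar line l = Fmat @ s."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], -1).reshape(-1, 3).float()
    lines = pix @ Fmat.T                              # (hw, 3): ax + by + c = 0
    dist = (lines @ pix.T).abs()                      # |l . p| for all targets
    dist = dist / lines[:, :2].norm(dim=-1, keepdim=True).clamp(min=1e-8)
    mask = dist < thresh                              # points near the line
    # Dilate with a 3x3 filter over the target grid, as in the paper.
    m = mask.float().reshape(-1, 1, h, w)
    return F.max_pool2d(m, 3, stride=1, padding=1).reshape(mask.shape) > 0
```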
Issues with Epipolar Attention. However, constraining the attention masks solely with epipolar lines can cause flipped predictions, especially under large viewpoint changes. This happens because, in the absence of a precise depth of the object surface, the model can leverage information from any point along these lines. Consider the feature map $\pmb{F}^{N}$ in Fig. 3: the source point $s$ may attend to the front face of the figure or to its back, and the latter causes a flipped prediction. iNVS [39] solves this problem with a monocular depth estimator; yet, its imprecise depth leads to deformed object surfaces and distorted textures. We instead address this issue with a Plücker Ray Embedding, detailed next.
# 3.2.3 Plücker Ray Embedding

Given a camera with its center placed at $\pmb{o} \in \mathbb{R}^3$ in the world coordinate system, we represent a ray passing through a point on the feature map $\pmb{F}_{ij}$ along a normalized direction $\pmb{d} \in \mathbb{R}^3$ as $\pmb{r}_{ij}$. We embed this ray as a positional encoding to help our model distinguish between different views. We find the simple ray parametrization $\pmb{r}_{ij} = (\pmb{o}, \pmb{d})$ to be insufficient here. As an example, consider two rays with the same direction but with different camera origins lying along this ray, $\pmb{r}_{ij}^{1} = (\pmb{o}, \pmb{d})$ and $\pmb{r}_{ij}^{2} = (\pmb{o} + t\pmb{d}, \pmb{d})$: their embeddings are notably different, despite them representing essentially the same ray.

Inspired by recent works on Neural Light Fields [11, 75], we adopt the Plücker Ray Embedding $\pmb{P}_{ij} = (\pmb{o} \times \pmb{d}, \pmb{d})$, where $\times$ is the cross product; see Fig. 2 (c) for an illustration. This parametrization maps $\pmb{r}^1$ and $\pmb{r}^2$ to the same embedding, since:
$$
\left(\boldsymbol{o} + t\boldsymbol{d}\right) \times \boldsymbol{d} = \boldsymbol{o} \times \boldsymbol{d} + t\, \boldsymbol{d} \times \boldsymbol{d} = \boldsymbol{o} \times \boldsymbol{d}. \tag{5}
$$
We simply pass the Plücker Embedding $\pmb{P} \in \mathbb{R}^{(hw) \times 6}$ through a linear projection layer to project it to $d$ dimensions, and add it to the multi-view feature maps $\pmb{F}$ that serve as input to the Epipolar Attention layer, as shown in Fig. 4. To avoid disturbing the pre-trained model, the weights of the projection layer are initialized to zero and learned during fine-tuning, similar to ControlNet [103].
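A minimal sketch of computing the per-pixel embedding and injecting it through a zero-initialized projection; the tensor shapes and the feature dimension are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def plucker_embedding(o, dirs):
    # o: (3,) camera center; dirs: (hw, 3) per-pixel ray directions.
    d = F.normalize(dirs, dim=-1)
    m = torch.linalg.cross(o.expand_as(d), d)  # moment o x d
    return torch.cat([m, d], dim=-1)           # P: (hw, 6)

# Zero-initialized projection, so fine-tuning starts from the 2D model:
proj = nn.Linear(6, 320)                       # 320 = example feature dim d
nn.init.zeros_(proj.weight)
nn.init.zeros_(proj.bias)
# feat = feat + proj(P)  # added to the feature map before the QKV projections
```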
3D geometric priors in the Plücker Embedding. With Plücker coordinates, rays that are close in 3D space share similar embeddings, which leads to higher values in the pre-softmax self-attention map $Q(\boldsymbol{F}) \cdot K(\boldsymbol{F})^\top$. This encourages feature points to look at spatially nearby locations in other views, enhancing the 3D consistency across views. On the other hand, two rays passing through the same 3D location from opposite cameras have Plücker coordinates with flipped (positive/negative) signs. Their embeddings therefore have a smaller dot product, which results in a smaller attention value: the two pixels attend less to each other, effectively addressing the flipped-prediction problem.
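This sign-flip behavior is easy to verify numerically: reversing the direction of a line through a surface point negates the entire Plücker vector, so the dot product between the two embeddings is negative. A toy check:

```python
import torch

p = torch.tensor([1.0, 2.0, 3.0])                # 3D point on the object
d = torch.tensor([0.0, 0.0, 1.0])                # unit viewing direction
P1 = torch.cat([torch.linalg.cross(p, d), d])    # ray toward the point
P2 = torch.cat([torch.linalg.cross(p, -d), -d])  # same line, opposite camera
print(torch.dot(P1, P2))                         # < 0: attention is suppressed
```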
# 4. Experiments

We conduct extensive experiments to answer the following questions: (i) Can SPAD generate high-quality multi-view images from diverse (non-orthogonal and overlapping) viewpoints that are aligned with the input text or image? (Sec. 4.2) (ii) Are the synthesized views 3D consistent? (Sec. 4.3) (iii) To what extent do Plücker Positional Embeddings and Epipolar Attention contribute to the overall performance? (Sec. 4.3) (iv) Lastly, can SPAD enable high-quality text-guided 3D asset generation? (Sec. 4.5)

# 4.1. Experimental Setup

Training Data Curation. Instead of using the entire Objaverse [18], which contains many flat and primitive shapes that can drift the diffusion model away from high-quality generation, we filter Objaverse using a few simple heuristics based on its metadata. We use captions from Cap3D [47]. We select the 150K objects with the highest like, view, and comment counts available in the metadata, as well as the top 50K objects with the highest mesh polygon and vertex counts. We use Blender [4] to render 12 multi-view images for each object at a resolution of $256 \times 256$. All objects are centered and re-scaled to a unit cube. We randomly sample camera positions with elevations in $[-90^{\circ}, 90^{\circ}]$ and azimuths in $[0^{\circ}, 360^{\circ}]$, and fix the distance to the origin at 3.5 and the FOV at $40.26^{\circ}$.
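For reference, this camera sampling can be written as a short sketch (the coordinate convention and the look-at construction are assumed):

```python
import numpy as np

def sample_camera_center(dist=3.5):
    # Elevation in [-90°, 90°], azimuth in [0°, 360°), fixed radius 3.5.
    elev = np.deg2rad(np.random.uniform(-90.0, 90.0))
    azim = np.deg2rad(np.random.uniform(0.0, 360.0))
    return dist * np.array([np.cos(elev) * np.cos(azim),
                            np.cos(elev) * np.sin(azim),
                            np.sin(elev)])  # camera looks at the origin, FOV 40.26°
```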
Training Details. We initialize SPAD from the pre-trained weights of Stable Diffusion v1.5 [64]. We train two versions of our model: one with text conditioning, and another with image conditioning for the novel view synthesis task. In both variants, we set the number of views $N$ to 2 during training. For the text-conditioned model, we jointly denoise both views. For the image-conditioned model, we feed in one clean source-view image and denoise the target view. All our baselines and reported numbers follow this setup. First, we train our two-view models for 40K iterations on Objaverse, with an effective batch size of 1728 samples per iteration, on eight H100 GPUs. Later, we train a larger text-conditioned model with $N = 4$ views on sixteen H100 GPUs for 100K steps, and use it to generate all visuals (except for the ablation study).

Evaluation Datasets and Metrics. For text-conditioned multi-view image generation, we follow MVDream [73] and randomly sample 1,000 Objaverse captions as text prompts to generate images. We use the CLIP-score [61] to measure image-text alignment. We also report the Inception Score (IS) [68] and Fréchet Inception Distance (FID) [32] of the generated images to evaluate image generation quality. It is important to highlight that these metrics only measure the generation quality of individual images, and do not provide any information about their multi-view 3D consistency.

For image-conditioned novel view synthesis, we select 1,000 unseen Objaverse objects, not contained in our training set, for testing. Following [39, 45], we also adopt real-world scanned objects from the Google Scanned Objects (GSO) dataset [22] to evaluate the generality of our method. We render each object from two views following the same setup as in training data generation, where one view serves as the model input and the other is the target novel view. We report PSNR, SSIM [92], and LPIPS [104] metrics to measure the accuracy of the synthesized novel views.

Baselines. Since it is difficult to replicate and control for the training and rendering setups used in prior works, we choose the following variants of our model as primary baselines: Vanilla MV-DM, which only adds 3D self-attention on concatenated multi-view feature maps without Epipolar Attention and Plücker Embedding; and MV-DM (Epipolar) and MV-DM (Plücker), which incorporate the two components, respectively. We also compare SPAD with two concurrent works: MVDream [73] and SyncDreamer [46]. Different from SPAD, both methods can only generate views within fixed elevation and azimuth ranges. In the image-conditioned novel view synthesis task, we compare with the additional baselines Zero-1-to-3 [45] and iNVS [39]. We ran the official codebases and pre-trained weights of these methods on our test data to report their results.
<table><tr><td>Method</td><td>IS ↑</td><td>CLIP-score ↑</td></tr><tr><td>MVDream (v2.1) † [73]</td><td>13.36±0.87</td><td>30.22±3.83</td></tr><tr><td>MVDream (v1.5) † [73]</td><td>9.72±0.43</td><td>28.55±4.05</td></tr><tr><td>SyncDreamer ‡ [46]</td><td>11.69±0.24</td><td>27.76±4.84</td></tr><tr><td>Vanilla MV-DM</td><td>11.04±0.81</td><td>28.52±3.69</td></tr><tr><td>SPAD (Ours)</td><td>11.18±0.97</td><td>29.87±3.33</td></tr></table>

Table 1. Quantitative results on text-conditioned multi-view image generation. We randomly sample 1,000 captions from Objaverse, and evaluate the FID, Inception Score (IS), and CLIP-score. † We ran MVDream's code on the same captions we used. ‡ We first generated single-view images using Stable Diffusion [64] on the same captions and removed their backgrounds; then, we ran SyncDreamer's code to generate multi-view images.

# 4.2. Text-conditioned Multi-View Generation

We use single-view quality metrics to compare methods, similar to MVDream. We evaluate two MVDream variants, fine-tuned from Stable Diffusion v1.5 (same as ours) and v2.1, respectively. For SyncDreamer, we follow the text-to-image-to-3D pipeline described in their paper: we first generate a single-view image from a text prompt using Stable Diffusion, and then generate multiple views from it. We make sure that the single-view image is aligned with the text, and pre-process it using the script provided in their official codebase.

SPAD is a strong 2D text-to-image generator. The results on image generation quality are presented in Tab. 1. SPAD outperforms or matches both baselines on 2D image-quality metrics when compared against methods built on the same underlying Stable Diffusion v1.5 base model. This confirms that our method, while being more 3D consistent, does not compromise either text-to-image alignment or overall image quality, but rather improves both compared to our Vanilla MV-DM baseline.

We provide qualitative results in Fig. 1, Fig. 11, and Fig. 12. SPAD is able to generate consistent multi-view images of diverse 3D subjects, ranging from everyday objects to highly complex machines. Additionally, we present preliminary investigations of training SPAD with the v2.1 base model in Appendix B.3.
# 4.3. Image-conditioned Novel View Synthesis

Image quality metrics do not provide any indication of multi-view consistency or the quality of camera control, so for evaluating multi-view consistency we mostly rely on image-conditioned experiments. In this evaluation, given an input view and a relative camera pose, we generate the target view and compare it against the ground truth. Tab. 2 and Tab. 3 present the novel view synthesis results on Objaverse and GSO, respectively.
<table><tr><td>Method</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>Zero-1-to-3 † [45]</td><td>18.16</td><td>0.81</td><td>0.201</td></tr><tr><td>iNVS [39]</td><td>20.52</td><td>0.81</td><td>0.178</td></tr><tr><td>SyncDreamer † [46]</td><td>19.51</td><td>0.84</td><td>0.174</td></tr><tr><td>Vanilla MV-DM</td><td>17.56</td><td>0.81</td><td>0.20</td></tr><tr><td>MV-DM (Epipolar)</td><td>18.90</td><td>0.82</td><td>0.19</td></tr><tr><td>MV-DM (Plücker)</td><td>17.98</td><td>0.81</td><td>0.20</td></tr><tr><td>SPAD (Ours)</td><td>20.29</td><td>0.84</td><td>0.166</td></tr></table>

Table 2. Quantitative results on image-conditioned novel view synthesis on Objaverse. We report PSNR, SSIM, and LPIPS on the generated novel view images of 1,000 unseen Objaverse objects. † We ran the official codebases of SyncDreamer and Zero-1-to-3 to report results.

<table><tr><td>Method</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td></tr><tr><td>Zero-1-to-3 [45]</td><td>16.10</td><td>0.82</td><td>0.183</td></tr><tr><td>iNVS [39]</td><td>18.53</td><td>0.80</td><td>0.180</td></tr><tr><td>SyncDreamer † [46]</td><td>17.18</td><td>0.83</td><td>0.178</td></tr><tr><td>Vanilla MV-DM</td><td>15.98</td><td>0.81</td><td>0.20</td></tr><tr><td>MV-DM (Epipolar)</td><td>17.13</td><td>0.82</td><td>0.19</td></tr><tr><td>MV-DM (Plücker)</td><td>16.15</td><td>0.81</td><td>0.20</td></tr><tr><td>SPAD (Ours)</td><td>17.99</td><td>0.83</td><td>0.169</td></tr></table>

Table 3. Quantitative results on image-conditioned novel view synthesis on GSO. We report PSNR, SSIM, and LPIPS on the generated novel view images of GSO objects. † SyncDreamer only reports results on 30 selected objects from GSO in their paper [46]. We ran their code and tested it on all GSO objects here.
SPAD preserves structural and perceptual details faithfully. We find that SPAD outperforms all baselines on LPIPS across both datasets, while matching SyncDreamer on SSIM. Moreover, adding each component (Epipolar and Plücker) gradually improves scores across the board, and leads to state-of-the-art performance with our full model. This confirms our main hypothesis that imparting 3D understanding to MV-DMs is helpful.

We also find that iNVS [39] achieves the highest PSNR, since it directly copies pixels from the source view (via depth-based reprojection). However, it performs notably worse on the SSIM and LPIPS metrics, which measure the structural and semantic accuracy of the generated view. This is due to deformations introduced by reprojection when viewpoint changes are large and the monocular depth from ZoeDepth [3] is inaccurate.

The official inference code of SyncDreamer always generates 16 views at fixed azimuth angles uniformly distributed in $[0^{\circ}, 360^{\circ}]$, which is incompatible with our random-view generation setup. We modified their code to take the exact target camera pose as model input, but found that it performed worse than choosing the prediction at the azimuth closest to the target azimuth. Therefore, we report SyncDreamer results using the closest view, where the error is usually smaller than $10^{\circ}$.

|
| 191 |
+
Figure 5. Qualitative comparison between SPAD and its variants. We prompt models trained on two views to generate four views at 90 degree intervals for clear visual distinctions. The flipped predicted views are highlighted with red circles, while the content-copying issues are indicated by blue circles.
|
| 192 |
+
|
| 193 |
+
# 4.4. Qualitative Analysis
|
| 194 |
+
|
| 195 |
+
We also conduct qualitative analysis to visually understand the usefulness of each component of our model. The text-conditioned multi-view generation results of baselines and SPAD are shown in Fig. 5, where all models are trained with two views while prompted to generate four views. The elevation is fixed for all the views, and the azimuth spans uniformly between $[0^{\circ}, 360^{\circ}]$ .
|
| 196 |
+
|
| 197 |
+
Epipolar Attention promotes better camera control in SPAD. We find that the vanilla (full) 3D self-attention used in Vanilla MV-DM and MV-DM (Plucker) models often leads to content copying. This is highlighted in the figure using blue circles, where the generated dogs face in similar direction, ignoring the target camera poses. We hypothesize that the readaptation of the self-attention layer of SD originally trained to attend only to itself hinders with generalization and controllability of this model.
|
| 198 |
+
|
| 199 |
+
Additionally, since these models are trained only to generate two views, we hypothesize that they overfit to predicting only two novel views. In contrast, Epipolar Attention constrains cross-view interactions to only happen between spatially related pixels, reducing the search space in establishing correspondences across images. Despite not being trained on four views, the model is still able to generate 3D consistent images by attending to the correct regions.
|
| 200 |
+
|
| 201 |
+
Plücker Embeddings help prevent generation of flipped views. When the difference in camera positions between two views is large, the epipolar lines introduce ambiguities in the ray directions. Indeed, Vanilla MV-DM and MV-DM (Epipolar) sometimes predict image regions that are rotated by $180^{\circ}$ . For example, the dog's head highlighted by red
|
| 202 |
+
|
| 203 |
+
circles looks in the opposite direction of the body, which is inconsistent with other views. Instead, Plücker Embeddings bias the model to pay less attention to camera views on opposite sides of the object, while leveraging more information from spatially closer views.
# 4.5. Text-to-3D Generation

Multi-view SDS. Inspired by [45, 73], we adopt multi-view Score Distillation Sampling (SDS) [59] to perform text-to-3D generation using the four-view SPAD variant. Concretely, we integrate our model into the state-of-the-art text-to-3D generation codebase threestudio [30], and follow a setup similar to MVDream [73] for stable NeRF [50] distillation. Fig. 7 shows multi-view rendered images of the trained NeRF models. We find that SPAD is able to reconstruct consistent geometry without the Janus problem, while maintaining good visual quality.

Multi-view Triplane Generator. Inspired by concurrent works [35, 43], we trained a multi-view-image-to-triplane generator on Objaverse. We closely followed the setup from Instant3D [43], and used four orthogonal views from SPAD to generate a NeRF in a single feed-forward pass. In total, this approach takes roughly 10 seconds to generate a single asset from a text prompt, which is more than two orders of magnitude faster than SDS optimization. Fig. 6 shows the results from this experiment. We thus find that SPAD can serve as a faithful base model to facilitate such generations.

![]()

Figure 6. Text-to-3D generation using a multi-view triplane generator with SPAD. Following [35, 43], we trained a multi-view-conditioned triplane generator that outputs a NeRF from four outputs of SPAD in a single feed-forward pass. We show the rendered NeRF in the top row (zoomed) and the corresponding multi-view outputs from SPAD in the bottom row. For full 360-degree videos, see our website.
# 5. Conclusion

In this paper, we propose SPAD, a novel framework for generating multiple views from text or image input. We propose to transform the self-attention layers of a pre-trained text-to-image diffusion model into Epipolar Attention to promote multi-view interactions and improve camera control. Moreover, we augment the self-attention layers with Plücker positional encodings to further improve camera control by preventing flipped view predictions of the object. We provide rigorous evaluations of these modifications and demonstrate state-of-the-art results in image-conditioned novel view synthesis.

Limitations and Future Work. While our method improves the 3D consistency of multi-view diffusion models, there remains substantial scope for improvement. For example, a larger Stable Diffusion model such as SDXL [58] could further improve performance while preventing lossy compression of the image conditioning. A monocular depth estimator, similar to iNVS [39], could further improve the correspondences established by epipolar self-attention. Finally, we plan to explore the use of SPAD to generate dynamic 4D assets and multi-object scenes. Additionally, datasets of monocular videos and pre-trained text-to-video generators can be explored to improve the quality and consistency of the generated results.

# Acknowledgments

We would like to thank Xuanchi Ren and Weize Chen for valuable discussions and support.
![]()

A bald eagle carved out of wood

![]()

A bichon frise wearing academic regalia

![]()

A blue poison-dart frog sitting on a water lily

![]()

A brightly colored mushroom growing on a log

![]()

A capybara wearing a top hat, low poly

![]()

A beautiful dress made out of garbage bags, on a mannequin. Studio lighting, high quality, high resolution

Figure 7. Text-to-3D generation using multi-view SDS with SPAD. We adopt the multi-view SDS proposed in MVDream [73] to train a NeRF model. Thanks to the 3D consistency of our model, we do not suffer from the multi-face Janus issue.

# References

[1] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. In Proceedings of the International Conference on Machine Learning, 2018. 2, 3
[2] Mohammadreza Armandpour, Huangjie Zheng, Ali Sadeghian, Amir Sadeghian, and Mingyuan Zhou. Re-imagine the negative prompt algorithm: Transform 2d diffusion into 3d, alleviate janus problem and beyond. arXiv preprint arXiv:2304.04968, 2023. 4
[3] Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias Müller. Zoedepth: Zero-shot transfer by combining relative and metric depth. arXiv preprint arXiv:2302.12288, 2023. 7
[4] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Blender Institute, Amsterdam, 2022. 6
[5] Tim Brooks, Aleksander Holynski, and Alexei A. Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 15
[6] Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, and Jian Ren. Real-time neural light field on mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[7] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021. 2, 3
[8] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 2, 3
[9] Eric R Chan, Koki Nagano, Matthew A Chan, Alexander W Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3d-aware diffusion models. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[10] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 2
[11] Eric Ming Chen, Sidhanth Holalkere, Ruyu Yan, Kai Zhang, and Abe Davis. Ray conditioning: Trading photo-consistency for photo-realism in multi-view image generation. arXiv preprint arXiv:2304.13681, 2023. 6
[12] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 4
[13] Shenchang Eric Chen and Lance Williams. View interpolation for image synthesis. In Special Interest Group on Computer Graphics and Interactive Techniques, 1993. 3
[14] Wenzheng Chen, Huan Ling, Jun Gao, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. Advances in Neural Information Processing Systems, 2019. 2, 3
[15] Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Fuji Tsang, Sameh Khamis, Or Litany, and Sanja Fidler. Dib-r++: learning to predict lighting and material with a hybrid differentiable renderer. Advances in Neural Information Processing Systems, 2021. 3
[16] Yiwen Chen, Chi Zhang, Xiaofeng Yang, Zhongang Cai, Gang Yu, Lei Yang, and Guosheng Lin. It3d: Improved text-to-3d generation with explicit view synthesis. arXiv preprint arXiv:2308.11473, 2023. 4
[17] Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Special Interest Group on Computer Graphics and Interactive Techniques, 1996. 3
[18] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3, 6
[19] Congyue Deng, Chiyu Jiang, Charles R Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, Dragomir Anguelov, et al. Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2, 4
[20] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 3
[21] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 2021. 2
[22] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In Proceedings of the IEEE International Conference on Robotics and Automation, 2022. 3, 6
[23] Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, and Angela Dai. Hyperdiffusion: Generating implicit neural fields with weight-space diffusion. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[24] Lin Geng Foo, Jia Gong, Hossein Rahmani, and Jun Liu. Distribution-aligned diffusion for human mesh recovery. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[25] Matheus Gadelha, Rui Wang, and Subhransu Maji. Multiresolution tree networks for 3d point cloud processing. In Proceedings of the European Conference on Computer Vision, 2018. 3
[26] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler. Get3d: A generative model of high quality 3d textured shapes learned from images. Advances in Neural Information Processing Systems, 2022. 2, 3
[27] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014. 2, 3
[28] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d aware generator for high-resolution image synthesis. In Proceedings of the International Conference on Learning Representations, 2022. 3
[29] Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. Nerfdiff: Single-image view synthesis with nerf-guided distillation from 3d-aware diffusion. In Proceedings of the International Conference on Machine Learning, 2023. 3
[30] Yuan-Chen Guo, Ying-Tian Liu, Ruizhi Shao, Christian Laforte, Vikram Voleti, Guan Luo, Chia-Hao Chen, Zixin Zou, Chen Wang, Yan-Pei Cao, and Song-Hai Zhang. threestudio: A unified framework for 3d content generation. https://github.com/threestudio-project/threestudio, 2023. 8
[31] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 4
[32] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 2017. 6
[33] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2022. 15
[34] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 2020. 2, 3, 4
[35] Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d, 2023. 8, 9
[36] Yukun Huang, Jianan Wang, Yukai Shi, Xianbiao Qi, Zheng-Jun Zha, and Lei Zhang. Dreamtime: An improved optimization strategy for text-to-3d content creation. arXiv preprint arXiv:2306.12422, 2023. 2, 4
[37] Yan-Bin Jia. Plücker coordinates for lines in the space. Problem Solver Techniques for Applied Computer Science, Com-S-477/577 Course Handout, 2020. 3
[38] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3
[39] Yash Kant, Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Guler, Jian Ren, Sergey Tulyakov, and Igor Gilitschenski. iNVS: Repurposing diffusion inpainters for novel view synthesis. In SIGGRAPH Asia 2023 Conference Papers, 2023. 2, 3, 5, 6, 7, 9, 15
[40] Animesh Karnewar, Andrea Vedaldi, David Novotny, and Niloy J Mitra. Holodiffusion: Training a 3d diffusion model using 2d images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[41] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 2, 3
[42] Juil Koo, Seungwoo Yoo, Minh Hieu Nguyen, and Minhyuk Sung. Salad: Part-level latent diffusion for 3d shape generation and manipulation. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[43] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. https://arxiv.org/abs/2311.06214, 2023. 8, 9
[44] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2, 4
[45] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 2, 3, 6, 7, 8, 15, 18
[46] Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang. Syncdreamer: Generating multiview-consistent images from a single-view image. arXiv preprint arXiv:2309.03453, 2023. 3, 6, 7, 16
[47] Tiange Luo, Chris Rockwell, Honglak Lee, and Justin Johnson. Scalable 3d captioning with pretrained models, 2023. 6
[48] Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, and Andrea Vedaldi. Realfusion: 360deg reconstruction of any object from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2, 4
[49] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 2, 3
[50] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Proceedings of the European Conference on Computer Vision, 2020. 2, 4, 8
[51] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE International Conference on Computer Vision, 2019. 2, 3
[52] Thu H Nguyen-Phuoc, Christian Richardt, Long Mai, Yongliang Yang, and Niloy Mitra. Blockgan: Learning 3d object-aware scene representations from unlabelled images. Advances in Neural Information Processing Systems, 2020. 3
[53] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 3
[54] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In Proceedings of the International Conference on Machine Learning, 2021. 2
[55] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021. 2, 3
[56] Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, and Aurelien Lucchi. Convolutional generation of textured 3d meshes. Advances in Neural Information Processing Systems, 2020. 3
[57] Dario Pavllo, Jonas Kohler, Thomas Hofmann, and Aurelien Lucchi. Learning generative models of textured 3d meshes from real-world images. In Proceedings of the IEEE International Conference on Computer Vision, 2021. 3
[58] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 9
[59] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In Proceedings of the International Conference on Learning Representations, 2023. 2, 3, 8
[60] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors. arXiv preprint arXiv:2306.17843, 2023. 2, 4
[61] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, 2021. 6
[62] Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, et al. Dreambooth3d: Subject-driven text-to-3d generation. arXiv preprint arXiv:2303.13508, 2023. 4
[63] Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, and David Novotny. Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. In Proceedings of the IEEE International Conference on Computer Vision, 2021. 3
[64] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 2, 3, 4, 6, 7
[65] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, 2015. 4
[66] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 2022. 2
[67] Mehdi SM Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, et al. Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 3
[68] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in Neural Information Processing Systems, 2016. 6
[69] Steven M Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006. 3
[70] Hoigi Seo, Hayeon Kim, Gwanghyun Kim, and Se Young Chun. Ditto-nerf: Diffusion-based iterative text to omnidirectional 3d model. arXiv preprint arXiv:2304.02827, 2023. 4
[71] Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, and Seungryong Kim. Let 2d diffusion model know 3d-consistency for robust text-to-3d generation. arXiv preprint arXiv:2303.07937, 2023. 4
[72] Qiuhong Shen, Xingyi Yang, and Xinchao Wang. Anything-3d: Towards single-view anything reconstruction in the wild. arXiv preprint arXiv:2304.10261, 2023. 4
[73] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 2, 3, 4, 5, 6, 7, 8, 10, 15, 16, 17
[74] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[75] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. Light field networks: Neural scene representations with single-evaluation rendering. Advances in Neural Information Processing Systems, 2021. 6
[76] Ivan Skorokhodov, Sergey Tulyakov, Yiqun Wang, and Peter Wonka. Epigraf: Rethinking training of 3d gans. Advances in Neural Information Processing Systems, 2022. 2
[77] Ivan Skorokhodov, Aliaksandr Siarohin, Yinghao Xu, Jian Ren, Hsin-Ying Lee, Peter Wonka, and Sergey Tulyakov. 3d generation on imagenet. In Proceedings of the International Conference on Learning Representations, 2023. 2
[78] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the International Conference on Machine Learning, 2015. 3, 4
[79] Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Generalizable patch-based neural rendering. In ECCV, 2022. 2, 3, 5
[80] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-) image-conditioned 3d generative models from 2d data. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[81] Qingyang Tan, Lin Gao, Yu-Kun Lai, and Shihong Xia. Variational autoencoders for deforming 3d mesh models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 3
[82] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior. 2023. 4
[83] Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, and Bharath Hariharan. Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881, 2023. 2
[84] Shitao Tang, Fuyang Zhang, Jiacheng Chen, Peng Wang, and Yasutaka Furukawa. Mvdiffusion: Enabling holistic multi-view image generation with correspondence-aware diffusion. Advances in Neural Information Processing Systems, 2023. 3
[85] Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, and Federico Tombari. Textmesh: Generation of realistic 3d meshes from text prompts. In Proceedings of the International Conference on 3D Vision, 2023. 4
[86] Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, and Johannes Kopf. Consistent view synthesis with pose-guided diffusion models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[87] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2
[88] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017. 4
[89] Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, and Sergey Tulyakov. R2l: Distilling neural radiance field to neural light field for efficient novel view synthesis. In Proceedings of the European Conference on Computer Vision, 2022. 3
[90] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2, 3
[91] Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, et al. Rodin: A generative model for sculpting 3d digital avatars using diffusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[92] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 2004. 6
[93] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems, 2023. 2, 4
[94] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. In Proceedings of the International Conference on Learning Representations, 2022. 3
[95] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020. 3
[96] Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. Advances in Neural Information Processing Systems, 2016. 2, 3
[97] Jinbo Wu, Xiaobo Gao, Xing Liu, Zhengyang Shen, Chen Zhao, Haocheng Feng, Jingtuo Liu, and Errui Ding. Hdfusion: Detailed text-to-3d generation leveraging multiple noise estimation. arXiv preprint arXiv:2307.16183, 2023. 4
[98] Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Learning descriptor networks for 3d shape synthesis and analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 2, 3
[99] Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, and Zhangyang Wang. Neurallift-360: Lifting an in-the-wild 2d photo to a 3d object with 360deg views. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 4
[100] Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, and Bolei Zhou. 3d-aware image synthesis via learning structural and textural representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 3
[101] Jason J Yu, Fereshteh Forghani, Konstantinos G Derpanis, and Marcus A Brubaker. Long-term photometric consistent novel view synthesis with diffusion models. In Proceedings of the IEEE International Conference on Computer Vision, 2023. 3
[102] Junyi Zhang, Charles Herrmann, Junhwa Hur, Luisa Polania Cabrera, Varun Jampani, Deqing Sun, and Ming-Hsuan Yang. A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. 2023. 2
[103] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. 6
[104] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 6
[105] Xuanmeng Zhang, Zhedong Zheng, Daiheng Gao, Bang Zhang, Pan Pan, and Yi Yang. Multi-view consistent generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022. 3
[106] Zhizhuo Zhou and Shubham Tulsiani. Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. 3
[107] Joseph Zhu and Peiye Zhuang. Hifa: High-fidelity text-to-3d with advanced diffusion guidance. arXiv preprint arXiv:2305.18766, 2023. 4
[108] C Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, and Richard Szeliski. High-quality video view interpolation using a layered representation. ACM Transactions on Graphics, 2004. 3
# A. More Implementation Details

DDIM Initialization. We use a black Gaussian blob on a white background to initialize the first 20 steps (out of 200) of our DDIM sampler, which ensures that our model correctly generates a single object placed at the center on a white background (similar to the images seen during training). This trick is similar to the one used in iNVS [39], which starts inpainting from a partial image warped via depth.
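A sketch of such an initialization image; the blob width `sigma` is a guess, since the paper does not specify the exact blob parameters:

```python
import torch

def gaussian_blob_image(h=256, w=256, sigma=0.3):
    # Black Gaussian blob centered on a white background, values in [0, 1].
    ys = torch.linspace(-1, 1, h).view(-1, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, -1).expand(h, w)
    blob = torch.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    return (1.0 - blob).unsqueeze(0).repeat(3, 1, 1)  # (3, h, w) RGB image
```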
We list our hyper-parameter choices and miscellaneous training details below:

<table><tr><td>Hyper-parameter</td><td>Value</td></tr><tr><td>Base learning rate</td><td>1e-4</td></tr><tr><td>Learning rate decay</td><td>None</td></tr><tr><td>Loss type</td><td>L2</td></tr><tr><td>Classifier-free guidance</td><td>7.5 (text-only)</td></tr><tr><td>Effective batch size</td><td>1152</td></tr><tr><td>DDIM steps</td><td>200</td></tr><tr><td>Gaussian blob initialization</td><td>20 steps</td></tr><tr><td>CLIP frozen</td><td>True</td></tr><tr><td>Render background color</td><td>White</td></tr><tr><td>Image resolution</td><td>256</td></tr><tr><td>Learning rate linear warmup</td><td>100 steps</td></tr></table>

Table 4. Hyperparameter choices for SPAD.
# B. Additional Experiments and Results

# B.1. Qualitative Results and Baseline Comparisons

Text-conditioned multi-view generations and comparison with MVDream [73]. Fig. 8 presents the results. We find that SPAD synthesizes images with higher-quality details and better alignment with the text prompt.

Image-conditioned novel view synthesis and comparison with Zero123 [45]. Fig. 9 presents the results. We find that SPAD preserves the structural and perceptual details of objects and exhibits better 3D consistency.

Close-viewpoint generations from SPAD. In Fig. 10, we show text-conditioned multi-view generations from SPAD where we increment the azimuth angle by 10 degrees per view. We find that SPAD can synthesize continuously moving views well, without content-copying issues.

# B.2. User Study comparing SPAD with MVDream

We conducted a user study on the visual quality, 3D consistency, and text alignment of multi-view generations. We distributed our questions via Amazon Mechanical Turk, where participants were given 4-view generations from SPAD and MVDream [73] and asked to choose the one that better satisfies the above properties. We found that SPAD is preferred over MVDream, with $59\%$ vs. $41\%$.

Exact Instructions: You are shown a text prompt and two sets of images corresponding to 4 different views of the same object. The views is front, left, right and back. Your task is to choose which of the sets of views is better, based on (1) consistency between different views (e.g it should represent the same object, have the same structure and colors) (2) looks better visually, (3) describes what is written in the text accurately, either Option A or Option B.
# B.3. Training with Stable Diffusion v2.1 Weights

The SPAD model we evaluated in the main paper and Appendix B.1 is initialized from the weights of Stable Diffusion (SD) v1.5. Here, we train another model initialized from the weights of the stronger SD v2.1 release. Fig. 12 presents the multi-view generation results of this model. Indeed, we observe better alignment with the text input, especially for longer and more complicated prompts.

This is also verified by the quantitative results. SPAD with SD v1.5 achieves a CLIP-score of $29.87 \pm 3.33$. SPAD with SD v2.1 achieves a better CLIP-score of $30.39 \pm 3.30$, which is also higher than MVDream [73] initialized from the same SD v2.1 weights $(30.22 \pm 3.83)$.
# B.4. Classifier-free Guidance
Classifier-free diffusion guidance [33] is a technique used to balance the quality and diversity of images produced by diffusion models. This method is particularly effective in class-conditional and text-conditional image generation, enhancing both the visual quality of images and their alignment with the given conditions. Inspired by [5], we explore the integration of classifier-free guidance with Epipolar Attention and Plücker Embedding. Implementing classifier-free guidance involves simultaneously training the diffusion model for both conditional and unconditional denoising; during inference, the score estimates of these models are merged. We have four different types of conditioning injected into our system:
- Text $(c_{T})$ : Injected from CLIP text-encoder similar to Vanilla Stable Diffusion.
- Camera $(c_{C})$ : Injected with timestep via Residual blocks.
- Epipolar Attention $(c_{E})$ : Injected by applying mask during self-attention.
- Plücker Embedding $(c_{P})$ : Injected by concatenation during self-attention.
During training, we extend classifier-free guidance over all these conditions. Our modified score estimate during inference is therefore:
$$
\begin{array}{l} \tilde {e _ {\theta}} \left(z _ {t}, c _ {T}, c _ {C}, c _ {E}, c _ {P}\right) = e _ {\theta} \left(z _ {t}, \varnothing , \varnothing , \varnothing , \varnothing\right) \\ + s _ {T} \cdot \left(e _ {\theta} \left(z _ {t}, c _ {T}, \varnothing , \varnothing , \varnothing\right) - e _ {\theta} \left(z _ {t}, \varnothing , \varnothing , \varnothing , \varnothing\right)\right) \\ + s _ {C} \cdot \left(e _ {\theta} \left(z _ {t}, c _ {T}, c _ {C}, \varnothing , \varnothing\right) - e _ {\theta} \left(z _ {t}, c _ {T}, \varnothing , \varnothing , \varnothing\right)\right) \\ + s _ {E} \cdot \left(e _ {\theta} \left(z _ {t}, c _ {T}, c _ {C}, c _ {E}, \varnothing\right) - e _ {\theta} \left(z _ {t}, c _ {T}, c _ {C}, \varnothing , \varnothing\right)\right) \\ + s _ {P} \cdot \left(e _ {\theta} \left(z _ {t}, c _ {T}, c _ {C}, c _ {E}, c _ {P}\right) - e _ {\theta} \left(z _ {t}, c _ {T}, c _ {C}, c _ {E}, \varnothing\right)\right) \\ \end{array}
$$
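
In code, this cascaded combination can be written as follows (a sketch; the `eps` dictionary keys are hypothetical names for the five forward passes):

```python
def cascaded_guidance(eps, s_T, s_C, s_E, s_P):
    # eps maps the set of active conditions to the predicted noise, e.g.
    # eps[""] is fully unconditional and eps["TCEP"] uses text, camera,
    # Epipolar Attention, and Pluecker Embedding together.
    e = eps[""]
    e = e + s_T * (eps["T"] - eps[""])
    e = e + s_C * (eps["TC"] - eps["T"])
    e = e + s_E * (eps["TCE"] - eps["TC"])
    e = e + s_P * (eps["TCEP"] - eps["TCE"])
    return e
```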
Outcome: As shown in Fig. 13, we find that classifier-free guidance beyond text conditioning does not provide additional benefits, and instead leads to over-saturated generations. This also aligns with our observations on MVDream.
# B.5. Joint Multi-View Inference
Concurrent multi-view diffusion models [46, 73] are limited at test time to generating the same number of views they were trained on. However, generating a high-quality 3D asset, e.g., by training a NeRF model, usually requires more than ten views of the asset. A naive solution is to use more views during training, which leads to quadratically increasing training costs due to the use of 3D self-attention. Instead, we propose a joint multi-view inference technique, which enables generating an arbitrary number of views using a model trained with fewer views.
Assume that we want to generate $M$ views with a two-view model. We first initialize $M$ noise maps $\{\pmb{x}_T^i\}_{i=1}^M$ , and then iteratively denoise all possible pairs of views:
$$
\left(\boldsymbol {x} _ {t - 1} ^ {i}, \boldsymbol {x} _ {t - 1} ^ {j}\right) = \operatorname {Denoise} \left(\boldsymbol {x} _ {t} ^ {i}, \boldsymbol {x} _ {t} ^ {j}, \epsilon_ {\theta}\right), \quad \forall i, j \in [ 1, M ], i \neq j. \tag {6}
$$
Since the model is only ever trained with two views at the same noise level (i.e., the same timestep $t$), we sample $(i,j)$ pairs without replacement and make sure to cover all possible combinations uniformly via a simple heuristic, as sketched below.
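
One concrete heuristic satisfying this (our illustration; the paper does not specify the exact schedule) is a round-robin tournament: each round partitions the $M$ views into $M/2$ disjoint pairs, and the $M-1$ rounds together cover every pair exactly once.

```python
def round_robin_pairs(M):
    # Circle method: M-1 rounds, each a perfect matching of the M views into
    # M/2 disjoint pairs; together the rounds cover every unordered pair once.
    assert M % 2 == 0
    views, rounds = list(range(M)), []
    for _ in range(M - 1):
        rounds.append([(views[i], views[M - 1 - i]) for i in range(M // 2)])
        views = [views[0], views[-1]] + views[1:-1]  # rotate all but the first
    return rounds

schedule = round_robin_pairs(8)               # e.g., M = 8 target views
pairs_at_step = lambda s: schedule[s % len(schedule)]
```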
Outcome: We find that this technique trades off some 3D consistency, as it only allows cross-view communication between two views at any given timestep of the generation.
# B.6. Fréchet Inception Distance (FID) Results
Compared to Vanilla MV-DM with an FID score of 55.25, our full model SPAD achieves a better FID score of 52.77, which provides further evidence of improved 2D generation quality.
FID comparison with MVDream. Since our model generates images at random views, it has a much larger pose-distribution mismatch than MVDream, which uses orthogonal (90-degree-separated) views in both the ground-truth and generated images. For this reason, our FID cannot be compared directly with that of MVDream (trained with v2.1), which is reported as 32.06 in the original work.
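
FID numbers like these can be reproduced with standard tooling; a minimal sketch using torchmetrics (our illustration with placeholder tensors, not the paper's evaluation pipeline; requires the torchmetrics image extras):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # Inception-v3 pool features
real = (torch.rand(64, 3, 256, 256) * 255).to(torch.uint8)  # placeholders
fake = (torch.rand(64, 3, 256, 256) * 255).to(torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # lower = closer to the real image distribution
```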


A small robot with a glass container on its head, metal legs, and a glass top


Futuristic space helmet

An axe with a red handle

A wooden chair


A medieval shield with a cross and wooden handle
An owl with a cat head


A small biplane flying in the air


A wooden-framed couch with purple upholstery
Figure 8. Comparison of text-conditioned multi-view generation with MVDream [73].

Figure 9. Comparison of image-conditioned novel view synthesis with Zero123 [45].

A white pickup truck

Red Fidget Spinner Model

A white marble Greek temple with columns and pillars

A blue muscle car

The US Capitol building with a white exterior and golden-yellow dome

A black sports car
Figure 10. Close-view generation results from SPAD. We generate images at continuous viewpoints with an offset of 10 degrees.

Figure 11. More multi-view generation results with SPAD. The tested model is initialized with the weights of Stable Diffusion v1.5 and fine-tuned on Objaverse rendered images (same as Fig. 1 in the main paper).

A medieval shield with a cross and wooden handle

A black futuristic space helmet with reflective surface

A small biplane flying in the air

A flying red dragon

Yellow teapot with a hat on top

An owl with a cat head

A wooden-framed couch with purple upholstery

A small stone fountain and cistern with leaves, accompanied by a stone pillar, wall, and old building
Figure 12. More multi-view generation results with SPAD. The tested model is initialized with the weights of Stable Diffusion v2.1 and fine-tuned on Objaverse rendered images. Compared to the results in Fig. 11, which use the weights of Stable Diffusion v1.5, this model is able to follow more complicated text prompts.

























































a blue poison-dart frog sitting on a water lily

Figure 13. Ablation study of the classifier-free guidance scales. Using a large scale of $s_T = 7.5$ for text conditioning works best (row 2), while increasing the scales for the camera embedding, Epipolar Attention, and Plücker Embedding all lead to over-saturated images.



a DSLR photo of a bald eagle



2402.05xxx/2402.05235/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b51a287d6c89d9e1fd9dc7252bc1c08b77d73d2451b080865f136103a4012e45
size 1839384

2402.05xxx/2402.05235/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_model.json
ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05239/fd2cf25f-4e25-4777-92e7-2a27541ade9f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e8577aa89219771cf9f907da84c5b23eb09cfd0c717ca9de41cc0e6b9654310
size 656913

2402.05xxx/2402.05239/full.md
ADDED
@@ -0,0 +1,703 @@
# Efficient approximate unitary designs from random Pauli rotations
Jeongwan Haah* Yunchao Liu** Xinyu Tan***
We construct random walks on simple Lie groups that quickly converge to the Haar measure for all moments up to order $t$ . Specifically, a step of the walk on the unitary or orthogonal group of dimension $2^n$ is a random Pauli rotation $e^{\mathrm{i}\theta P / 2}$ . The spectral gap of this random walk is shown to be $\Omega (1 / t)$ , which coincides with the best previously known bound for a random walk on the permutation group on $\{0,1\}^n$ . This implies that the walk gives an $\varepsilon$ -approximate unitary $t$ -design in depth $\mathcal{O}(\mathsf{n}t^{2} + t\log 1 / \varepsilon)d$ where $d = \mathcal{O}(\log \mathsf{n})$ is the circuit depth to implement $e^{\mathrm{i}\theta P / 2}$ . Our simple proof uses quadratic Casimir operators of Lie algebras.
# 1 Introduction
An approximate unitary $t$ -design [DCEL09; GAE07] is an ensemble of unitaries that behaves similarly to the Haar random ensemble up to $t$ -th moments. For $n$ -qubit $(\mathbb{C}^2)$ systems, there have been constructions of approximate unitary $t$ -designs with circuit size $\mathrm{poly}(\mathfrak{n},t)$ [BHH16; Haf22], which have found wide applications in quantum information theory. However, existing constructions using local random quantum circuits had rather steep dependence on $t$ . In this paper, we consider random Pauli rotations, which are $\exp (\mathrm{i}\theta P / 2)$ where $\theta$ is a random angle and $P$ is a random $n$ -qubit Pauli operator. We show that the product of $k$ independent random Pauli rotations $e^{\mathrm{i}\theta_kP_k / 2}\dots e^{\mathrm{i}\theta_2P_2 / 2}e^{\mathrm{i}\theta_1P_1 / 2}$ converges to a unitary $t$ -design as $k$ increases.
Theorem 1.1. For any integers $\mathfrak{n}, t \geq 1$ , it holds that

$$
\left\| \underset {\theta \sim (- \pi , \pi)} {\mathbb {E}} \underset {P \sim \mathbf {P} _ {\mathfrak {n}}} {\mathbb {E}} \left(e ^ {\mathrm {i} \frac {\theta}{2} P} \otimes e ^ {- \mathrm {i} \frac {\theta}{2} \bar {P}}\right) ^ {\otimes t} - \underset {U \sim \mathsf {S U} (2 ^ {\mathfrak {n}})} {\mathbb {E}} (U \otimes \bar {U}) ^ {\otimes t} \right\| \leq 1 - \frac {1}{4 t} - \frac {1}{4 ^ {\mathfrak {n}} - 1}. \tag {1}
$$

Here, $\mathbf{P}_{\mathfrak{n}} = \{\mathbf{1}_2,\sigma^x,\sigma^y,\sigma^z\}^{\otimes \mathfrak{n}}\setminus \{\mathbf{1}_{2^{\mathfrak{n}}}\}$ is the set of all nonidentity $\mathfrak{n}$-qubit Pauli operators, the norm denotes the greatest singular value, $\bar{U}$ denotes the complex conjugate of $U$, and the distributions for $P$, $\theta$, and $U$ are uniform over the designated domains.
In addition, for any finite dimensional unitary representation $\rho$ of $\mathsf{SU}(2^n)$ , we have

$$
\left\| \underset {\theta \sim (- 2 \pi , 2 \pi)} {\mathbb {E}} \underset {P \sim \mathbf {P} _ {n}} {\mathbb {E}} \rho \left(e ^ {\mathrm {i} \frac {\theta}{2} P}\right) - \underset {U \sim \mathsf {S U} (2 ^ {n})} {\mathbb {E}} \rho (U) \right\| \leq 1 - \frac {1}{4 ^ {n} - 1}. \tag {2}
$$

Corollary 1.2. Consider two mixed unitary channels

$$
\mathcal {C} _ {t}: \eta \mapsto \underset {P \sim \mathbf {P} _ {\mathfrak {n}}, \theta \sim (- \pi , \pi)} {\mathbb {E}} \left(e ^ {\mathrm {i} \frac {\theta}{2} P}\right) ^ {\otimes t} \eta \left(e ^ {- \mathrm {i} \frac {\theta}{2} P}\right) ^ {\otimes t} \quad \text {and} \quad \mathcal {H} _ {t}: \eta \mapsto \underset {U \sim \mathsf {S U} (2 ^ {\mathfrak {n}})} {\mathbb {E}} U ^ {\otimes t} \eta U ^ {\dagger \otimes t} \tag {3}
$$
using the same distribution of $P, \theta$ and $U$ as in Eq. (1). Then,

$$
\left\| \mathcal {C} _ {t} ^ {k} - \mathcal {H} _ {t} \right\| _ {\diamond} \leq \varepsilon \quad \text {if} \quad k \geq (4 \log 2) \mathsf {n} t ^ {2} + 4 t \log \frac {1}{\varepsilon}, \tag {4}
$$

$$
(1 - \varepsilon) \mathcal {H} _ {t} \preceq \mathcal {C} _ {t} ^ {k} \preceq (1 + \varepsilon) \mathcal {H} _ {t} \quad \text {if} \quad k \geq (4 \log 8) \mathsf {n} t ^ {2} + 4 t \log \frac {1}{\varepsilon}.
$$
Here, $\| \cdot \|_{\diamond}$ denotes the diamond norm (completely bounded norm). Every instance $e^{\mathrm{i}\theta_kP_k / 2}\dots e^{\mathrm{i}\theta_1P_1 / 2}$ can be implemented using $\mathcal{O}(k\mathfrak{n})$ 1-qubit and any-to-any CNOT gates in depth $\mathcal{O}(k\log \mathfrak{n})$ .
We also give similar results for the special orthogonal groups in Section 6.
# 1.1 Previous spectral gap bounds
Unless otherwise noted, $\mathsf{N}$ stands for $2^{\mathsf{n}}$ .
For a distribution $\nu$ over $\mathsf{SU}(\mathsf{N})$ , the spectral gap $\Delta(\nu, t)$ at $t$ -th order<sup>1</sup> is given by

$$
1 - \Delta (\nu , t) = \left\| \underset {U \sim \nu} {\mathbb {E}} (U \otimes \bar {U}) ^ {\otimes t} - \underset {U \sim \mathrm {S U} (\mathrm {N})} {\mathbb {E}} (U \otimes \bar {U}) ^ {\otimes t} \right\|. \tag {5}
$$

Consider the distribution of the product of $k$ independent draws from $\nu$ , which corresponds to the $k$ -fold convolution $\nu^{*k}$ . Then, since $\mathcal{H}_t = \mathbb{E}_{U \sim \mathsf{SU}(\mathsf{N})}(U \otimes \bar{U})^{\otimes t}$ is a projector and $\mathbb{E}_{U \sim \nu}(U \otimes \bar{U})^{\otimes t}$ contains the image of $\mathcal{H}_t$ in the eigenspace of eigenvalue $+1$ (see Proposition 2.4), the spectral gap amplifies as $1 - \Delta(\nu^{*k}, t) = (1 - \Delta(\nu, t))^k$ . This allows us to exponentially improve the accuracy at the cost of linear blow-up in circuit size. More generally, proving lower bounds on the spectral gap of the $t$ -th moment operator is a standard approach to show that a random walk on a group quickly converges to a $t$ -wise independent distribution (often referred to as "designs").
Hence, a primary goal in efficient approximate unitary designs is to find $\nu$ with $\mathrm{poly}(\mathfrak{n})$ circuit size with a large spectral gap, for example, $1 / \mathrm{poly}(\mathfrak{n},t)$ . A simple brickwall "spacetime" geometry of random unitary circuit has been shown to achieve this goal [BHH16], whose analysis was recently improved [Haf22]. Once the operator norm distance is bounded, one can convert it to additive or relative diamond distance.
As far as we know, the best previous spectral gap for any efficient approximate unitary design on an $\mathfrak{n}$ -qubit system was $\Omega(t^{-4 - o(1)})$ [Haf22]. This work takes the ensemble of [BHH16], where the circuit geometry is brickwall that uses local gates in a one-dimensional lattice. Our ensemble does not have any geometric locality. Note that Theorem 1.1 gives a lower bound $4^{-\mathfrak{n}}$ on the spectral gap independent of $t$ . Such a $t$ -independent bound was also given in [Haf22, Theorem 1], which reads $\Omega(\mathfrak{n}^{-5}4^{-\mathfrak{n}})$ .
Similarly to unitary designs, the best previous spectral gap lower bound for the special orthogonal group $\mathsf{SO}(\mathsf{N})$ had a large inverse-polynomial dependence on $t$ [OSP23], while the best previous spectral gap for the symmetric group $S_{\mathbb{N}}$ was $\Omega (t^{-1})$ [BH08].<sup>2</sup> Our spectral gap bounds for the special unitary and orthogonal groups are thus the strongest in terms of $t$ dependence, and they coincide with the best known spectral gap for the symmetric group.
Some results on unitary designs bypass spectral gap analysis. Aiming to minimize non-Clifford resources, [HMH+22] analyzed alternating "Clifford $+K$" circuits and bounded the diamond distance of the associated mixed unitary channel to the Haar random channel directly. Compared with the brickwall circuits, our construction is conceptually closer to [HMH+22]. However, their result is only applicable in the regime where $t = \mathcal{O}(\sqrt{\mathfrak{n}})$.
# 1.2 Implications
Circuit complexity. By known reductions [BCH+21], our result directly implies a lower bound for robust quantum circuit complexity of a product of $k$ random Pauli rotations. Specifically, let $U$ be a product of $k \ll 2^n$ random Pauli rotations, which can be implemented by $\mathcal{O}(k\mathfrak{n})$ gates. Then with high probability over the choice of $U$ , any unitary $V$ satisfying $\| U - V \| \leq 0.01$ must have quantum circuit complexity (the minimum number of gates to implement $V$ ) $\tilde{\Omega}(\sqrt{k\mathfrak{n}})$ . Note that
a robust square root circuit complexity lower bound was also established in [Haf23]; however, the family of quantum circuits considered there used a non-universal gate set, and therefore does not form an approximate unitary design. A major open question is whether it is possible to construct distributions on $\mathsf{SU}(2^{\mathsf{n}})$ using poly(n) size quantum circuits, such that the spectral gap is at least $1 / \mathrm{poly}(\mathsf{n})$ and independent of $t$ . Such a result would imply a robust linear growth of quantum circuit complexity.
Seed length. Our unitary design requires sampling from a continuous interval $(-\pi, \pi)$ ; however, for given $t$ , we can instead sample uniformly from a discrete set $\{m\pi / t : m \in \mathbb{Z} \cap [-t, t - 1]\}$ (see Appendix A). Therefore, our distribution for $\varepsilon$ -approximate unitary $t$ -design is samplable using only $\mathcal{O}(t(\mathfrak{nt} + \log 1 / \varepsilon)(\mathfrak{n} + \log t))$ random bits. Furthermore, instead of sampling each random Pauli rotation independently and uniformly at random, we can sample them in a pseudorandom way using a technique of [OSP23] which is a generalization of the derandomized graph squaring [RV05]. We can thus reduce the seed length to only $\mathcal{O}(\mathfrak{nt} + \log 1 / \varepsilon)$ by applying [OSP23, Theorem 6.21]. While this has the same scaling as the main result of [OSP23], our construction has the advantage of having explicit constants, as we do not rely on the implicit spectral gap of [BG12].
Orthogonal designs and more. Our approach to unitary designs can be adapted to the special orthogonal groups $\mathsf{SO}(\mathsf{N})$ with parallel arguments. The results are found in Section 6. The seed length can be similarly reduced to $\mathcal{O}(\mathsf{nt} + \log 1 / \varepsilon)$ with explicit constants. This has been used to construct pseudorandom generators for halfspaces [OSP23]. Finally, we discuss quantum state designs in Appendix B, where we obtain better bounds than what would be obtained by directly applying our unitary design.
The analysis of the orthogonal groups is so similar to that of the unitary groups that one might desire to have unified statements for all simple finite dimensional Lie groups. However, as the representation theory of Lie groups is tackled in a case-by-case fashion in detail, we find it best to analyze them separately. Beyond the unitary and orthogonal groups, there is a family of symplectic groups, which might have applications in classical Hamiltonian dynamics and quantum optics as one often encounters symplectic spaces in these subjects.
# 1.3 Overview of the argument
We start by rewriting the tensor product in a different form, $\left(e^{\mathrm{i}\frac{\theta}{2} P}\right)^{\otimes t}\otimes \left(e^{-\mathrm{i}\frac{\theta}{2}\bar{P}}\right)^{\otimes t} = e^{\mathrm{i}\theta \tau_{*}(P / 2)}$ where

$$
\tau_ {*} (P / 2) = \frac {1}{2} \sum_ {j = 1} ^ {t} \left(\mathbf {1} _ {\mathbb {N}} ^ {\otimes (j - 1)} \otimes P \otimes \mathbf {1} _ {\mathbb {N}} ^ {\otimes (2 t - j)} - \mathbf {1} _ {\mathbb {N}} ^ {\otimes (t + j - 1)} \otimes \bar {P} \otimes \mathbf {1} _ {\mathbb {N}} ^ {\otimes (t - j)}\right). \tag {6}
$$

Note that for every $P \in \mathbf{P}_{\mathfrak{n}}$ , the eigenvalues of $\tau_{*}(P / 2)$ are exactly the integers in $[-t, t]$ . Thus, the averaging over $\theta$ gives $\mathbb{E}_{\theta \sim (-\pi, \pi)} e^{\mathrm{i}\theta \tau_{*}(P / 2)} = K_{P}$ , where $K_{P}$ denotes the orthogonal projector onto the kernel of $\tau_{*}(P / 2)$ . Our goal is now reduced to analyzing the spectrum of $\mathbb{E}_{P \sim \mathbf{P}_{\mathfrak{n}}} K_{P}$ . We calculate the norm of this exactly for the special case of $\mathfrak{n} = 1$ in Section 4. In general cases, we first block-diagonalize $K_{P}$ using the observation that $P \mapsto \tau_{*}(P)$ is a Lie algebra representation. In each irrep, we upper bound the kernel projector by a quadratic approximation:

$$
K (H) \preceq \mathbf {1} - H ^ {2} / \| H \| ^ {2} \tag {7}
$$

where $K(H)$ is the kernel projector for a Hermitian operator $H$ , which holds for any nonzero $H$ . This inequality is useful because the kernel projector sum is then bounded by a sum of squares of
represented operators. A nice property of Pauli operators is that this sum of squares of represented operators is a scalar multiple of the identity in any irrep. We then bound the scalar in terms of $t$ and $\mathsf{N}$.
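
Eq. (7) is easy to verify numerically; a small sketch (ours, not from the paper) for a generic Hermitian operator with a nontrivial kernel:

```python
import numpy as np

def kernel_projector(H, tol=1e-9):
    vals, vecs = np.linalg.eigh(H)
    V = vecs[:, np.abs(vals) < tol]
    return V @ V.conj().T

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6)))
H = Q @ np.diag([0.0, 0.0, 0.7, -1.3, 2.0, -2.0]) @ Q.conj().T  # kernel dim 2

lhs = kernel_projector(H)
rhs = np.eye(6) - H @ H / np.linalg.norm(H, 2) ** 2
assert np.all(np.linalg.eigvalsh(rhs - lhs) > -1e-9)  # K(H) <= 1 - H^2/||H||^2
```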
We use the representation theory of Lie algebras, but our exposition is elementary for the core bound in Theorem 1.1; we assume no prior knowledge beyond the representation theory of $\mathfrak{su}(2)$ for the main bound.
Note added. We recently became aware of independent related work of A. Bouland, C. Chen, J. Docter, P. Hayden, and M. Xu, achieving similar results via a different construction [BCD+24].
# 2 Lie algebras and probability distributions
We begin with an observation that any unitary design can be regarded as a distribution on the linear space of a Lie algebra. This will allow us to analyze spectral properties of a unitary design by looking at certain hermitian operators in irreducible representations of $\mathfrak{su}(\mathsf{N} = 2^{\mathsf{n}})$ . We will find the latter more convenient since our unitary design will have the most succinct description as a distribution on the Lie algebra, rather than on the Lie group.
Often a Lie algebra is described by very concrete data, called structure constants, $f_{bc}^{a}$ , that enter in the commutation relations as $[J_b, J_c] = \mathrm{i}\sum_a f_{bc}^a J_a$ where $J_{a}$ are said to span the Lie algebra. While this is mostly correct and causes no trouble in practice, the appearance of the imaginary unit i might bring some confusion. So, we would like to clarify the complex and real coefficients. The tangent space at the origin of a Lie group, taken as a real manifold, is a real Lie algebra $\mathfrak{su}(\mathbb{N};\mathbb{R})$ . This is a linear space over real numbers of traceless antihermitian matrices where a Lie bracket is defined by matrix commutator; after all, the commutator of two hermitian operators is antihermitian, which is not in the $\mathbb{R}$ -linear space of hermitian operators. However, since a representation space is taken to be a complex vector space, there is no reason not to allow complex coefficients in the span of antihermitian operators. This extension of the coefficient field is formally called the complexification of the Lie algebra: $\mathbb{C} \otimes_{\mathbb{R}} \mathfrak{su}(\mathbb{N};\mathbb{R})$ . This complexified space consists of all $\mathbb{C}$ -linear combinations of traceless antihermitian operators, which is the $\mathbb{C}$ -linear space of all traceless matrices. Hence, the complexification is perhaps better denoted as $\mathfrak{sl}(\mathbb{N};\mathbb{C}) = \mathbb{C} \otimes_{\mathbb{R}} \mathfrak{su}(\mathbb{N};\mathbb{R})$ , the Lie algebra of special linear group. In this paper, we take a liberal convention that
- when $\mathfrak{su}(N)$ appears in the context of representation, we mean its complexification $\mathfrak{sl}(N;\mathbb{C})$ , and
- when we discuss a probability distribution on $\mathfrak{su}(\mathsf{N})$ , we mean a distribution on the real vector space of hermitian, rather than antihermitian, operators, with insertion of the imaginary unit i, whenever needed, understood.
Suppose we have an $M$ -dimensional representation $\rho: \mathsf{SU}(\mathsf{N}) \to \mathsf{U}(M)$ of $\mathsf{SU}(\mathsf{N})$ for some $M \geq 1$ , which may be reducible. The representation map $\rho$ is a Lie group homomorphism, and we have a corresponding commutative diagram [FH13, §8.3] by the exponential map:

$$
\begin{array}{l} \mathfrak {s u} (N) \xrightarrow {\rho_ {*}} \mathfrak {u} (M) \\ \Biggl \downarrow \exp \quad \Biggl \downarrow \exp \\ S U (N) \xrightarrow {\rho} U (M) \end{array} \tag {8}
$$

where $\rho_{*}$ is the induced, natural, Lie algebra homomorphism (a representation). In the context of unitary designs, we are interested in the tensor representation $\tau : \mathsf{SU}(\mathsf{N}) \ni U \mapsto (U \otimes \bar{U})^{\otimes t}$ so
$M = \mathbb{N}^{2t}$ , whose induced Lie algebra representation $\tau_{*}$ is given by

$$
\tau_ {*} \left(\frac {P}{2}\right) = \frac {1}{2} \sum_ {j = 1} ^ {t} \left(\mathbf {1} _ {\mathrm {N}} \otimes \mathbf {1} _ {\mathrm {N}}\right) ^ {\otimes (j - 1)} \otimes \left(P \otimes \mathbf {1} _ {\mathrm {N}} - \mathbf {1} _ {\mathrm {N}} \otimes \bar {P}\right) \otimes \left(\mathbf {1} _ {\mathrm {N}} \otimes \mathbf {1} _ {\mathrm {N}}\right) ^ {\otimes (t - j)} \tag {9}
$$

for every traceless $\mathsf{N}\times \mathsf{N}$ matrix $P$.
We define

$$
\mathbf {P} _ {\mathfrak {n}} = \left\{\mathbf {1} _ {2}, \sigma^ {x}, \sigma^ {y}, \sigma^ {z} \right\} ^ {\otimes \mathfrak {n}} \backslash \left\{\mathbf {1} _ {2 ^ {\mathfrak {n}}} \right\}, \tag {10}
$$

the set of all nonidentity tensor products of Pauli matrices

$$
\sigma^ {x} = \left( \begin{array}{l l} 0 & 1 \\ 1 & 0 \end{array} \right), \quad \sigma^ {y} = \left( \begin{array}{c c} 0 & - \mathrm {i} \\ \mathrm {i} & 0 \end{array} \right), \quad \sigma^ {z} = \left( \begin{array}{c c} 1 & 0 \\ 0 & - 1 \end{array} \right). \tag {11}
$$

There are $\mathsf{N}^2 - 1 = 4^\mathsf{n} - 1$ elements, each of which is called a Pauli operator. The factor of half in Eq. (9) is immaterial in that equation since $\tau_*$ is $\mathbb{C}$-linear, but we will keep it because of the following lemma.
Lemma 2.1. For any Pauli operator $P \in \mathbf{P}_{\mathfrak{n}}$ , all the eigenvalues of $\tau_{*}(P / 2)$ are integers in $[-t, t]$ , and any integer in that range appears as an eigenvalue of $\tau_{*}(P / 2)$ .
Proof. All the summands of Eq. (9) commute with each other, so they are simultaneously diagonalizable, which amounts to setting $P = \sigma^z \otimes \mathbf{1}_{2^{n-1}}$, a diagonal matrix with $\pm 1$ on the diagonal. The lemma follows. Alternatively, we can think of $\tau$ as $(\mathbf{1} \oplus \mathsf{ad})^{\otimes t} = \bigoplus_{k=0}^{t} \binom{t}{k} \mathsf{ad}^{\otimes k}$ where $\mathbf{1}$ denotes the trivial representation and $\mathsf{ad}$ is the adjoint representation. This is because the tensor product of the defining representation of $\mathfrak{su}(N)$ and its dual is a direct sum of the trivial and the adjoint, each of which is irreducible. The represented operator $\mathsf{ad}(P/2)$ has eigenvalues $0, \pm 1$, and therefore $P/2$ is represented on each $\mathsf{ad}^{\otimes k}$ ($k \leq t$) with integer eigenvalues in $[-t, t]$.
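
As a sanity check (our own illustration, not from the paper), one can build $\tau_{*}(P/2)$ from Eq. (9) for small $\mathfrak{n}$ and $t$ and confirm the integer spectrum:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def tau_star_half(P, t):
    # tau_*(P/2) from Eq. (9): one (P x 1 - 1 x conj(P))/2 block per slot.
    N = P.shape[0]
    slot = (np.kron(P, np.eye(N)) - np.kron(np.eye(N), P.conj())) / 2
    d = N * N
    terms = [reduce(np.kron, [np.eye(d)] * j + [slot] + [np.eye(d)] * (t - 1 - j))
             for j in range(t)]
    return sum(terms)

P = np.kron(X, Z)                      # a nonidentity 2-qubit Pauli
H = tau_star_half(P, t=2)              # acts on a 256-dimensional space
eigs = np.round(np.linalg.eigvalsh(H)).astype(int)
assert set(eigs) == set(range(-2, 3))  # Lemma 2.1: all integers in [-t, t]
```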
We note that all Pauli operators $P$ are equivalent to one another in any representation:
Lemma 2.2. For any representation $\rho_{*}$ of $\mathfrak{su}(\mathsf{N})$ that is possibly reducible, and any nonidentity hermitian Pauli operator $P$ , the eigenvalue spectrum of the represented operator $\rho_{*}(P)$ is independent of $P$ . In particular, the eigenvalue spectrum of $\rho_{*}(P)$ is inversion symmetric about the origin; that is, $\rho_{*}(P)$ and $-\rho_{*}(P)$ have the same spectrum.
Proof. Any two nonidentity Pauli operators $P, Q$ on $\mathfrak{n}$ qubits are congruent: $P = UQU^{\dagger}$ for some $U \in \mathsf{SU}(\mathsf{N})$. Exponentiating with $\theta \in \mathbb{R}$, we have $e^{\mathrm{i}\theta P} = Ue^{\mathrm{i}\theta Q}U^{\dagger}$, and thus $\rho (e^{\mathrm{i}\theta P}) = \rho (U)\rho (e^{\mathrm{i}\theta Q})\rho (U)^{\dagger}$. By Eq. (8) this translates to $e^{\mathrm{i}\theta \rho_{*}(P)} = \rho (U)e^{\mathrm{i}\theta \rho_{*}(Q)}\rho (U)^{\dagger}$. Differentiating with respect to $\theta$, we finally have $\rho_{*}(P) = \rho (U)\rho_{*}(Q)\rho (U)^{\dagger}$. The last claim is because $P$ and $-P$ are congruent by conjugation with some anticommuting Pauli operator.
Now, we can consider probability distributions on $\mathfrak{su}(\mathsf{N})$ and their induced distributions on $\mathsf{SU}(\mathsf{N})$ . For example, to assess a unitary design we have to analyze the distribution on $\mathsf{U}(\mathsf{N}^{2t})$ for various values of $t$ induced by the tensor representation $\tau$ . For a probability distribution $\mu$ on the top left of the diagram Eq. (8), we have corresponding distributions on all three other entries. For any distribution $\mu$ on $\mathfrak{su}(\mathsf{N})$ , we denote an average with respect to $\mu$ by $\int_{\mathfrak{su}(\mathsf{N})}\dots \mu (X)\mathrm{d}X$ where $X$
denotes any hermitian operator. In other words, $X \mapsto \mu(X)$ is the probability density "function." For any distribution $\mu$ on $\mathfrak{su}(\mathsf{N})$ and any integer $t \geq 1$ we consider a linear operator on $(\mathbb{C}^{\mathsf{N}})^{\otimes 2t}$

$$
\mathcal {C} _ {\mu , t} = \int_ {\mathfrak {s u} (\mathsf {N})} \exp (\mathrm {i} X) ^ {\otimes t} \otimes \exp (- \mathrm {i} \bar {X}) ^ {\otimes t} \mu (X) \mathrm {d} X. \tag {12}
$$

An obvious lemma will be useful:
Lemma 2.3. If $\phi : \mathsf{SU}(\mathsf{N}) \to \operatorname{Aut}(V)$ for $V \subseteq (\mathbb{C}^{\mathbb{N}})^{\otimes 2t}$ is a subrepresentation of $\tau : U \mapsto (U \otimes \bar{U})^{\otimes t}$ , then

$$
\left. \mathcal {C} _ {\mu , t} \right| _ {V} = \int_ {\mathfrak {su} (\mathsf {N})} \exp \left(\mathrm {i} \phi_ {*} (X)\right) \mu (X) \mathrm {d} X. \tag {13}
$$

Proof. $\phi$ is a Lie group representation, so the claim follows from the commutative diagram Eq. (8). The assumption that $\phi$ is a subrepresentation of $\tau$ is not actually needed here.
The Haar probability distribution on $\mathsf{SU}(\mathsf{N})$ , which we denote as $\mathrm{d}U$ , does give a distribution on $\mathfrak{su}(\mathsf{N})$ using the fact that the exponential map is one-to-one on the open ball of radius $\pi$ at the origin in the Schatten $\infty$ -norm and is almost onto from that restricted domain, but this is not very enlightening. However, relevant averages can be succinctly described in terms of subrepresentations.
Proposition 2.4. For any finite dimensional unitary representation $\rho$ of a compact Lie group $G$ , the integral $\int_{G} \rho(U) \, \mathrm{d}U$ with respect to the Haar measure is the orthogonal projector onto the trivial subrepresentation subspace of $\rho$ . In particular, for any integer $t \geq 1$ the Haar average

$$
\mathcal {H} _ {t} = \int_ {\mathrm {S U} (\mathrm {N})} (U \otimes \bar {U}) ^ {\otimes t} \mathrm {d} U \tag {14}
$$

is the orthogonal projector onto the trivial subrepresentation subspace of $\tau : U \mapsto (U \otimes \bar{U})^{\otimes t}$ within $(\mathbb{C}^{\mathsf{N}})^{\otimes 2t}$ .
Note that in Theorem 1.1 we denoted the Haar random mixed unitary channel by $\mathcal{H}_t$ , but here we overload the notation to mean its linearized map. The representation $\rho$ does not have to be finite dimensional, but we do not discuss any infinite dimensional representations in this paper.
Proof. Let $\mathcal{H} = \int_G\rho (U)\mathrm{d}U$ . Since the Haar measure is left invariant, the represented unitary $\rho (V)$ for any $V\in G$ acts by the identity on the image of $\mathcal{H}$ . It follows that $\mathcal{H}^2 = \mathcal{H}$ . Since $\rho$ is a unitary representation, $\mathcal{H}^\dagger = \int_G\rho (U^{-1})\mathrm{d}U$ . Since $U\mapsto U^{-1}$ is a measure-preserving homeomorphism of $G$ onto itself, $\mathcal{H}^\dagger = \mathcal{H}$ . Let $\mathcal{V}$ be the representation space. The trivial representation subspace is $\mathcal{T} = \{v\in \mathcal{V}:\rho (U)v = v,\forall U\in G\}$ . If $v = \mathcal{H}w$ for some $w\in \mathcal{V}$ , then $\rho (U)v = \rho (U)\mathcal{H}w = \mathcal{H}w = v$ , and hence $v\in \mathcal{T}$ . So, the image of $\mathcal{H}$ is contained in the trivial representation. If $v\in \mathcal{T}$ , then $\mathcal{H}v = v$ , showing that $v$ is in the image of $\mathcal{H}$ .
# 3 Random Pauli rotations
Now we consider more concrete distributions on $\mathfrak{su}(\mathsf{N})$ where $\mathsf{N} = 2^{\mathsf{n}}$ for an integer $\mathsf{n}\geq 1$ .
Definition 3.1. For any $P \in \mathbf{P}_{\mathfrak{n}}$ we define a distribution, called a random Pauli rotation by $P$ , as the uniform probability measure on $\{\mathrm{i}\theta P / 2 \in \mathfrak{su}(\mathsf{N};\mathbb{R}) \mid \theta \in (-\pi ,\pi) \subset \mathbb{R}\}$ . A random Pauli rotation with respect to a discrete probability distribution $\{(P,\operatorname*{Pr}[P])|P \in \mathbf{P}_{\mathfrak{n}}\}$ on $\mathbf{P}_{\mathfrak{n}}$ is the probabilistic mixture $\sum_{P} \operatorname*{Pr}[P] \mu_{P}$ of random Pauli rotations $\mu_{P}$ by $P$ .
We will only use the uniform distribution over $\mathbf{P}_{\mathfrak{n}}$ , but it may be helpful to proceed with a general distribution on $\mathbf{P}_{\mathfrak{n}}$ .
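
For concreteness, here is a minimal sketch (ours) of sampling one step of the walk as a dense matrix, usable only for small $\mathfrak{n}$; it uses the identity $e^{\mathrm{i}\theta P/2} = \cos(\theta/2)\mathbf{1} + \mathrm{i}\sin(\theta/2)P$ noted in Remark 3.3 below.

```python
import numpy as np
from functools import reduce

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.diag([1.0, -1.0]).astype(complex)]

def random_pauli_rotation(n, rng):
    # Uniform nonidentity Pauli string and uniform angle in (-pi, pi).
    idx = rng.integers(0, 4, size=n)
    while not idx.any():                 # reject the all-identity string
        idx = rng.integers(0, 4, size=n)
    P = reduce(np.kron, [PAULIS[i] for i in idx])
    theta = rng.uniform(-np.pi, np.pi)
    # P^2 = 1, so exp(i*theta/2 * P) = cos(theta/2) I + i sin(theta/2) P.
    return np.cos(theta / 2) * np.eye(2 ** n) + 1j * np.sin(theta / 2) * P

rng = np.random.default_rng(1)
U = reduce(np.matmul, [random_pauli_rotation(3, rng) for _ in range(10)])
```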
Lemma 3.2. For a random Pauli rotation $\mu = \sum_{P} \operatorname{Pr}[P] \mu_{P}$ , the average operator $\mathcal{C}_{\mu, t}$ in Eq. (12) restricted to a subrepresentation $\phi_{*}: \mathfrak{su}(\mathsf{N}) \to \mathrm{Aut}(V)$ of $\mathfrak{su}(\mathsf{N})$ within the tensor representation $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$ , simplifies as

$$
\left. \mathcal {C} _ {\mu , t} \right| _ {V} = \int_ {\mathfrak {s u} (\mathsf {N})} \exp (\mathrm {i} \phi_ {*} (X)) \mu (X) \mathrm {d} X = \sum_ {P} \Pr [ P ] K \left(\phi_ {*} (P / 2)\right) \tag {15}
$$

where $K(H)$ for any hermitian operator $H$ is the orthogonal projector onto $\ker H$ , the eigenspace of eigenvalue zero.
Proof. The first equality is noted in Lemma 2.3. For the second equality, it suffices to evaluate $\mathcal{C}_{\mu_P,t}|_V$ for a random Pauli rotation $\mu_P$ by $P$ . We have observed in Lemma 2.1 that all the eigenvalues of $\tau_{*}(P / 2)$ are integers. A subrepresentation of $\tau_{*}$ is nothing but a block-diagonal piece of $\tau_{*}$ after a unitary basis change on $(\mathbb{C}^{\mathbb{N}})^{\otimes 2t}$ , so the eigenvalues of $\phi_{*}(P / 2)$ can only be a subset of those of $\tau_{*}(P / 2)$ . Hence, the average of $e^{\mathrm{i}\theta \phi_{*}(P / 2)}$ over $\theta$ eliminates all eigenspaces of nonzero eigenvalues.
Remark 3.3. A Pauli operator $P$ , a tensor product of hermitian Pauli matrices, is a traceless unitary of eigenvalues $\pm 1$ . Observe that $\mathrm{i}P$ is a member of $\mathsf{SU}(\mathsf{N})$ and also of $\mathfrak{su}(\mathsf{N};\mathbb{R})$ where $\mathsf{N}$ is a power of 2. For example, $\mathrm{i}\sigma^z\in \mathsf{SU}(2)\cap \mathfrak{su}(2;\mathbb{R})$ and $\mathrm{i}\sigma^z\otimes \sigma^z\in \mathsf{SU}(4)\cap \mathfrak{su}(4;\mathbb{R})$ . If $\rho$ is a Lie group representation map, and $\rho_{*}$ is the derived Lie algebra representation map, then we may consider $\rho (\mathrm{i}P)$ and $\rho_{*}(\mathrm{i}P)$ both of which are some matrices of the same dimension. Generally, $\rho (\mathrm{i}P)\neq \rho_{*}(\mathrm{i}P)$ . However, since $P^{2} = 1$ , we instead have $\exp (\mathrm{i}\pi P / 2) = \cos (\pi /2)\mathbf{1} + \mathrm{i}\sin (\pi /2)P = \mathrm{i}P$ and therefore by Eq. (8) we have

$$
\exp (\mathrm {i} \pi \rho_ {*} (P) / 2) = \exp (\rho_ {*} (\mathrm {i} \pi P / 2)) = \rho (\exp (\mathrm {i} \pi P / 2)) = \rho (\mathrm {i} P). \tag {16}
$$

Proposition 3.4. For any random Pauli rotation with respect to $\{(P, \operatorname{Pr}[P])\}$ at order $t$ , we have

$$
\left\| \mathcal {C} _ {\mu , t} - \mathcal {H} _ {t} \right\| = \max _ {\phi} \left\| \sum_ {P} \Pr [ P ] K \left(\phi_ {*} (P / 2)\right) \right\| \tag {17}
$$

where $\phi$ ranges over all irreducible nontrivial subrepresentations of the tensor representation $\tau : U \mapsto (U \otimes \bar{U})^{\otimes t}$ .
Proof. Immediate from Lemma 3.2 and Proposition 2.4. $\square$
It is known [HL09, Lemma 3.7] that the spectral gap, $1 - \| \mathcal{C}_{\mu ,t} - \mathcal{H}_t\|$, is strictly positive if $\{(P,\operatorname *{Pr}[P])\}$ induces a "densely generating" distribution on $\mathsf{SU}(\mathsf{N})$.
The motivation for us to consider the quantum circuit of random Pauli rotations is its simple implementation:
Proposition 3.5. Suppose that for an $n$ -qubit system, CNOT can be applied only across a set of unordered pairs of qubits. This defines an undirected simple graph ("connectivity graph") over qubits, which we assume is connected. For any $P \in \mathbf{P}_n$ and $\theta \in \mathbb{R}$ , a unitary $e^{\mathrm{i}\frac{\theta}{2} P}$ can be implemented using (1) one 1-qubit Pauli $X$ rotation $e^{\mathrm{i}\frac{\theta}{2}\sigma^x}$ , (2) at most 2n 1-qubit Hadamard and Phase gates, and (3) at most 2n - 2 CNOT and SWAP gates.

Figure 1: Implementation of $e^{\mathrm{i}\frac{\theta}{2} P}$ ( $P \in \mathbf{P}_{\mathfrak{n}}$ ) by an $\mathcal{O}(\log \mathfrak{n})$ depth circuit. The example corresponds to the Pauli string XZXZXXXZ. Gates between two dashed lines are implemented in parallel.
Proof. It suffices to find a sequence of gates that maps $e^{\mathrm{i}\frac{\theta}{2} P}$ to $e^{\mathrm{i}\frac{\theta}{2}\sigma^x} \otimes \mathbf{1}_2^{\otimes (\mathfrak{n} - 1)}$ by conjugation. We first apply Hadamard and Phase gates by conjugation to obtain $e^{\mathrm{i}\frac{\theta}{2} Q}$ where $Q = Q_1 \otimes \dots \otimes Q_{\mathfrak{n}}$ is a tensor product of $\sigma^x$ 's and $\mathbf{1}_2$ 's. In the connectivity graph, we assign each node a binary value corresponding to the support of $Q$ , i.e., $v_i = 1$ if and only if $Q_i = \sigma^x$ . Every connected graph has a spanning tree. For each edge $(v_{\text{parent}}, v_{\text{child}})$ in the spanning tree such that all the children of $v_{\text{child}}$ are zeros, if $v_{\text{parent}} = v_{\text{child}} = 1$ , apply a CNOT gate by conjugation on the corresponding two qubits. If $v_{\text{parent}} = 0$ and $v_{\text{child}} = 1$ , apply a SWAP gate by conjugation on the corresponding two qubits. Both operations will result in $v_{\text{parent}} = 1$ , $v_{\text{child}} = 0$ , i.e., all the children of $v_{\text{parent}}$ are zeros. This procedure terminates when the only nonzero node is the root, which corresponds to $e^{\mathrm{i}\frac{\theta}{2}\sigma^x} \otimes \mathbf{1}_2^{\otimes (n - 1)}$ . The total number of CNOT and SWAP gates applied is at most twice the number of edges in the spanning tree, which is $2(\mathfrak{n} - 1)$ .
Corollary 3.6. With all-to-all connectivity, for any $P \in \mathbf{P}_{\mathfrak{n}}$ and $\theta \in \mathbb{R}$ , the unitary $e^{\mathrm{i}\frac{\theta}{2} P}$ can be implemented using $\mathcal{O}(\mathfrak{n})$ 1-qubit and CNOT gates in circuit depth $\mathcal{O}(\log \mathfrak{n})$ . See Fig. 1 for an example.
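
A minimal sketch (ours) of this construction on all-to-all connectivity, using a star-shaped tree rooted at the first support qubit for simplicity (depth $\mathcal{O}(\mathfrak{n})$ rather than the $\mathcal{O}(\log \mathfrak{n})$ of the balanced tree in Fig. 1):

```python
def pauli_rotation_circuit(pauli, theta):
    # Time-ordered gate list for exp(i*theta/2 * P), with P a string like
    # "XZIY". Each support qubit is conjugated to X (H for Z; S^dagger for
    # Y, since S^dagger Y S = X), the X-parity is folded onto a root qubit
    # by CNOTs, a single-qubit X rotation is applied, then all is undone.
    support = [i for i, p in enumerate(pauli) if p != "I"]
    pre, post = [], []
    for i in support:
        if pauli[i] == "Z":
            pre.append(("H", i)); post.append(("H", i))
        elif pauli[i] == "Y":
            pre.append(("Sdg", i)); post.append(("S", i))
    root = support[0]
    fold = [("CNOT", root, q) for q in support[1:]]  # (control, target)
    rot = [("RX", root, -theta)]  # convention RX(phi) = exp(-i*phi/2 * X)
    return pre + fold + rot + fold[::-1] + post

# Example: exp(i*theta/2 * X Z X Z) on 4 qubits.
print(pauli_rotation_circuit("XZXZ", 0.3))
```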
# 4 The special case of $\mathfrak{su}(2)$
There are only three Pauli operators $\sigma^x, \sigma^y, \sigma^z$ (up to real scalars) in $\mathfrak{su}(2)$ so a random Pauli rotation is specified by $\operatorname{Pr}[\sigma^x], \operatorname{Pr}[\sigma^y], \operatorname{Pr}[\sigma^z]$ . The goal is clear in Proposition 3.4. With an $\mathfrak{su}(2)$ -irrep $\phi_*$ in mind, we just write $J_{x,y,z}$ to mean $\phi_*(\sigma^{x,y,z}/2)$ . Note the factor of 2 in the denominator, which gives $[J_a, J_b] = \mathrm{i}J_c$ where $(a,b,c)$ is a cyclic permutation of $(x,y,z)$ . We have to calculate the spectral norm of

$$
\Pr \left[ \sigma^ {x} \right] K \left(J _ {x}\right) + \Pr \left[ \sigma^ {y} \right] K \left(J _ {y}\right) + \Pr \left[ \sigma^ {z} \right] K \left(J _ {z}\right) \tag {18}
$$

for all irreps that appear in the tensor representation $\tau : \mathsf{SU}(2) \ni U \mapsto (U \otimes \bar{U})^{\otimes t}$ . Since the dual of the defining irrep of $\mathfrak{su}(2)$ is equivalent to itself, the representation $\tau$ is simply the $2t$ -fold tensor product of the defining irrep of $\mathfrak{su}(2)$ . It is a standard fact that every irrep that appears in $\tau$ is odd dimensional because $2t$ is an even number, and every irrep of dimension $2\ell + 1$ appears in $\tau$ where $\ell \leq t$ .
Lemma 4.1. Each of $K(J_x), K(J_y), K(J_z)$ has rank 1 on any nontrivial $\mathfrak{su}(2)$ -irrep of odd dimension $2\ell + 1$ for an integer $\ell \geq 1$ . There exist normalized vectors $|x\rangle \in \ker J_x, |y\rangle \in \ker J_y, |z\rangle \in \ker J_z$ such that

$$
\langle x | y \rangle = \langle y | z \rangle = \langle z | x \rangle = \left\{ \begin{array}{l l} \frac {(- 1) ^ {\ell / 2}}{2 ^ {\ell}} \binom {\ell} {\ell / 2} & \text {if } \ell \text { is even}, \\ 0 & \text {otherwise}. \end{array} \right. \tag {19}
$$

In view of $\mathfrak{so}(3) \cong \mathfrak{su}(2)$ , an odd dimensional irrep is often called an "integer spin" representation, and an even dimensional irrep a "half-integer spin" representation. For even dimensional irreps, it is well known that $K(J_x) = K(J_y) = K(J_z) = 0$ .
Proof. Let $\phi_*$ be an $\mathfrak{su}(2)$ -irreducible representation map acting on $V \cong \mathbb{C}^{2\ell + 1}$ . It is a standard fact that $J_z$ has eigenvalues $\ell, \ell - 1, \ldots, -\ell + 1, -\ell$ , each with multiplicity 1. Hence, the kernel is one-dimensional.
(First method by symmetric powers) Note that $U = \exp(-\mathrm{i}\frac{\pi}{3\sqrt{3}}(\sigma^x + \sigma^y + \sigma^z)) \in \mathsf{SU}(2)$ acts by conjugation as $\sigma^x \mapsto \sigma^y \mapsto \sigma^z \mapsto \sigma^x$ . For a normalized vector $|z\rangle \in \ker J_z \subset V$ , the vector $|x\rangle = \phi(U)|z\rangle$ spans $\ker J_x$ and $|y\rangle = \phi(U^2)|z\rangle$ spans $\ker J_y$ . So, the three inner products in the claim are the same. It remains to calculate $\langle z|x\rangle = \langle z|\phi(U)|z\rangle$ .
A concrete expression for $\phi$ is obtained by considering the $2\ell$ -fold symmetric power of the defining representation of $\mathfrak{su}(2)$ [FH13, (11.8)]. Let $|0\rangle$ and $|1\rangle$ be a basis of $\mathbb{C}^2$ such that $\sigma^z |0\rangle = +|0\rangle$ and $\sigma^z |1\rangle = -|1\rangle$ . Then,

$$
U = \frac {e ^ {- \mathrm {i} \pi / 4}}{\sqrt {2}} \left( \begin{array}{c c} 1 & - \mathrm {i} \\ 1 & \mathrm {i} \end{array} \right). \tag {20}
$$

Written in terms of vectors in $(\mathbb{C}^2)^{\otimes 2\ell}$ , a set of basis vectors of $V$ can be chosen to be

$$
\binom {2 \ell} {k} ^ {- 1 / 2} \sum_ {w \in \{0, 1 \} ^ {2 \ell}: | w | = k} | w \rangle \tag {21}
$$

for $k = 0,1,2,\ldots ,2\ell$ , where $|w|$ is the number of 1's in the bitstring $w$ of length $2\ell$ . These are eigenvectors of $J_{z}$ with eigenvalues $\ell -k$ . So,

$$
| z \rangle = \binom {2 \ell} {\ell} ^ {- 1 / 2} \sum_ {w: | w | = \ell} | w \rangle . \tag {22}
$$

Applying $\phi (U) = U^{\otimes 2\ell}\big|_{V}$ , we will obtain $|x\rangle$ . With $| + \rangle = 2^{-1 / 2}(|0\rangle + |1\rangle)$ and $|- \rangle = 2^{-1 / 2}(|0\rangle - |1\rangle)$ , we have $U|0\rangle = e^{-\mathrm{i}\pi /4}| + \rangle$ and $U|1\rangle = -e^{\mathrm{i}\pi /4}|- \rangle$ . Hence,

$$
\langle z | x \rangle = \langle z | U ^ {\otimes 2 \ell} | z \rangle = \sum_ {w \in \{0, 1 \} ^ {2 \ell}: | w | = \ell} \langle w | (- 1) ^ {\ell} | + \rangle^ {\otimes \ell} | - \rangle^ {\otimes \ell} \tag {23}
$$

where the second equality is because both $|z\rangle$ and $\langle z|$ are invariant under permutations of tensor factors. For a bitstring $w$ of length $2\ell$ with $|w| = \ell$ , let $m$ be the number of 1's in the last $\ell$ bits.
Then, $\langle w| \left(| + \rangle^{\otimes \ell} | - \rangle^{\otimes \ell}\right) = 2^{-\ell}(-1)^{m}$. There are $\binom{\ell}{\ell-m}\binom{\ell}{m}$ such bitstrings, and $m$ ranges from 0 to $\ell$. So,

$$
\langle z | x \rangle = \frac {(- 1) ^ {\ell}}{2 ^ {\ell}} \sum_ {m = 0} ^ {\ell} (- 1) ^ {m} \binom {\ell} {\ell - m} \binom {\ell} {m}. \tag {24}
$$

The sum is the coefficient of $h^\ell$ in a polynomial $(1 + h)^\ell (1 - h)^\ell = (1 - h^2)^\ell$ in a variable $h$ . There is no $h^\ell$ term if $\ell$ is odd, implying that the sum is zero. If $\ell$ is even, then the coefficient is $(-1)^{\ell /2}\binom {\ell}{\ell /2}$ . This completes the proof.
(Second method using raising and lowering operators) Let $|\ell\rangle$ be an eigenstate of $J_z$ with eigenvalue $\ell$ : $J_z|\ell\rangle = \ell|\ell\rangle$ . Define $J^+ = J_x + \mathrm{i}J_y$ and $J^- = J_x - \mathrm{i}J_y$ , and inductively $J^-|k\rangle = a_{-k}|k-1\rangle$ for $k = \ell, \ell-1, \ldots, -\ell+1$ where

$$
a _ {k} = \sqrt {\ell (\ell + 1) - k (k + 1)} = a _ {- k - 1}. \tag {25}
$$

Here, $|0\rangle = |z\rangle$ . It is straightforward to check that $J^{+}|k\rangle = a_{k}|k + 1\rangle$ ; the vector $|\ell +1\rangle$ is never defined, but $a_{\ell} = 0$ .
Since $J_{y}|y\rangle = 0$ , we have $J^{+}|y\rangle = J^{-}|y\rangle$ . This implies that $|\ell -1\rangle$ cannot be a nonzero component of $|y\rangle$ , which implies, in turn, that $|\ell -2j - 1\rangle$ for any integer $j$ cannot be a nonzero component of $|y\rangle$ . Hence, $|y\rangle$ is in the span of $|\ell \rangle ,|\ell -2\rangle ,\ldots ,| - \ell +2\rangle ,| - \ell \rangle$ . In particular, if $\ell$ is odd, $\langle z|y\rangle = \langle 0|y\rangle = 0$ .
Suppose $\ell = 2p$ , an even integer, and put $|y\rangle = \sum_{k = -p}^{p}c_{k}|2k\rangle$ . Then, the equation $J^{+}|y\rangle = J^{-}|y\rangle$ implies that

$$
c _ {k} a _ {2 k} = c _ {k + 1} a _ {- 2 k - 2} = c _ {k + 1} a _ {2 k + 1}. \tag {26}
$$

One can verify that $c_k = c_{-k}$ and

$$
\begin{array}{l} \frac {a _ {2 k} ^ {2}}{a _ {2 k + 1} ^ {2}} = \frac {2 p (2 p + 1) - 2 k (2 k + 1)}{2 p (2 p + 1) - (2 k + 1) (2 k + 2)} = \frac {(p - k) (2 p + 2 k + 1)}{(p + k + 1) (2 p - 2 k - 1)} \tag {27} \\ = \frac {(p - k) ^ {2} (2 p + 2 k + 2) (2 p + 2 k + 1)}{(2 p - 2 k) (2 p - 2 k - 1) (p + k + 1) ^ {2}} = \frac {\binom {2 p - 2 k - 2} {p - k - 1} \binom {2 p + 2 k + 2} {p + k + 1}}{\binom {2 p - 2 k} {p - k} \binom {2 p + 2 k} {p + k}}. \\ \end{array}
$$

Therefore,

$$
\frac {\left| c _ {k} \right| ^ {2}}{\left| c _ {0} \right| ^ {2}} = \frac {\binom {2 p - 2 k} {p - k} \binom {2 p + 2 k} {p + k}}{\binom {2 p} {p}}. \tag {28}
$$

Since $\langle y|y\rangle = 1$ , we must have

$$
1 = \sum_ {k = - p} ^ {p} | c _ {k} | ^ {2} = \frac {\left| c _ {0} \right| ^ {2}}{\binom {2 p} {p} ^ {2}} \sum_ {k = - p} ^ {p} \binom {2 p - 2 k} {p - k} \binom {2 p + 2 k} {p + k} = \frac {\left| c _ {0} \right| ^ {2}}{\binom {2 p} {p} ^ {2}} 4 ^ {2 p}, \tag {29}
|
| 315 |
+
$$
|
| 316 |
+
|
| 317 |
+
where the last equality follows from a combinatorial identity [Sve84]
|
| 318 |
+
|
| 319 |
+
$$
|
| 320 |
+
\sum_ {i = 0} ^ {n} \binom {2 i} {i} \binom {2 n - 2 i} {n - i} = 4 ^ {n}. \tag {30}
|
| 321 |
+
$$
|
| 322 |
+
|
| 323 |
+
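
The following snippet (ours, added for illustration) numerically checks both the combinatorial identity in Eq. (30) and the ratio in Eq. (28) against the recursion of Eq. (26).

```python
from math import comb, sqrt

# Eq. (30): sum_i C(2i, i) C(2n-2i, n-i) = 4^n.
for n in range(10):
    assert sum(comb(2 * i, i) * comb(2 * n - 2 * i, n - i) for i in range(n + 1)) == 4 ** n

# Eq. (28): build c_k from the recursion (26) with c_0 = 1 and l = 2p.
for p in range(1, 7):
    l = 2 * p
    a = lambda k: sqrt(l * (l + 1) - k * (k + 1))
    c = [1.0]
    for k in range(p):
        c.append(c[-1] * a(2 * k) / a(2 * k + 1))
    for k in range(p + 1):
        ratio = comb(2 * p - 2 * k, p - k) * comb(2 * p + 2 * k, p + k) / comb(2 * p, p) ** 2
        assert abs(c[k] ** 2 - ratio) < 1e-9
print("Eqs. (28) and (30) verified numerically")
```
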

This shows that $|\langle y|z\rangle| = |c_0| = \frac{1}{4^p}\binom{2p}{p}$ if $\ell = 2p$.

The complex phase $\alpha$ of $\langle y|z\rangle = \alpha \frac{1}{4^p}\binom{2p}{p}$ is not fixed by the normalization, but $\langle x|y\rangle \langle y|z\rangle \langle z|x\rangle$ is well defined regardless of $\alpha$. To evaluate this product of the three inner products, we may use any normalized vectors in the kernels. A vector $|x\rangle \in \ker J_x$ can be computed by solving $J^{+}|x\rangle = -J^{-}|x\rangle$. By a completely parallel calculation, we find a solution $|x\rangle = \sum_{k=-p}^{p}(-1)^{k}c_{k}|2k\rangle$. Then,

$$
\beta = \langle x|y\rangle \langle y|z\rangle \langle z|x\rangle = \langle x|y\rangle \langle y|0\rangle \langle 0|x\rangle = |c_{0}|^{2} \sum_{k=-p}^{p} (-1)^{k} |c_{k}|^{2}. \tag{31}
$$

This is a real number, which means that we may take $\alpha = \pm 1 = \beta/|\beta|$. We know that $|\beta| = |c_0|^3 < 1$, so $\sum_{k=-p}^{p}(-1)^{k}|c_{k}|^{2} = \pm |c_{0}|$.


From Eq. (26) we know that $|c_0| < |c_1| < \dots < |c_p|$. Suppose $p$ is odd and $\beta > 0$. Then, we must have $\sum_{k=-p}^{p} (-1)^k |c_k|^2 = |c_0|$, which gives a contradiction: $0 = (-|c_0| - |c_1|^2) + (|c_0|^2 - |c_1|^2) + 2(|c_2|^2 - |c_3|^2) + \dots + 2(|c_{p-1}|^2 - |c_p|^2) < 0$. Therefore, $\beta < 0$ if $p$ is odd. Similarly, suppose $p$ is even and $\beta < 0$. Then, we must have $\sum_{k=-p}^{p} (-1)^k |c_k|^2 = -|c_0|$, which gives a contradiction: $0 = (|c_0| + |c_0|^2) + 2(-|c_1|^2 + |c_2|^2) + \dots + 2(-|c_{p-1}|^2 + |c_p|^2) > 0$. Therefore, $\beta > 0$ if $p$ is even. This completes the proof.

Corollary 4.2. For a random Pauli rotation $\mu$ with respect to $\{(\sigma^x,\frac{1}{3}),(\sigma^y,\frac{1}{3}),(\sigma^z,\frac{1}{3})\}$, we have

$$
\left\| \mathcal{C}_{\mu,t} - \mathcal{H}_{t} \right\| = \frac{1}{12} \cdot \begin{cases} 4 & (t = 1) \\ 6 & (t = 2, 3) \\ 7 & (t \geq 4). \end{cases} \tag{32}
$$


Proof. We calculated $f^2 = \mathrm{Tr}(K(J_x)K(J_y))$, etc., in Lemma 4.1, where $(-1)^{\ell/2} f = \binom{\ell}{\ell/2} / 2^\ell$ is a decreasing function of the even integer $\ell$. Hence, by Proposition 3.4, there are only three cases to check: $f = 0$ if $\ell$ is odd, $f = -1/2$ if $\ell = 2$, and $f = 3/8$ if $\ell = 4$.

Let $M = K(J_{x}) + K(J_{y}) + K(J_{z})$ and let $a, b, c \in [0,3] \subset \mathbb{R}$ be the eigenvalues of $M$. Then $\operatorname{Tr}(M) = a + b + c = 3$, $\operatorname{Tr}(M^2) = a^2 + b^2 + c^2 = 3 + 6f^2$, and $\operatorname{Tr}(M^3) = a^3 + b^3 + c^3 = 3 + 18f^2 + 6f^3$. Calculation gives $a = b = 1 - f$ and $c = 1 + 2f$ up to permutations, so $\|M\| = \max(1 - f, 1 + 2f)$. This norm reaches the maximum $7/4$ when $\ell = 4$ and $f = 3/8$.

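
To illustrate this proof (our addition), the trace constraints pin down the eigenvalues of $M$, and the claimed norm values can be checked numerically:

```python
from math import comb

def f_of(l):
    # f = (-1)^(l/2) C(l, l/2) / 2^l for even l, and 0 for odd l (Lemma 4.1)
    return 0.0 if l % 2 else (-1) ** (l // 2) * comb(l, l // 2) / 2 ** l

for l in range(1, 9):
    f = f_of(l)
    a, b, c = 1 - f, 1 - f, 1 + 2 * f  # candidate eigenvalues of M
    assert abs((a + b + c) - 3) < 1e-12
    assert abs((a ** 2 + b ** 2 + c ** 2) - (3 + 6 * f ** 2)) < 1e-12
    assert abs((a ** 3 + b ** 3 + c ** 3) - (3 + 18 * f ** 2 + 6 * f ** 3)) < 1e-12
    print(f"l={l}: f={f:+.3f}, ||M|| = {max(a, c):.4f}")
# l = 1 gives ||M|| = 1, l = 2 gives 3/2, l = 4 gives 7/4; dividing by 3
# reproduces the values 4/12, 6/12, 7/12 in Eq. (32).
```
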
# 5 A spectral gap bound by quadratic Casimir invariants

Theorem 5.1. Let $\mu$ be the random Pauli rotation with respect to the uniform distribution on $\mathbf{P}_{\mathfrak{n}}$. Then, with $\mathsf{N} = 2^{\mathsf{n}}$ and for any integer $t \geq 1$,

$$
\left\| \mathcal{C}_{\mu,t} - \mathcal{H}_{t} \right\| \leq 1 - \frac{1}{4t} \frac{\mathsf{N}^{2}}{\mathsf{N}^{2} - 1} - \frac{1}{\mathsf{N}^{2} - 1}. \tag{33}
$$

If $t \leq \mathsf{N}/2$, then

$$
\left\| \mathcal{C}_{\mu,t} - \mathcal{H}_{t} \right\| \leq 1 - \frac{1}{2t} \frac{\mathsf{N}(\mathsf{N} - t + 1)}{\mathsf{N}^{2} - 1}. \tag{34}
$$


Comparing with Corollary 4.2, we see that with $\mathsf{N} = 2$ the inequality in Eq. (33) is saturated if and only if $t \in \{1, 2, 4\}$.

Proof. It follows from Proposition 3.4 that

$$
\left\| \mathcal{C}_{\mu,t} - \mathcal{H}_{t} \right\| = \max_{J} \left\| \underset{P \in \mathbf{P}_{\mathfrak{n}}}{\mathbb{E}} K(J_{P}) \right\|, \tag{35}
$$


where the maximum is over all nontrivial $\mathfrak{su}(\mathsf{N})$-irreps that appear in the tensor representation $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$. By Eq. (9), the norm $\ell$ of a represented operator $J_P$ in a nontrivial $\mathfrak{su}(\mathsf{N})$-irrep is a nonzero integer and is at most $t$. We use an operator inequality

$$
K(H) \preceq \mathbf{1} - H^{2} / \|H\|^{2}, \tag{36}
$$

which holds for any nonzero hermitian operator $H$, where $K(H)$ is the orthogonal projector onto $\ker H$ (both sides are diagonal in the eigenbasis of $H$, so the inequality can be checked eigenvalue by eigenvalue). Averaging over $\mathbf{P}_{\mathfrak{n}}$, we have that

$$
\underset{P \in \mathbf{P}_{\mathfrak{n}}}{\mathbb{E}} K(J_{P}) \preceq \mathbf{1} - \underset{P \in \mathbf{P}_{\mathfrak{n}}}{\mathbb{E}} \frac{J_{P}^{2}}{\|J_{P}\|^{2}}. \tag{37}
$$


It is well known (by a more general argument) that the last term is a scalar multiple of the identity, called a Casimir operator or invariant, where the scalar depends only on the irrep. We give an elementary calculation to this end in Lemma 5.2 below. By the lower bound in Lemma 5.3 below,

$$
\mathbf{1} - \underset{P \in \mathbf{P}_{\mathfrak{n}}}{\mathbb{E}} \frac{J_{P}^{2}}{\|J_{P}\|^{2}} \preceq \mathbf{1}\left(1 - \frac{1}{4\ell} \frac{\mathsf{N}^{2}}{\mathsf{N}^{2} - 1} - \frac{1}{\mathsf{N}^{2} - 1}\right), \tag{38}
$$

where $\ell = \|J_P\|$ is independent of $P$. Since $\ell \leq t$, we complete the proof of Eq. (33). For Eq. (34), we use Lemma 5.4 further below.


Lemma 5.2. For any $\mathfrak{su}(\mathsf{N})$-irrep $\phi_{*}$, the quadratic sum $\sum_{P\in \mathbf{P}_{\mathfrak{n}}}\phi_*(P/2)^2$ is a scalar multiple of the identity.

Proof. Abbreviate $\phi_{*}(P/2)$ by $J_{P}$. We show by direct calculation that $\sum_{P} J_{P}^{2}$ commutes with $J_{Q}$ for all $Q \in \mathbf{P}_{\mathfrak{n}}$. Since the commutator obeys the Leibniz rule, we have $[J_{Q}, J_{P}^{2}] = J_{P}[J_{Q}, J_{P}] + [J_{Q}, J_{P}]J_{P}$. This may be nonzero only if $PQ = -QP = \pm \mathrm{i}R$ for some $R \in \mathbf{P}_{\mathfrak{n}}$. For an anticommuting pair $P, Q$, the Pauli operator $R$ also anticommutes with each of $P, Q$. So, the subset of all Pauli operators that anticommute with $Q$ is partitioned into unordered pairs $\{P, R\}$ where $[J_{P}, J_{Q}] = \mathrm{i}J_{R}$. That is, for each pair $\{P, R\}$, the three elements $J_{P}, J_{Q}, J_{R}$ span $\mathfrak{su}(2)$. Then,

$$
\begin{aligned}
\left[J_{Q}, J_{P}^{2} + J_{R}^{2}\right] &= J_{P}[J_{Q}, J_{P}] + [J_{Q}, J_{P}]J_{P} + J_{R}[J_{Q}, J_{R}] + [J_{Q}, J_{R}]J_{R} \\
&= J_{P}(-\mathrm{i}J_{R}) + (-\mathrm{i}J_{R})J_{P} + J_{R}(\mathrm{i}J_{P}) + (\mathrm{i}J_{P})J_{R} \\
&= 0. \tag{39}
\end{aligned}
$$

Since we are working with an irrep, Schur's lemma implies that $A = \sum_{P} J_{P}^{2}$ is proportional to the identity.

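
The pairing of anticommuting Pauli operators used in this proof can be checked by brute force; the snippet below (our illustration) does so for $\mathfrak{n} = 2$ in the defining representation.

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
letters = [I2, X, Y, Z]

def paulis(n):
    # all nonidentity n-qubit Pauli operators
    for combo in itertools.product(range(4), repeat=n):
        if any(combo):
            M = np.eye(1, dtype=complex)
            for i in combo:
                M = np.kron(M, letters[i])
            yield M

n = 2
for Q in paulis(n):
    anti = [P for P in paulis(n) if np.allclose(P @ Q, -Q @ P)]
    assert len(anti) % 2 == 0  # the anticommuting Paulis pair up
    for P in anti:
        R = 1j * P @ Q  # proportional to another Pauli operator
        assert np.allclose(R @ Q, -Q @ R)  # which also anticommutes with Q
print("pairing structure verified for n =", n)
```
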

Lemma 5.3. Let $J_P = \phi_*(P/2)$ be the represented operator in an $\mathfrak{su}(\mathsf{N})$-irrep $\phi_*$ for any Pauli operator $P \in \mathbf{P}_{\mathfrak{n}}$ where $\mathsf{N} = 2^{\mathfrak{n}} \geq 2$, and let $\ell = \|J_P\|$ be the Schatten $\infty$-norm, which is independent of $P$. Then,

$$
\left(\frac{\mathsf{N}^{2}\ell}{4} + \ell^{2}\right) \mathbf{1} \preceq \sum_{P \in \mathbf{P}_{\mathfrak{n}}} J_{P}^{2} \preceq \left(\frac{\mathsf{N}(\mathsf{N} - 1)\ell}{2} + (\mathsf{N} - 1)\ell^{2}\right) \mathbf{1}. \tag{40}
$$

For any $\mathfrak{n} \in \mathbb{Z}_{>0}$ and $\ell \in \frac{1}{2}\mathbb{Z}_{>0}$, there is an $\mathfrak{su}(2^{\mathfrak{n}})$-irrep that saturates the upper bound. For any $\mathfrak{n}, k \in \mathbb{Z}_{>0}$, there is an $\mathfrak{su}(2^{\mathfrak{n}})$-irrep with $\ell = 2^{\mathfrak{n}-2}k$ that saturates the lower bound. The saturating irreps are unique up to isomorphisms.

In this lemma it is not required that $\phi_*$ be a subrepresentation of a tensor representation $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$.


The lower bound proof can be understood using just the representation theory of $\mathfrak{su}(2)$.

Proof of the lower bound. The norm $\ell = \|J_P\|$ is independent of $P$ by Lemma 2.2. Lemma 5.2 says that $A = \sum_{P}J_{P}^{2}$ is a scalar multiple of the identity. We have to estimate the eigenvalue of $A$. It suffices to examine the action of $A$ on any vector.


Let $Z_{1} = \sigma^{z} \otimes \mathbf{1}_{2}^{\otimes (\mathfrak{n}-1)} \in \mathbf{P}_{\mathfrak{n}}$. Let $|\psi\rangle$ be any vector such that $J_{Z_1}|\psi\rangle = \ell|\psi\rangle$, where $\ell$ is the greatest eigenvalue. There are $4^{\mathfrak{n}-1}$ unordered pairs $\{\sigma^x \otimes W, \sigma^y \otimes W\}$ where $W \in \{\mathbf{1}_2, \sigma^x, \sigma^y, \sigma^z\}^{\otimes (\mathfrak{n}-1)}$ such that $\mathbb{C}\{Z_1, \sigma^x \otimes W, \sigma^y \otimes W\} \cong \mathfrak{su}(2)$ as Lie algebras. We know that if $X, Y, Z$ are a triple generating $\mathfrak{su}(2)$ such that $[X, Y] = \mathrm{i}Z$ (and cyclic permutations thereof), then

$$
\rho_{*}(X)^{2} + \rho_{*}(Y)^{2} + \rho_{*}(Z)^{2} = \ell_{\rho}(\ell_{\rho} + 1)\mathbf{1} \tag{41}
$$

for any irrep $\rho_{*}$, where $\ell_{\rho}$ is the greatest eigenvalue of $\rho_{*}(Z)$. The linear span of all vectors obtained by acting with $J_{Z_1}, J_{\sigma^x \otimes W}, J_{\sigma^y \otimes W}$ on $|\psi\rangle$ is an $\mathfrak{su}(2)$-irrep because $J_{Z_1}$ assumes the greatest eigenvalue $\ell$ on $|\psi\rangle$, and hence

$$
\begin{aligned}
\left(J_{Z_{1}}^{2} + J_{\sigma^{x} \otimes W}^{2} + J_{\sigma^{y} \otimes W}^{2}\right) |\psi\rangle &= (\ell^{2} + \ell)|\psi\rangle, \\
\left(J_{\sigma^{x} \otimes W}^{2} + J_{\sigma^{y} \otimes W}^{2}\right) |\psi\rangle &= \ell|\psi\rangle. \tag{42}
\end{aligned}
$$

Therefore,

$$
\langle \psi| A |\psi\rangle \geq \langle \psi| J_{Z_{1}}^{2} |\psi\rangle + \sum_{W} \langle \psi| \left(J_{\sigma^{x} \otimes W}^{2} + J_{\sigma^{y} \otimes W}^{2}\right) |\psi\rangle = \ell^{2} + 4^{\mathfrak{n}-1}\ell. \tag{43}
$$


This proves the lower bound.

![](images/31a8c777e35bd3a8a91ea08bd282e90f18f5a2fc35f2a31ba76b397c683bd77b.jpg)


The remainder of the proof uses highest weights.

Proof of the rest of the claims in Lemma 5.3. To prove the upper bound, we take $|\psi\rangle$ to be a highest weight vector. By definition, this means that $|\psi\rangle$ is annihilated by all positive roots of $\mathfrak{su}(\mathsf{N})$, which span the $\mathbb{C}$-linear space of all strictly upper triangular $\mathsf{N}$-by-$\mathsf{N}$ matrices. This means that

$$
(J_{X} + \mathrm{i}J_{Y})|\psi\rangle = 0 \tag{44}
$$

for any $X + \mathrm{i}Y \in \mathfrak{su}(\mathsf{N})$ that is upper triangular in a standard basis for Pauli operators. Then,

$$
0 = \langle \psi| (J_{X} - \mathrm{i}J_{Y})(J_{X} + \mathrm{i}J_{Y}) |\psi\rangle = \langle \psi| (J_{X}^{2} + J_{Y}^{2} - J_{Z}) |\psi\rangle, \tag{45}
$$

where $\mathrm{i}Z = [X, Y]$. It follows that

$$
\langle \psi| J_{X}^{2} + J_{Y}^{2} |\psi\rangle = \langle \psi| J_{Z} |\psi\rangle \leq \ell. \tag{46}
$$


To use this, we partition $\mathbf{P}_{\mathfrak{n}}$ as follows. For a bitstring $z \in \{0,1\}^{k-1}$ of length $k-1$, define $Z(z) = \bigotimes_{j=1}^{k-1}(\sigma^z)^{z_j} \in \mathbf{P}_{k-1}$. For a $\mathbb{Z}_4$-string $w \in \{0,1,2,3\}^{\mathfrak{n}-k}$, define $W(w) = \bigotimes_{j=1}^{\mathfrak{n}-k}\sigma^{w_j}$ where $\sigma^0 = \mathbf{1}_2$, $\sigma^1 = \sigma^x$, $\sigma^2 = \sigma^y$, and $\sigma^3 = \sigma^z$. Then, a triple

$$
Z(z) \otimes \sigma^{x} \otimes W(w), \quad Z(z) \otimes \sigma^{y} \otimes W(w), \quad \mathbf{1}_{2}^{\otimes (k-1)} \otimes \sigma^{z} \otimes \mathbf{1}_{2}^{\otimes (\mathfrak{n}-k)} \tag{47}
$$

forms an $\mathfrak{su}(2)$ subalgebra. So we have identified $\sum_{k=1}^{\mathfrak{n}} 2^{k-1} 4^{\mathfrak{n}-k} = 4^{\mathfrak{n}} (2^{-1} - 2^{-\mathfrak{n}-1})$ subalgebras. The pairs $\{Z(z) \otimes \sigma^x \otimes W(w), Z(z) \otimes \sigma^y \otimes W(w)\}$ account for $4^{\mathfrak{n}} - 2^{\mathfrak{n}}$ elements of $\mathbf{P}_{\mathfrak{n}}$, and the remaining $2^{\mathfrak{n}} - 1$ operators of $\mathbf{P}_{\mathfrak{n}}$ are tensor products of $\mathbf{1}_2$ and $\sigma^z$. Therefore,

$$
\sum_{P \in \mathbf{P}_{\mathfrak{n}}} \langle \psi| J_{P}^{2} |\psi\rangle \leq 4^{\mathfrak{n}} \left(2^{-1} - 2^{-\mathfrak{n}-1}\right)\ell + (2^{\mathfrak{n}} - 1)\ell^{2}. \tag{48}
$$

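
The counting in this partition is easy to confirm programmatically; the check below (ours) verifies it for small $\mathfrak{n}$.

```python
def counts(n):
    triples = sum(2 ** (k - 1) * 4 ** (n - k) for k in range(1, n + 1))
    paired = 2 * triples       # elements covered by the sigma^x/sigma^y pairs
    diagonal = 2 ** n - 1      # nonidentity products of 1 and sigma^z
    return triples, paired, diagonal

for n in range(1, 8):
    triples, paired, diagonal = counts(n)
    assert triples == (4 ** n - 2 ** n) // 2  # equals 4^n (1/2 - 2^(-n-1))
    assert paired == 4 ** n - 2 ** n
    assert paired + diagonal == 4 ** n - 1    # all of P_n is accounted for
print("partition counts verified for n = 1..7")
```
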

The tightness of the bounds can be shown by considering specific highest weights and using the partition above. The upper bound is saturated if and only if Eq. (46) is saturated for every nonidentity tensor product $Z$ of $\mathbf{1}_2$ and $\sigma^z$. So, we need to show that such a linear functional on the Cartan subalgebra is a valid point on the weight lattice. This is easy: if $L_j$ denotes the dual vector of the diagonal matrix with a sole 1 at the $j$-th diagonal entry, then $2\ell L_1$ is the desired weight.

To prove that the lower bound can be saturated, we take a highest weight in which only $J_{Z_1}$, where $Z_1 = \sigma^z \otimes \mathbf{1}_2^{\otimes (\mathfrak{n}-1)}$, takes the greatest eigenvalue $\ell$, but $J_Z$ for any other diagonal $Z$ assumes zero. This amounts to the weight $k\sum_{j=1}^{\mathsf{N}/2} L_j$, giving $\ell = k\mathsf{N}/4$ for some positive integer $k$. Then, on the highest weight vector $|\psi\rangle$, the generator $Z_1$ gives $\ell^2$, and the $4^{\mathfrak{n}-1}$ pairs $\{\sigma^x \otimes W, \sigma^y \otimes W\}$ give $4^{\mathfrak{n}-1}\ell$, but all other $2^{\mathfrak{n}} - 2$ diagonal generators give zero by the choice of the highest weight. Finally, Eq. (45) implies that the $4^{\mathfrak{n}} - 2^{\mathfrak{n}} - 2 \cdot 4^{\mathfrak{n}-1}$ generators $Z(z) \otimes \sigma^x \otimes W(w)$ and $Z(z) \otimes \sigma^y \otimes W(w)$ give zero.

Under those saturating conditions, we are forced to choose a unique highest weight given $\ell$, which in turn determines the irrep (up to equivalence).


Next, we present an alternative and slightly tighter bound, which assumes familiarity with highest weights for Lie algebra representations (e.g., [FH13, §14-15]). Note, again, that our proof of the lower bound in Lemma 5.3 only uses the representation theory of $\mathfrak{su}(2)$.

A finite-dimensional irreducible representation of $\mathfrak{su}(\mathsf{N})$ is labeled by its highest weight $\sum_{j}\mu_{j}L_{j}$, which is specified by a sequence of $\mathsf{N}$ integers $\mu = (\mu_1, \mu_2, \dots, \mu_{\mathsf{N}})$ with $\mu_{i} \geq \mu_{i+1}$, modulo integer multiples of $(1,1,\ldots,1)$. We choose a representative $\mu$ such that $\sum_{i=1}^{\mathsf{N}}\mu_{i} = 0$, which is possible for subrepresentations of $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$, and henceforth the Killing form $\langle \cdot, \cdot \rangle$ on the dual of the Cartan subalgebra is given by $\langle \mu, \mu' \rangle = \sum_{i}\mu_{i}\mu_{i}'$. (The normalization here is different from that in [FH13, §15].)


Let $H = \frac{1}{2}\sigma^z \otimes \mathbf{1}_2^{\otimes (\mathfrak{n}-1)} = \frac{1}{2}\mathrm{diag}(1, 1, \ldots, 1, -1, -1, \ldots, -1)$ be an element of the Cartan subalgebra. We would like to determine $\ell = \|\phi_*(H)\|$. Since the set of weights is in the convex hull of the Weyl group orbit of $\mu$, the maximum eigenvalue of $\phi_{*}(H)$ is given by $\ell = \max_{w \in \mathrm{Weyl}}(w \cdot \mu)(H) = \mu(H) = \frac{1}{2}\big(\mu_1 + \dots + \mu_{\mathsf{N}/2} - \mu_{\mathsf{N}/2+1} - \dots - \mu_{\mathsf{N}}\big)$.

Next, we invoke a formula for the quadratic Casimir operator (e.g., [FH13, (25.14)]): for any finite-dimensional irrep $\phi_*$ with highest weight $\mu$, and for any basis $\{X_j\}$ of $\mathfrak{su}(\mathsf{N})$ that is orthonormal with respect to the Killing form,

$$
\sum_{j} \phi_{*}(X_{j})^{2} = (\langle \mu, \mu \rangle + \langle \mu, \delta \rangle)\mathbf{1}, \tag{49}
$$


where $\delta$ is the sum of all positive roots, which can be written in a vector form as $\delta = (\mathsf{N}-1, \mathsf{N}-3, \ldots, -(\mathsf{N}-1))$. Therefore, $\frac{4}{\mathsf{N}}\sum_{P\in \mathbf{P}_{\mathfrak{n}}}J_P^2 = (\langle \mu,\mu \rangle + \langle \mu,\delta \rangle)\mathbf{1}$, where the factor $4/\mathsf{N}$ comes from renormalizing $J_{P}$ to an orthonormal basis $(\mathrm{Tr}((P/2)^2) = \mathsf{N}/4)$. Now, our quantity of interest is exactly determined by the highest weight of a given irrep:

$$
\frac{1}{\ell^{2}} \sum_{P \in \mathbf{P}_{\mathfrak{n}}} J_{P}^{2} = \frac{\mathsf{N}(\langle \mu, \mu \rangle + \langle \mu, \delta \rangle)}{\left(\mu_{1} + \cdots + \mu_{\mathsf{N}/2} - \mu_{\mathsf{N}/2+1} - \cdots - \mu_{\mathsf{N}}\right)^{2}} \mathbf{1}. \tag{50}
$$


Lemma 5.4. Assume $t \leq \mathsf{N}/2$. Let $\phi_*$ be an irreducible $\mathfrak{su}(\mathsf{N})$-subrepresentation of $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$, and let $\ell = \|\phi_*(P/2)\|$ be the Schatten $\infty$-norm, which is independent of $P \in \mathbf{P}_{\mathfrak{n}}$. Then,

$$
\frac{1}{\ell^{2}} \sum_{P \in \mathbf{P}_{\mathfrak{n}}} J_{P}^{2} \succeq \frac{\mathsf{N}(\mathsf{N} - t + 1)}{2t} \mathbf{1}. \tag{51}
$$

There exists an irreducible subrepresentation $\phi_*$ of $\tau$ achieving the equality.


Roughly speaking, in the small-$t$ regime where $t \ll \mathsf{N}$, this gives a factor of 2 improvement relative to Lemma 5.3. Note that this is consistent with the tightness stated in Lemma 5.3, because the irreps that saturate the lower bound of Lemma 5.3 are not subrepresentations of $\tau$ when $t \ll \mathsf{N}$. If $t = \mathsf{N}/2$, then this lemma gives the same bound as Lemma 5.3.

The strategy in the proof below applies for $t > \mathsf{N}/2$ and gives an alternative proof of the lower bound of Lemma 5.3, but we omit that calculation.


Proof. It only remains to perform an elementary calculation: minimize the right-hand side of Eq. (50) over the highest weights that correspond to irreducible subrepresentations of $\tau$.

The decomposition of $\tau$ into irreps is well understood (e.g., [RS09, Theorem 4]). An irrep with highest weight $\mu$ is a subrepresentation of $\tau$ if and only if $\sum_{i}\mu_{i} = 0$ and $\sum_{i}|\mu_{i}| \leq 2t$. When $t \leq \mathsf{N}/2$, the minimizer is given by

$$
\mu_{*} = (1, 1, \dots, 1, 0, 0, \dots, 0, -1, -1, \dots, -1) \tag{52}
$$

with the first $t$ entries being 1 and the last $t$ entries being $-1$. Calculating the right-hand side of Eq. (50) with respect to $\mu_*$ gives the stated bound.

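
As a numerical illustration (ours, not from the paper), one can brute-force the minimization of Eq. (50) over admissible highest weights for a small instance and compare with the closed form evaluated at $\mu_*$.

```python
from itertools import product

def casimir_ratio(mu, N):
    # right-hand side of Eq. (50) for a dominant weight mu with sum(mu) = 0
    delta = [N - 1 - 2 * i for i in range(N)]
    num = N * (sum(m * m for m in mu) + sum(m * d for m, d in zip(mu, delta)))
    den = (sum(mu[: N // 2]) - sum(mu[N // 2:])) ** 2
    return num / den if den else float("inf")

N, t = 4, 2
best = float("inf")
for mu in product(range(-t, t + 1), repeat=N):
    dominant = sorted(mu, reverse=True) == list(mu)
    if dominant and any(mu) and sum(mu) == 0 and sum(abs(m) for m in mu) <= 2 * t:
        best = min(best, casimir_ratio(mu, N))
mu_star = (1,) * t + (0,) * (N - 2 * t) + (-1,) * t
assert abs(casimir_ratio(mu_star, N) - N * (N - t + 1) / (2 * t)) < 1e-9
assert abs(best - N * (N - t + 1) / (2 * t)) < 1e-9
print("minimum over admissible weights matches mu_*:", best)
```
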

Remark 5.5. If Eq. (37) is saturated, then $t \in \{1,2,4\}$. If $\ell \notin \{1,2,4\}$, then we know by comparing Corollary 4.2 with Eq. (33) that $K(J_{x}) + K(J_{y}) + K(J_{z}) + (J_{x}^{2} + J_{y}^{2} + J_{z}^{2})/\ell^{2}$ has norm strictly smaller than 3 in any nontrivial irrep of $\mathfrak{su}(\mathsf{N})$ with $\|J_x\| = \ell$, for any triple $J_{x}, J_{y}, J_{z}$ that forms an $\mathfrak{su}(2)$ subalgebra.

Remark 5.6. Using $K(H) \preceq \mathbf{1} - H^2/\|H\|^2$ may significantly underestimate the spectral gap. Take the case of $\mathfrak{su}(2)$ and consider a random Pauli rotation by $\{(\sigma^x, \frac{1}{2}), (\sigma^y, \frac{1}{2})\}$. In the $(2\ell+1)$-dimensional $\mathfrak{su}(2)$-irrep, since $J_x^2 + J_y^2 + J_z^2 = (\ell^2 + \ell)\mathbf{1}$ and $J_z^2 \preceq \ell^2\mathbf{1}$, we see that $(J_x^2 + J_y^2)/\ell^2 \succeq \mathbf{1}/\ell$, which is best possible since $\langle \psi| (J_x^2 + J_y^2) |\psi\rangle/\ell^2 = 1/\ell$ if $J_z|\psi\rangle = \ell|\psi\rangle$. This gives a lower bound of $\Omega(1/t)$ on the spectral gap. However, by the exact calculation in Lemma 4.1 above, we know that the spectral gap of this design is independent of $t$.


Proof of Theorem 1.1. The spectral gap bound is proved in Theorem 5.1. Note that a random Pauli rotation is defined by a uniform probability distribution on $\mathbf{P}_{\mathfrak{n}} \times (-\pi, \pi)$, while the second statement of Theorem 1.1 takes a uniform distribution on $\mathbf{P}_{\mathfrak{n}} \times (-2\pi, 2\pi)$.

The discrepancy of a factor of 2 here is immaterial for the first statement of Theorem 1.1, since we only need a range of $\theta$ such that the average over $\theta$ of $\exp(\mathrm{i}\theta \phi_{*}(P/2))$ for any Pauli operator $P \in \mathbf{P}_{\mathfrak{n}}$ is the projection onto the kernel of $\phi_{*}(P/2)$; for irreps $\phi$ that appear in $\tau: U \mapsto (U \otimes \bar{U})^{\otimes t}$, the eigenvalues of $\phi_{*}(P/2)$ are integers.

However, if we consider all finite-dimensional unitary representations $\rho$ of $\mathsf{SU}(\mathsf{N})$, we are no longer guaranteed that $\rho_{*}(P/2)$ has integer eigenvalues. Fortunately, every eigenvalue of $\rho_{*}(P/2)$ is half an integer for any unitary representation $\rho$, because every finite-dimensional unitary irrep of $\mathsf{SU}(\mathsf{N})$ is a subrepresentation of $U \mapsto U^{\otimes m}$ for some integer $m \geq 0$ [FH13, §15.3].

Note that for an irrep $\rho_{*}$ where $2\|\rho_{*}(P/2)\|$ is an odd integer, the kernel of $\rho_{*}(P/2)$ is zero, implying that averaging over $\theta$ eliminates this irrep.


Proof of Corollary 1.2. Let $\mathcal{D} = \mathcal{C}_t^k - \mathcal{H}_t$ be the difference of the channels. With $k \geq (4\log 2)\mathsf{n}t^2 + 4t\log\frac{1}{\varepsilon}$, we have $\|\mathcal{D} \otimes \mathcal{I}\|_{2\to 2} = \|\mathcal{D}\|_{2\to 2} \leq (1 - \frac{1}{4t})^k \leq \varepsilon 2^{-\mathsf{n}t}$ by Theorem 1.1, where $\mathcal{I}$ means the identity channel on any auxiliary system. Since the diamond norm is obtained by taking an equal dimensional auxiliary system, $\|\mathcal{D}\|_{\diamond} = \|\mathcal{D} \otimes \mathcal{I}_{2^{\mathsf{n}t}}\|_{1\to 1} \leq 2^{\mathsf{n}t}\|\mathcal{D} \otimes \mathcal{I}_{2^{\mathsf{n}t}}\|_{2\to 2} \leq \varepsilon$, where the first equality is by [KSV02, Theorem 11.1]. (The use of the $2\to 2$ norm for this purpose has appeared in [Low10; BHH16].)

Similarly, if $k \geq (4\log 8)\mathsf{n}t^2 + 4t\log\frac{1}{\varepsilon}$, we have $\left\| \mathcal{C}_t^k - \mathcal{H}_t \right\|_{\diamond} \leq \varepsilon 4^{-\mathsf{n}t}$, which implies $(1 - \varepsilon)\mathcal{H}_t \preceq \mathcal{C}_t^k \preceq (1 + \varepsilon)\mathcal{H}_t$ by [BHH16, Lemma 3].

The gate complexity follows from Corollary 3.6.

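
For concreteness (our worked arithmetic, using the constants in this proof), the following computes the sufficient number of steps $k$ for a few parameter choices.

```python
from math import ceil, log

def k_diamond(n, t, eps):
    # k >= (4 log 2) n t^2 + 4 t log(1/eps) gives ||C_t^k - H_t||_diamond <= eps
    return ceil(4 * log(2) * n * t ** 2 + 4 * t * log(1 / eps))

def k_relative(n, t, eps):
    # k >= (4 log 8) n t^2 + 4 t log(1/eps) gives (1-eps) H_t <= C_t^k <= (1+eps) H_t
    return ceil(4 * log(8) * n * t ** 2 + 4 * t * log(1 / eps))

for n, t, eps in [(10, 2, 1e-3), (50, 4, 1e-6), (100, 8, 1e-9)]:
    print(f"n={n:>3}, t={t}, eps={eps:.0e}: "
          f"k_diamond={k_diamond(n, t, eps)}, k_relative={k_relative(n, t, eps)}")
```
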
# 6 Orthogonal designs

Our approach can be adapted for special orthogonal groups. This section uses arguments parallel to those in the analysis for $\mathsf{SU}(\mathsf{N})$, so we will be rather brief.

# 6.1 Skew-symmetric Pauli operators

We consider the special orthogonal group $\mathsf{SO}(\mathsf{N})$ in a fashion similar to our random Pauli rotations. We directly use the inclusion $\mathsf{SO}(\mathsf{N}) = \mathsf{SU}(\mathsf{N}) \cap \mathbb{R}^{\mathsf{N}\times\mathsf{N}} \subset \mathsf{SU}(\mathsf{N})$ for $\mathsf{N} = 2^{\mathfrak{n}}$.

Define a set $\mathbf{Y}_{\mathfrak{n}}$ of Pauli operators with entries in $\mathrm{i}\mathbb{R}$:

$$
\mathbf{Y}_{\mathfrak{n}} = \{P \in \mathbf{P}_{\mathfrak{n}} \mid \text{an odd number of } \sigma^{y} \text{ tensor factors appear in } P\}. \tag{53}
$$


We first verify that the $\mathbb{R}$-linear span of $\mathrm{i}\mathbf{Y}_{\mathfrak{n}}$ is precisely the real Lie algebra $\mathfrak{so}(\mathsf{N} = 2^{\mathfrak{n}})$ consisting of all antisymmetric real matrices. It is clear that $\mathrm{i}\mathbf{Y}_{\mathfrak{n}}$ is $\mathbb{R}$-linearly independent and consists of skew-symmetric real matrices. We can check that $|\mathbf{Y}_{\mathfrak{n}}| = \mathsf{N}(\mathsf{N}-1)/2$ by solving a recursion equation as follows. Let $e(\mathfrak{n})$ be the number of all Pauli operators in $\{\mathbf{1}, \sigma^{x}, \sigma^{y}, \sigma^{z}\}^{\otimes \mathfrak{n}}$ that contain an even number of tensor factors $\sigma^{y}$. The identity operator $\mathbf{1}_2^{\otimes \mathfrak{n}}$ contributes 1 to $e(\mathfrak{n})$. Consider the subset of Pauli "strings" whose first "letter" is $\sigma^{y}$, and the complementary set of Pauli strings whose first letter is one of $\mathbf{1}, \sigma^{x}, \sigma^{z}$. It is then clear that $|\mathbf{Y}_{\mathfrak{n}+1}| = 3|\mathbf{Y}_{\mathfrak{n}}| + e(\mathfrak{n})$ and $e(\mathfrak{n}+1) = 3e(\mathfrak{n}) + |\mathbf{Y}_{\mathfrak{n}}|$ with initial conditions $e(1) = 3$ and $|\mathbf{Y}_1| = 1$. The claim $|\mathbf{Y}_{\mathfrak{n}}| = 2^{\mathfrak{n}-1}(2^{\mathfrak{n}} - 1)$ follows by induction on $\mathfrak{n}$.

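
The recursion and the closed form $|\mathbf{Y}_{\mathfrak{n}}| = 2^{\mathfrak{n}-1}(2^{\mathfrak{n}}-1)$ are easy to confirm by brute force (our check):

```python
from itertools import product

def y_count(n):
    # count Pauli strings over {1, X, Y, Z} with an odd number of Y letters
    return sum(1 for s in product("1XYZ", repeat=n) if s.count("Y") % 2 == 1)

y, e = 1, 3  # |Y_1| = 1 and e(1) = 3
for n in range(1, 7):
    assert y == 2 ** (n - 1) * (2 ** n - 1) == y_count(n)
    y, e = 3 * y + e, 3 * e + y  # the recursion from the text
print("|Y_n| verified for n = 1..6")
```
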

Theorem 6.1. Suppose $\mathsf{N} = 2^{\mathfrak{n}} > 4$. For any integer $t \geq 1$, we have

$$
\left\| \underset{\theta \sim (-2\pi, 2\pi),\, P \in \mathbf{Y}_{\mathfrak{n}}}{\mathbb{E}} \left(e^{\mathrm{i}\theta P/2}\right)^{\otimes t} - \underset{O \sim \mathsf{SO}(\mathsf{N})}{\mathbb{E}} O^{\otimes t} \right\| \leq 1 - \frac{1}{2t} \frac{\mathsf{N} - 2}{\mathsf{N} - 1} - \frac{2}{\mathsf{N}(\mathsf{N} - 1)}. \tag{54}
$$


The small orthogonal groups, $\mathsf{SO}(2)$ and $\mathsf{SO}(4)$, are excluded for simplicity of the proof as they are not simple Lie groups. Note that every finite-dimensional unitary irrep<sup>6</sup> of $\mathsf{SO}(\mathsf{N})$ for $\mathsf{N} > 4$ appears as a subrepresentation of the tensor representation $O \mapsto O^{\otimes t}$ for some integer $t \geq 1$. This is explained in [FH13, §19]. Therefore, Theorem 6.1 implies that for any irrep of $\mathsf{SO}(\mathsf{N} = 2^{\mathfrak{n}} > 4)$ the spectral gap is at least $1/\dim \mathsf{SO}(\mathsf{N})$.

A representation of $\mathsf{SU}(\mathsf{N})$ gives a representation of $\mathsf{SO}(\mathsf{N})$, but a representation of $\mathsf{SO}(\mathsf{N})$ does not in general give a representation of $\mathsf{SU}(\mathsf{N})$. So, this theorem cannot be thought of as a corollary of Theorem 1.1. The proof below is however almost identical to that of Theorem 1.1, mainly because we use only the common aspects of the representation theories of $\mathsf{SO}(\mathsf{N})$ and $\mathsf{SU}(\mathsf{N})$. Thus, we will assume the reader's familiarity with the proof of Theorem 1.1, or rather the proofs of Theorem 5.1 and Lemma 5.3, and omit some detail.


Proof. (Step 0: to irreps) The Haar average (the second term in the norm) is the projector onto the trivial subrepresentation (Proposition 2.4). Hence, we consider an irreducible nontrivial $\mathfrak{so}(\mathsf{N})$-subrepresentation $\rho_{*}$ of $O \mapsto O^{\otimes t}$.

(Step 1: random angles give kernel projectors.) It is clear that $\rho_{*}(P/2)$ has half-integer eigenspectrum for any $P$, and therefore the average over $\theta \in (-2\pi, 2\pi)$ eliminates all nonzero eigenvalues: $\mathbb{E}_{\theta \sim (-2\pi, 2\pi)} e^{\mathrm{i}\theta \rho_{*}(P/2)} = K(\rho_{*}(P/2))$ is the projector onto the kernel of $\rho_{*}(P/2)$.<sup>7</sup>


(Step 2: identical eigenspectra for all represented operators) The norm of $\rho_{*}(P/2)$ is independent of $P \in \mathbf{Y}_{\mathfrak{n}}$; this is an analog of Lemma 2.2 for $\mathfrak{so}(\mathsf{N})$, and the proof is similar. Note that for any matrix $M$, we have $\det(M \otimes \mathbf{1}_2) = \det(M \oplus M) = (\det M)^2$. So, if $O \in \mathsf{O}(\mathsf{N}/2)$, then $O \otimes \mathbf{1}_2 \in \mathsf{SO}(\mathsf{N})$. The Clifford unitaries CNOT and Hadamard, viewed as two-qubit gates, are in $\mathsf{O}(4)$. If there is a tensor factor $\sigma^y \otimes \sigma^y$ in some $P \in \mathbf{Y}_{\mathfrak{n}}$, then we must have $\mathfrak{n} \geq 3$ since $P$ must contain an odd number of $\sigma^{y}$'s. Hence, using a Clifford unitary built from CNOT and Hadamard acting on those two tensor factors, we can turn $P$ by $\mathsf{SO}(\mathsf{N})$ conjugation into a Pauli operator in which $\sigma^y \otimes \sigma^y$ is replaced by $\sigma^x \otimes \sigma^x$, while not changing any other tensor factor of $P$. Inductively, we turn all pairs of $\sigma^y$ tensor factors into pairs of $\sigma^x$. By the same argument, we can turn any $\sigma^z$ tensor factor into $\sigma^x$. Now, the conjugation of $\sigma^y \otimes \mathbf{1}_2$ by CNOT is $\sigma^y \otimes \sigma^x$. Therefore, any $P \in \mathbf{Y}_{\mathfrak{n}}$ with $\mathfrak{n} \geq 3$ is congruent to $\sigma^y \otimes \mathbf{1}_2^{\otimes (\mathfrak{n}-1)}$ by some element of $\mathsf{SO}(\mathsf{N})$. Hence, for any $P \in \mathbf{Y}_{\mathfrak{n}}$ where $\mathfrak{n} \geq 3$, there exists $O \in \mathsf{SO}(\mathsf{N})$ such that $\rho(O)\rho_{*}(P)\rho(O)^{-1} = \rho_{*}(\sigma^{y} \otimes \mathbf{1}_{2}^{\otimes (\mathfrak{n}-1)})$. Put $\ell = \|\rho_{*}(P/2)\| \leq t/2$ for any $P \in \mathbf{Y}_{\mathfrak{n}}$.


(Step 3: to quadratic Casimir) Bounding the kernel projector by the quadratic operator (Eq. (7)), we are left with the problem of lower bounding

$$
\underset{P \in \mathbf{Y}_{\mathfrak{n}}}{\mathbb{E}} \rho_{*}(P/2)^{2} / \ell^{2}. \tag{55}
$$

By a calculation completely analogous to Eq. (39), this average is a scalar multiple of the identity.


(Step 4: find a large number of $\mathfrak{su}(2)$'s) To estimate the unique eigenvalue of this average, we look at a vector $|\psi\rangle$ such that $\rho_{*}(\frac{1}{2}\sigma^{y} \otimes \mathbf{1}_{2}^{\otimes(\mathfrak{n}-1)}) |\psi\rangle = \ell|\psi\rangle$. We can find $|\mathbf{Y}_{\mathfrak{n}-1}|$ $\mathfrak{su}(2)$-subalgebras:

$$
\sigma^{y} \otimes \mathbf{1}_{2}^{\otimes (\mathfrak{n}-1)}, \quad \sigma^{x} \otimes W, \quad \sigma^{z} \otimes W, \tag{56}
$$

where each triple of $\mathfrak{su}(2)$ generators is uniquely labeled by $W \in \mathbf{Y}_{\mathfrak{n}-1}$. Hence,

$$
\sum_{P \in \mathbf{Y}_{\mathfrak{n}}} \rho_{*}(P/2)^{2} \succeq \mathbf{1}\left(\ell^{2} + |\mathbf{Y}_{\mathfrak{n}-1}|\ell\right). \tag{57}
$$

The theorem is proved since $\ell \leq t/2$.

# 6.2 Skew-symmetric elementary matrix basis

We give another orthogonal design. In this subsection, we will not require $\mathsf{N}$ to be a power of 2.


Let $\mathsf{N} \geq 3$ be any integer. For any integers $a, b$ ($1 \leq a, b \leq \mathsf{N}$), let $E_{a,b} = |a\rangle\langle b| - |b\rangle\langle a|$ denote the skew-symmetric $\mathsf{N} \times \mathsf{N}$ matrix in which there are only two nonzero matrix entries $\pm 1$. Define

$$
\mathbf{E}_{\mathsf{N}} = \left\{E_{a,b} \in \mathbb{R}^{\mathsf{N}\times\mathsf{N}} \mid 1 \leq a < b \leq \mathsf{N}\right\}. \tag{58}
$$

Clearly, $\mathbf{E}_{\mathsf{N}}$ is a linear basis for $\mathfrak{so}(\mathsf{N})$. We see that $[E_{a,b}, E_{b,c}] = E_{a,c}$ for any distinct $a, b, c$. This basis is convenient because different elements are orthogonal with respect to the Killing form.

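
The commutation relation and the orthogonality claim can be spot-checked numerically (our illustration):

```python
import numpy as np

def E(a, b, N):
    # skew-symmetric elementary matrix |a><b| - |b><a| (0-indexed here)
    M = np.zeros((N, N))
    M[a, b], M[b, a] = 1.0, -1.0
    return M

N = 5
comm = E(0, 2, N) @ E(2, 4, N) - E(2, 4, N) @ E(0, 2, N)
assert np.allclose(comm, E(0, 4, N))  # [E_{a,b}, E_{b,c}] = E_{a,c}
# distinct basis elements have disjoint nonzero entries, hence are orthogonal
# in the trace inner product (proportional to the Killing form):
assert abs(np.trace(E(0, 1, N).T @ E(2, 3, N))) < 1e-12
assert abs(np.trace(E(0, 1, N).T @ E(1, 2, N))) < 1e-12
print("commutator and orthogonality verified")
```
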

Theorem 6.2. Let $\mathsf{N} \geq 3$ and $t \geq 1$ be any integers. Then,

$$
\left\| \underset{\theta \sim (-\pi, \pi),\, E \in \mathbf{E}_{\mathsf{N}}}{\mathbb{E}} \left(e^{\theta E}\right)^{\otimes t} - \underset{O \sim \mathsf{SO}(\mathsf{N})}{\mathbb{E}} O^{\otimes t} \right\| \leq 1 - \frac{1}{t} \frac{2(\mathsf{N} - 2)}{\mathsf{N}(\mathsf{N} - 1)} - \frac{2}{\mathsf{N}(\mathsf{N} - 1)}. \tag{59}
$$


This can be used to generate an approximately Haar-random $\mathsf{N} \times \mathsf{N}$ orthogonal matrix fast. The exponential of an element $E \in \mathbf{E}_{\mathsf{N}}$ is a $2 \times 2$ rotation block, direct-summed with an $(\mathsf{N}-2)$-dimensional identity matrix. Hence, multiplying a dense $\mathsf{N} \times \mathsf{N}$ matrix by $e^{\theta E}$ takes $\mathcal{O}(\mathsf{N})$ arithmetic operations. For some applications, this method can be better than generating $\mathsf{N} \times \mathsf{N}$ Gaussian random entries and running the Gram-Schmidt process.

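
Below is a minimal sketch (ours, with arbitrary parameter choices) of such a sampler: each step left-multiplies by $e^{\theta E_{a,b}}$, a Givens-type rotation that touches only two rows, so one step costs $\mathcal{O}(\mathsf{N})$.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_rotation(M, a, b, theta):
    # left-multiply M by exp(theta * E_{a,b}) in place; only rows a, b change
    c, s = np.cos(theta), np.sin(theta)
    ra, rb = M[a].copy(), M[b].copy()
    M[a], M[b] = c * ra + s * rb, -s * ra + c * rb

N, k = 64, 20000  # matrix size and number of random rotations (illustrative)
M = np.eye(N)
for _ in range(k):
    a, b = rng.choice(N, size=2, replace=False)
    apply_rotation(M, a, b, rng.uniform(-np.pi, np.pi))
assert np.allclose(M @ M.T, np.eye(N))    # M is orthogonal
assert np.isclose(np.linalg.det(M), 1.0)  # and special (det = 1)
```
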

Proof. As before, we consider an $\mathsf{SO}(\mathsf{N})$-irrep $\rho$, and estimate the norm of $\mathbb{E}_{\theta,E}\rho(e^{\theta E})$.

(Step 1: random angles give kernel projectors.) The eigenvalues of any $E \in \mathbf{E}_{\mathsf{N}}$ are $0, \pm\mathrm{i}$. So, the average over $\theta$ gives $\mathbb{E}_{\theta,E} \rho(e^{\theta E}) = \mathbb{E}_E K(\rho_*(E))$.

(Step 2: identical eigenspectra for all represented operators) It is obvious that two different elements of $\mathbf{E}_{\mathsf{N}}$ are related by some row and column permutations. A transposition is not in $\mathsf{SO}(\mathsf{N})$, but the product of a transposition and a diagonal matrix with $\mathsf{N} - 1$ entries being 1 and the remaining one being $-1$ is. Since $\mathsf{N} \geq 3$, we can always find such a diagonal matrix that leaves a given $E_{a,b}$ fixed; just place the $-1$ at an index where $E_{a,b}$ has a zero row and column. Hence, any two elements of $\mathbf{E}_{\mathsf{N}}$ are congruent by $\mathsf{SO}(\mathsf{N})$, and the eigenspectrum of the represented operators $\rho_{*}(E)$ is independent of $E \in \mathbf{E}_{\mathsf{N}}$. Let $\ell = \|\rho_{*}(E)\| \leq t$ for any $E \in \mathbf{E}_{\mathsf{N}}$.

(Step 3: to quadratic Casimir) We need to check that $\sum_{E\in \mathbf{E}_{\mathsf{N}}}\rho_{*}(E)^{2}$ commutes with every $E \in \mathbf{E}_{\mathsf{N}}$. A moment's thought shows that it suffices to check the commutation of $\rho_{*}(E_{a,b})$ with $\rho_{*}(E_{a,c})^{2} + \rho_{*}(E_{b,c})^{2}$ for any $c$, but this is exactly the same calculation as for the $\mathfrak{su}(2)$ case. Hence, an upper bound $\mathbf{1} - \mathbb{E}_E\rho_*(E)^2/\ell^2$ on $\mathbb{E}_E K(\rho_*(E))$ is a scalar multiple of the identity.

(Step 4: find a large number of $\mathfrak{su}(2)$'s) We focus on a vector with an eigenvalue $\ell$ for $\rho_{*}(E_{1,2})$. For any $c \geq 3$, we have an $\mathfrak{su}(2)$-subalgebra generated by $E_{1,2}, E_{1,c}, E_{2,c}$. So, $\sum_{E \in \mathbf{E}_{\mathsf{N}}} \rho_{*}(E)^{2} \succeq \mathbf{1}(\ell^{2} + (\mathsf{N} - 2)\ell)$.

# Acknowledgements

We thank Thiago Bergamaschi, Jonas Haferkamp, Aram Harrow, Zeph Landau, Ryan O'Donnell, and Peter Shor for helpful discussions. This work was done in part while X.T. was visiting the Simons Institute for the Theory of Computing, and while J.H., Y.L., and X.T. were at the Park City Mathematics Institute 2023 Graduate Summer School. Y.L. is supported by DOE Grant No. DE-SC0024124, NSF Grant No. 2311733, and MURI Grant No. S394857. X.T. is supported by NSF Grant No. CCF-1729369.

# A Discrete angles

In the proof of Theorem 1.1, the only place where we use averaging over $\theta \sim (-\pi, \pi)$ is in the following context: for a nonzero hermitian operator $H$ with integer eigenvalues in $[-t, t]$, we have

$$
\underset{\theta \sim (-\pi, \pi)}{\mathbb{E}} e^{\mathrm{i}\theta H} = K(H) \preceq \mathbf{1} - \frac{H^{2}}{\|H\|^{2}}, \tag{60}
$$

where $K(H)$ is the orthogonal projector onto the kernel of $H$ (see Eq. (36)). Since $H$ has bounded norm, we can instead consider averaging over the angles in the discrete set $\Theta_t = \{m\pi/t : m \in \mathbb{Z} \cap [-t, t-1]\}$. Then, for any hermitian operator $H$ with integer eigenvalues in $[-t, t]$, we have

$$
\underset{\theta \sim \Theta_{t}}{\mathbb{E}} e^{\mathrm{i}\theta H} = \underset{\theta \sim (-\pi, \pi)}{\mathbb{E}} e^{\mathrm{i}\theta H} = K(H) \tag{61}
$$

because for any integer $k$,

$$
\sum_{m=-t}^{t-1} \exp(\mathrm{i}mk\pi/t) = \begin{cases} 2t & \text{if } k = 0, \\ 0 & \text{if } 0 < |k| \leq t. \end{cases} \tag{62}
$$

The rest of the proof is exactly the same as the proof of Theorem 1.1.
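
The exponential sum identity in Eq. (62) can be confirmed directly (our check):

```python
import cmath

t = 7
for k in range(-t, t + 1):
    s = sum(cmath.exp(1j * m * k * cmath.pi / t) for m in range(-t, t))
    expected = 2 * t if k == 0 else 0
    assert abs(s - expected) < 1e-9, (k, s)
print("Eq. (62) verified for t =", t)
```
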
# B State designs

We consider distributions $\nu$ on a complex projective space $\mathbb{CP}^{\mathsf{N}-1}$ where $\mathsf{N} = 2^{\mathsf{n}}$ is a power of 2. This is often called a state design because $\mathbb{CP}^{\mathsf{N}-1}$ is the set of all normalized state vectors modulo global phase factors in an $\mathsf{n}$-qubit system, or equivalently the set of all rank-1 projectors $|\psi\rangle\langle\psi|$ on $(\mathbb{C}^2)^{\otimes \mathsf{n}}$. There is a natural (left) action of a unitary group given by $|\psi\rangle\langle\psi| \mapsto U|\psi\rangle\langle\psi|U^\dagger$ for $U \in \mathsf{SU}(\mathsf{N})$. The Haar measure of $\mathsf{SU}(\mathsf{N})$ induces an $\mathsf{SU}(\mathsf{N})$-invariant measure on $\mathbb{CP}^{\mathsf{N}-1}$. This is the target distribution we wish to approximate. A natural metric for the quality of approximation is closeness in $t$-th moments, maximized over all possible measurements. This is succinctly described by the trace distance:

$$
\frac{1}{2} \left\| \underbrace{\mathbb{E}_{\psi \sim \nu} (|\psi\rangle\langle\psi|)^{\otimes t}}_{\mathcal{S}_{\nu,t}} - \underbrace{\mathbb{E}_{U \sim \mathsf{SU}(\mathsf{N})} (U|\alpha\rangle\langle\alpha|U^{\dagger})^{\otimes t}}_{\mathcal{S}_{\mathrm{Haar},t}} \right\|_{1}, \tag{63}
$$


where $|\alpha\rangle$ can be any normalized vector in $(\mathbb{C}^2)^{\otimes \mathsf{n}}$ due to the right invariance of the Haar measure. Any approximate unitary design can be used for state designs, and a bound on the $t$-th moment trace distance directly comes from the analysis of the approximate unitary design. For example, the result of Corollary 1.2 serves the purpose. However, this is not necessarily the best one can show.

Theorem B.1. Let $\|\cdot\|_1$ denote the Schatten 1-norm of a matrix, the sum of all singular values. For any integers $t, k, \mathfrak{n} \geq 1$ and a normalized vector $|\alpha\rangle \in \mathbb{C}^{\mathsf{N}}$, we have

$$
\left\| \mathcal{C}_{t}^{k}\left(|\alpha\rangle^{\otimes t}\langle\alpha|^{\otimes t}\right) - \mathcal{S}_{\mathrm{Haar},t} \right\|_{1} \leq \binom{\mathsf{N}+t-1}{t}^{1/2} \left(1 - \frac{1}{2t}\frac{\mathsf{N}}{\mathsf{N}+1} - \frac{\mathsf{N}}{2(\mathsf{N}^{2}-1)}\right)^{k}. \tag{64}
$$


The last term in the parenthesis is $\approx (2\mathsf{N})^{-1}$ for large $\mathsf{N}$, which contrasts with the last term $\approx \mathsf{N}^{-2}$ in Theorem 1.1.

Proof. The input $(|\alpha\rangle\langle\alpha|)^{\otimes t}$ is invariant under tensor factor permutations either on the ket or bra factors. The action by $\mathsf{SU}(\mathsf{N})$ commutes with this permutation symmetry, and hence the input vector is in an $\mathsf{SU}(\mathsf{N})$-representation $\Sigma = \mathrm{Sym}^t(\mathbb{C}^{\mathsf{N}}) \otimes \mathrm{Sym}^t((\mathbb{C}^{\mathsf{N}})^*)$. By the Littlewood-Richardson rule (actually its special case [FH13, 15.25(i)]), we have a decomposition of $\Sigma$ into irreps:

$$
\Sigma = \mathrm{Sym}^{t}\left(\mathbb{C}^{\mathsf{N}}\right) \otimes \mathrm{Sym}^{t}\left(\left(\mathbb{C}^{\mathsf{N}}\right)^{*}\right) = \bigoplus_{s=0}^{t} \underbrace{\left(\text{highest weight } s(L_{1} - L_{\mathsf{N}})\right)}_{\Gamma_{s}}. \tag{65}
$$


Here, $L_{i}$ is the dual of the diagonal matrix (an element of the Cartan subalgebra) where there is a sole nonzero entry that is 1 at the $i$-th position. Note that all the multiplicities of the irreps are 1, and $\Gamma_0$ is a one-dimensional trivial representation. Decompose $(|\alpha\rangle\langle\alpha|)^{\otimes t}$ into $\bigoplus_{s=0}^{t}\gamma_{s}(\alpha)$ according to the irrep decomposition $\Gamma_s$.<sup>8</sup> Proposition 2.4 applied to $\Sigma$ says that the Haar average $\mathcal{S}_{\mathrm{Haar},t}$ projects $(|\alpha\rangle\langle\alpha|)^{\otimes t}$ onto $\gamma_0(\alpha)$. This projection is independent of $\alpha$ because $\gamma_0 = \gamma_0(\alpha)$ is uniquely determined by the trace-preserving property.<sup>9</sup>


It is now clear that

$$
\mathcal{C}_{t}^{k}\left((|\alpha\rangle\langle\alpha|)^{\otimes t}\right) - \mathcal{S}_{\mathrm{Haar},t} = \mathcal{C}_{t}^{k}\left(\bigoplus_{s=1}^{t}\gamma_{s}(\alpha)\right). \tag{66}
$$

We are going to bound the Schatten 2-norm of Eq. (66) by the factor in the parenthesis of Eq. (64). Since it is after all a matrix acting on $\mathrm{Sym}^t(\mathbb{C}^{\mathsf{N}})$ of dimension $\binom{\mathsf{N}+t-1}{t}$, conversion to the Schatten 1-norm gives the theorem.


Lemma 3.2 says that $\|\mathcal{C}_t^k|_{\Gamma_s}\|_{2\to 2} = \|\mathbb{E}_{P\in \mathbf{P}_{\mathfrak{n}}}K(\Gamma_{s*}(P/2))\|^k$ where $\Gamma_{s*}$ is the induced Lie algebra representation. Invoking Eq. (36), Lemma 5.2, and most importantly Eq. (50) with highest weights $s(L_1 - L_{\mathsf{N}})$ where $s = 1, 2, \ldots, t$, we find that

$$
\left\| \bigoplus_{s=1}^{t} \mathbb{E}_{P} K\left(\Gamma_{s*}(P/2)\right) \right\| \leq 1 - \frac{1}{2t}\frac{\mathsf{N}}{\mathsf{N}+1} - \frac{\mathsf{N}}{2(\mathsf{N}^{2}-1)}. \tag{67}
$$

One can perform a similar calculation for a real projective space $\mathbb{RP}^{\mathsf{N}-1}$, which is the same as the hemisphere (spherical cap) of dimension $\mathsf{N}-1$, excluding the equator of measure zero.

# References

[BCD+24] Adam Bouland, Chi-Fang Chen, Jordan Docter, Patrick Hayden, and Michelle Xu. To appear. 2024.

[BCH+21] Fernando G. S. L. Brandão, Wissam Chemissany, Nicholas Hunter-Jones, Richard Kueng, and John Preskill. "Models of Quantum Complexity Growth". In: PRX Quantum 2 (3 July 2021), p. 030316. DOI: 10.1103/PRXQuantum.2.030316. arXiv: 1912.04297.

[BG12] Jean Bourgain and Alex Gamburd. "A spectral gap theorem in $\mathrm{SU}(\mathrm{d})$". In: Journal of the European Mathematical Society (EMS Publishing) 14.5 (2012). arXiv: 1108.6264.

[BH08] Alex Brodsky and Shlomo Hoory. "Simple permutations mix even better". In: Random Structures & Algorithms 32.3 (2008), pp. 274-289. DOI: 10.1002/rsa.20194. arXiv: math/0411098.

[BHH16] Fernando G. S. L. Brandão, Aram W. Harrow, and Michal Horodecki. "Local Random Quantum Circuits are Approximate Polynomial-Designs". In: Commun. Math. Phys. 346.2 (Aug. 2016), pp. 397-434. DOI: 10.1007/s00220-016-2706-8. arXiv: 1208.0692.

[DCEL09] Christoph Dankert, Richard Cleve, Joseph Emerson, and Etera Livine. "Exact and approximate unitary 2-designs and their application to fidelity estimation". In: Phys. Rev. A 80 (1 July 2009), p. 012304. DOI: 10.1103/PhysRevA.80.012304. arXiv: quant-ph/0606161.

[FH13] William Fulton and Joe Harris. Representation theory: a first course. Vol. 129. Springer Science & Business Media, 2013.

[GAE07] D. Gross, K. Audenaert, and J. Eisert. "Evenly distributed unitaries: On the structure of unitary designs". In: J. Math. Phys. 48.5 (May 2007), p. 052104. DOI: 10.1063/1.2716992. arXiv: quant-ph/0611002.

[Haf22] Jonas Haferkamp. "Random quantum circuits are approximate unitary $t$-designs in depth $O(nt^{5+o(1)})$". In: Quantum 6 (Sept. 2022), p. 795. DOI: 10.22331/q-2022-09-08-795. arXiv: 2203.16571.

[Haf23] Jonas Haferkamp. On the moments of random quantum circuits and robust quantum complexity. 2023. arXiv: 2303.16944.

[HL09] Aram W. Harrow and Richard A. Low. "Random Quantum Circuits are Approximate 2-designs". In: Commun. Math. Phys. 291.1 (Oct. 2009), pp. 257-302. DOI: 10.1007/s00220-009-0873-6. arXiv: 0802.1919.

[HMH+22] J. Haferkamp, F. Montealegre-Mora, M. Heinrich, J. Eisert, D. Gross, and I. Roth. "Efficient Unitary Designs with a System-Size Independent Number of Non-Clifford Gates". In: Commun. Math. Phys. 397.3 (Nov. 2022), pp. 995-1041. DOI: 10.1007/s00220-022-04507-6. arXiv: 2002.09524.

[KSV02] A. Yu. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. Vol. 47. American Mathematical Society, 2002.

[Low10] Richard A. Low. Pseudo-randomness and Learning in Quantum Computation. 2010. arXiv: 1006.5227.

[OSP23] R. O'Donnell, R. A. Servedio, and P. Paredes. "Explicit orthogonal and unitary designs". In: 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS). Los Alamitos, CA, USA: IEEE Computer Society, Nov. 2023, pp. 1240-1260. DOI: 10.1109/FOCS57990.2023.00073. arXiv: 2310.13597.

[RS09] Aidan Roy and A. J. Scott. "Unitary designs and codes". In: Designs, Codes and Cryptography 53.1 (Oct. 2009), pp. 13-31. DOI: 10.1007/s10623-009-9290-2. arXiv: 0809.3813.

[RV05] Eyal Rozenman and Salil Vadhan. "Derandomized Squaring of Graphs". In: Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques. Ed. by Chandra Chekuri, Klaus Jansen, José D. P. Rolim, and Luca Trevisan. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 436-447.

[Sve84] Marta Sved. "Counting and recounting: The aftermath". In: The Mathematical Intelligencer 6.4 (1984), pp. 44-46. DOI: 10.1007/bf03026737.

2402.05xxx/2402.05239/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22426434ac681e9ccc644369fe848f20acebf118aede345238fc649c5b79796f
size 600850
2402.05xxx/2402.05239/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_model.json
ADDED
The diff for this file is too large to render. See raw diff
2402.05xxx/2402.05319/b05a954c-5fc0-4c9a-ae09-5c47d5417c75_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20e71b1a6f0958702d2e37813ef0495f365f0e21b13496e084b8e5391db6178
size 1588505
2402.05xxx/2402.05319/full.md
ADDED
@@ -0,0 +1,477 @@
# Optimal energy-aware task scheduling for batteryless IoT devices

Carmen Delgado<sup>1,2</sup> and Jeroen Famaey<sup>1</sup>

<sup>1</sup>IDLab, University of Antwerp - imec, Antwerp, Belgium

<sup>2</sup>i2CAT Foundation, Barcelona, Spain


Abstract—Today's IoT devices rely on batteries, which offer stable energy storage but contain harmful chemicals. Having billions of IoT devices powered by batteries is not sustainable for the future. As an alternative, batteryless devices run on long-lived capacitors charged using energy harvesters. The small energy storage capacity of capacitors results in intermittent on-off behaviour. Traditional computing schedulers cannot handle this intermittency, and in this paper we propose a first step towards an energy-aware task scheduler for constrained batteryless devices. We present a new energy-aware task scheduling algorithm that is able to optimally schedule application tasks to avoid power failures, and that will allow us to provide insights on the optimal look-ahead time for energy prediction. Our insights can be used as a basis for practical energy-aware scheduling and energy availability prediction algorithms. We formulate the scheduling problem as a Mixed Integer Linear Program. We evaluate its performance improvement when comparing it with state-of-the-art schedulers for batteryless IoT devices. Our results show that making the task scheduler energy aware avoids power failures and allows more tasks to successfully execute. Moreover, we conclude that a relatively short look-ahead energy prediction time of 8 future task executions is enough to achieve optimality.

Index Terms—Internet of Things (IoT); batteryless IoT devices; task scheduler; energy-aware; low-power wide-area networks (LPWAN); energy harvesting; optimization; Mixed Integer Linear Programming (MILP)

# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
The Internet of Things (IoT), where tens of billions of interconnected devices communicate and cooperate with each other over the Internet, is getting more and more attention nowadays. This is due to many reasons, but the most important ones are that these devices aim at supporting and improving daily life, they are cheap, and they are easy to use. Normally, these devices are equipped with a battery, a radio chip, a microcontroller unit (MCU) and one or more sensors and/or actuators. With the advancements in low-power and miniature electronics and in low power radio technologies, there has been a clear increase of IoT applications covering a wide range of application areas [1], such as home automation, wearable devices and industrial or agricultural monitoring.
|
| 16 |
+
|
| 17 |
+
However, since their inception, batteries have been one of the main drivers of these IoT devices. But batteries are not only incompatible with a sustainable IoT since they contain harmful chemicals [2], they are also sensitive to temperature changes, dangerous when not carefully protected, and short-lived, requiring costly maintenance and replacement every few years at best. Although rechargeable batteries in combination with energy harvesters can somehow offset this problem, they still suffer from capacity degradation due to frequent charge-discharge cycles, as well limiting their lifetime to a few years. This results in millions upon millions of discarded IoT batteries every year, filled with dangerous chemicals that significantly affect our environment and ecology. Moreover, batteries are susceptible to current peaks, which makes them degrade faster. Sadly, IoT devices often
|
| 18 |
+
|
| 19 |
+
suffer from such peaks, due to the fact they spend most of their time in a low-power (sleep) state. When they wake up to transmit or receive data, their power consumption jumps up many orders of magnitude, resulting in short-lived current peaks. To address all these IoT-related battery problems, researchers have recently started investigating batteryless IoT devices and networks [3].
These batteryless devices run on small but long-lived capacitors for energy storage, charged using various forms of energy harvesting (e.g., thermal, solar, vibration), which makes them more environmentally friendly, cheaper to maintain, easy to recycle and more resistant to temperature variations and charge-discharge degradation. This makes them especially suitable for applications in hard-to-reach locations (e.g., intra-body health monitoring, remote-area sensing) and large-scale deployments (e.g., dense building automation networks, smart cities). However, the combination of small energy storage capacities and stochastic energy harvesting behaviour causes batteryless devices to intermittently turn on and off (cf. Figure 1): a power failure occurs when the capacitor voltage drops below the turn-off threshold, and once the device has harvested enough energy, it turns on again when the turn-on voltage threshold is reached.
Such intermittency challenges the fundamental assumption that devices can operate uninterrupted to perform their tasks, and requires rethinking computing, communications and networking paradigms. In this paper we focus on the computing part, where we present an optimal energy-aware task scheduling algorithm for batteryless devices. We propose a Mixed Integer Linear Programming (MILP)

Fig. 1: Batteryless device intermittent behaviour
optimization framework which intelligently decides when to execute which task, according to the energy harvested and stored, the energy consumed by the tasks, and their priority. We follow the same approach as in [4], where the programmer decomposes the application into a collection of interconnected atomic tasks. The runtime keeps track of the active task, re-executing it after a power failure, and keeping its output in non-volatile (i.e., permanent) memory after successful completion for use as input to other tasks. However, our approach is able to avoid power failures, improving the overall performance of the scheduler. The proposed optimal algorithm not only shows the maximum performance improvement that can be achieved with energy-aware scheduling compared to non-energy-aware scheduling, but also offers insights into the look-ahead window, in terms of predicting future available energy, that is needed to achieve such a gain. Thanks to our approach, designers will be able to define their energy harvesting prediction windows, as well as the expected tasks to be executed. Although the presented MILP formulation cannot be directly solved in real-time on a batteryless IoT device, the proposed scheduling formulation and obtained insights can be used as a basis for the design of fast heuristic scheduling algorithms that can be executed in real-time on batteryless devices.
The remainder of this paper is organized as follows. Section 2 provides an overview of the related work in the field of batteryless computing and tasks schedulers. Section 3 describes the proposed system model. In Section 4, we introduce our optimization framework. The evaluation of the optimal task scheduling is provided in Section 5. Finally, conclusions and future work are discussed in Section 6.
# 2 RELATED WORK
The batteryless vision requires computing mechanisms that deal with this intermittent behaviour. Traditional sequential computing models and programming languages cannot handle such abrupt intermittent behaviour, as they assume uninterrupted execution of program instructions and rely on volatile memory to keep application execution progress. Different computing models and schedulers to overcome intermittent execution in batteryless devices have already been proposed. In their recent work, Sandhu et al. survey existing task scheduling schemes for energy harvesting IoT, which aim at ensuring optimal utilization of the harvested energy to extend system lifetime as well as to provide the highest activity detection/monitoring performance [5]. They analyze the three major strategies employed: Dynamic Voltage and Frequency Scaling (DVFS), decomposing and combining tasks, and duty cycling. First, DVFS adjusts
the voltage level and the frequency used to power the active hardware module. Second, large tasks can be decomposed into smaller atomic subtasks. Finally, duty cycling is another task scheduling mechanism, which controls the energy consumed by the nodes when they are not performing any useful operation. DVFS algorithms are difficult to implement on these energy-constrained sensor nodes, due to the stringent requirement of complex circuitry that provides various voltage levels for the different components of the node [5]. For this reason, in this work we propose a new optimization algorithm that uses the other two strategies: we divide tasks into atomic subtasks, and we use duty cycling to let the device sleep intelligently, so that it harvests more energy that will benefit the execution of future tasks.
The two main computing strategies are checkpointing- and task-based models, where the latter are based on the task decomposition strategy described above. While checkpointing-based models such as Mementos [6] and Clank [7] are not scalable due to the time and energy cost of creating checkpoints, which increases with the size of the volatile memory, task-based models are more suitable for batteryless devices. As mentioned before, these models divide the program into different atomic subtasks, and the output of a task is stored in non-volatile memory when it successfully completes, reducing the checkpointing overhead. Other approaches considering non-volatile processors, which integrate not only non-volatile memories but also non-volatile registers and flip-flops, have been proposed [8]. However, their cost in terms of hardware (increased power consumption, increased area, and decreased frequency) results in significant software slowdowns and complexity overhead.
Alpaca [9], Mayfly [10] and InK [4] are the most relevant state-of-the-art task-based computing models and schedulers. The first two only consider static task flows, where a task that cannot be completed due to the energy level at that time is simply executed again. However, if a specific task can never be completed, or if the energy conditions of the capacitor or energy harvester change, all other tasks will starve, waiting for the current one to be retried over and over again. To overcome this problem, InK [4] considers a dynamic scheduler based on priorities and event triggers (e.g., timers, energy level triggers, sensor value triggers), which are defined in advance by the programmer. This allows the application to adapt to changes in available energy and variations in sensing behaviour. However, it places the entire burden of adapting the application logic and task selection on the programmer. This requires in-depth knowledge of the energy consumption of tasks, as well as of the energy life-cycle of the device, which is generally not known before deployment. If the device does not know how much energy will be available in the future, it can spend energy on a task or chain of tasks without knowing if it will have enough energy to complete it before the deadline. If the harvested energy is insufficient, tasks will not be completed, and energy and time will be wasted. To overcome this issue, and fully automate the problem of task selection, in this paper we propose an
energy-aware task scheduler. We consider the task chains' priorities and deadlines, but also the available energy of the device, the energy cost of the tasks and the (predicted) energy harvesting budget, to decide which task should be completed first, making the scheduler more intelligent and resource-efficient. Although Mayfly [11] and InK [4] consider deadlines and data freshness in the form of expiration timers, they do not consider deadlines across the different tasks of a chain. AsTAR [12] also presents an energy-aware task scheduler and an associated reference platform that aims to lower the burden of developing sustainable applications through self-adaptive task scheduling. It does not need any pre-configuration and supports platform heterogeneity. However, it considers neither different types of application requirements nor different priorities in its design. More recently, Islam et al. [13] have proposed two scheduling algorithms for batteryless devices where the energy of the capacitor and the deadline of the tasks are taken into account. However, they assume that the capacitor follows a linear charging behaviour, that harvesting and computing are mutually exclusive, and that all applications have the same priority. In our work, we follow a different approach: we use an exponential capacitor charging model, and we let the harvester and the device work simultaneously. We also define different task priorities, to allow more power-hungry tasks to be executed if they are more beneficial for the specific use case.
Other scheduling algorithms have also been proposed. Caruso et al. proposed a dynamic programming algorithm to optimize the scheduling of tasks in IoT devices that harvest energy by means of a solar panel [14]. They used estimations of the solar energy produced in each time slot to compute the optimal schedule in advance. SolarCore [15] also presents a solution for solar energy harvesting. It includes a power management scheme that optimizes the power obtained from a solar panel through maximal power provisioning control and workload optimization. In contrast, we present a more generic task scheduling solution that does not rely on any specific energy source and whose main constraint is the scarce energy available in the capacitor. A simple scheduler model that considers neither different tasks nor a harvesting source is proposed in [16], where the authors explain the importance of taking into account the energy consumed by the memory when backing up and restoring data to and from non-volatile memory after a power failure. They claim that expending energy on instructions whose output is not saved before a power outage is wasteful. The authors in [17] present the modified earliest deadline first (MEDF) algorithm, based on supercapacitors and energy harvesting, which takes into account the energy and time constraints of the tasks. However, their algorithm does not avoid energy violations (when the voltage of the capacitor falls below the turn-off threshold), but only counts how many occur. Counting energy violations without avoiding them, or without considering that the device needs to turn off, is not realistic, and something we address in this paper. Furthermore, they assume the tasks to be independent and non-preemptable, while we not only avoid power failures but also consider task dependencies in our work.
A more practical scheduling algorithm has been presented in [18], where the authors optimally set the overall node power consumption based on the utility and on the energy required by tasks. It is implemented on an Arduino node, equipped with a small (portable) solar panel and attached to a small wind turbine. In [19], Srbinovski et al. present an energy-aware adaptive sampling algorithm, where the node manages its activity in the network according to its energy levels. However, the user needs to define the critical battery level at which the node becomes energy conservative by reducing its sampling rate; this is the same approach followed by AsTAR [12]. In a more general scenario where a server farm needs to be optimized, Blink [20] proposes to frequently adjust the servers' duty cycle (turning servers on and off) to adapt to power variations, while maintaining a certain synchronization between them when needed. In [21], an energy-aware scheduling algorithm is proposed that can configure the hardware of a Field-Programmable Gate Array (FPGA) based on the solar energy available according to the weather forecast. In contrast, we look at the optimal scheduling of application tasks on a constrained IoT device without batteries, which is a problem with significantly different constraints and requirements.
Although ILP-based approaches have already been proposed, they are not intended to work for batteryless devices. In fact, [22] presented an ILP approach for multiprocessor partitioned scheduling that assumes any task may be interrupted at any instant in time and its execution resumed later with no cost or penalty, which is not realistic in batteryless devices. The same assumption is also used in [23], where the authors present an optimal scheduling algorithm for rechargeable batteries or supercapacitors, although their energy model is very simplistic. In [24], a task scheduling algorithm for Simultaneous Wireless Information and Power Transfer (SWIPT) IoT devices is proposed, where only one device can be charged at a time. As we target environmental sources (e.g., solar), all devices can be charged at the same time. Although in their evaluation they use a rechargeable battery and a capacitor as storage elements, they consider that both follow a linear charging behaviour. We have also presented an energy-aware algorithm for batteryless LoRaWAN devices using energy harvesting, where we evaluate the performance of these constrained devices when allowing sleeping between tasks or letting them turn off [25]. We showed that sleeping between tasks normally performs better, and for this reason, in this work we present a more generic task scheduler that follows this approach. Finally, in order to reduce energy consumption, task offloading could be used, as the authors in [26] have done for smart mobile devices. Task offloading could be combined with our scheduling algorithm for constrained batteryless IoT devices, where, according to the available energy, the scheduler decides whether to transmit data towards the edge cloud, where the computation would take place, rather than performing it on the device itself.
# 3 SYSTEM MODEL
In this section we give a brief overview of the considered batteryless IoT device model (more details are provided in

Fig. 2: Electrical circuit model of a batteryless IoT device using a current source energy harvester
our previous work [27]) and how the energy-aware task scheduler for these batteryless devices works.
# 3.1 Batteryless model
Batteryless IoT devices are equipped with a harvester mechanism, a capacitor, an MCU, a radio unit and the needed peripherals. In order to model the behaviour of these devices, we have considered the electrical circuit presented in [27] and shown in Figure 2, where the circuit is divided into three main parts: the harvester (source of the energy), the capacitor (storage of the energy) and the load (consumer of the energy: MCU, radio, peripherals).
We have considered a generic and simplified approach, where only the generated power is taken into account. Since photovoltaic cells are modeled as current harvester sources [28], we have considered the current source model presented in [27], where the harvester is modeled as a real current source, composed of an ideal current source in parallel with an internal resistance (denoted by $I$ and $r_h$, respectively). The parallel resistance $r_h$ (in $\Omega$) limits the power of the harvester and its value is calculated using the following equation:
$$
r_h = \frac{V_{max}^2}{P_{harvester}} \tag{1}
$$
where $P_{harvester}$ is the power of the harvester source, which can vary greatly depending on the type of energy harvesting considered (e.g., up to $1mW / cm^2$ for indoor natural light, and up to $100mW / cm^2$ for outdoor sun) [29], and $V_{max}$ (in Volts) is the maximum voltage supported by the circuit elements, which is determined by the load. The value of the current $I$ (in Amperes) can then be calculated as follows:
$$
I = \frac{V_{max}}{r_h} \tag{2}
$$
The capacitor is the part of the circuit where the energy is stored. As shown in Figure 1, the behaviour of the system is a succession of intervals, where the capacitor is being charged or discharged. Each interval is characterized by a specific state of the load components (e.g., MCU is active and radio is transmitting). We characterize the voltage of the capacitor throughout each interval using $V_{0}$ and $v_{C}(t)$ . $V_{0}$ represents the initial voltage of the capacitor at the beginning of the interval (i.e., time $t_0$ ), and $v_{C}(t)$ is the temporal evolution of said voltage at time $t$ (relative to $t_0$ ). Both $V_{0}$ and $v_{C}(t)$ are included in the circuit as an ideal voltage source and the voltage over time of an ideal capacitor respectively, as shown in Figure 2.
The load of the model corresponds to the set of components that consume the energy stored in the capacitor while a task is being executed, such as the MCU, radio or sensors. Each of these components is characterized by a specific power consumption in each of its states (e.g., active, sleeping, off). Therefore, they can be modeled as a load resistance denoted by $R_{L}$ (in $\Omega$), which can be calculated as follows:
$$
R_L = \frac{E}{I_{load}} \tag{3}
$$
where $E$ and $I_{load}$ can be defined either theoretically or empirically. Theoretically, $E$ is the operating voltage of the circuit elements, which is given in the datasheet, and $I_{load}$ is the summed current consumption of all components for the specific task to be executed, which can also be found in the corresponding datasheets. To calculate them empirically, $E$ is the voltage at which the device is being powered, while $I_{load}$ is the current consumption measured for each task. $R_{L}$ thus varies across different tasks depending on the state of each component (i.e., radio, MCU, or sensors) during that specific task. Please note that $I_{load}$ is renamed as $e_{j}$ to be consistent with the formulation of the energy-aware task scheduler in Section 4.
To determine if the device has enough energy at a specific time $t$ to perform its tasks (e.g., transmit data), we need to calculate the voltage across the load of the model, $v(t)$:
$$
v(t) = I R_{eq} \left(1 - e^{\frac{-t}{R_{eq} C}}\right) + V_0 e^{\frac{-t}{R_{eq} C}} \tag{4}
$$
where $C$ is the capacitance in Farads, $t$ is the time (in seconds) spent in the current task, and $R_{eq}$ is the equivalent resistance of the circuit (in $\Omega$ ), computed as:
$$
R_{eq} = \frac{R_L r_h}{R_L + r_h} \tag{5}
$$
The value of $v(t)$ provides the voltage available at the load, which is used to determine if a specific task (e.g., transmit, listen, sense) can be performed during an interval, according to the time $t$ it takes, the energy harvesting rate $P_{harvester}$, the specific load $I_{load}$, and the capacitor voltage $V_0$ at the start of the task's execution. Note that $v(t)$ can be increasing or decreasing depending on the specific parameters, and if it drops below the turn-off threshold, the device (represented as the load in Figure 2) will turn off.
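This interval model can be exercised directly. The following is a minimal sketch, with illustrative component values (the numbers are assumptions chosen for this example, not the paper's measurements), that evaluates Equations 1-5 for one load state and checks the result against the turn-off threshold:

```python
import math

# Illustrative values (assumptions for this sketch, not the paper's data)
P_harvester = 1e-3   # harvested power (W)
V_max = 3.3          # maximum supported voltage (V)
E = 3.3              # operating voltage of the load (V)
I_load = 4.36e-3     # current drawn during the task (A)
C = 4.7e-3           # capacitance (F)
V0 = 2.2             # capacitor voltage at the start of the interval (V)
V_min = 1.8          # turn-off threshold (V)

r_h = V_max**2 / P_harvester       # Eq. (1): harvester internal resistance
I = V_max / r_h                    # Eq. (2): harvester source current
R_L = E / I_load                   # Eq. (3): load resistance for this task
R_eq = (R_L * r_h) / (R_L + r_h)   # Eq. (5): equivalent resistance

def v(t):
    """Eq. (4): capacitor voltage after t seconds in this load state."""
    decay = math.exp(-t / (R_eq * C))
    return I * R_eq * (1 - decay) + V0 * decay

# v(t) moves monotonically from V0 towards I*R_eq, so checking the end of
# the task's execution time is enough to detect a threshold crossing.
te = 0.19
print(f"v({te}) = {v(te):.3f} V ->", "ok" if v(te) >= V_min else "power failure")
```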
# 3.2 Energy-aware task scheduling concept
As mentioned in Section 2, in this work we propose a new optimization framework that uses two approaches: first, we divide tasks into atomic subtasks, and second, we use duty cycling, meaning that we let the device sleep to replenish energy when there are no tasks to be executed or when there is not enough energy, so that it harvests more energy that benefits the successful completion of future tasks.
The state-of-the-art schedulers introduced in Section 2 often do not consider energy in their algorithms, while we argue that in batteryless devices energy is the main concern. In Figure 3 we compare the behaviour of the state-of-the-art approach and our energy-aware approach.

Fig. 3: Energy-unaware vs energy-aware task schedulers in batteryless devices
In this example, the temperature update application is composed of three tasks: sense, process and transmit. As can be seen, the application first performs the sensing task, which corresponds to taking measurements from the environment, for example reading the temperature. Then, the data is processed and transmitted to the central controller in the following two tasks, completing the task chain for this event. However, existing task schedulers for intermittent computing select the next task to execute based on simple priority rules [4], leading to potential deadlocks or to wasting the scarce available energy on task chains that cannot be completed on time anyway.
As also reflected in Figure 3, we are targeting applications and tasks with deadlines, since data freshness is important in monitoring applications (e.g., sending a heart rate measurement that is several minutes old might be useless). We therefore consider a device trying to complete a task chain within a deadline. If it is energy-unaware (depicted in grey), it will try to complete every task one after another. However, when it tries to transmit the data in the example, there is not enough energy to complete this task, and a power failure occurs. In this case, the device needs to harvest additional energy until it reaches its voltage turn-on threshold and is able to retransmit the data. However, by the time this happens, the deadline is missed (depicted by the red line), which means that the data is not "fresh" anymore, and useless to the backend application that uses it.
To solve this problem, our energy-aware scheduler instead decides to wait until enough energy is available to transmit the data before the deadline. To do so, in this work we assume knowledge of the energy consumption of the tasks and of the energy being harvested over a certain look-ahead time window. The first assumption is easy to fulfill, since we can either use the datasheet consumption values of the reference platform or measure the consumption before device deployment. Moreover, we can assume knowledge of the future energy that can be harvested for some predictable and controllable sources, such as RF or indoor light [30].
The proposed energy-aware task scheduler will determine which task should be executed and when, according not only to the three main energy components of a batteryless device: the energy available in the capacitor, the energy that is being

Fig. 4: General overview of the atomic tasks model
harvested, and the energy consumption of the device performing the tasks; but also the task requirements (arrival time, execution time, priority, deadline and order of the tasks). The main goal of the energy-aware scheduler is to execute the maximum number of tasks, weighted by their priority. The most common types of tasks on an IoT node include sampling information, processing the data, data transmission, data reception and the use of actuators.
We consider a task as a sequence of atomic operations that are executed on a node. Typical IoT applications are, for example, report sensor values, relay data or use an actuator. We divide these applications into tasks. For example, report sensor values can be divided into sense the environmental variable (e.g., temperature) and transmit the data; relay data can be divided into receive and transmit; and use an actuator can be divided into receive the order and actuate. Every task is characterized by its arrival time, execution time, deadline, priority, order and energy consumption. We consider that the energy consumption of a task already takes into account the memory consumption for backup and restore, as mentioned in [16]. It is also important to mention that although some tasks, such as receiving, are very technology specific (LoRaWAN and BLE have different behaviours), they can still be considered as a single task whose parameters in the optimization framework are defined accordingly (e.g., the execution time covers the whole BLE transmission over the three channels).
In Figure 4 we show an example of how we have modeled the atomic tasks. Every task is defined by its execution time, deadline, priority and energy consumption. Some of the tasks are also characterized by an order. This is the case for $\text{Task}_3$, which has two parents ($\text{Task}_1$ and $\text{Task}_2$), and for $\text{Task}_6$, which has $\text{Task}_5$ as a parent. Finally, the arrival time depends on the specific task. For example, the arrival times of $\text{Task}_1$ and $\text{Task}_2$ are given as an input, while the arrival time of $\text{Task}_5$ is periodic. Furthermore, $\text{Task}_4$ will only arrive if the condition of $\text{Task}_3$ is satisfied. A data-structure sketch of this task model is shown below.
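The atomic task model of Figure 4 could be captured as follows; this is only an illustrative sketch, and the class and field names are ours, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Atomic task in the model of Figure 4 (names chosen for illustration)."""
    name: str
    priority: int
    exec_time: float          # te_j (s)
    arrival: float            # ta_j (s): earliest start
    deadline: float           # d_j (s): latest completion
    current: float            # e_j (A), incl. backup/restore overhead
    parents: list = field(default_factory=list)  # must complete before this task

# Fragment of Figure 4: Task3 has two parents, Task1 and Task2
t1 = Task("Task1", 1, 0.03, 0.0, 0.5, 1.7e-3)
t2 = Task("Task2", 1, 0.03, 0.0, 0.5, 1.7e-3)
t3 = Task("Task3", 3, 0.01, 0.0, 1.0, 1.0e-3, parents=[t1, t2])
```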
# 4 OPTIMIZATION FRAMEWORK
This section formally describes the energy-aware task scheduling problem, which aims to maximize the successful task execution rate (weighted by task priority). We formulate the optimization problem as a MILP that uses, among others, the energy that is being harvested, the task execution times and the task energy consumptions as input, and decides on the optimal way of executing the tasks while avoiding power failures and missed deadlines. The MILP provides the globally optimal solution if we assume total knowledge of the expected tasks to be executed and the
TABLE 1: Set of parameters of the optimization framework
<table><tr><td>Parameter</td><td>Definition</td></tr><tr><td>T = {t_1, t_2, ..., t_|T|}</td><td>Set of time instants; subscript index t refers to time instant t_t</td></tr><tr><td>A = {a_1, a_2, ..., a_|A|}</td><td>Set of tasks to be executed; subscript index j refers to task a_j</td></tr><tr><td>R_j = {p_j, te_j, ta_j, d_j, e_j}</td><td>Requirement vector of task j (priority, execution time, arrival time, deadline and current consumption)</td></tr><tr><td>P_j</td><td>Subset of tasks that need to be executed before task j is executed</td></tr><tr><td>y_jt</td><td>Binary decision variable indicating if task j starts executing at time t</td></tr><tr><td>z_jt</td><td>Binary decision variable indicating if task j is being executed at time t</td></tr><tr><td>V_t</td><td>Voltage of the capacitor at time instant t</td></tr></table>
energy that is being harvested. However, given knowledge about the tasks to execute and (predicted) energy harvested over a certain look-ahead time window, the scheduler can calculate a sub-optimal solution that is optimal over that specific time window. Furthermore, since energy harvesting is normally not known in advance, energy harvesting predictors can be used [31]. The remainder of this section describes the different aspects of the MILP formulation and Table 1 summarizes the notations used.
# 4.1 Input variables
Let $T = \{t_1, t_2, \dots, t_{|T|}\}$ be the ordered set of time instants over which the set $A = \{a_1, a_2, \dots, a_{|A|}\}$ of tasks needs to be scheduled. To simplify notation, in the following we use the subscript index $t$ to refer to an instant of time $t_t$ and the subscript index $j$ to refer to a task $a_j$. Each task $j \in A$ is characterized by its requirement vector $R_j = \{p_j, te_j, ta_j, d_j, e_j\}$, where $p_j$ is the priority, $te_j$ is the execution time of the task, $ta_j$ is the arrival time (meaning that the task cannot be scheduled before this time), $d_j$ is the task deadline, which guarantees the freshness and usability of the data and task outputs, and $e_j$ is the average current consumption of the task (considered constant per task). Furthermore, in order to guarantee the task chain order, every task $j \in A$ has a set of parents $P_j \subset A$, which contains the tasks that need to be executed before the execution of task $j$. The set of parents $P_j$ can also be empty.
# 4.2 Decision variables
There are two decision variables in the MILP, $y_{jt}$ and $z_{jt}$, $\forall j \in A, \forall t \in T$, which represent the specific task scheduler decisions. While $y_{jt}$ is the binary variable indicating if task $j$ starts executing at time $t$, $z_{jt}$ is the binary variable indicating if task $j$ is being executed at time $t$. This means that if a task $j$ starts executing at time $t$ and its execution time is $te_j = 2$, then $y_{jt} = 1$, $z_{jt} = 1$ and $z_{j,t+1} = 1$, while for the rest of the elements in the set $T$, $y_{jt}$ and $z_{jt}$ are equal to 0.
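The coupling between the two variables, which Equation 10 below enforces inside the MILP, can be illustrated in plain Python:

```python
# If task j starts at slot 2 (y) and te_j = 2, it runs in slots 2 and 3 (z).
te_j = 2
y = [0, 0, 1, 0, 0]
z = [sum(y[max(0, t - te_j + 1): t + 1]) for t in range(len(y))]
print(z)  # [0, 0, 1, 1, 0]
```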
# 4.3 Objective function and constraints
The main goal of the energy-aware task scheduler is to maximize the number of tasks successfully scheduled multiplied
by their priority:
$$
\max \sum_{j \in A} \sum_{t \in T} y_{jt} \times p_j \tag{6}
$$
The presented objective function is restricted by several constraints. First of all, two tasks cannot be scheduled at the same time, and a task $j$ can be scheduled at most once, as specified in Equations 7 and 8, respectively:
$$
\sum_{j \in A} z_{jt} \leq 1 \quad \forall t \in T \tag{7}
$$

$$
\sum_{t \in T} y_{jt} \leq 1 \quad \forall j \in A \tag{8}
$$
We need to ensure that task $j$ is only successful if it is executed for its entire execution time $te_j$ (see Equation 9), and also that if a task $j$ starts its execution at time $t$, it is being executed from time $t$ until $t + te_j$, as Equation 10 defines.
$$
\sum_{t \in T} y_{jt} = \frac{1}{te_j} \sum_{t \in T} z_{jt} \quad \forall j \in A \tag{9}
$$

$$
z_{jt} = \sum_{u \in [t - te_j + 1,\, t]} y_{ju} \quad \forall j \in A, \forall t \in T \tag{10}
$$
Furthermore, task $j$ cannot be scheduled before its arrival or after its deadline, which Equation 11 guarantees.
$$
\sum_{t < ta_j \,\lor\, t > d_j} z_{jt} = 0 \quad \forall j \in A \tag{11}
$$
Every task $j$ needs to be executed after all its parents in the set $P_{j}$ have finished their executions:
$$
\sum_{u \leq t} y_{ju} \leq \sum_{u \leq t} y_{pu} \quad \forall j \in A, \forall p \in P_j, \forall t \in T \tag{12}
$$
As mentioned in Section 3.1, we are assuming every device is equipped with a harvester and a capacitor to store the harvested energy from the environment. The voltage across the capacitor for every instant of time $t$ is defined by the continuous variable $V_{t} \forall t \in T$ . It is important to note that $V_{t}$ is the discrete time version of $v(t)$ of Equation 4. We need to guarantee that the voltage across the capacitor is enough to execute the scheduled tasks. For this reason, we first need to ensure that this voltage remains between the minimum and maximum supported values ( $V_{min}$ and $V_{max}$ ). $V_{max}$ was already defined in Section 3.1, and $V_{min}$ corresponds to the voltage turn-off threshold of Figure 1 and Figure 3.
$$
V_{min} \leq V_t \leq V_{max} \quad \forall t \in T \tag{13}
$$
The voltage across the capacitor needs to be calculated for every time instant, and depends on the energy harvested and the energy consumed by the tasks at each time instant. Applying Equation 4 to our variables, we obtain the following constraint:
$$
V_t = \sum_{j \in A} z_{jt} \times \left( I_t R_{eq_{j,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{j,t}} C}}\right) + V_{t-1} e^{\frac{-\triangle t}{R_{eq_{j,t}} C}} \right) \quad \forall t \in T \tag{14}
$$
where $\triangle t$ corresponds to the amount of time (in seconds) between two instants of time in $T$ (i.e., $\triangle t = \text{time}(t_t - t_{t-1})$), $V_{-1}$ is the known initial voltage of the capacitor, and $C$ is the capacitance of the capacitor in Farads. As explained in Section 3.1, the harvester is modeled as a real current source composed of an ideal current source and a parallel resistance. Since these values can vary over time, we now denote them by $I_t$ and $r_{h_t}$, respectively. The value of $I_t$ (in Amperes) is calculated as follows:
$$
I_t = \frac{V_{max}}{r_{h_t}} \quad \forall t \in T \tag{15}
$$
where the parallel resistance $r_{h_t}$ (in $\Omega$), which limits the power of the harvester, is calculated using the following equation (similar to Equation 1):
$$
r_{h_t} = \frac{V_{max}^2}{P_{harvester,t}} \quad \forall t \in T \tag{16}
$$
where $P_{harvester,t}$ is the power of the harvester source at time instant $t$, in Watts. Finally, $R_{eq_{j,t}}$ is the equivalent resistance of the circuit (in $\Omega$) at time instant $t$, which depends on the specific task $j$ that is scheduled at that time, and is computed as:
$$
R_{eq_{j,t}} = \frac{R_{L_j} r_{h_t}}{R_{L_j} + r_{h_t}} \quad \forall j \in A, \forall t \in T \tag{17}
$$
where $R_{L_j}$ (in $\Omega$) is the load resistance of the model (corresponding to the set of components that consume the energy stored in the capacitor when executing the scheduled task $j$), which can be calculated as follows:
$$
R_{L_j} = \frac{E}{e_j} \quad \forall j \in A \tag{18}
$$
where $E$ is the operating voltage of the circuit elements and $e_j$ is the current consumption of task $j$ , defined in the $R_j$ vector.
In order to compute Equation 14, $V_t$ needs to be defined for all values of $t$. However, if at a certain point in time the device is sleeping and no task is being executed, $z_{jt} = 0$ for all values of $j$. For this reason, we need to adapt and reformulate Equation 14 as follows:
$$
\begin{aligned}
V_t = {} & \sum_{j \in A} z_{jt} \times \left( I_t R_{eq_{j,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{j,t}} C}}\right) + V_{t-1} e^{\frac{-\triangle t}{R_{eq_{j,t}} C}} \right) \\
& + \left(1 - \sum_{j \in A} z_{jt}\right) \times \left( I_t R_{eq_{s,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}}\right) + V_{t-1} e^{\frac{-\triangle t}{R_{eq_{s,t}} C}} \right) \quad \forall t \in T
\end{aligned} \tag{19}
$$
where $R_{eq_{s,t}}$ is the equivalent resistance when the device is in sleep mode, i.e., calculated using the current consumption of the sleep mode. We can then reformulate Equation 19 as follows:
$$
\begin{aligned}
& V_t - \sum_{j \in A} z_{jt} \times \left( I_t R_{eq_{j,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{j,t}} C}}\right) - I_t R_{eq_{s,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}}\right) \right) \\
& - \sum_{j \in A} z_{jt} \times V_{t-1} \times \left( e^{\frac{-\triangle t}{R_{eq_{j,t}} C}} - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}} \right) \\
& - V_{t-1} e^{\frac{-\triangle t}{R_{eq_{s,t}} C}} - I_t R_{eq_{s,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}}\right) = 0 \quad \forall t \in T
\end{aligned} \tag{20}
$$
Considering that $V_{t-1}$ is dependent on the decision variable $z_{jt}$ , the multiplication $z_{jt} \times V_{t-1}$ is no longer linear. To linearize it, we can define a new continuous variable $\Upsilon_{jt} = z_{jt} \times V_{t-1}$ , and Equation 20 can be reformulated as:
$$
\begin{aligned}
& V_t - \sum_{j \in A} z_{jt} \times \left( I_t R_{eq_{j,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{j,t}} C}}\right) - I_t R_{eq_{s,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}}\right) \right) \\
& - \sum_{j \in A} \Upsilon_{jt} \times \left( e^{\frac{-\triangle t}{R_{eq_{j,t}} C}} - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}} \right) - V_{t-1} e^{\frac{-\triangle t}{R_{eq_{s,t}} C}} \\
& = I_t R_{eq_{s,t}} \left(1 - e^{\frac{-\triangle t}{R_{eq_{s,t}} C}}\right) \quad \forall t \in T
\end{aligned} \tag{21}
$$
where, since $V_{t - 1}$ is bounded below by zero and above by $V_{max}$ , the variable $\Upsilon_{jt}$ needs to fulfill the following restrictions:
$$
\Upsilon_{jt} \leq z_{jt} \times V_{max} \quad \forall j \in A, \forall t \in T \tag{22}
$$

$$
\Upsilon_{jt} \leq V_{t-1} \quad \forall j \in A, \forall t \in T \tag{23}
$$

$$
\Upsilon_{jt} \geq V_{t-1} - (1 - z_{jt}) \times V_{max} \quad \forall j \in A, \forall t \in T \tag{24}
$$

$$
\Upsilon_{jt} \geq 0 \quad \forall j \in A, \forall t \in T \tag{25}
$$
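A quick numeric check of this linearization: for binary $z_{jt}$ and $0 \leq V_{t-1} \leq V_{max}$, the bounds in Equations 22-25 pin $\Upsilon_{jt}$ to exactly $z_{jt} \times V_{t-1}$. A small sketch:

```python
# With z binary and 0 <= V_prev <= V_max, (22)-(25) force Upsilon = z * V_prev.
V_max = 3.3

def upsilon_bounds(z, V_prev):
    lo = max(0.0, V_prev - (1 - z) * V_max)   # Eqs. (24) and (25)
    hi = min(z * V_max, V_prev)               # Eqs. (22) and (23)
    return lo, hi

print(upsilon_bounds(0, 2.2))  # (0.0, 0.0) -> Upsilon forced to 0
print(upsilon_bounds(1, 2.2))  # (2.2, 2.2) -> Upsilon forced to V_prev
```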
To sum up, we define our MILP scheduler by the objective function in Equation 6 subject to the constraints of the Equations 7, 8, 9, 10, 11, 12, 13, 21, 22, 23, 24 and 25, and where Equations 15 - 18 are not constraints but helpers for Equation 21.
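To make the formulation concrete, the sketch below encodes the objective (6) and constraints (7)-(13) and (21)-(25) with gurobipy, which matches the Gurobi-based evaluation of Section 5. The two-task chain, horizon and all numeric values here are illustrative assumptions for the sketch, not the paper's experiment:

```python
import math
import gurobipy as gp
from gurobipy import GRB

# Illustrative inputs (assumptions for this sketch, not the paper's data)
dt, C = 0.01, 4.7e-3                       # slot length (s), capacitance (F)
Vmin, Vmax, E, Vinit = 1.8, 3.3, 3.3, 2.2  # voltages (V)
PH = 5e-3                                  # constant harvested power (W)
rh = Vmax**2 / PH                          # Eq. (16)
I = Vmax / rh                              # Eq. (15)
T = range(300)                             # 3 s horizon

# name -> (priority, exec slots, arrival slot, deadline slot, current (A), parents)
tasks = {"Sense": (1, 3, 0, 33, 1.7e-3, []),
         "Tx":    (3, 19, 0, 100, 4.36e-3, ["Sense"])}
A = list(tasks)

def Req(i_load):                           # Eqs. (17)-(18)
    RL = E / i_load
    return RL * rh / (RL + rh)

Rj = {j: Req(tasks[j][4]) for j in A}      # per-task equivalent resistance
Rs = Req(0.1e-3)                           # sleep mode (0.1 mA assumed)
kj = {j: math.exp(-dt / (Rj[j] * C)) for j in A}
ks = math.exp(-dt / (Rs * C))

m = gp.Model("energy_aware_scheduler")
y = m.addVars(A, T, vtype=GRB.BINARY, name="y")  # task starts at t
z = m.addVars(A, T, vtype=GRB.BINARY, name="z")  # task runs at t
V = m.addVars(T, lb=Vmin, ub=Vmax, name="V")     # Eq. (13)
U = m.addVars(A, T, lb=0.0, name="U")            # Upsilon; lb=0 is Eq. (25)

m.setObjective(gp.quicksum(tasks[j][0] * y[j, t] for j in A for t in T),
               GRB.MAXIMIZE)                     # Eq. (6)

m.addConstrs(gp.quicksum(z[j, t] for j in A) <= 1 for t in T)   # Eq. (7)
for j in A:
    _, te, ta, d, _, parents = tasks[j]
    m.addConstr(y.sum(j, '*') <= 1)                             # Eq. (8)
    m.addConstr(te * y.sum(j, '*') == z.sum(j, '*'))            # Eq. (9)
    for t in T:
        m.addConstr(z[j, t] == gp.quicksum(                     # Eq. (10)
            y[j, u] for u in range(max(0, t - te + 1), t + 1)))
        if t < ta or t > d:
            m.addConstr(z[j, t] == 0)                           # Eq. (11)
        for p in parents:                                       # Eq. (12)
            m.addConstr(gp.quicksum(y[j, u] for u in range(t + 1)) <=
                        gp.quicksum(y[p, u] for u in range(t + 1)))

for t in T:
    Vprev = V[t - 1] if t > 0 else Vinit
    m.addConstr(                                                # Eq. (21)
        V[t]
        - gp.quicksum(z[j, t] * (I * Rj[j] * (1 - kj[j]) - I * Rs * (1 - ks))
                      for j in A)
        - gp.quicksum(U[j, t] * (kj[j] - ks) for j in A)
        - Vprev * ks
        == I * Rs * (1 - ks))
    for j in A:
        m.addConstr(U[j, t] <= Vmax * z[j, t])                  # Eq. (22)
        m.addConstr(U[j, t] <= Vprev)                           # Eq. (23)
        m.addConstr(U[j, t] >= Vprev - Vmax * (1 - z[j, t]))    # Eq. (24)

m.optimize()
print([(j, t) for j in A for t in T if y[j, t].X > 0.5])  # chosen start slots
```

Note that the exponential factors of Equation 21 are constants per $(j, t)$ once $P_{harvester,t}$ is fixed, which is what keeps the model linear.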
# 5 EVALUATION
In this section we evaluate the performance of the proposed energy-aware task scheduler. We first introduce the simulation setup and the methodology used to evaluate the optimal algorithm. Then, in order to validate its behaviour, we compare it against one of the state-of-the-art solutions. Finally, we provide some insights on how far into the future the behaviour of the batteryless device needs to be predicted in order to achieve the best performance and avoid power failures.

Fig. 5: Detailed tasks of the Smart Building Application
# 5.1 Simulation setup and methodology
As detailed in Section 3.1, we have considered a batteryless device, composed of a harvester (e.g., a photovoltaic cell) that is able to continuously harvest a power of $P_{harvester}$ [27], and that can store the harvested energy, when not directly used, in its capacitor of capacitance $C$. We denote the harvested power by $PH$.
We consider a smart building application, composed of three main commands: sense and transmit the average data, request-response, and receive&actuate, as can be seen in Figure 5. The first command senses an environmental variable (i.e., temperature) periodically every $X$ seconds, computes the average over $avg$ samples and transmits it. Secondly, if the device receives a request, it immediately answers it with the response task. This task needs to be executed within a short deadline, as it is considered to contain sensitive data. Finally, we have also considered that the device can receive a message to enable one of its peripherals (such as an alarming LED). The detailed diagram shows the considered periodicities and the parents of the atomic tasks. For example, the Request task has a periodicity of $Y$ seconds, and the Response task has only one parent, which is Request.
In order to be able to communicate, the device is also equipped with a Bluetooth Low Energy (BLE) radio. It is important to note that, in order to simplify the problem, we have considered a transmission over the three BLE advertising channels to be one atomic task, and therefore we use the average energy consumption of the complete transmission, as no differences in the results were observed after making this simplification. Furthermore, every considered task also takes into account the energy consumption of backing up to and restoring from non-volatile memory, as already mentioned in Section 3.2.
For our simulations, we have considered that the sensing task is executed every second ($X = 1s$) and that its first arrival is at time 0 ($x = 0s$). Additionally, we compute the average every 5 samples ($avg = 5$). The two other commands in the diagram of Figure 5, Request and Receive, arrive for the first time at the instants 1 and 3 seconds, and their periodicity is 2 and 5 seconds, respectively. That is, $y = 1s$, $z = 3s$, $Y = 2s$ and $Z = 5s$. A sketch expanding these periodic commands into concrete task instances is shown below.
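Under these assumptions, the periodic commands of Figure 5 expand into concrete task instances as in the following sketch (the helper and all names are ours); the resulting count matches the 41 tasks reported in Section 5.2:

```python
HORIZON = 15.0   # seconds, as in the experiments of Section 5.2

def periodic(name, first, period):
    """Arrival instants of a periodic command within the horizon."""
    out, t, i = [], first, 0
    while t < HORIZON:
        out.append((f"{name}_{i}", t))
        t, i = t + period, i + 1
    return out

senses   = periodic("Sense",   0.0, 1.0)   # X = 1 s
requests = periodic("Request", 1.0, 2.0)   # y = 1 s, Y = 2 s
receives = periodic("Receive", 3.0, 5.0)   # z = 3 s, Z = 5 s

# Chain structure of Figure 5: every avg = 5 Senses feed one Compute, each
# Compute feeds one Tx, each Request spawns a Response, each Receive an Actuate.
computes  = [(f"Compute_{k}", [s for s, _ in senses[5 * k: 5 * k + 5]])
             for k in range(len(senses) // 5)]
txs       = [(f"Tx_{k}", [c]) for k, (c, _) in enumerate(computes)]
responses = [(f"Response_{i}", [r]) for i, (r, _) in enumerate(requests)]
actuates  = [(f"Actuate_{i}", [r]) for i, (r, _) in enumerate(receives)]

n = sum(map(len, (senses, computes, txs, requests, responses, receives, actuates)))
print(n)  # 41 task instances, matching the count reported in Section 5.2
```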
In Table 2 we show all the parameters considered for the atomic tasks. The priority has been determined based on the type of the application: sensing periodically is not as critical as responding to an urgent Request. The values of the current consumption and execution time are based on the specifications of the Nordic nRF52840 [32], and $e_j$ is
the theoretical value for obtaining $I_{load}$ (see Section 3.1). As such, $V_{min}$ has been defined as $1.8\mathrm{V}$ (minimum operating voltage) and $E$ and $V_{max}$ have been defined as $3.3\mathrm{V}$ (typical operating voltage). The value of the deadline has been chosen according to the urgency of the tasks. For example, executing a Response to a Request is considered critical, and therefore its deadline is short. In contrast, the sensing task has a deadline that depends on its periodicity, which means that if sensing does not happen within the first third of its period, the data is no longer "fresh". The remaining parameters of the batteryless devices are shown in Table 3, unless explicitly specified otherwise.
To solve the optimization problem described in Section 4, we have used Gurobi Optimization<sup>1</sup>. The output of the optimizer is then fed into the event-based simulator in which we evaluate the proposed solution. This event-based simulator has been implemented in C++, and simulates the energy level of the batteryless device according to the expected energy to be harvested and the scheduled tasks. Finally, as mentioned before, for simplicity of the analysis we assume a constant energy harvesting rate during a single experiment, which is in line with the output of a buck regulator [27]: even if the input of the buck regulator is not constant, it provides a constant output. However, both the optimizer and the simulator can work with time-varying harvesting power.
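The paper's simulator is C++; the following Python stand-in illustrates the same idea under the stated assumptions (constant $PH$, fixed $\triangle t$ slots), stepping Equation 4 over a schedule and counting turn-off threshold crossings. It deliberately does not model the full turn-off/turn-on cycle:

```python
import math

def simulate(schedule, PH, C=4.7e-3, dt=0.01, V=2.2, Vmin=1.8,
             Vmax=3.3, E=3.3, sleep_current=0.1e-3):
    """Count turn-off threshold crossings while stepping Eq. (4) per slot.

    `schedule` maps each slot to the load current (A) drawn in it, or None
    while the device sleeps; PH is the constant harvested power (W).
    """
    rh = Vmax**2 / PH
    I = Vmax / rh
    failures = 0
    for current in schedule:
        i_load = current if current is not None else sleep_current
        RL = E / i_load
        Req = RL * rh / (RL + rh)
        k = math.exp(-dt / (Req * C))
        V = I * Req * (1 - k) + V * k      # one-slot step of Eq. (4)
        if V < Vmin:
            failures += 1                  # the real device would turn off here
    return failures

# e.g. a 0.19 s BLE transmission (4.36 mA) followed by ~0.8 s of sleep
print(simulate([4.36e-3] * 19 + [None] * 81, PH=5e-3))  # -> 0
```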
# 5.2 Energy-aware scheduler validation
In order to validate our energy-aware scheduler (E-Aware), we compare its behaviour against InK [4], one of the most complete state-of-the-art schedulers for batteryless devices. InK is a dynamic scheduler based on priorities and deadlines; however, it is not energy-aware. Although AsTAR [12] is energy-aware, it is only able to change the rate at which applications are executed, and it cannot handle different priorities or deadlines. For this reason, we have implemented InK in our event-based simulator and evaluated the behaviour of both InK and our energy-aware scheduler. It is important to note that most of the parameters are chosen by the application developer or are device-specific, and are thus an input to our problem (i.e., they are not configurable by the algorithm). These parameters are the capacitor size ($C$), the harvesting rate ($PH$), the voltage parameters ($V_{\min}$, $V_{\max}$, $V_{-1}$) and the task parameters (current consumption values, priority, execution time, deadline, order and arrival time). The only two parameters that can be configured by the algorithm or the event-based simulator are $\triangle t$ and the voltage turn-on threshold $V_{th}$, a configurable parameter that corresponds to the voltage the capacitor needs to reach after a power failure in order to turn on again. While $\triangle t$ is only used by the optimization algorithm, $V_{th}$ is only used by InK, as our energy-aware algorithm avoids power failures.
Figure 6a shows the voltage across the capacitor for the two approaches (E-Aware and InK) when performing a sequence of tasks that need to be executed during the 15 seconds of the simulation, with a harvesting power of $5\mathrm{mW}$. As can be seen in Figure 6b (where we show the arrival time of the tasks), Sensing happens every 1 second,
TABLE 2: Smart Building Tasks considered and their parameters
<table><tr><td>Task, j</td><td>Priority, p_j</td><td>Execution Time, te_j</td><td>Current Consumption, e_j</td><td>Deadline, d_j</td><td>Comments</td></tr><tr><td>Sense</td><td>1</td><td>0.03s</td><td>1.7mA</td><td>X/3</td><td>Periodically sense</td></tr><tr><td>Compute</td><td>3</td><td>0.01s</td><td>1mA</td><td>1s</td><td>Average the sensed data</td></tr><tr><td>Tx</td><td>3</td><td>0.19s</td><td>4.36mA</td><td>1s</td><td>Tx the averaged data (4dBm)</td></tr><tr><td>Request</td><td>8</td><td>0.21s</td><td>4.61mA</td><td>0.2s</td><td>Receive a request</td></tr><tr><td>Response</td><td>10</td><td>0.19s</td><td>4.36mA</td><td>0.02s</td><td>Send a response (4dBm)</td></tr><tr><td>Receive</td><td>8</td><td>0.21s</td><td>4.61mA</td><td>0.2s</td><td>Receive an order</td></tr><tr><td>Actuate</td><td>8</td><td>0.05s</td><td>9mA</td><td>1s</td><td>Use an actuator</td></tr></table>




(a) Voltage variations when executing different tasks when $PH = 5mW$

(b) Task arrival time

Fig. 6: Voltage variations for InK and our approach and tasks to be executed
TABLE 3: Experiment setup
<table><tr><td>Definition</td><td>Value</td></tr><tr><td>V_min</td><td>1.8V</td></tr><tr><td>V_max</td><td>3.3V</td></tr><tr><td>V_-1 (initial capacitor voltage)</td><td>2.2V</td></tr><tr><td>C</td><td>4.7mF</td></tr><tr><td>P_harvester,t</td><td>Constant ∀t ∈ T</td></tr><tr><td>Δt</td><td>0.01s</td></tr><tr><td>Sleep Current Consumption</td><td>0.1mA</td></tr><tr><td>Turn On Current Consumption</td><td>3mA</td></tr><tr><td>Turn On Time</td><td>0.1s</td></tr></table>
while Request and Receive take place every 2 and 5 seconds, respectively. In turn, Compute takes place after every 5 Sense tasks. In this case, out of the 41 tasks that need to be executed, InK is able to successfully schedule 21 tasks, while our energy-aware scheduler is able to schedule 36. While our solution avoids power failures and never goes below 1.8V, InK suffers 3 power failures, at 5.37, 9.43 and 13.64 seconds. At these points in time, the device turns off (note the dashed line in Figure 6), consuming much less energy (we have considered it negligible) than in sleep mode ($0.1\mathrm{mA}$). When the voltage turn-on threshold is reached ($V_{th} = 2.2V$), the device wakes up, which takes 0.1 seconds and consumes $3\mathrm{mA}$. Avoiding these power failures is thus beneficial, both to not waste energy on turning back on and to not miss deadlines while the device is off.
To give a better overview of the improvement of our energy-aware solution, in Figures 7 and 8 we show the comparison between our energy-aware task scheduler and
InK. We have considered the experiment setup explained in Section 5.1 with $PH$ values of $0.1\mathrm{mW}$, $0.5\mathrm{mW}$, $1\mathrm{mW}$ and $5\mathrm{mW}$, which are in line with the energy that can be obtained from indoor light [29]. We show different values of $V_{th}$ for InK, but we do not need to define any turn-on threshold for our energy-aware formulation, as we ensure that the device will not turn off.
First, in Figure 7a we see how many power failures occur in both approaches. As expected, our energy-aware scheduler avoids power failures in all cases, while InK is not able to manage them, as it is not aware of the energy. Note that our approach can completely avoid power failures only because we assume perfect knowledge of the energy harvested (as is the case in this specific experiment); if this knowledge were not perfect, some failures would occur. In order to provide a fair comparison, we have considered the effect of different values of the turn-on threshold voltage ($V_{th}$) for InK, and since we have seen that values higher than $2.8\mathrm{V}$ do not provide better results, we only consider $1.9\mathrm{V}$ (close to $V_{min}$), $2.2\mathrm{V}$ and $2.8\mathrm{V}$.
For InK we see that, when increasing the harvested power (from $0.1\mathrm{mW}$ up to $5\mathrm{mW}$), the number of power failures increases for the lowest turn-on voltage threshold ($V_{th} = 1.9V$), while for the $2.8\mathrm{V}$ threshold there is only 1 power failure. However, as can be seen in Figure 7b, where we show the total time the device is On, the device is only awake for 3.5 seconds up to $1\mathrm{mW}$ (in which case InK is only able to schedule 5 tasks), while in the case of $5\mathrm{mW}$ it stays on for 5.34 seconds. This difference is due to




(a) Number of power failures

(b) Total time the device is On

Fig. 7: Comparison between the energy-aware scheduler and InK for different values of $PH$ when varying the turn-on threshold of InK




(a) Task Success Rate

(b) Priority Success Rate

Fig. 8: Performance comparison between the energy-aware scheduler and InK for different values of $PH$ and a 4.7mF Capacitance when varying the turn-on threshold of InK
the fact that the device starts at $2.2\mathrm{V}$ and the $PH$ is enough to stay awake for more than 5 seconds; after that, it is not possible to reach the $2.8\mathrm{V}$ needed to turn on again during the 15 seconds of the simulation. This is mainly because the harvested energy is too low to charge the capacitor up to the voltage turn-on threshold. In contrast, as mentioned, the low energy harvesting rate is not an issue for our energy-aware solution, and the device remains On all the time.
In Figure 8a we show the task success rate, and in Figure 8b the priority success rate. While the task success rate represents the fraction of successfully scheduled tasks out of the total number of tasks that need to be scheduled, the priority success rate weights each successfully scheduled task by its priority. The simulation time for our experiments has been set to 15 seconds, where we assume total knowledge of all the tasks that will be scheduled and of the power that will be harvested during the entire simulation period.
For all the values of harvested power considered, our E-aware approach provides better results, no matter the turn-on threshold voltage value chosen by InK. However, the lower the harvested power is, the less improvement we are able to see, as many of the tasks require more energy than what is available in the capacitor. Since the $2.8V$ turn-on threshold provides the worst success (priority) rate, and as the lower one $(1.9V)$ does not provide better results for InK and can lead to more power failures, in the rest of the paper we have considered $V_{th} = 2.2V$ .
In order to reduce the time the capacitor needs to reach the voltage turn-on threshold, we have also considered a
smaller capacitor. In Figure 9 we show the voltage variations over time when executing some tasks, for a capacitor of $4.7\mathrm{mF}$ and one of $0.47\mathrm{mF}$, when harvesting $1\mathrm{mW}$ of power. The capacitor of $4.7\mathrm{mF}$ takes longer to charge, and as can be seen, the energy-aware solution keeps increasing its voltage while executing non-power-hungry tasks until time instant 13.66 seconds, where it is able to execute a more power-hungry task. InK, however, turns off at time instant 3.35 seconds and is not able to turn on again. When using the smaller capacitor of $0.47\mathrm{mF}$, a completely different behaviour is seen. Our energy-aware solution is able to schedule 18 tasks, although all of them have low priority, resulting in a total profit of 24. In contrast, InK keeps turning off and on. The problem is that, with this low energy harvesting power and small capacitor, the energy available right after turning on is not enough to immediately execute a power-hungry task. Since InK is not aware of the energy, it chooses the task with the highest priority that fulfills the deadline constraints, without considering its energy consumption. While trying to execute that task, the voltage drops to $V_{min}$ and the device turns off. After reaching the turn-on threshold it turns on again and tries to re-execute the same task if the deadline has not passed yet. In this way, InK is only able to successfully execute 2 tasks.
In Figure 10 we increase the harvested power to $5\mathrm{mW}$ (i.e., $PH = 5\mathrm{mW}$), so more tasks are scheduled. The smaller capacitor ($C = 0.47\mathrm{mF}$) charges faster, and the energy-aware scheduler intelligently decides to charge it almost fully in order to have

Fig. 9: Voltage variations when executing different tasks for $1\mathrm{mW}$ of $PH$

Fig. 10: Voltage variations when executing different tasks for $5\mathrm{mW}$ of $PH$
enough energy to execute a more energy-hungry task at the time instants 3.4, 7.96 and 13.1 seconds. In the meantime, if it can achieve a higher reward by executing a less power-hungry task during the charging process, it will do so. While InK is only able to schedule 7 tasks, our energy-aware solution schedules 18 tasks. For the $4.7\mathrm{mF}$ capacitor, we can observe that it takes longer to charge, but also that the voltage drop is less abrupt. In this case, while our energy-aware solution is able to execute 36 tasks, InK is only able to execute 21.
|
| 385 |
+
|
| 386 |
+
# 5.3 Influence of look-ahead window on performance
|
| 387 |
+
|
| 388 |
+
In the previous section, we showed the potential of energy-aware scheduling by evaluating the maximum performance gain when assuming perfect prediction of all future tasks and energy harvesting power. In this section, we evaluate how the size of the look-ahead window influences the effective performance gain in a real system (as perfect prediction over an infinite window is not achievable in practice). In batteryless devices, energy consumption is the main parameter to be aware of, and being able to predict both the available energy and the energy to be consumed allows the tasks to be scheduled better. However, predicting how much energy will be available becomes harder the further into the future it is predicted. Moreover, more memory is
|
| 389 |
+
|
| 390 |
+

|
| 391 |
+
|
| 392 |
+

|
| 393 |
+
(a) Task Success Rate
|
| 394 |
+
(b) Priority Success Rate
|
| 395 |
+
|
| 396 |
+
needed to perform those calculations. For this reason, it is important to know how far into the future the scheduler needs to look to achieve the best possible improvement.
|
| 397 |
+
|
| 398 |
+
To this end, Figure 11 shows the task success rate and the priority success rate for a 25-second simulation in which the scheduler only looks ahead over a limited optimization window and re-executes at the start of each window. We define look-ahead optimization windows of different sizes, starting from 0.25 seconds (i.e., 100 look-ahead optimization windows) up to 25 seconds, which corresponds to a single window. As expected, increasing the look-ahead window time improves the priority success rate. In terms of tasks, however, more tasks are executed when optimizing every 2.5 seconds (cf. Figure 11a); but since we are optimizing the priority of the scheduled tasks, Figure 11b shows that the profit does not significantly improve beyond a 1-second look-ahead window. This means that when increasing the look-ahead window, fewer tasks are executed, but these tasks have a higher priority. Therefore, if the aim is only to maximize the number of tasks, a look-ahead window of 1 second is enough. However, with a very short look-ahead window, tasks that consume a lot of power will never be executed. In our experiments, these are also the tasks with high priority, so if more high-priority tasks need to be executed, a 5-second look-ahead window is needed.
|
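The windowed re-optimization just described can be sketched as follows. This is an illustrative Python stand-in rather than the MILP used in the paper: `plan_window` is a hypothetical greedy rule, and the task fields mirror the task model (arrival time, deadline, execution time, energy consumption, priority).

```python
from dataclasses import dataclass

@dataclass
class Task:
    arrival: float    # seconds
    deadline: float   # seconds
    duration: float   # execution time in seconds
    energy: float     # joules consumed by one execution
    priority: int

def plan_window(tasks, e_avail, t0, t1):
    """Greedy stand-in for the window optimizer: admit highest-priority tasks
    that fit the energy budget and finish before the window end and deadline."""
    plan, t = [], t0
    for task in sorted(tasks, key=lambda k: -k.priority):
        if task.energy <= e_avail and t + task.duration <= min(t1, task.deadline):
            plan.append(task)
            e_avail -= task.energy
            t += task.duration
    return plan, e_avail

def schedule(tasks, horizon=25.0, window=1.0, e0=0.01, ph=1e-3):
    """Re-run the window optimizer at the start of every look-ahead window."""
    scheduled, e_avail, t = [], e0, 0.0
    while t < horizon:
        visible = [k for k in tasks if t <= k.arrival < t + window]
        plan, e_avail = plan_window(visible, e_avail, t, t + window)
        scheduled += plan
        e_avail += ph * window   # energy harvested during the window
        t += window
    return scheduled

print(schedule([Task(0.2, 5.0, 0.5, 2e-3, 3)]))  # toy single-task example
```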
| 399 |
+
|
| 400 |
+
In general terms, for the considered task set, relating the look-ahead window time to the number of tasks executed per second, a look-ahead window that covers up to 4 tasks leads to good
|
| 401 |
+
|
| 402 |
+

|
| 403 |
+
|
| 404 |
+

|
| 405 |
+
(a) Task Success Rate
|
| 406 |
+
(b) Priority Success Rate
|
| 407 |
+
Fig. 12: Performance of our energy-aware scheduler when varying the look-ahead window time for $V_{-1} = 1.9V$
|
| 408 |
+
|
| 409 |
+
performance in terms of successful task executions, but if more power-hungry tasks need to be deployed, looking 8 tasks ahead is needed. To verify that our conclusions generalize to other initial capacitor voltages $V_{-1}$ , Figure 12 also shows the results for an initial capacitor voltage of $V_{-1} = 1.9V$ . The same conclusions remain valid, although a performance reduction of $16\%$ occurs for short look-ahead windows (0.25 seconds). When increasing the look-ahead window size to 12.5 seconds, the difference is only $0.016\%$ . This means that the larger the look-ahead window, the lower the impact of the initial voltage; longer experiment durations will also diminish this effect.
|
| 410 |
+
|
| 411 |
+
# 6 CONCLUSIONS AND FUTURE WORK
|
| 412 |
+
|
| 413 |
+
In this paper we have shown that energy-aware scheduling mechanisms are needed to improve the rate of successful application execution on batteryless devices. These tiny devices frequently turn on and off, so being aware of the expected energy consumption and the energy that can be harvested is crucial. For this reason, we provide theoretical insights into the achievable performance gain of energy-aware task scheduling compared to state-of-the-art energy-unaware batteryless application task schedulers. Moreover, we study the influence of the size of the look-ahead energy prediction window, as a first step towards developing a practical scheduling heuristic that can run on batteryless devices. To do so, we have proposed a new optimal energy-aware scheduling algorithm that takes into account the energy available in the capacitor and the expected energy to be harvested to optimally schedule the tasks, which are defined by their priority, arrival time, execution
|
| 414 |
+
|
| 415 |
+
time, energy consumption, and set of parent tasks that need to be executed beforehand. We have compared our energy-aware solution against InK, an energy-unaware dynamic scheduler based on priorities and deadlines. Our results show that making the task scheduler energy-aware avoids power failures, which allows more tasks to meet their deadlines. Finally, we have evaluated how much look-ahead window time is needed to achieve optimal performance: optimizing over every 4 tasks maximizes the task scheduling rate, but power-hungry tasks suffer from it. Increasing the optimization window to 8 tasks helps obtain a better success rate in which all kinds of tasks can be scheduled.
|
| 416 |
+
|
| 417 |
+
There are several future research directions. The results presented in this work can now be used as a basis for heuristic schedulers that can be executed in real time on batteryless devices, and can also be used to define the requirements for energy harvesting and consumption prediction techniques for such schedulers. However, several challenges and difficulties should be taken into account when applying the insights of this work in real life. Firstly, more straightforward rules, distilled from the decisions made by the MILP, that can select a task with very limited look-ahead calculations should be considered. Secondly, in order to make decisions, the algorithm needs the inputs described in this paper; obtaining all this necessary information could require more advanced circuitry (e.g., to measure the capacitor voltage or the harvesting power), whose energy consumption should also be taken into account. And thirdly, a real implementation needs to consider (and minimize) the energy consumed by the scheduler itself, as well as other factors such as the operating system's underlying effects.
|
| 418 |
+
|
| 419 |
+
In order to tackle these challenges, first, we should design a computationally lighter suboptimal solution (i.e., a heuristic approach) that can be solved on these energy-constrained devices. Several straightforward options (e.g., greedy approaches, genetic algorithms, simulated annealing) need to be investigated to determine which offers the best performance. Second, a dedicated circuitry module should be added to the design. This module should efficiently read the capacitor voltage with a resistor divider and also obtain the harvested power, which depends on the type of harvester used. For example, if a solar panel is used, the circuitry should periodically read the open-circuit voltage and the short-circuit current of the photovoltaic cell. Reading these values requires disconnecting the photovoltaic cell from the capacitor, wasting some energy that should also be accounted for in the harvested power. Alternatively, instead of taking measurements, prediction methods could be used; as shown in the sketch below, these techniques typically rely on statistical and stochastic models of harvested energy using linear regression, Exponential Weighted Moving Average, Markov chains or machine learning. Depending on the look-ahead window, short-term (but also low-complexity) prediction techniques should be further investigated.
|
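As a concrete example of the measurement-free alternative, below is a minimal sketch of an Exponential Weighted Moving Average predictor for harvested power, assuming periodic power samples are available; the smoothing factor `lam` and the sample values are hypothetical.

```python
def ewma_predict(measurements, lam=0.3):
    """One-step-ahead harvested-power estimate (in watts)."""
    estimate = measurements[0]
    for m in measurements[1:]:
        estimate = lam * m + (1.0 - lam) * estimate  # exponential smoothing
    return estimate

print(ewma_predict([1.0e-3, 1.2e-3, 0.8e-3, 1.1e-3]))  # approx. 1 mW
```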
| 420 |
+
|
| 421 |
+
Finally, to account for the energy consumption of this circuitry, the scheduler, and other factors such as the
|
| 422 |
+
|
| 423 |
+
effects of the underlying operating system, new energy measurements would need to be taken. These measurements should be performed in a wide variety of conditions and environments to obtain a broader view of the expected energy consumption.
|
| 424 |
+
|
| 425 |
+
# ACKNOWLEDGMENTS
|
| 426 |
+
|
| 427 |
+
Part of this research was funded by the Flemish FWO SBO S004017N IDEAL-IoT (Intelligent DENSE and Long range IoT networks) project, the University of Antwerp IOF funded project COMBAT (Time-Sensitive Computing on Batteryless IoT Devices), the Flemish FWO SBO S001521N IoBaLeT (Sustainable Internet of Batteryless Things) project, and the CERCA Programme of the Generalitat de Catalunya. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by FWO and the Flemish Government - department EWI.
|
| 428 |
+
|
| 429 |
+
# REFERENCES
|
| 430 |
+
|
| 431 |
+
[1] D. Ma, G. Lan, M. Hassan, W. Hu, and S. K. Das, "Sensing, computing, and communications for energy harvesting IoTs: A survey," IEEE Communications Surveys Tutorials, vol. 22, no. 2, pp. 1222-1250, 2020.
|
| 432 |
+
[2] J. Hester and J. Sorber, "The future of sensing is batteryless, intermittent, and awesome," in Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, ser. SenSys '17. New York, NY, USA: ACM, 2017, pp. 21:1-21:6. [Online]. Available: http://doi.acm.org/10.1145/3131672.3131699
|
| 433 |
+
[3] T. S. Muratkar, A. Bhurane, and A. Kothari, "Battery-less internet of things - A survey," Computer Networks, vol. 180, 2020.
|
| 434 |
+
[4] K. S. Yildirim, A. Y. Majid, D. Patoukas, K. Schaper, P. Pawelczak, and J. Hester, "Ink: Reactive kernel for tiny batteryless sensors," in Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, ser. SenSys '18. New York, NY, USA: ACM, 2018, pp. 41-53. [Online]. Available: http://doi.acm.org/10.1145/3274783.3274837
|
| 435 |
+
[5] M. M. Sandhu, S. Khalifa, R. Jurdak, and M. Portmann, "Task scheduling for simultaneous IoT sensing and energy harvesting: A survey and critical analysis," 2020, arXiv:2004.05728. [Online]. Available: https://arxiv.org/abs/2004.05728
|
| 436 |
+
[6] B. Ransford, J. Sorber, and K. Fu, "Mementos: System support for long-running computation on RFID-scale devices," in Proceedings of the Sixteenth International Conference on Architectural Support for Programming Languages and Operating Systems, ser. ASPLOS XVI. New York, NY, USA: ACM, 2011, pp. 159-170. [Online]. Available: http://doi.acm.org/10.1145/1950365.1950386
|
| 437 |
+
[7] M. Hicks, "Clank: Architectural support for intermittent computation," in Proceedings of the 44th Annual International Symposium on Computer Architecture, ser. ISCA '17. New York, NY, USA: Association for Computing Machinery, 2017, p. 228-240. [Online]. Available: https://doi.org/10.1145/3079856.3080238
|
| 438 |
+
[8] K. Ma, Y. Zheng, S. Li, K. Swaminathan, X. Li, Y. Liu, J. Sampson, Y. Xie, and V. Narayanan, "Architecture exploration for ambient energy harvesting nonvolatile processors," in 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 2015, pp. 526-537.
|
| 439 |
+
[9] K. Maeng, A. Colin, and B. Lucia, "Alpaca: Intermittent execution without checkpoints," Proc. ACM Program. Lang., vol. 1, no. OOPSLA, pp. 96:1-96:30, oct 2017. [Online]. Available: http://doi.acm.org/10.1145/3133920
|
| 440 |
+
[10] J. Hester, K. Storer, and J. Sorber, "Timely execution on intermittently powered batteryless sensors," in Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, ser. SenSys '17. New York, NY, USA: Association for Computing Machinery, 2017. [Online]. Available: https://doi.org/10.1145/3131672.3131673
|
| 441 |
+
|
| 442 |
+
[11] J. Hester and J. Sorber, "Flicker: Rapid prototyping for the batteryless internet-of-things," in Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, ser. SenSys '17. New York, NY, USA: ACM, 2017, pp. 19:1-19:13. [Online]. Available: http://doi.acm.org/10.1145/3131672.3131674
|
| 443 |
+
[12] F. Yang, A. S. Thangarajan, W. Joosen, C. Huygens, D. Hughes, G. S. Ramachandran, and B. Krishnamachari, "Astar: Sustainable battery free energy harvesting for heterogeneous platforms and dynamic environments," in Proceedings of the 2019 International Conference on Embedded Wireless Systems and Networks, ser. EWSN '19. USA: Junction Publishing, 2019, pp. 71-82. [Online]. Available: http://dl.acm.org/citation.cfm?id=3324320.3324329
|
| 444 |
+
[13] B. Islam and S. Nirjon, "Scheduling computational and energy harvesting tasks in deadline-aware intermittent systems," in 2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2020, pp. 95-109.
|
| 445 |
+
[14] A. Caruso, S. Chessa, S. Escolar, X. del Toro, and J. C. López, "A dynamic programming algorithm for high-level task scheduling in energy harvesting IoT," IEEE Internet of Things Journal, vol. 5, no. 3, pp. 2234-2248, 2018.
|
| 446 |
+
[15] C. Li, W. Zhang, C.-B. Cho, and T. Li, "Solarcore: Solar energy driven multi-core architecture power management," in 2011 IEEE 17th International Symposium on High Performance Computer Architecture, 2011, pp. 205-216.
|
| 447 |
+
[16] J. S. Miguel, K. Ganesan, M. Badr, and N. E. Jerger, "The EH Model: Analytical Exploration of Energy-Harvesting Architectures," IEEE Computer Architecture Letters, vol. 17, no. 1, pp. 76-79, 2018.
|
| 448 |
+
[17] H. Yang and Y. Zhang, "A task scheduling algorithm based on supercapacitor charge redistribution and energy harvesting for wireless sensor nodes," Journal of Energy Storage, vol. 6, pp. 186-194, 2016. [Online]. Available: http://dx.doi.org/10.1016/j.est.2016.03.007
|
| 449 |
+
[18] S. Escolar, A. Caruso, S. Chessa, X. d. Toro, F. J. Villanueva, and J. C. López, "Statistical energy neutrality in IoT hybrid energy-harvesting networks," in 2018 IEEE Symposium on Computers and Communications (ISCC), 2018, pp. 00444-00449.
|
| 450 |
+
[19] B. Srbinovski, M. Magno, F. Edwards-Murphy, V. Pakrashi, and E. Popovici, "An energy aware adaptive sampling algorithm for energy harvesting WSN with energy hungry sensors," Sensors (Switzerland), vol. 16, no. 4, pp. 1-19, 2016.
|
| 451 |
+
[20] N. Sharma, S. Barker, D. Irwin, and P. Shenoy, "Blink: Managing server clusters on intermittent power," SIGPLAN Not., vol. 46, no. 3, p. 185-198, Mar. 2011. [Online]. Available: https://doi.org/10.1145/1961296.1950389
|
| 452 |
+
[21] Y. Li, J. Si, S. Ma, and X. Hu, "Using energy-aware scheduling weather forecast based harvesting for reconfigurable hardware," IEEE Transactions on Sustainable Computing, vol. 4, no. 1, pp. 109-117, 2019.
|
| 453 |
+
[22] S. Baruah and E. Bini, "Partitioned scheduling of sporadic task systems: an ILP-based approach," Proceedings of the International Conference on Design and Architectures for Signal and Image Processing (DASIP), 2008.
|
| 454 |
+
[23] C. Moser, D. Brunelli, L. Thiele, and L. Benini, "Real-time scheduling for energy harvesting sensor nodes," Real-Time Systems, vol. 37, no. 3, pp. 233-260, 2007.
|
| 455 |
+
[24] H.-S. Lee and J.-W. Lee, "Resource and task scheduling for swipt IoT systems with renewable energy sources," IEEE Internet of Things Journal, vol. 6, no. 2, pp. 2729-2748, 2019.
|
| 456 |
+
[25] A. Sabovic, C. Delgado, D. Subotic, B. Jooris, E. De Poorter, and J. Famaey, "Energy-aware sensing on battery-less lorawan devices with energy harvesting," Electronics, vol. 9, no. 6, p. 904, May 2020. [Online]. Available: http://dx.doi.org/10.3390/electronics9060904
|
| 457 |
+
[26] J. Bi, H. Yuan, S. Duanmu, M. Zhou, and A. Abusorrah, "Energy-optimized partial computation offloading in mobile-edge computing with genetic simulated-annealing-based particle swarm optimization," IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3774-3785, March 2021.
|
| 458 |
+
[27] C. Delgado, J. M. Sanz, C. Blondia, and J. Famaey, "Battery-less lorawan communications using energy harvesting: Modeling and characterization," IEEE Internet of Things Journal, 2020.
|
| 459 |
+
[28] Y. Mahmoud, W. Xiao, and H. H. Zeineldin, "A simple approach to modeling and simulation of photovoltaic modules," IEEE Transactions on Sustainable Energy, vol. 3, no. 1, pp. 185-186, 2012.
|
| 460 |
+
[29] M. Shirvanimoghaddam, K. Shirvanimoghaddam, M. M. Abolhasani, M. Farhangi, V. Z. Barsari, H. Liu, M. Dohler, and M. Naebe, "Paving the path to a green and self-powered internet of things," Electrical Engineering and Systems Science, 2018. [Online]. Available: http://arxiv.org/abs/1712.02277
|
| 461 |
+
|
| 462 |
+
|
| 463 |
+
[30] F. K. Shaikh and S. Zeadally, "Energy harvesting in wireless sensor networks: A comprehensive review," Renewable and Sustainable Energy Reviews, vol. 55, pp. 1041 - 1054, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1364032115012629
|
| 464 |
+
[31] K. S. Adu-Manu, N. Adam, C. Tapparello, H. Ayatollahi, and W. Heinzelman, "Energy-harvesting wireless sensor networks (eh-wsns): A review," ACM Trans. Sen. Netw., vol. 14, no. 2, apr 2018. [Online]. Available: https://doi.org/10.1145/3183338
|
| 465 |
+
[32] "nrf52840 product specification," https://infocenter.nordicsemi. com/pdf/nRF52840_PS_v1.0.pdf, accessed: 30 September 2020.
|
| 466 |
+
|
| 467 |
+

|
| 468 |
+
|
| 469 |
+
Carmen Delgado received the M.Sc. degree in telecommunications engineering, the M.Sc. degree in biomedical engineering, and the Ph.D. degree (cum laude) in Mobile Network Information and Communication Technologies from the University of Zaragoza, Spain, in 2013, 2014, and 2018, respectively. She joined the Internet Technology and Data Science Lab (IDLab) of the University of Antwerp, associated with imec, Belgium, as a post-doctoral researcher in 2018. She is currently working at the i2CAT Foundation as a senior
|
| 470 |
+
|
| 471 |
+
researcher. Her research interests lie in the field of Internet of Things, resource allocation, energy harvesting, low power communications, energy modeling and performance evaluation of wireless sensor networks.
|
| 472 |
+
|
| 473 |
+

|
| 474 |
+
|
| 475 |
+
Jeroen Famaey is an assistant professor associated with imec and the University of Antwerp, Belgium. He received his M.Sc. degree in Computer Science from Ghent University, Belgium in 2007 and a Ph.D. in Computer Science Engineering from the same university in 2012. He is co-author of over 120 articles published in international peer-reviewed journals and conference proceedings, and 10 submitted patent applications. His research focuses on performance modeling and optimization of wireless networks,
|
| 476 |
+
|
| 477 |
+
with a specific interest in low-power, dense and heterogeneous networks.
|
2402.05xxx/2402.05319/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2b73ee6d962cc6a06d0102bd785675d9c17e4051ae0035a6b4bf308226bd11b1
|
| 3 |
+
size 712912
|
2402.05xxx/2402.05319/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05369/4f2ef0ab-2285-4293-a615-551786cb9e09_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b429c73579d3ea2a29f35d193fbde1e4e35dbbdc107296693289f149ae8f288c
|
| 3 |
+
size 1239719
|
2402.05xxx/2402.05369/full.md
ADDED
|
@@ -0,0 +1,716 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Noise Contrastive Alignment of Language Models with Explicit Rewards
|
| 2 |
+
|
| 3 |
+
Huayu Chen $^{1,2}$ , Guande He $^{1,2}$ , Lifan Yuan $^{1}$ , Ganqu Cui $^{1}$ , Hang Su $^{1,2,3}$ , Jun Zhu $^{1,2*}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Computer Science and Technology, Tsinghua University
|
| 6 |
+
|
| 7 |
+
$^{2}$ Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University
|
| 8 |
+
|
| 9 |
+
$^{3}$ Zhongguancun Laboratory, Beijing, China
|
| 10 |
+
|
| 11 |
+
# Abstract
|
| 12 |
+
|
| 13 |
+
User intentions are typically formalized as evaluation rewards to be maximized when fine-tuning language models (LMs). Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given. In this paper, we introduce a general framework for LM alignment, leveraging Noise Contrastive Estimation (NCE) to bridge the gap in handling reward datasets explicitly annotated with scalar evaluations. Our framework comprises two parallel algorithms, NCA and InfoNCA, both enabling the direct extraction of an LM policy from reward data as well as preference data. Notably, we show that the DPO loss is a special case of our proposed InfoNCA objective under pairwise preference settings, thereby integrating and extending current alignment theories. By comparing NCA and InfoNCA, we demonstrate that the well-observed decreasing-likelihood trend of DPO/InfoNCA is caused by their focus on adjusting relative likelihood across different responses. In contrast, NCA optimizes the absolute likelihood for each response, thereby effectively preventing the chosen likelihood from decreasing. We evaluate our methods in both reward and preference settings with Mistral-8×7B and 7B models. Experiments suggest that InfoNCA/NCA surpasses various preference baselines when reward datasets are available. We also find NCA significantly outperforms DPO in complex reasoning tasks like math and coding. Code: https://github.com/thu-ml/Noise-Contrastive-Alignment.
|
| 14 |
+
|
| 15 |
+
# 1 Introduction
|
| 16 |
+
|
| 17 |
+
Aligning pretrained Language Models (LMs) with scalar rewards that reflect human intentions is crucial for enhancing their ability to follow instructions [35, 25]. These rewards can be given either explicitly or implicitly. Explicit rewards can be scalar ratings of human annotators or advanced models like GPT-4, while implicit rewards are usually preference labels assigned to pairwise responses.
|
| 18 |
+
|
| 19 |
+
One effective approach for aligning LMs with preference data is Direct Preference Optimization (DPO, [33]). DPO applies a reward training loss but parameterizes the reward model as the response likelihood ratio between two LMs, allowing for training reward models and extracting LM policies simultaneously. This approach is more streamlined and thus more favorable compared with traditional Reinforcement Learning (RL) methods [26], which typically require a two-stage training process: first training reward models, then extracting LM policies.
|
| 20 |
+
|
| 21 |
+
Despite its simplicity and effectiveness, DPO is only tailored for preference data $(x\rightarrow \{y_w > y_l\})$ . When multiple responses are available, directly assigning a scalar reward to each response is usually more convenient and efficient than comparing them in a pairwise manner. The resulting reward
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
Figure 1: InfoNCA/NCA allows direct LM optimization for both reward and preference data.
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
Figure 2: Pairwise NCA prevents chosen likelihood from decreasing while DPO cannot.
|
| 28 |
+
|
| 29 |
+
datasets $(x\rightarrow \{y_i,r_i\}_{1:K})$ , however, cannot be directly leveraged for DPO training. Previous work [40] usually prunes reward datasets by selecting the best response and pairing it with a random remaining one. This is suboptimal as all reward values and additional dispreferred responses are thrown away in its data-preprocessing process.
|
| 30 |
+
|
| 31 |
+
To address this issue, we present InfoNCA, an alignment method that allows directly extracting LM policies from both reward datasets and preference datasets with arbitrary response numbers (Figure 1). Notably, InfoNCA subsumes DPO loss as a special case under pairwise preference settings and can thus be seen as a natural extension of DPO (Sec. 3.2). With strong theoretical guarantees, we show DPO is a binary classification loss while InfoNCA is its multi-category version (Figure 3). However, unlike DPO which is built upon assumptions of Bradley-Terry models or Plackett-Luce models, InfoNCA is strictly derived from Information Noise Contrastive Estimation (InfoNCE, [24]), an established contrastive method that is widely applied in language and visual representation learning [31]. This closes the theoretical gap between current preference alignment methods and classic contrastive learning frameworks.
|
| 32 |
+
|
| 33 |
+
A well-observed problem with DPO is that the likelihood of the preferred response tends to decrease throughout training [27, 32]. Similar trends are shared by InfoNCA. We find this issue arises mainly from InfoNCA/DPO's focus on adjusting the relative likelihood across different responses per instruction. In Sec. 4, we propose NCA as an alternative alignment method to InfoNCA to mitigate this problem. NCA differs from InfoNCA by only loss definition and is also suitable for both preference and reward datasets. However, NCA is built on NCE [14], a parallel contrastive learning algorithm to InfoNCE, which optimizes the absolute data likelihood during training. In practice, NCA effectively prevents the chosen likelihood from decreasing (Figure 2).
|
| 34 |
+
|
| 35 |
+
We evaluate our methods on Mistral-7B and $8 \times 7\mathrm{B}$ models from two dimensions. When reward datasets [9] are available, we show that directly applying our reward-based alignment offers clear improvement compared with preference-based algorithms, achieving higher evaluation rewards in GPT-4 [47, 19] evaluations. We further validate this improvement comes from InfoNCA/NCA's ability to fully leverage the additional suboptimal responses. When only preference data is given [44], we compare pairwise NCA against the DPO loss. Our experimental results spanning various benchmarks show that NCA outperforms DPO in complex reasoning tasks such as math and coding.
|
| 36 |
+
|
| 37 |
+
Our main contributions: 1. We bridge the theoretical gap between DPO and classic contrastive learning theories. InfoNCA and NCA are uniquely suited for both reward and preference data, offering a general framework that integrates preference-based algorithms. 2. We show that suboptimal responses are also important for LM optimization. Our method outperforms various preference methods by fully exploiting data information in reward datasets. 3. NCA effectively mitigates the data likelihood decline issue of DPO and offers practical performance improvement.
|
| 38 |
+
|
| 39 |
+
# 2 Background: Direct Preference Optimization
|
| 40 |
+
|
| 41 |
+
LM alignment is essentially a constrained policy optimization problem:
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
\max _ {\pi_ {\theta}} \mathbb {E} _ {p (x)} \left[ \mathbb {E} _ {\pi_ {\theta} (y | x)} r (x, y) - \alpha D _ {\mathrm {K L}} \left(\pi_ {\theta} (\cdot | x) | | \mu (\cdot | x)\right) \right], \tag {1}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
where $\mu$ represents the pretrained LM. $x$ and $y$ are respectively instructions and responses. $r$ is a reward function that reflects human intentions. $\alpha$ is some temperature coefficient. Prior work [30, 29]
|
| 48 |
+
|
| 49 |
+
has proved that the optimal solution for the optimization problem in Eq. 1 is:
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
\pi^ {*} (y | x) = \mu (y | x) \frac {e ^ {r (x , y) / \alpha}}{Z (x)} \propto \mu (y | x) e ^ {r (x, y) / \alpha}. \tag {2}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
Direct Preference Optimization (DPO) [33] assumes we only have access to some pairwise preference data $x \to \{y_w > y_l\}$ for each instruction $x$ . The preference probability of human annotators is modeled by a learnable implicit reward model $r_\theta$ under Bradley-Terry theories [2]:
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
\pi_ {\theta} \left(y _ {w} > y _ {l} | x\right) := \sigma \left(r _ {\theta} \left(y _ {w}, x\right) - r _ {\theta} \left(y _ {l}, x\right)\right),
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
where $\sigma$ is the sigmoid function. To learn $r_{\theta}$ , DPO simply adopts a binary classification loss:
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
\mathcal {L} _ {\mathrm {D P O}} = - \mathbb {E} _ {\{x, y _ {w} > y _ {l} \}} \log \sigma \left(r _ {\theta} \left(y _ {w}, x\right) - r _ {\theta} \left(y _ {l}, x\right)\right). \tag {3}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
In practice, the latent function $r_{\theta}$ is parameterized by the log-likelihood ratio between $\pi_{\theta}$ and $\mu$ :
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
r _ {\theta} (x, y) := \beta \log \frac {\pi_ {\theta} (y | x)}{\mu (y | x)},
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
where $\beta$ is a linear coefficient for scaling $r_{\theta}$ . This parameterization is crucial because it ensures that $\pi_{\theta}(y|x) \propto \mu(y|x)e^{r_{\theta}(x,y)/\beta}$ constantly holds. It transforms generative policy optimization into a simple discriminative classification task: when $r_{\theta} = r$ and $\beta = \alpha$ are satisfied, we naturally have $\pi_{\theta} = \pi^{*}$ .
|
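For reference, here is a minimal PyTorch-style sketch of the DPO loss in Eq. 3 under this parameterization. The inputs are per-response log-probabilities summed over tokens, under the policy and the reference model $\mu$; tensor values, names, and the $\beta$ default are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss (Eq. 3) with r_theta = beta * log(pi_theta / mu)."""
    r_w = beta * (logp_w - ref_logp_w)   # implicit reward of the chosen response
    r_l = beta * (logp_l - ref_logp_l)   # implicit reward of the rejected response
    return -F.logsigmoid(r_w - r_l).mean()

# Toy usage with made-up log-likelihoods for a batch of two prompts:
print(dpo_loss(torch.tensor([-40.0, -55.0]), torch.tensor([-48.0, -60.0]),
               torch.tensor([-42.0, -54.0]), torch.tensor([-47.0, -58.0])))
```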
| 74 |
+
|
| 75 |
+
# 3 InfoNCA: Extending DPO from Preference to Explicit Rewards
|
| 76 |
+
|
| 77 |
+
Compared with constructing preference datasets, annotating each response with scalar rewards can be more flexible and convenient. Preference methods are only suitable for pairwise data $(x\rightarrow \{y_w > y_l\})$ and would require $C_K^2$ evaluations for comparing $K$ responses. In contrast, reward datasets $(x\rightarrow \{y_i,r_i\}_{1:K})$ allow an arbitrary number of responses per prompt with $K$ evaluations.
|
| 78 |
+
|
| 79 |
+
Despite its simplicity in handling preference data, DPO is not tailored for reward datasets. We introduce a new alignment method termed InfoNCA to mitigate this gap. We first strictly derive InfoNCA in Sec. 3.1. We show that reward alignment can be solved by constructing a classification problem to identify the optimal response from multiple candidates. We then demonstrate that InfoNCA subsumes DPO as a special case and thus is a natural extension of DPO (Sec. 3.2).
|
| 80 |
+
|
| 81 |
+
# 3.1 Reward Alignment through Multi-Class Classification
|
| 82 |
+
|
| 83 |
+
In essence, DPO represents response rewards as LM likelihoods and constructs a binary classification task for learning the reward model. Given that there are more than two $(K > 2)$ responses per prompt in reward datasets, we seek to construct a multi-class classification task for learning reward models from explicit rewards instead of preference labels. We begin by formally defining this task:
|
| 84 |
+
|
| 85 |
+
Consider a batch of $K$ responses $\{y_i\}_{1:K}$ for an instruction $x$ . $\{y_i\}_{1:K}$ consists of one optimal response $y_{\nu}$ that is sampled from $\pi^{*}(y|x)\propto \mu (y|x)e^{r(x,y) / \alpha}$ , and $K - 1$ suboptimal noises independently sampled from $\mu (y|x)$ . $\nu \in 1:K$ is the random index of that optimal response. Our goal is to identify which of the $K$ candidates is $y_{\nu}$ , given only reward labels $r(y_i)$ for each response.
|
| 86 |
+
|
| 87 |
+
Intuitively, the response with higher rewards should have a higher probability of being the target response. This intuition can be more rigorously expressed:
|
| 88 |
+
|
| 89 |
+
Proposition 3.1 (proof in Appendix A.1). Given the above $K$ response candidates and their respective rewards, the posterior probability for the $\nu$ -th response being drawn from $\pi^{*}$ is
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
p(\nu | x, \{y_i\}_{1:K}) = \frac{e^{r(x,y_\nu)/\alpha}}{\sum_{i=1}^{K} e^{r(x,y_i)/\alpha}}. \tag{4}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
This finding is highly appealing because it shows response optimality is solely related to response rewards. This allows us to conveniently train reward models $r_{\theta}$ via maximum likelihood (MLE).
|
| 96 |
+
|
| 97 |
+

|
| 98 |
+
Figure 3: DPO, InfoNCA, and NCA all optimize LLM through classification tasks. DPO compares two responses and tells which one is preferred. InfoNCA compares multiple responses and identifies the one sampled from $\pi^{*}$ (Sec. 3.1). NCA predicts the model source of a single response (Sec. 4.1).
|
| 99 |
+
|
| 100 |
+
Theorem 3.2 (InfoNCA, proof in Appendix A.1). We define $\pi^{*}(y|x) \propto \mu(y|x)e^{r(x,y)/\alpha}$ and $\pi_{\theta}(y|x) \propto \mu(y|x)e^{r_{\theta}(x,y)}$ . For any $K > 1$ , $\alpha > 0$ , we have:
|
| 101 |
+
|
| 102 |
+
(a) Equivalent objective. The MLE objective for training $r_{\theta}$ has an equivalent form:
|
| 103 |
+
|
| 104 |
+
$$
|
| 105 |
+
\max_{\theta} \mathbb{E}_{p(x,\{y_i\})} \log p_{\theta}(\nu | x, \{y_i\}) \Leftrightarrow \min_{\theta} - \mathbb{E}_{p(x) \prod_i \mu(y_i|x)} \sum_{i=1}^{K} \left[ \frac{e^{r(x,y_i)/\alpha}}{Z(x)} \log \frac{e^{r_{\theta}(x,y_i)}}{\sum_{j=1}^{K} e^{r_{\theta}(x,y_j)}} \right], \tag{5}
|
| 106 |
+
$$
|
| 107 |
+
|
| 108 |
+
where $Z(x) = \mathbb{E}_{\mu (y|x)}e^{r(x,y) / \alpha}$ .
|
| 109 |
+
|
| 110 |
+
(b) Optimal solution. Assume unlimited model capacity. The optimal $r_{\theta^*}$ and $\pi_{\theta^*}$ for solving (5) are
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
r _ {\theta^ {*}} (x, y) = r (x, y) / \alpha + C (x),
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
$$
|
| 117 |
+
\text{and} \quad \pi_{\theta^*}(y|x) = \pi^*(y|x) \propto \mu(y|x) e^{r(x,y)/\alpha},
|
| 118 |
+
$$
|
| 119 |
+
|
| 120 |
+
where $C(x)$ is an arbitrary function conditioning on $x$ .
|
| 121 |
+
|
| 122 |
+
In practical implementation of Eq. 5, we estimate $Z(x) \approx \frac{1}{K} \sum e^{r_j / \alpha}$ and parameterize $r_{\theta}(x,y) \coloneqq \beta \log \frac{\pi_{\theta}(y|x)}{\mu(y|x)}$ following DPO (Eq. 2). The loss function becomes
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
\mathcal{L}_{\theta}^{\text{InfoNCA}}\left(x, \{y_i, r_i\}_{1:K}\right) = - \sum_{i=1}^{K} \Bigg[ \underbrace{\frac{e^{r_i/\alpha}}{\sum_{j=1}^{K} e^{r_j/\alpha}}}_{\text{soft labels}} \log \underbrace{\frac{\overbrace{e^{r_{\theta}(x,y_i)}}^{\text{model logits}}}{\sum_{j=1}^{K} e^{r_{\theta}(x,y_j)}}}_{\text{predicted probability}} \Bigg]_{r_{\theta}(x,y) := \beta \log \frac{\pi_{\theta}(y|x)}{\mu(y|x)}}. \tag{6}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
The loss function in Eq. 6 is termed InfoNCA, where A stands for Alignment. This naming reflects its functional similarity to Information Noise Contrastive Estimation (InfoNCE, [24]). Both methods transform generative modeling problems into classification tasks by contrasting multiple data points.
|
| 129 |
+
|
| 130 |
+
How does InfoNCA work? The InfoNCA loss (Eq. 6) can be seen as a $K$ -category cross-entropy loss. The soft labels are calculated from dataset rewards through a softmax operation, and the model's predictions are represented by the learned reward $r_{\theta}$ . The loss reaches its minimum when $r_{\theta^*}(x, y) = r(x, y) / \alpha + C(x)$ . A minimal sketch of this loss is given below.
|
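A minimal PyTorch-style sketch of Eq. 6 (the released code linked in the abstract is authoritative; names and default hyperparameters here are illustrative). It assumes per-response log-likelihoods have already been summed over tokens into $[B, K]$ tensors.

```python
import torch
import torch.nn.functional as F

def infonca_loss(logp, ref_logp, rewards, alpha=0.01, beta=0.1):
    """InfoNCA loss (Eq. 6). logp / ref_logp: [B, K] response log-likelihoods
    under pi_theta and mu; rewards: [B, K] dataset rewards per prompt."""
    r_theta = beta * (logp - ref_logp)                # model logits
    soft_labels = F.softmax(rewards / alpha, dim=-1)  # softmax of r_i / alpha
    log_pred = F.log_softmax(r_theta, dim=-1)         # predicted log-probability
    return -(soft_labels * log_pred).sum(dim=-1).mean()

logp = torch.tensor([[-40.0, -45.0, -50.0, -60.0]])   # made-up values, K = 4
ref  = torch.tensor([[-41.0, -44.0, -49.0, -58.0]])
rew  = torch.tensor([[8.5, 7.0, 6.5, 3.0]])
print(infonca_loss(logp, ref, rew))
```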
| 131 |
+
|
| 132 |
+
# 3.2 InfoNCA Subsumes DPO as A Special Case
|
| 133 |
+
|
| 134 |
+
Below we show that DPO is a special case of InfoNCA asymptotically. Specifically, setting response number $K = 2$ and reward temperature $\alpha \rightarrow 0$ , we can fully recover the DPO objective:
|
| 135 |
+
|
| 136 |
+
$$
|
| 137 |
+
\begin{aligned} \mathcal{L}_{\theta}^{\text{InfoNCA}}\left(x, \{y_i, r_i\}_{1:K}\right) &= - \sum_{i=1}^{K} \left[ \mathbb{1}\left(r_i = r_{\max}\right) \log \frac{e^{r_{\theta}(x,y_i)}}{\sum_{j=1}^{K} e^{r_{\theta}(x,y_j)}} \right] && (\alpha \rightarrow 0) \\ &= - \log \frac{e^{r_{\theta}(x,y_w)}}{e^{r_{\theta}(x,y_w)} + e^{r_{\theta}(x,y_l)}} && (K = 2, \text{ suppose } r_w > r_l) \\ &= - \log \sigma\left(r_{\theta}(x,y_w) - r_{\theta}(x,y_l)\right) && (\text{DPO loss, Eq. 3}) \end{aligned}
|
| 138 |
+
$$
|
| 139 |
+
|
| 140 |
+
<table><tr><td>Alignment Method</td><td>InfoNCA (Sec. 3)</td><td>NCA (Sec. 4)</td></tr><tr><td>Modeling Target</td><td colspan="2">π*(y|x) ∝ μ(y|x)e^{r(x,y)/α}</td></tr><tr><td>Model Definition</td><td>π_θ(y|x) ∝ μ(y|x)e^{r_θ(x,y)}</td><td>π_θ(y|x) = μ(y|x)e^{r_θ(x,y)}</td></tr><tr><td>Reward Dataset</td><td colspan="2">x → {y_i, r_i}_{1:K}</td></tr><tr><td>Loss (K&gt;1, α&gt;0)</td><td>−∑_{i=1}^{K} [ (e^{r_i/α} / ∑_j e^{r_j/α}) log (e^{r_θ(x,y_i)} / ∑_j e^{r_θ(x,y_j)}) ]</td><td>−∑_{i=1}^{K} [ (e^{r_i/α} / ∑_j e^{r_j/α}) log σ(r_θ(x,y_i)) + (1/K) log σ(−r_θ(x,y_i)) ]</td></tr><tr><td>Preference Dataset</td><td colspan="2">x → {y_w &gt; y_l}</td></tr><tr><td>Loss (K=2, α→0)</td><td>−log σ(r_θ(x,y_w) − r_θ(x,y_l)) (DPO)</td><td>−log σ(r_θ(x,y_w)) − (1/2) ∑_{y∈{y_w,y_l}} log σ(−r_θ(x,y))</td></tr><tr><td>Loss Type</td><td>InfoNCE loss [24]</td><td>NCE loss [14]</td></tr><tr><td>Optimizing Target</td><td>relative value of log-likelihood ratio</td><td>absolute value of log-likelihood ratio</td></tr><tr><td>Optimal r_θ*(x,y)</td><td>r(x,y)/α + C(x)</td><td>r(x,y)/α − log E_{μ(y|x)} e^{r(x,y)/α}</td></tr><tr><td>r_θ*(x,y_best) ≥ 0?</td><td>not guaranteed</td><td>✓</td></tr></table>
|
| 141 |
+
|
| 142 |
+
Table 1: Comparison of NCA and InfoNCA algorithm for aligning language models. Both reward loss and pairwise preference loss are given. We provide pseudocode in Appendix B.
|
| 143 |
+
|
| 144 |
+
Empirical effect of varying hyperparameters $K$ and $\alpha$ . As indicated by the derivation above, the root difference between preference-based and reward-based methods lies in the choices of $K$ and $\alpha$ .
|
| 145 |
+
|
| 146 |
+
$K$ affects how accurately we can estimate the partition function $Z(x) \approx \frac{1}{K}\sum_{j=1}^{K} e^{r_j / \alpha}$ in Eq. 5. In practice, we find larger $K$ can lead to better performance (Sec. 5.1). $\alpha$ indicates a trade-off between diversity and optimality. As $\alpha \to 0$ , the InfoNCA loss increases the likelihood only for the optimal response and decreases it for all other responses, effectively turning the reward dataset $x \to \{y_i, r_i\}$ into a preference dataset $x \to \{y_w > y_l\}$ ; a numeric illustration follows below. We provide ablation studies of $\alpha$ and $\beta$ in Appendix D.
|
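A quick numeric illustration of this limit; the rewards are made-up values on UltraFeedback's 0-10 scale:

```python
import torch
import torch.nn.functional as F

r = torch.tensor([8.5, 7.0, 6.5, 3.0])   # example rewards for K = 4 responses
for alpha in (5.0, 1.0, 0.1):
    probs = F.softmax(r / alpha, dim=-1).tolist()
    print(alpha, [round(p, 3) for p in probs])
# alpha = 0.1 yields ~[1.0, 0.0, 0.0, 0.0]: only the best response is targeted,
# recovering hard preference labels as in DPO.
```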
| 147 |
+
|
| 148 |
+
# 4 NCA: Fixing Decreased Response Likelihood Issue for InfoNCA
|
| 149 |
+
|
| 150 |
+
A well-observed issue with DPO is that the likelihood of all responses continually decreases throughout training [27, 32]. We find InfoNCA shares this trend due to its inherent equivalence with DPO. Decreased data likelihood is concerning because it directly contradicts the maximum likelihood objective for supervised training and may eventually harm performance [44].
|
| 151 |
+
|
| 152 |
+
We hypothesize the main cause of this decreasing likelihood is that InfoNCA methods only adjust relative rewards among responses, rather than optimizing their absolute value. To address this problem, we take inspiration from NCE, another contrastive learning method parallel to InfoNCE, and propose NCA(lignment) (Sec. 4.1). Similar to InfoNCA, NCA can also guarantee convergence to the optimal LM policy under ideal conditions (Theorem 4.2). However, it directly learns the absolute reward for each response, thereby counteracting the decreasing likelihood trend (Sec. 4.2).
|
| 153 |
+
|
| 154 |
+
# 4.1 Reward Alignment through Absolute Reward Prediction
|
| 155 |
+
|
| 156 |
+
To avoid optimizing relative rewards across multiple responses, we construct a binary classification task that deals with a single response.
|
| 157 |
+
|
| 158 |
+
Specifically, imagine sampling a response $y$ randomly from either the optimal LM $\pi^{*}(y|x) = \mu(y|x)\frac{e^{r(x,y)/\alpha}}{Z(x)}$ , or the pretrained LM $\mu(y|x)$ . The marginal probability of $y$ is $p(y|x) := \frac{1}{2}\mu(y|x) + \frac{1}{2}\pi^{*}(y|x)$ . Our goal is to guess its model source when given a response $y$ and its reward $r(y)$ .
|
| 159 |
+
|
| 160 |
+
Proposition 4.1 (proof in Appendix A.2). Let a binary variable $\nu = 1$ indicate that the response $y$ is sampled from $\pi^{*}$ . The posterior probability of the distribution source given the response $y$ satisfies:
|
| 161 |
+
|
| 162 |
+
$$
|
| 163 |
+
p (\nu = 1 | x, y) = \frac {\pi^ {*} (y | x)}{\mu (y | x) + \pi^ {*} (y | x)} = \frac {e ^ {r (x , y) / \alpha}}{Z (x) + e ^ {r (x , y) / \alpha}}. \tag {7}
|
| 164 |
+
$$
|
| 165 |
+
|
| 166 |
+
Note that $p(\nu | x, y)$ is related to the partition function $Z(x)$ . In order to represent model likelihood $p_{\theta}(\nu | x, y)$ by only employing $r_{\theta}$ similarly to Proposition 3.1, we have to redefine
|
| 167 |
+
|
| 168 |
+
$\pi_{\theta}(y|x) = \mu (y|x)e^{r_{\theta}(x,y)}$ by absorbing $Z_{\theta}$ into $r_\theta$ . Then we have
|
| 169 |
+
|
| 170 |
+
$$
|
| 171 |
+
p _ {\theta} (\nu = 1 | x, y) = \frac {\pi_ {\theta} (y | x)}{\mu (y | x) + \pi_ {\theta} (y | x)} = \sigma \left(r _ {\theta} (x, y)\right). \tag {8}
|
| 172 |
+
$$
|
| 173 |
+
|
| 174 |
+
Similarly to Theorem 3.2, we can derive an MLE-based training objective for optimizing $r_{\theta}$ .
|
| 175 |
+
|
| 176 |
+
Theorem 4.2 (NCA, proof in Appendix A.2). Let $\alpha > 0$ . Then the following hold for the maximum likelihood objective:
|
| 177 |
+
|
| 178 |
+
(a) Equivalent objective.
|
| 179 |
+
|
| 180 |
+
$$
|
| 181 |
+
\max _ {\theta} \mathbb {E} _ {p (x, y)} \log p _ {\theta} (\nu | x, y) \Leftrightarrow \min _ {\theta} - \mathbb {E} _ {p (x) \mu (y | x)} \left[ \frac {e ^ {r (x , y) / \alpha}}{Z (x)} \log \sigma \left(r _ {\theta} (x, y)\right) + \log \sigma (- r _ {\theta} (x, y)) \right], \tag {9}
|
| 182 |
+
$$
|
| 183 |
+
|
| 184 |
+
where $Z(x) = \mathbb{E}_{\mu(y|x)} e^{r(x,y)/\alpha}$ .
|
| 185 |
+
|
| 186 |
+
(b) Optimal solution. Assume unlimited model capacity. The optimal $r_{\theta^*}$ and $\pi_{\theta^*}$ for solving (9) are
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
r _ {\theta^ {*}} (x, y) = r (x, y) / \alpha - \log \mathbb {E} _ {\mu (y | x)} e ^ {r (x, y) / \alpha}, \tag {10}
|
| 190 |
+
$$
|
| 191 |
+
|
| 192 |
+
and $\pi_{\theta^{*}}(y|x)\propto \mu (y|x)e^{r(x,y) / \alpha}.$
|
| 193 |
+
|
| 194 |
+
For reward datasets $(x\rightarrow \{y_i,r_i\}_{1:K})$ , we estimate $Z(x)\approx \sum_{i = 1}^{K}e^{r_i / \alpha}$ in Eq. 9 and construct $r_\theta$ similarly to InfoNCA:
|
| 195 |
+
|
| 196 |
+
$$
|
| 197 |
+
\mathcal{L}_{\theta}^{\mathrm{NCA}}\left(x, \{y_i, r_i\}_{1:K}\right) = - \sum_{i=1}^{K} \Bigg[ \underbrace{\frac{e^{r_i/\alpha}}{\sum_{j=1}^{K} e^{r_j/\alpha}}}_{\text{softmax weight}} \underbrace{\log \sigma\left(r_{\theta}(x,y_i)\right)}_{\text{optimize } r_{\theta}\,\uparrow} + \frac{1}{K} \underbrace{\log \sigma\left(-r_{\theta}(x,y_i)\right)}_{\text{regularize } r_{\theta}\,\downarrow} \Bigg] \tag{11}
|
| 198 |
+
$$
|
| 199 |
+
|
| 200 |
+
How does NCA work? The loss function for NCA involves two opposing forces that jointly determine the trend of increasing or decreasing $r_{\theta}(x,y)$ . Since $\log \sigma(\cdot)$ is a monotonically increasing function, the first term in Eq. 11 tends to increase $r_{\theta}(x,y)$ while the second term tends to decrease it.
|
| 201 |
+
|
| 202 |
+
At the start of training, when $r_{\theta}(x,y) = 0$ for all responses, the direction of the combined force on $r_{\theta}$ is decided by the difference of the two weights, $\frac{e^{r_i / \alpha}}{\sum_{j=1}^{K} e^{r_j / \alpha}} - \frac{1}{K}$ . Responses with higher rewards would, in principle, attain higher likelihood after training. A minimal sketch of this loss is given below.
|
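A minimal PyTorch-style sketch of Eq. 11, taking the same $[B, K]$ summed log-likelihood inputs as the InfoNCA sketch in Sec. 3.1; names and defaults are illustrative. It drops into the same training loop, differing from InfoNCA only in the loss body.

```python
import torch
import torch.nn.functional as F

def nca_loss(logp, ref_logp, rewards, alpha=0.01, beta=0.1):
    """NCA loss (Eq. 11): each response gets an absolute target, not a
    relative one, so the chosen likelihood is not pushed down."""
    r_theta = beta * (logp - ref_logp)
    weights = F.softmax(rewards / alpha, dim=-1)   # softmax weight
    k = r_theta.shape[-1]
    push_up = weights * F.logsigmoid(r_theta)      # optimize term (raises r_theta)
    push_down = F.logsigmoid(-r_theta) / k         # regularize term (lowers r_theta)
    return -(push_up + push_down).sum(dim=-1).mean()
```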
| 203 |
+
|
| 204 |
+
# 4.2 Connection between NCA and InfoNCA/DPO
|
| 205 |
+
|
| 206 |
+
Although both NCA and InfoNCA originate from solving a noise contrastive classification problem, their optimization targets are markedly different (Table 1).
|
| 207 |
+
|
| 208 |
+
InfoNCA and DPO both calibrate relative values of reward models across various responses $\{y_{i}\}_{1:K}$ for an instruction $x$ . In other words, the absolute value of $r_{\theta}(x,y)$ is not directly constrained. This can lead to some counterintuitive behaviors. For instance, the learned reward for even the highest-reward response could decrease over time without contradicting the loss definition, as long as the reward margin keeps increasing. This could lead to poor performance or training instability (Sec. 5.2).
|
| 209 |
+
|
| 210 |
+
In contrast, NCA specifically focuses on optimizing absolute values of the reward model. This characteristic is determined by its model definition: $\pi_{\theta}(y|x) = \mu(y|x)e^{r_{\theta}(x,y)}$ , where $r_{\theta}$ has to be self-normalized: $\mathbb{E}_{\mu(y|x)}e^{r_{\theta}(x,y)} = 1$ . In practice, NCA effectively prevents the likelihood of the preferred responses from decreasing. We find this is particularly helpful for math and coding tasks.
|
| 211 |
+
|
| 212 |
+
# 5 Experiments
|
| 213 |
+
|
| 214 |
+
We mainly seek to answer two questions in our experiments:
|
| 215 |
+
|
| 216 |
+
1. If we have access to reward-annotated datasets with $>2$ responses per prompt, does InfoNCA or NCA offer empirical improvement compared with preference-based approaches that simply prune reward datasets into preference datasets? (Sec. 5.1)
|
| 217 |
+
2. If only pairwise preference data is available, when should one choose NCA over DPO? What benefits does NCA offer? (Sec. 5.2) Note that InfoNCA is exactly DPO in this setting.
|
| 218 |
+
|
| 219 |
+
<table><tr><td></td><td>Name</td><td>Annotation Type</td><td>MT-bench</td><td>AlpacaEval</td><td>Win vs. DPO</td></tr><tr><td rowspan="6">Baseline</td><td>Mistral-7B-SFT</td><td>SFT Data</td><td>6.45</td><td>85.20</td><td>-</td></tr><tr><td>+KTO [11]</td><td>Preference</td><td>7.12</td><td>91.93</td><td>-</td></tr><tr><td>+IPO [1]</td><td>Preference</td><td>7.45</td><td>90.62</td><td>-</td></tr><tr><td>+DPO (Zephyr-β)</td><td>Preference</td><td>7.34</td><td>90.60</td><td>50.0</td></tr><tr><td>+DPO×3</td><td>Preference</td><td>7.22</td><td>91.60</td><td>58.1</td></tr><tr><td>+DPO×C<sub>4</sub><sup>2</sup></td><td>Preference</td><td>7.38</td><td>90.29</td><td>48.1</td></tr><tr><td rowspan="2">Ours</td><td>+InfoNCA</td><td>Reward</td><td>7.63</td><td>92.35</td><td>56.9</td></tr><tr><td>+NCA</td><td>Reward</td><td>7.52</td><td>90.31</td><td>59.4</td></tr><tr><td rowspan="4">Reference</td><td>Mistral-ORPO-β</td><td>Preference+SFT</td><td>7.32</td><td>91.41</td><td>-</td></tr><tr><td>Mistral-7B-instruct</td><td>SFT Data</td><td>6.84</td><td>92.78</td><td>-</td></tr><tr><td>LLaMA2-chat-70b</td><td>Reward Model</td><td>6.86</td><td>92.66</td><td>-</td></tr><tr><td>GPT-4</td><td>Reward Model</td><td>9.18</td><td>93.78</td><td>-</td></tr></table>
|
| 220 |
+
|
| 221 |
+
# 5.1 Aligning Language Models with Explicit Rewards
|
| 222 |
+
|
| 223 |
+
Reward dataset and evaluation metric. We consider UltraFeedback [9], an instruction-following dataset annotated by GPT-4. This dataset comprises $\sim 64\mathrm{k}$ instructions. Each instruction has 4 responses generated by various LMs. GPT-4 rates each response with a scalar reward on a scale of 0-10. Prior research indicates that these GPT-4 rewards closely align with human annotations [47], establishing them as an efficient, cost-effective alternative to human feedback. To align exactly with the definition of dataset rewards, we similarly choose well-acknowledged GPT-4-based benchmarks like MT-bench [47] and AlpacaEval [19] for evaluation. Human preference studies are also conducted on evaluation prompts from MT-bench. The rating system is in Appendix C.
|
| 224 |
+
|
| 225 |
+
InfoNCA and NCA outperform preference-based methods given reward dataset. To handle reward datasets with $K > 2$ responses per instruction, one approach is to simply prune them into pairwise data and apply preference learning like DPO. For instance, Zephyr [40] selects the highest-reward response and a random remaining one from UltraFeedback for each instruction. This procedure discards two additional suboptimal responses in the dataset as well as their reward information.
|
| 226 |
+
|
| 227 |
+
In Table 2, we fine-tune a Mistral-7B model on UltraFeedback and compare InfoNCA/NCA against the DPO baseline. Results show that our methods outperform preference baselines. This improvement can be attributed to InfoNCA/NCA's ability to exploit all information in the reward dataset.
|
| 228 |
+
|
| 229 |
+
Table 2: Comparison between reward-based methods (InfoNCA, NCA) and preference-based methods (DPO, IPO, etc.) in LLM alignment. We focus on the general instruction-following abilities of each method measured by GPT-4 evaluations and human preference. The highest number in each benchmark is bolded and the second highest is underlined.
|
| 230 |
+
|
| 231 |
+
<table><tr><td>Method</td><td>K=2</td><td>K=3</td><td>K=4</td></tr><tr><td>InfoNCA (MT-bench)</td><td>73.8</td><td>75.9</td><td>76.3</td></tr><tr><td>InfoNCA (Alpaca)</td><td>90.7</td><td>90.2</td><td>92.4</td></tr><tr><td>NCA (MT-bench)</td><td>73.2</td><td>73.3</td><td>75.2</td></tr><tr><td>NCA (Alpaca)</td><td>89.9</td><td>90.3</td><td>90.3</td></tr><tr><td>Average</td><td>81.9</td><td>82.4</td><td>83.5</td></tr></table>
|
| 232 |
+
|
| 233 |
+

|
| 234 |
+
Figure 4: More suboptimal responses can also increase LLM's instruction-following ability. We fix the highest-reward response in the UltraFeedback dataset and ablate the number of suboptimal responses per prompt, resulting in different contrastive response numbers $K$ during training. Left: Evaluation results under the same set of hyperparameters. Right: Performance-KL trade-off under various $\alpha$ and $\beta$ . Each dot represents an independent experiment trained for 1 epoch.
|
| 235 |
+
|
| 236 |
+
Suboptimal responses are also important. Previous practices always ensure selecting the highest-performing response when constructing preference data. The assumption behind this strategy is that the dataset's best-performing response determines the upper limit of alignment performance. However, our experiments contradict this assumption. Results in Figure 4 indicate that extra suboptimal responses can also be advantageous for policy training. Specifically, we observe consistent
|
| 237 |
+
|
| 238 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Reasoning BBH (CoT)</td><td colspan="2">Coding</td><td rowspan="2">GSMPLUS</td><td rowspan="2">MATH</td><td rowspan="2">Math TheoremQA</td><td rowspan="2">SVAMP</td><td rowspan="2">ASDiv</td><td rowspan="2">Avg.</td></tr><tr><td>LeetCode</td><td>HumanEval</td></tr><tr><td>Mixtral-7B-SFT</td><td>60.9</td><td>3.3</td><td>28.1</td><td>28.5</td><td>5.8</td><td>7.0</td><td>26.9</td><td>35.8</td><td>24.5</td></tr><tr><td>+ DPO</td><td>61.7</td><td>2.2 ↓</td><td>31.7</td><td>12.1 ↓</td><td>6.4</td><td>9.8</td><td>34.1</td><td>46.1</td><td>25.5</td></tr><tr><td>+ NCA</td><td>60.8 ↓</td><td>3.3</td><td>26.8 ↓</td><td>32.3</td><td>11.7</td><td>11.0</td><td>65.3</td><td>74.3</td><td>35.7</td></tr><tr><td>Mixtral-8×7B-SFT</td><td>75.6</td><td>16.7</td><td>61.0</td><td>57.6</td><td>40.1</td><td>25.9</td><td>85.9</td><td>87.5</td><td>56.3</td></tr><tr><td>+ DPO</td><td>74.9 ↓</td><td>17.2</td><td>47.6 ↓</td><td>55.8 ↓</td><td>35.3 ↓</td><td>26.9</td><td>67.3 ↓</td><td>75.7 ↓</td><td>50.1 ↓</td></tr><tr><td>+ NCA</td><td>75.6</td><td>21.1</td><td>62.8</td><td>61.5</td><td>41.6</td><td>26.9</td><td>86.8</td><td>86.9</td><td>57.9</td></tr></table>
|
| 239 |
+
|
| 240 |
+
Table 3: Alignment results for UltraInteract. We mark numbers that have decreased (↓) after training.
|
| 241 |
+
|
| 242 |
+
performance improvements when increasing the number of data responses from $K = 2$ to $K = 4$ for both InfoNCA and NCA algorithms, across various hyperparameters.
|
| 243 |
+
|
| 244 |
+
Combinatorial DPO is a suboptimal solution. Given the performance improvement offered by more suboptimal responses, one might expect that applying DPO to a combinatorially constructed preference dataset would yield results comparable to NCA/InfoNCA. To investigate this, we examined two variants of DPO that utilize all available responses in UltraFeedback.

$\mathbf{DPO} \times 3$: We pair the highest-reward response with each of the remaining three responses separately.

$\mathbf{DPO} \times C_4^2$: We sum the DPO losses over all $C_4^2$ pairs of the four responses.

Our experiments, detailed in Table 2, reveal that naively applying a combinatorial DPO loss to leverage all response information underperforms InfoNCA/NCA. $\mathrm{DPO} \times 3$ shows some benefit, while $\mathrm{DPO} \times C_4^2$ is harmful compared with simple data pruning. This is expected because InfoNCA and NCA possess theoretical guarantees (Theorem 3.2 and Theorem 4.2) that ensure convergence to the optimal LM policy, whereas combinatorial preference methods do not.
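To make the two variants concrete, here is a minimal sketch in the style of the pseudocode in Appendix B (the tensor layout and the `variant` switch are our own illustration, not the exact training code; `pi_logps`, `ref_logps`, and `rewards` follow the shapes used there):

```python
import torch
import torch.nn.functional as F


def combinatorial_dpo_loss(pi_logps, ref_logps, rewards, beta, variant="DPOx3"):
    """Sketch of the DPOx3 and DPOxC42 variants for K responses per prompt.

    pi_logps, ref_logps, rewards: (B, K) tensors, as in Appendix B.
    """
    model_rewards = (pi_logps - ref_logps) * beta            # (B, K)
    best = rewards.argmax(dim=-1, keepdim=True)              # (B, 1) index of top response
    r_best = model_rewards.gather(-1, best)                  # (B, 1)
    if variant == "DPOx3":
        # pair the highest-reward response with each remaining response
        mask = torch.ones_like(model_rewards, dtype=torch.bool).scatter_(-1, best, False)
        rest = model_rewards[mask].view(model_rewards.size(0), -1)   # (B, K-1)
        losses = -F.logsigmoid(r_best - rest).sum(dim=-1)            # (B,)
    elif variant == "DPOxC42":
        # sum the DPO loss over all pairs, preferring the higher-reward member
        order = rewards.argsort(dim=-1, descending=True)
        sorted_r = model_rewards.gather(-1, order)                   # (B, K)
        diffs = sorted_r.unsqueeze(-1) - sorted_r.unsqueeze(-2)      # (B, K, K): r_i - r_j
        upper = torch.triu(torch.ones_like(diffs, dtype=torch.bool), diagonal=1)
        losses = -F.logsigmoid(diffs[upper].view(diffs.size(0), -1)).sum(dim=-1)
    return losses.mean()
```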
# 5.2 NCA vs. DPO in Aligning Language Models with Pairwise Preference

In previous experiments, our focus is on reward datasets with $K > 2$ responses per prompt $(x \rightarrow \{y_{i}, r_{i}\}_{1:K})$. However, most existing alignment datasets are pairwise $(x \rightarrow \{y_{w} > y_{l}\})$, making it essential to also evaluate our proposed methods in pairwise preference settings.

Since InfoNCA is equivalent to DPO when only pairwise preference data is available (Sec. 3.2), we focus on comparing and clarifying the differences between the DPO and NCA algorithms.
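For completeness, here is the reduction (our restatement of the Sec. 3.2 claim): with $K = 2$ and $\alpha \rightarrow 0$, the InfoNCA soft labels $\mathrm{softmax}(r/\alpha)$ collapse to a one-hot vector on the preferred response $y_w$, so the InfoNCA loss becomes

$$
-\log\frac{e^{r_{\theta}(x,y_w)}}{e^{r_{\theta}(x,y_w)} + e^{r_{\theta}(x,y_l)}} = -\log\sigma\big(r_{\theta}(x,y_w) - r_{\theta}(x,y_l)\big),
$$

which, under the parameterization $r_{\theta} = \beta\log\frac{\pi_{\theta}}{\mu}$, is exactly the DPO loss.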
Preference dataset and evaluation metrics. We consider fine-tuning Mistral-7B and Mixtral-$8\times 7\mathrm{B}$ models on UltraInteract [44], a pairwise alignment dataset specifically designed for complex reasoning tasks. Before alignment, we perform SFT on UltraInteract's preferred responses for the $8\times 7\mathrm{B}$ model, and reuse the existing Mistral-SFT model from Sec. 5.1 for the 7B model. We evaluate model performance on various challenging tasks: BBH [39] for CoT reasoning, HumanEval [4] and LeetCode [12] for coding, and GSM-Plus [18], MATH, TheoremQA [6], SVAMP [28], and ASDiv [22] for math.
DPO may hurt reasoning performance while NCA helps. Results are presented in Table 3. Overall, NCA consistently outperforms DPO across benchmarks. Notably, we observe that DPO hurts overall performance on most reasoning tasks for the Mixtral-$8\times 7\mathrm{B}$-SFT model. This indicates that DPO might not be suitable for improving reasoning abilities, which echoes findings in concurrent work [44]. In contrast, NCA shows clear improvements on both the 7B and $8\times 7\mathrm{B}$ models.

![](images/0c38bdce60c9d1a8d4d9c38a5e818c01673d5e7ee97b56e8c6f72be913370bd4.jpg)
![](images/c607db30a8b084e542ad8edaee33adfc0475cac40a5c1e620e8dd91664a3a769.jpg)
![](images/4eeb6ebbcac10ae7d5022e50272c935e0d52e1c75dda02242e7e755a4aa98c81.jpg)
![](images/f2e4a66d594b804d87b9a39f02de6978f57798d30761f3b4a23c9372e4d82d37.jpg)

Figure 5: Comparison of data likelihood between InfoNCA/DPO and NCA.
NCA prevents the chosen likelihood from decreasing. What distinct optimization characteristics could cause the performance differences between pairwise NCA and DPO? To understand this, we empirically inspect how the data likelihood changes during training. As shown in Figure 5, the likelihood of preferred responses interestingly decreases after DPO training but increases after NCA training. This pattern is consistent across both preference and reward learning. The decreasing chosen-likelihood trend is concerning because it directly contradicts the maximum-likelihood objective used during the SFT stage. This drawback is exacerbated in reasoning tasks, where the preferred response is often the ground-truth answer. Consequently, we hypothesize that NCA's superior performance in reasoning tasks is due to its ability to avoid decreasing the chosen likelihood.

![](images/4c0e4e1e5daed4dbbc79d578df802e1e570edcf08399c18772a6714f0eef11ab.jpg)
![](images/0620eaf2e14fc59d1032abf5a4fe86ac2a050ca2f40662402de4a8e339b9651f.jpg)

Figure 6: NCA is more robust to hyperparameter changes and KL deviations. Left: Ablation results of $\alpha$ and $\beta$ for UltraFeedback-binarized. Right: Results for UltraFeedback-reward ($K = 4$).
Since DPO is essentially a specialization of InfoNCA, their contrasting likelihood trends can be explained theoretically. As elaborated in Sec. 4.2, NCA adjusts the absolute likelihood of data, while DPO/InfoNCA only considers the relative likelihood across different responses. Thus, a declining chosen likelihood directly contradicts NCA's training objective but not DPO's.
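Concretely, the two pairwise losses (restated from the toy example in the appendix, with the $\beta$ coefficient omitted as there) make this distinction visible: DPO only constrains the margin between the two log-ratios, while NCA places a separate sigmoid constraint on each absolute log-ratio:

$$
\begin{aligned}
L_{\theta}^{\mathrm{DPO}} &= -\log\sigma\Big(\log\tfrac{\pi_{\theta}(y_w)}{\mu(y_w)} - \log\tfrac{\pi_{\theta}(y_l)}{\mu(y_l)}\Big),\\
L_{\theta}^{\mathrm{NCA}} &= -\log\sigma\Big(\log\tfrac{\pi_{\theta}(y_w)}{\mu(y_w)}\Big) - \tfrac{1}{2}\log\sigma\Big(-\log\tfrac{\pi_{\theta}(y_w)}{\mu(y_w)}\Big) - \tfrac{1}{2}\log\sigma\Big(-\log\tfrac{\pi_{\theta}(y_l)}{\mu(y_l)}\Big).
\end{aligned}
$$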
Empirical takeaway: when to choose NCA over DPO? DPO and pairwise NCA have similar theoretical guarantees. Their different performance in alignment tasks is largely empirical, depending on the specific characteristics of the datasets and the nature of the tasks. Our observations show that NCA is more suitable for reasoning tasks such as math and coding (Table 3), where high-quality responses are sparse and adhering closely to the preferred responses in the dataset is critical. DPO may be more suitable for general instruction-following tasks like summarization and role-playing (Table 2), where datasets only reflect relative human preference and do not contain "golden" answers. In essence, NCA benefits from better dataset regularization, while DPO relies more on the LLM's generalization abilities.

We also observe that NCA has a greater tolerance for divergence from the initial SFT policy and is more robust to hyperparameter changes. As evidenced in Figure 6, we conduct a grid search over $\alpha$ and $\beta$. DPO can drastically fail to improve alignment performance if the learned policy strays too far from the SFT policy, and may randomly collapse under certain hyperparameters. In contrast, NCA does not exhibit similar issues. These observations suggest that NCA may be a worthwhile alternative when DPO training is unstable.
# 6 Related Work

Language model alignment. Current approaches cater to either explicit reward data or preference data, often lacking the versatility to address both concurrently. Reinforcement Learning [34] is inherently suitable for explicit reward scenarios. However, its on-policy nature necessitates first learning a reward model from data, leading to an indirect two-stage optimization process [8, 26, 36]. Recent developments in preference-based alignment techniques [33, 1, 11, 42, 16, 13] have streamlined this process. They enable direct alignment of LMs through a single loss, but at the expense of being confined to pairwise preference data. Other alignment approaches [43, 37, 45, 38] are also not tailored for aligning with reward datasets. Recent work [3] attempts to extend DPO's parameterization technique to explicit reward contexts, but it only considers binary rewards. In comparison, our methods can handle both continuous rewards and preference data.

Noise contrastive estimation. NCE [14] and its variant, InfoNCE [24], are established optimization methods for training unnormalized generative models [21]. NCE primarily leverages a binary classification loss and can be applied in self-supervised representation learning; examples include Word2Vec [23], MoCo [15], and SimCLR [5]. InfoNCE is related to maximizing mutual information between two distributions through a multiclass cross-entropy loss. It has seen successful applications in representation learning, such as CLIP [31], and is also widely used in language modeling [7], diffusion modeling [20], and reinforcement learning [17].
# 7 Conclusion

In this work, we formally consider the language model alignment problem in the context of explicit reward settings. By adeptly harnessing the NCE and InfoNCE theories, we introduce two practical algorithms: NCA and InfoNCA. Our proposed methods are uniquely suited to both reward data and preference data, and include DPO as a special case. Our experiments show that reward-based alignment methods outperform preference baselines by fully leveraging suboptimal responses in reward datasets. In preference settings, pairwise NCA outperforms DPO in complex reasoning tasks by effectively preventing the data likelihood from decreasing.
# Acknowledgments and Disclosure of Funding

We especially thank Cheng Lu, who greatly inspired us in linking the NCA algorithm with NCE theories. We also thank Jiuhai Chen and Tianlin Liu for their suggestions on reproducing the Zephyr results. We thank Bingrui Li and Weiyu Huang for their help with the experimental setup. We thank GitHub user Wing Lian for integrating the NCA algorithm into the TRL library. We thank Haosheng Zou for providing feedback on our method.

This work was supported by NSFC Projects (Nos. 62350080, 92370124, 92248303, 62276149, 62061136001, 62076147), BNRist (BNR2022RC01006), the Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J. Zhu was also supported by the XPlorer Prize.
# References

[1] Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR, 2024.

[2] Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.

[3] Tianchi Cai, Xierui Song, Jiyan Jiang, Fei Teng, Jinjie Gu, and Guannan Zhang. ULMA: Unified language model alignment with demonstration and point-wise human preference. arXiv preprint arXiv:2312.02554, 2023.

[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.

[6] Wenhu Chen, Ming Yin, Max W.F. Ku, Yixin Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. TheoremQA: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023.

[7] Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588, Online, June 2021. Association for Computational Linguistics.

[8] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017.

[9] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, et al. UltraFeedback: Boosting language models with scaled AI feedback. In Forty-first International Conference on Machine Learning, 2024.

[10] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314, 2023.

[11] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. KTO: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.

[12] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. DeepSeek-Coder: When the large language model meets programming - the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.

[13] Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie Zhou, Yankai Lin, et al. Controllable preference optimization: Toward controllable multi-objective alignment. arXiv preprint arXiv:2402.19085, 2024.

[14] Michael U Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(2), 2012.

[15] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.

[16] Jiwoo Hong, Noah Lee, and James Thorne. ORPO: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2024.

[17] Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning, pages 5639-5650. PMLR, 2020.

[18] Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. GSM-Plus: A comprehensive benchmark for evaluating the robustness of LLMs as mathematical problem solvers. arXiv preprint arXiv:2402.19255, 2024.

[19] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. AlpacaEval: An automatic evaluator of instruction-following models, 2023.

[20] Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, and Jun Zhu. Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning. In Proceedings of the 40th International Conference on Machine Learning, 2023.

[21] Zhuang Ma and Michael Collins. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. arXiv preprint arXiv:1809.01812, 2018.

[22] Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of ACL, 2020.

[23] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26, 2013.

[24] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

[25] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[26] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.

[27] Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of preference optimisation with DPO-positive. arXiv preprint arXiv:2402.13228, 2024.

[28] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021.

[29] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

[30] Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th International Conference on Machine Learning, pages 745-750, 2007.

[31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.

[32] Rafael Rafailov, Joey Hejna, Ryan Park, and Chelsea Finn. From r to q*: Your language model is secretly a Q-function. arXiv preprint arXiv:2404.12358, 2024.

[33] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

[34] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

[35] John Schulman, Barret Zoph, Christina Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, et al. ChatGPT: Optimizing language models for dialogue. OpenAI blog, 2022.

[36] Wei Shen, Xiaoying Zhang, Yuanshun Yao, Rui Zheng, Hongyi Guo, and Yang Liu. Improving reinforcement learning from human feedback using contrastive rewards. arXiv preprint arXiv:2403.07708, 2024.

[37] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023.

[38] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.

[39] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.

[40] Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, et al. Zephyr: Direct distillation of LM alignment. arXiv preprint arXiv:2310.16944, 2023.

[41] Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. TRL: Transformer reinforcement learning, 2020.

[42] Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, and Yuxin Chen. Beyond reverse KL: Generalizing direct preference optimization with diverse divergence constraints. arXiv preprint arXiv:2309.16240, 2023.

[43] Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: Rank responses to align language models with human feedback. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

[44] Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, et al. Advancing LLM reasoning generalists with preference trees. arXiv preprint arXiv:2404.02078, 2024.

[45] Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. SLiC-HF: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.

[46] Yao Zhao, Mikhail Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. Calibrating sequence likelihood improves conditional language generation. In The Eleventh International Conference on Learning Representations, 2022.

[47] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595-46623, 2023.
# A Proof of Theorems

# A.1 InfoNCA Objective

Recall that our optimal language policy is

$$
\pi^{*}(y|x) = \mu(y|x)\frac{e^{r(x,y)/\alpha}}{Z(x)}. \tag{Eq.2}
$$

Consider a batch of $K$ responses $\{y_i\}_{1:K}$ for an instruction $x$. $\{y_i\}_{1:K}$ consists of one optimal response $y_{\nu}$ sampled from $\pi^{*}(y|x)\propto \mu(y|x)e^{r(x,y)/\alpha}$, and $K-1$ suboptimal noises independently sampled from $\mu(y|x)$. Here $\nu \in 1{:}K$ is the random index of the optimal response. The joint probability of $\{y_i\}_{1:K}$ is

$$
p^{\mathrm{joint}}(\{y_i\}_{1:K}|x,\nu) = \pi^{*}(y_{\nu}|x)\prod_{i\neq\nu}\mu(y_i|x) = \frac{\pi^{*}(y_{\nu}|x)}{\mu(y_{\nu}|x)}\prod_{i=1}^{K}\mu(y_i|x).
$$

Given that the prior satisfies $p(\nu=1) = p(\nu=2) = \ldots = p(\nu=K) = \frac{1}{K}$, the data posterior is

$$
p^{\mathrm{joint}}(\nu|x,\{y_i\}_{1:K}) = \frac{\pi^{*}(y_{\nu}|x)/\mu(y_{\nu}|x)}{\sum_{j=1}^{K}\pi^{*}(y_j|x)/\mu(y_j|x)}.
$$

Equivalently, writing $\mathcal{O}$ for the optimal response, Bayes' rule gives

$$
p(\mathcal{O}=y_i|\{y\}_{1:K}) = \frac{p(\{y\}_{1:K}|\mathcal{O}=y_i)\,p(\mathcal{O}=y_i)}{\sum_{j=1}^{K}p(\{y\}_{1:K}|\mathcal{O}=y_j)\,p(\mathcal{O}=y_j)} = \frac{\pi^{*}(y_i|x)/\mu(y_i|x)}{\sum_{j=1}^{K}\pi^{*}(y_j|x)/\mu(y_j|x)} = \frac{e^{r(x,y_i)/\alpha}}{\sum_{j=1}^{K}e^{r(x,y_j)/\alpha}}.
$$

Define the model policy as

$$
\pi_{\theta}(y|x) := \mu(y|x)\frac{e^{r_{\theta}(x,y)}}{Z_{\theta}(x)}.
$$

The model posterior probability satisfies

$$
p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\}_{1:K}) = \frac{e^{r_{\theta}(x,y_{\nu})}}{\sum_{i=1}^{K}e^{r_{\theta}(x,y_i)}}.
$$
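As a quick numerical sanity check (ours, not part of the proof), the posterior above indeed reduces to a softmax over scaled rewards, since $\pi^{*}(y|x)/\mu(y|x) = e^{r(x,y)/\alpha}/Z(x)$ and $Z(x)$ cancels in the ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
K, alpha = 4, 0.5
r = rng.uniform(0, 10, size=K)        # toy rewards r(x, y_i)
mu = rng.dirichlet(np.ones(K))        # toy pretrained likelihoods mu(y_i | x)

Z = np.sum(mu * np.exp(r / alpha))    # partition function over this toy support
pi_star = mu * np.exp(r / alpha) / Z  # optimal policy, Eq. 2

posterior = (pi_star / mu) / np.sum(pi_star / mu)
softmax = np.exp(r / alpha) / np.sum(np.exp(r / alpha))
assert np.allclose(posterior, softmax)
```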
Theorem A.1 (InfoNCA Objective). For any $K > 1$, $\alpha > 0$, we have the following results.

(a) Equivalent objective.

$$
\begin{aligned}
&\min_{\theta}\ \mathbb{E}_{p^{\mathrm{joint}}(x,\{y_i\})} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,\{y_i\})\,\big\|\,p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})\big] \\
\Longleftrightarrow\ &\min_{\theta}\ -\mathbb{E}_{p(x)\prod_{i=1}^{K}\mu(y_i|x)}\sum_{i=1}^{K}\frac{e^{r(x,y_i)/\alpha}}{Z(x)}\log\frac{e^{r_{\theta}(x,y_i)}}{\sum_{j=1}^{K}e^{r_{\theta}(x,y_j)}},
\end{aligned} \tag{Eq.5}
$$

where $Z(x) = \mathbb{E}_{\mu(y|x)} e^{r(x,y)/\alpha}$.

(b) Optimal solution. Assume unlimited model capacity and data samples. The optimal $r_{\theta^*}$ and $\pi_{\theta^*}$ for solving Eq. 5 are

$$
r_{\theta^{*}}(x,y) = r(x,y)/\alpha + C(x), \quad \text{and} \quad \pi_{\theta^{*}}(y|x) \propto \mu(y|x)e^{r(x,y)/\alpha},
$$

where $C(x)$ is an arbitrary function conditioned on $x$.
Proof. (a) Equivalent objective.

$$
\begin{aligned}
&\min_{\theta} \mathbb{E}_{p^{\mathrm{joint}}(x,\{y_i\})} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,\{y_i\})\,\big\|\,p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})\big] \\
\Leftrightarrow\ &\min_{\theta} \mathbb{E}_{p^{\mathrm{joint}}(x,\{y_i\})}\mathbb{E}_{p^{\mathrm{joint}}(\nu|x,\{y_i\})}\log\frac{p^{\mathrm{joint}}(\nu|x,\{y_i\})}{p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p^{\mathrm{joint}}(x,\{y_i\})}\mathbb{E}_{p^{\mathrm{joint}}(\nu|x,\{y_i\})}\log p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\}) \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)p(\nu)p^{\mathrm{joint}}(\{y_i\}|x,\nu)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\}) \quad \text{(Bayes' rule)} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)p(\nu)\prod_{i=1}^{K}\mu(y_i|x)}\frac{\pi^{*}(y_{\nu}|x)}{\mu(y_{\nu}|x)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\}) \quad \text{(importance sampling)} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)\prod_{i=1}^{K}\mu(y_i|x)}\left[\frac{1}{K}\sum_{\nu=1}^{K}\frac{\pi^{*}(y_{\nu}|x)}{\mu(y_{\nu}|x)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})\right] \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)\prod_{i=1}^{K}\mu(y_i|x)}\left[\sum_{\nu=1}^{K}\frac{e^{r(x,y_{\nu})/\alpha}}{Z(x)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})\right] \quad \text{(based on Eq. 2; the constant $\tfrac{1}{K}$ is dropped)} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)\prod_{i=1}^{K}\mu(y_i|x)}\left[\sum_{i=1}^{K}\frac{e^{r(x,y_i)/\alpha}}{Z(x)}\log\frac{e^{r_{\theta}(x,y_i)}}{\sum_{j=1}^{K}e^{r_{\theta}(x,y_j)}}\right] \quad \text{(change sum index)}
\end{aligned}
$$

(b) Optimal solution.

Given the conclusions from (a), and with unlimited model capacity, $p_{\theta}^{\mathrm{joint}}(\nu|x,\{y_i\})$ can represent any discrete distribution, such that we can arrive at the global optimum given infinite training data.

$$
\begin{aligned}
&\mathbb{E}_{p^{\mathrm{joint}}(x,\{y_i\})} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,\{y_i\})\,\big\|\,p_{\theta^{*}}^{\mathrm{joint}}(\nu|x,\{y_i\})\big] = 0 \\
\Longrightarrow\ & p^{\mathrm{joint}}(\nu|x,\{y_i\}) = p_{\theta^{*}}^{\mathrm{joint}}(\nu|x,\{y_i\}) \quad \forall x,\nu,\{y_i\}_{1:K} \\
\Longrightarrow\ & \frac{\pi^{*}(y_{\nu}|x)/\mu(y_{\nu}|x)}{\sum_{i=1}^{K}\pi^{*}(y_i|x)/\mu(y_i|x)} = \frac{e^{r_{\theta^{*}}(x,y_{\nu})}}{\sum_{i=1}^{K}e^{r_{\theta^{*}}(x,y_i)}} \quad \forall x,\nu,\{y_i\}_{1:K} \\
\Longrightarrow\ & r_{\theta^{*}}(x,y) = r(x,y)/\alpha + C(x) \quad \forall x,y \\
\Longrightarrow\ & \pi_{\theta^{*}}(y|x) \propto \mu(y|x)e^{r(x,y)/\alpha} \quad \forall x,y
\end{aligned}
$$
# A.2 NCA Objective

Recall that the optimal language policy is

$$
\pi^{*}(y|x) = \mu(y|x)\frac{e^{r(x,y)/\alpha}}{Z(x)}. \tag{Eq.2}
$$

Consider a response $y$ randomly sampled from either the optimal LM $\pi^{*}(y|x) = \mu(y|x)\frac{e^{r(x,y)/\alpha}}{Z(x)}$ or the pretrained LM $\mu(y|x)$. Let a binary variable $\nu = 1$ indicate that the response $y$ is sampled from $\pi^{*}$.

Then the marginal distribution of $y$ is

$$
p^{\mathrm{joint}}(y|x) := p(\nu=0)\mu(y|x) + p(\nu=1)\pi^{*}(y|x).
$$

Given the prior $p(\nu=0) = p(\nu=1) = \frac{1}{2}$, by Bayes' rule the data posterior satisfies

$$
p^{\mathrm{joint}}(\nu=0|x,y) = \frac{\mu(y|x)}{\mu(y|x)+\pi^{*}(y|x)}, \qquad
p^{\mathrm{joint}}(\nu=1|x,y) = \frac{\pi^{*}(y|x)}{\mu(y|x)+\pi^{*}(y|x)}.
$$

Define the model policy as $\pi_{\theta}(y|x) := \mu(y|x)e^{r_{\theta}(x,y)}$. The model posterior probability satisfies

$$
p_{\theta}^{\mathrm{joint}}(\nu=1|x,y) = \sigma(r_{\theta}(x,y)), \qquad
p_{\theta}^{\mathrm{joint}}(\nu=0|x,y) = 1 - \sigma(r_{\theta}(x,y)) = \sigma(-r_{\theta}(x,y)). \tag{Eq.8}
$$
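Eq. 8 follows directly from the definition $\pi_{\theta}(y|x) = \mu(y|x)e^{r_{\theta}(x,y)}$; spelling out the step (our restatement, mirroring the derivation in part (b) of the proof below):

$$
p_{\theta}^{\mathrm{joint}}(\nu=1|x,y) = \frac{\pi_{\theta}(y|x)}{\mu(y|x)+\pi_{\theta}(y|x)} = \frac{e^{r_{\theta}(x,y)}}{1+e^{r_{\theta}(x,y)}} = \sigma(r_{\theta}(x,y)).
$$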
Theorem A.2 (NCA Objective). For any $\alpha > 0$, we have the following results.

(a) Equivalent objective.

$$
\begin{aligned}
&\min_{\theta}\ \mathbb{E}_{p^{\mathrm{joint}}(x,y)} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,y)\,\big\|\,p_{\theta}^{\mathrm{joint}}(\nu|x,y)\big] \\
\Longleftrightarrow\ &\min_{\theta}\ -\mathbb{E}_{p(x)\mu(y|x)}\left[\frac{e^{r(x,y)/\alpha}}{Z(x)}\log\sigma(r_{\theta}(x,y)) + \log\sigma(-r_{\theta}(x,y))\right],
\end{aligned} \tag{Eq.9}
$$

where $Z(x) = \mathbb{E}_{\mu(y|x)}e^{r(x,y)/\alpha}$.

(b) Optimal solution. Assume unlimited model capacity and data samples. The optimal $r_{\theta^*}$ and $\pi_{\theta^*}$ for solving Eq. 9 are

$$
r_{\theta^{*}}(x,y) = r(x,y)/\alpha - \log\mathbb{E}_{\mu(y|x)}e^{r(x,y)/\alpha}, \quad \text{and} \quad \pi_{\theta^{*}}(y|x) \propto \mu(y|x)e^{r(x,y)/\alpha}.
$$
Proof. (a) Equivalent objective.

$$
\begin{aligned}
&\min_{\theta} \mathbb{E}_{p^{\mathrm{joint}}(x,y)} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,y)\,\big\|\,p_{\theta}^{\mathrm{joint}}(\nu|x,y)\big] \\
\Leftrightarrow\ &\min_{\theta} \mathbb{E}_{p^{\mathrm{joint}}(x,y)}\mathbb{E}_{p^{\mathrm{joint}}(\nu|x,y)}\log\frac{p^{\mathrm{joint}}(\nu|x,y)}{p_{\theta}^{\mathrm{joint}}(\nu|x,y)} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p^{\mathrm{joint}}(x,y)}\mathbb{E}_{p^{\mathrm{joint}}(\nu|x,y)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,y) \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)p(\nu)p^{\mathrm{joint}}(y|x,\nu)}\log p_{\theta}^{\mathrm{joint}}(\nu|x,y) \quad \text{(Bayes' rule)} \\
\Leftrightarrow\ &\min_{\theta} -\big[p(\nu=0)\,\mathbb{E}_{p(x)p^{\mathrm{joint}}(y|x,\nu=0)}\log p_{\theta}^{\mathrm{joint}}(\nu=0|x,y) + p(\nu=1)\,\mathbb{E}_{p(x)p^{\mathrm{joint}}(y|x,\nu=1)}\log p_{\theta}^{\mathrm{joint}}(\nu=1|x,y)\big] \\
\Leftrightarrow\ &\min_{\theta} -\big[\mathbb{E}_{p(x)\mu(y|x)}\log\sigma(-r_{\theta}(x,y)) + \mathbb{E}_{p(x)\pi^{*}(y|x)}\log\sigma(r_{\theta}(x,y))\big] \quad \text{(by Eq. 8; the constant prior $\tfrac{1}{2}$ is dropped)} \\
\Leftrightarrow\ &\min_{\theta} -\mathbb{E}_{p(x)\mu(y|x)}\left[\frac{e^{r(x,y)/\alpha}}{Z(x)}\log\sigma(r_{\theta}(x,y)) + \log\sigma(-r_{\theta}(x,y))\right] \quad \text{(importance sampling)}
\end{aligned}
$$

(b) Optimal solution.

Given the conclusions from (a), and with unlimited model capacity, $p_{\theta}^{\mathrm{joint}}(\nu|x,y)$ can represent any discrete distribution, such that we can arrive at the global optimum given infinite training data.

$$
\begin{aligned}
&\mathbb{E}_{p^{\mathrm{joint}}(x,y)} D_{\mathrm{KL}}\big[p^{\mathrm{joint}}(\nu|x,y)\,\big\|\,p_{\theta^{*}}^{\mathrm{joint}}(\nu|x,y)\big] = 0 \\
\Longrightarrow\ & p^{\mathrm{joint}}(\nu|x,y) = p_{\theta^{*}}^{\mathrm{joint}}(\nu|x,y) \quad \forall x,\nu,y \\
\Longrightarrow\ & \frac{\pi^{*}(y|x)}{\mu(y|x)+\pi^{*}(y|x)} = \sigma(r_{\theta^{*}}(x,y)) = \frac{e^{r_{\theta^{*}}(x,y)}}{1+e^{r_{\theta^{*}}(x,y)}} \quad \forall x,y \\
\Longrightarrow\ & e^{r_{\theta^{*}}(x,y)} = \frac{\pi^{*}(y|x)}{\mu(y|x)} \quad \forall x,y \\
\Longrightarrow\ & r_{\theta^{*}}(x,y) = r(x,y)/\alpha - \log\mathbb{E}_{\mu(y|x)}e^{r(x,y)/\alpha} \quad \forall x,y \\
\Longrightarrow\ & \pi_{\theta^{*}}(y|x) \propto \mu(y|x)e^{r(x,y)/\alpha} \quad \forall x,y
\end{aligned}
$$
# B Pseudocode

PyTorch code for the InfoNCA/NCA loss for reward datasets is provided below:

```python
import torch.nn.functional as F


def reward_loss(pi_logps, ref_logps, rewards, alpha, beta, loss_type):
    '''
    pi_logps: policy logprobs for K responses, shape (B, K)
    ref_logps: reference logprobs for K responses, shape (B, K)
    rewards: reward labels for K responses, shape (B, K)
    alpha: the reward temperature controlling the strength of the KL penalty
    beta: the parameterization coefficient that defines the reward model
    loss_type: either "InfoNCA" or "NCA"
    '''
    soft_labels = (rewards / alpha).softmax(dim=-1)  # (B, K)
    model_rewards = (pi_logps - ref_logps) * beta  # (B, K)
    if loss_type == "InfoNCA":
        model_logps = model_rewards.log_softmax(dim=-1)  # (B, K)
        losses = -(soft_labels * model_logps).sum(dim=-1)  # (B,)
    elif loss_type == "NCA":
        optimization = -(soft_labels * F.logsigmoid(model_rewards)).sum(dim=-1)  # (B,)
        regularization = -F.logsigmoid(-model_rewards).mean(dim=-1)  # (B,)
        losses = optimization + regularization  # (B,)
    return losses.mean()
```
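A quick smoke test for the loss above (the shapes and values here are made up for illustration, assuming `reward_loss` from the listing is in scope):

```python
import torch

B, K = 4, 4
pi_logps = torch.randn(B, K)
ref_logps = torch.randn(B, K)
rewards = torch.rand(B, K) * 10  # e.g., UltraFeedback-style 0-10 scores

for loss_type in ["InfoNCA", "NCA"]:
    loss = reward_loss(pi_logps, ref_logps, rewards,
                       alpha=0.01, beta=0.01, loss_type=loss_type)
    print(loss_type, float(loss))
```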
The loss implementation under pairwise preference settings is equivalent to the reward losses above with $K = 2$ and $\alpha \rightarrow 0$. We provide the code separately for easy comparison with DPO.

```python
import torch.nn.functional as F


def preference_loss(chosen_pi_logps, chosen_ref_logps,
                    rejected_pi_logps, rejected_ref_logps,
                    beta, loss_type):
    '''
    chosen_pi_logps: policy logprobs for the preferred responses, shape (B,)
    chosen_ref_logps: reference logprobs for the preferred responses, shape (B,)
    rejected_pi_logps: policy logprobs for the dispreferred responses, shape (B,)
    rejected_ref_logps: reference logprobs for the dispreferred responses, shape (B,)
    beta: the parameterization coefficient that defines the reward model
    loss_type: one of "DPO", "InfoNCA" or "NCA"
    '''
    chosen_rewards = (chosen_pi_logps - chosen_ref_logps) * beta  # (B,)
    rejected_rewards = (rejected_pi_logps - rejected_ref_logps) * beta  # (B,)
    if loss_type in ["DPO", "InfoNCA"]:
        losses = -F.logsigmoid(chosen_rewards - rejected_rewards)  # (B,)
    elif loss_type == "NCA":
        losses = (-F.logsigmoid(chosen_rewards)
                  - 0.5 * F.logsigmoid(-chosen_rewards)
                  - 0.5 * F.logsigmoid(-rejected_rewards))  # (B,)
    return losses.mean()
```
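As a sanity check (our own, not from the paper's released code), the pairwise NCA loss should match `reward_loss` with $K = 2$ and a very small $\alpha$, since the soft labels then collapse to a one-hot vector on the chosen response:

```python
import torch

B = 8
c_pi, c_ref = torch.randn(B), torch.randn(B)
r_pi, r_ref = torch.randn(B), torch.randn(B)
# rewards only need to rank the chosen response first; alpha -> 0 makes labels one-hot
rewards = torch.tensor([[1.0, 0.0]]).repeat(B, 1)

pairwise = preference_loss(c_pi, c_ref, r_pi, r_ref, beta=0.01, loss_type="NCA")
general = reward_loss(torch.stack([c_pi, r_pi], dim=1),
                      torch.stack([c_ref, r_ref], dim=1),
                      rewards, alpha=1e-6, beta=0.01, loss_type="NCA")
print(torch.isclose(pairwise, general, atol=1e-5))
```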
# C Experimental Details

Experiments with UltraFeedback. Our implementation is heavily based on the Transformer Reinforcement Learning (TRL) library [41] and Zephyr's official code base [40]. All models are fine-tuned from the publicly accessible HuggingFaceH4/mistral-7B-SFT-beta model. Experiments are run on Nvidia A40 or RTX 4090 GPUs using bfloat16 precision. We ablate $\beta \in \{3e{-}4, 1e{-}3, 3e{-}3, 1e{-}2, 3e{-}2, 1e{-}1, 3e{-}1, 1.0\}$ and $\alpha \in \{0.01, 0.1, 0.33, 1.0, 3.33\}$. The default reward temperature $\alpha$ is 0.01, and the default parameterization coefficient $\beta$ is also 0.01. We adopt the QLoRA [10] fine-tuning technique with rank 16, $\alpha_{\mathrm{lora}} = 16$, and a dropout rate of 0.05. We train all models for 1 epoch with a batch size of 32, using an AdamW optimizer with a learning rate of 5e-6. For the KTO and IPO baselines, we adopt exactly the same training pipeline when reporting their performance, except that we tune $\beta \in \{0.01, 0.1, 0.3, 0.5, 1.0\}$. We find the most suitable $\beta$ for KTO is 0.01, and for IPO is 0.5.
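For reference, the sweep above can be written down as a plain config (the dictionary layout is our own; only the values come from the text):

```python
# Hyperparameter grid for the UltraFeedback experiments (values from the text;
# the structure of this config is illustrative, not the authors' actual code).
ULTRAFEEDBACK_CONFIG = {
    "base_model": "HuggingFaceH4/mistral-7B-SFT-beta",
    "beta_grid": [3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1, 1.0],
    "alpha_grid": [0.01, 0.1, 0.33, 1.0, 3.33],
    "defaults": {"alpha": 0.01, "beta": 0.01},
    "qlora": {"rank": 16, "lora_alpha": 16, "dropout": 0.05},
    "train": {"epochs": 1, "batch_size": 32, "optimizer": "AdamW", "lr": 5e-6},
}
```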
Experiments with UltraInteract. We follow [44] and fine-tune all parameters of the pretrained model with UltraInteract and UltraFeedback. Specifically, for the HuggingFaceH4/mistral-7B-SFT-beta model, we directly adopt the recipe of Eurus, with the same data mixture (all 220K multi-turn trajectory pairs in UltraInteract and all 340K pairs in UltraFeedback) and hyperparameters. $\beta$ is set to 0.1 in all cases. The model is trained for 1 epoch with a cosine schedule, a learning rate of 5e-7, and a warmup ratio of 0.1. For Mixtral-8x7B, we first perform SFT with the recipe for Eurus-70B-SFT and then apply the same preference learning recipe as above.
(Figure 7 screenshot: the MT-Bench judging interface, showing Single Answer Grading and Pairwise Comparison views for an example "writing" prompt ("Pretend yourself to be Elon Musk in all the following conversations..."), the two models' responses across both turns, and the per-turn model judgments.)

Figure 7: Rating system screenshot. Our rating system is adapted from [47].
# D Additional Experiment Results

![](images/c7a46812a4f9ec18c3151e5f64a29a1d49dd071c46ee1b10ea2c53e8bf1af1c3.jpg)

Figure 8: MT-bench scores for InfoNCA (left) and NCA (right) under various $\alpha$ and $\beta$. Results are averaged over the $K = 2$ and $K = 4$ experiments. Overall, model performance is relatively more sensitive to variations of $\beta$ than of $\alpha$. NCA shows greater tolerance to hyperparameter variations.

![](images/d43c68e3b33a8f0392303ea7f58bacc96bd5d3b74060f60da4d2c23aa90931ee.jpg)

<table><tr><td>Method</td><td>Functional Form</td><td>MT-bench</td><td>KL Divergence</td></tr><tr><td>Mistral-7B-SFT</td><td>-</td><td>6.45</td><td>0.00</td></tr><tr><td>+DPO</td><td>rθ = β log πθ/μ</td><td>7.34</td><td>0.53</td></tr><tr><td rowspan="3">+InfoNCA</td><td>rθ = β log πθ/μ</td><td>7.63</td><td>0.47</td></tr><tr><td>rθ = β log πθ</td><td>4.65</td><td>4.39</td></tr><tr><td>rθ = β log πθ + SFT regularization</td><td>7.15</td><td>0.26</td></tr><tr><td rowspan="3">+NCA</td><td>rθ = β log πθ/μ</td><td>7.52</td><td>0.39</td></tr><tr><td>rθ = β log πθ</td><td>5.82</td><td>3.32</td></tr><tr><td>rθ = β log πθ + SFT regularization</td><td>6.88</td><td>0.09</td></tr></table>

Table 4: Comparison of different functional forms of $r_{\theta}$ on the UltraFeedback dataset.
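The three functional forms in Table 4 differ only in how the model reward $r_{\theta}$ is read off the policy; the sketch below is our own reading of those rows (the `form` switch and `lambda_sft` are hypothetical names for illustration):

```python
def model_reward(pi_logps, ref_logps, beta, form="ratio"):
    # r_theta = beta * log(pi / mu): the default, reference-regularized form
    if form == "ratio":
        return beta * (pi_logps - ref_logps)
    # r_theta = beta * log(pi): drops the reference model entirely
    elif form == "policy":
        return beta * pi_logps
    raise ValueError(form)


# "+ SFT regularization" in Table 4 adds a maximum-likelihood term on the
# preferred response; lambda_sft is a hypothetical weight, not a tuned value.
def loss_with_sft_reg(base_loss, chosen_pi_logps, lambda_sft=1.0):
    return base_loss - lambda_sft * chosen_pi_logps.mean()
```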
# E Additional Empirical Results

We provide sampled responses from LM policies trained with DPO and NCA, respectively, to compare their data regularization behavior. In general, generations from the NCA policy are more similar to the preferred response in the dataset.
<table><tr><td>Prompt</td></tr><tr><td>You are given an answer. Create a question for the answer.
Grams to Ounces conversion 1 gram (g) is equal to 0.03527396195 ounces (oz).</td></tr><tr><td>Preferred Response in Dataset (r=8.0)</td></tr><tr><td>What is the conversion rate for grams to ounces for 1 gram?</td></tr><tr><td>Dispreferred Response in Dataset (r=7.0)</td></tr><tr><td>What is the relationship between grams and ounces in weight measurement?</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{DPO}}$ (r=2.0)</td></tr><tr><td>How many ounces are equivalent to 200 grams according to the given conversion rate of 1 gram (g) to 0.03527396195 ounces (oz)?</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{NCA}}$ (r=10.0)</td></tr><tr><td>Question: What is the conversion rate from grams to ounces, and how many grams are one ounce?</td></tr></table>

Table 5: Sampled generations and the log-likelihood ratio of the NCA and the DPO/InfoNCA policy.

<table><tr><td>Prompt</td></tr><tr><td>If "A busy crosswalk somewhere in asia." does that mean that "The crosswalk was blocked off by police.?"
Options:
- yes; - it is not possible to tell; - no</td></tr><tr><td>Preferred Response in Dataset (r=8.0)</td></tr><tr><td>It is not possible to tell.
Confidence: 95%</td></tr><tr><td>Dispreferred Response in Dataset (r=7.5)</td></tr><tr><td>Question:
If "A busy crosswalk somewhere in asia." does that mean that "The crosswalk was blocked off by police.?"
Answer:
It is not possible to tell.
Confidence: 90%</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{DPO}}$ (r=3.0)</td></tr><tr><td>- It is not possible to determine whether "The crosswalk was blocked off by police." in "A busy crosswalk somewhere in Asia." To determine whether the crosswalk was blocked off by police, you would need additional information or context.</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{NCA}}$ (r=8.0)</td></tr><tr><td>It is not possible to tell.
Confidence: 95%</td></tr></table>

Table 6: Sampled generations and the log-likelihood ratio of the NCA and the DPO/InfoNCA policy.

<table><tr><td>Prompt</td></tr><tr><td>In this task, you need to count the occurrences of the given word in the given sentence.
Sentence: 'a green double decker bus sitting on top of a parking lot'. Count the occurrences of the word 'a' in the given sentence.</td></tr><tr><td>Preferred Response in Dataset (r=9.0)</td></tr><tr><td>The word 'a' appears 3 times in the given sentence.
Confidence: 100%</td></tr><tr><td>Dispreferred Response in Dataset (r=6.0)</td></tr><tr><td>The given sentence is "a green double decker bus sitting on top of a parking lot." To answer your question about the occurrences of the word 'a', I will count the number of times the word appears in the sentence.
After counting the occurrences of the word 'a' in the sentence, I found that it appears once. Therefore, the answer to your question is that the word 'a' appears once in the given sentence.</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{DPO}}$ (r=4.0)</td></tr><tr><td>Of course, I'm here to help! There are 11 occurrences of the word 'a' in the given sentence. Here's how you can count them: 1. 'a' in 'a' 2. 'a' in 'green' 3. 'a' in 'double' 4. 'a' in 'decker' 5. 'a' in 'bus' 6. 'a' in 'sitting' 7. 'a' in 'on ... Confidence: 90%</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{NCA}}$ (r=8.0)</td></tr><tr><td>Sure! The word 'a' appears 3 times in the given sentence.</td></tr></table>

Table 7: Sampled generations and the log-likelihood ratio of the NCA and the DPO/InfoNCA policy.

<table><tr><td>Prompt</td></tr><tr><td>if X2+3=12. What is X?</td></tr><tr><td>Preferred Response in Dataset (r=9.0)</td></tr><tr><td>To find the value of X, we'll first need to isolate X2:
X2+3=12
Subtract 3 from both sides:
X2=9
Now, we can find the square root of both sides of the equation to solve for X:
X=±3
So, X can be either 3 or -3.</td></tr><tr><td>Dispreferred Response in Dataset (r=4.0)</td></tr><tr><td>I'm sorry, I cannot solve this algebra equation. Can I assist you with anything else?</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{DPO}}$ (r=7.0)</td></tr><tr><td>Sure, I'd love to help. By solving the equation, the value of X should be 3.</td></tr><tr><td>Response generated by $\pi_\theta^{\mathrm{NCA}}$ (r=9.0)</td></tr><tr><td>To find the value of X, we need to solve the equation X2+3=12.
First, subtract 3 from both sides:
X2=12-3 X2=9
Next, take the square root of both sides:
X=±√9
So, X can be either 3 or -3.</td></tr></table>

Table 8: Sampled generations and the log-likelihood ratio of the NCA and the DPO/InfoNCA policy.
# F Comparing NCA with DPO on Data Regularization

Suppose there are only 4 possible responses $\{y_a, y_b, y_c, y_d\}$. Each response has a reward $r(y)$ and a likelihood under the pretrained language model policy $\mu$:

<table><tr><td>-</td><td>ya</td><td>yb</td><td>yc</td><td>yd</td></tr><tr><td>Likelihood μ(y)</td><td>40%</td><td>50%</td><td>5%</td><td>5%</td></tr><tr><td>Reward r(y)</td><td>10</td><td>3</td><td>7</td><td>0</td></tr></table>

Table 9: Initial probabilities and rewards.

Given a preference dataset $D = \{y_{a} > y_{b}\}$ ($y_{c}$ and $y_{d}$ do not appear in the dataset), the loss functions are defined as:

$$
\begin{aligned}
L_{\theta}^{\mathrm{DPO}} &= -\log\sigma\left(\log\frac{\pi_{\theta}(y_a)}{\mu(y_a)} - \log\frac{\pi_{\theta}(y_b)}{\mu(y_b)}\right) \\
L_{\theta}^{\mathrm{NCA}} &= -\log\sigma\left(\log\frac{\pi_{\theta}(y_a)}{\mu(y_a)}\right) - \frac{1}{2}\log\sigma\left(-\log\frac{\pi_{\theta}(y_a)}{\mu(y_a)}\right) - \frac{1}{2}\log\sigma\left(-\log\frac{\pi_{\theta}(y_b)}{\mu(y_b)}\right)
\end{aligned}
$$

After fine-tuning, there are several possibilities for $\pi_{\theta}$:

<table><tr><td>-</td><td>πθ(ya)</td><td>πθ(yb)</td><td>πθ(yc)</td><td>πθ(yd)</td><td>log πθ(ya)/μ(ya) - log πθ(yb)/μ(yb)</td><td>Avg. reward E[r]</td></tr><tr><td>(1)</td><td>20% ↓</td><td>10% ↓↓</td><td>5%</td><td>65% ↑</td><td>0.916 > 0</td><td>2.65 ↓</td></tr><tr><td>(2)</td><td>20% ↓</td><td>10% ↓↓</td><td>65% ↑</td><td>5%</td><td>0.916 > 0</td><td>6.85 ↑</td></tr><tr><td>(3) Wanted</td><td>60% ↑</td><td>30% ↓</td><td>5%</td><td>5%</td><td>0.916 > 0</td><td>7.25 ↑</td></tr></table>

Table 10: Post fine-tuning probabilities and rewards.
In scenarios (1) and (2), the likelihood of both $y_{a}$ and $y_{b}$ decreases. However, (1) and (2) still satisfy the DPO loss, because the likelihood of $y_{b}$ decreases more, so the relative likelihood margin between $y_{a}$ and $y_{b}$ becomes larger. In (1) and (2), the likelihood of either $y_{c}$ or $y_{d}$ increases because $\pi(y_{a})\downarrow + \pi(y_{b})\downarrow + \pi(y_{c})? + \pi(y_{d})? = 1$. However, $y_{c}$ and $y_{d}$ are unreliable because we do not know their quality (rewards). The LM policy could generalize to a low-quality response like $y_{d}$ (case (1)).

In contrast, NCA effectively prevents the winning response likelihood $\pi(y_a)$ from decreasing, because it mainly optimizes the absolute data likelihood instead of only the relative likelihood margin $\log \frac{\pi_\theta(y_a)}{\mu(y_a)} - \log \frac{\pi_\theta(y_b)}{\mu(y_b)}$. Thus, NCA is more likely to assign a larger likelihood to responses within the dataset.
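A short script (ours, for illustration) reproduces the numbers in Table 10 and shows that all three scenarios achieve the same DPO margin while only (3) keeps the chosen likelihood high:

```python
import math

mu = {"a": 0.40, "b": 0.50, "c": 0.05, "d": 0.05}
r = {"a": 10, "b": 3, "c": 7, "d": 0}
scenarios = {
    "(1)": {"a": 0.20, "b": 0.10, "c": 0.05, "d": 0.65},
    "(2)": {"a": 0.20, "b": 0.10, "c": 0.65, "d": 0.05},
    "(3)": {"a": 0.60, "b": 0.30, "c": 0.05, "d": 0.05},
}

for name, pi in scenarios.items():
    margin = math.log(pi["a"] / mu["a"]) - math.log(pi["b"] / mu["b"])
    avg_reward = sum(pi[y] * r[y] for y in pi)
    print(name, round(margin, 3), round(avg_reward, 2))
# Expected: every margin is 0.916; average rewards are 2.65 / 6.85 / 7.25.
```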
# G Comparison with Related Works

We compare with the prior work SLiC-HF [45] in this section. SLiC is inspired by [46] and similarly aims to calibrate sequence likelihood to align with human preferences. Given a preference data pair $\{x\to y_w > y_l\}$, the loss function for SLiC is

$$
L_{\theta} := \max\left(0,\ \delta - \log\pi_{\theta}(y_w|x) + \log\pi_{\theta}(y_l|x)\right) - \lambda \log\pi_{\theta}(y_w|x),
$$

where $\delta$ is a hyperparameter that controls the likelihood margin of data, and $\lambda$ controls the regularization weight of the loss (a PyTorch transcription follows the list below). The main differences between our proposed method and the SLiC loss can be summarized as follows:

- Theoretical framework. SLiC is mainly adapted from existing LM calibration methods [46]. In contrast, our method is based on noise contrastive estimation [14, 24].
- Policy regularization. The training process of SLiC is regularized by an additional SFT loss controlled by $\lambda$. In contrast, our proposed method is regularized through the parameterization technique $r_{\theta} = \beta \log \frac{\pi_{\theta}}{\mu}$, controlled by $\beta$.
- Learning target. SLiC directly optimizes the policy model $\pi_{\theta}$, while our method directly optimizes the residual model $r_{\theta}$.
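For completeness, a direct transcription of the SLiC loss above into PyTorch (a sketch under the equation as corrected here, not SLiC-HF's released implementation; `delta` and `lam` values are placeholders):

```python
import torch


def slic_loss(chosen_logps, rejected_logps, delta=1.0, lam=0.1):
    # rank-calibration margin plus SFT-style regularization on the preferred response
    margin = torch.clamp(delta - chosen_logps + rejected_logps, min=0.0)
    return (margin - lam * chosen_logps).mean()
```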
2402.05xxx/2402.05369/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2e32a55de14140030ec2f278c037abaeb095f23685204deb8d69434ade51a71
size 1399543

2402.05xxx/2402.05369/layout.json ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_model.json ADDED
The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05375/1ec26f76-c2d8-43d1-8481-7aa17f504790_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:338ae430dbd7b30bfad849506b9c4fb4eebb6bb7076805c95c4d99e3bd179462
size 11783588

2402.05xxx/2402.05375/full.md ADDED
@@ -0,0 +1,655 @@
| 1 |
+
# GET WHAT YOU WANT, NOT WHAT YOU DON'T: IMAGE CONTENT SUPPRESSION FOR TEXT-TO-IMAGE DIFFUSION MODELS
|
| 2 |
+
|
| 3 |
+
Senmao Li $^{1}$ , Joost van de Weijer $^{2}$ , Taihang Hu $^{1}$ , Fahad Shahbaz Khan $^{3,4}$ , Qibin Hou $^{1}$ , Yaxing Wang $^{1*}$ , Jian Yang $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ VCIP, CS, Nankai University, $^{2}$ Universitat Autònoma de Barcelona
|
| 6 |
+
$^{3}$ Mohamed bin Zayed University of AI, $^{4}$ Linkoping University
|
| 7 |
+
{senmaonk, Hutaihang00}@gmail.com, joost@cvc.uab.es
|
| 8 |
+
fahad.khan@liu.se, {houqb, yaxing, csjyang}@nankai.edu.cn
|
| 9 |
+
|
| 10 |
+
# ABSTRACT
|
| 11 |
+
|
| 12 |
+
The success of recent text-to-image diffusion models is largely due to their capacity to be guided by a complex text prompt, which enables users to precisely describe the desired content. However, these models struggle to effectively suppress the generation of undesired content, which is explicitly requested to be omitted from the generated image in the prompt. In this paper, we analyze how to manipulate the text embeddings and remove unwanted content from them. We introduce two contributions, which we refer to as soft-weighted regularization and inference-time text embedding optimization. The first regularizes the text embedding matrix and effectively suppresses the undesired content. The second further suppresses the unwanted content generation of the prompt and encourages the generation of desired content. We evaluate our method quantitatively and qualitatively in extensive experiments, validating its effectiveness. Furthermore, our method generalizes to both pixel-space diffusion models (i.e., DeepFloyd-IF) and latent-space diffusion models (i.e., Stable Diffusion).
|
| 13 |
+
|
| 14 |
+
# 1 INTRODUCTION
|
| 15 |
+
|
| 16 |
+
Text-based image generation aims to generate high-quality images based on a user prompt (Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2021). This prompt is used by the user to communicate the desired content, which we call the positive target, and can potentially also include undesired content, which we define with the term negative target. Negation is ubiquitous in human discourse and is crucial for precisely communicating the desired image content.
|
| 17 |
+
|
| 18 |
+
However, existing text-to-image models can encounter challenges in effectively suppressing the generation of the negative target. For example, when requesting an image using the prompt "a face without glasses", the diffusion models (i.e., SD) synthesize the subject without "glasses", as shown in Fig. 1 (the first column). However, when using the prompt "a man without glasses", both the SD and DeepFloyd-IF models still generate the subject with "glasses" $^{1}$, as shown in Fig. 1 (the second and fifth columns). Fig. 1 (the last column) quantitatively shows that SD has a DetScore of 0.819 for "glasses" over 1000 randomly generated images, indicating a very common failure case in diffusion models. Also, when given the prompt "a man", the glasses are often included, see Fig. 1 (the third and sixth columns). This is partially due to the fact that many of the collected man training images contain glasses but often lack the glasses label (see Appendix A, Fig. 9).
|
| 19 |
+
|
| 20 |
+
Few works have addressed the aforementioned problem. The negative prompt technique<sup>2</sup> guides a diffusion model to exclude specific elements or features from the generated image. It, however, often
|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
Figure 1: Failure cases of Stable Diffusion (SD) and DeepFloyd-IF. Given the prompt "A man without glasses", both SD and DeepFloyd-IF fail to suppress the generation of the negative target glasses. Our method successfully removes the "glasses". (Right) We use DetScore (see Sec. 4) to detect "glasses" in 1000 generated images. The DetScore of SD with the prompt "A face without glasses" is 0.122. See Appendix E for additional examples.
|
| 26 |
+
|
![](images) [Figure 1 panels: qualitative examples; the rightmost panel is a bar chart of DetScore with "glasses"]
| 49 |
+
|
| 50 |
+
leads to an unexpected impact on other aspects of the image, such as changes to its structure and style (see Fig. 6). Both P2P (Hertz et al., 2022) and SEGA (Brack et al., 2023) allow steering the diffusion process along several directions, such as weakening a target object in the generated image. We empirically observe these methods to lead to inferior performance (see Fig. 6 and Table 1 below). This is expected since they are not tailored to this problem. Recent works (Gandikota et al., 2023; Kumari et al., 2023; Zhang et al., 2023) fine-tune the SD model to completely eliminate some targeted object information, resulting in catastrophic neglect (Kumari et al., 2022). A drawback of these methods is that the model is unable to generate this content in future text prompts. Finally, Inst-Inpaint (Yildirim et al., 2023) requires paired images to train a model to erase unwanted pixels.
|
| 51 |
+
|
| 52 |
+
In this work, we propose an alternative approach for negative target content suppression. Our method does not require fine-tuning the image generator, or collecting paired images. It consists of two main steps. In the first step, we aim to remove this information from the text embeddings<sup>3</sup> which decide what particular visual content is generated. To suppress the negative target generation, we eliminate its information from the whole text embeddings. We construct a text embedding matrix, which consists of both the negative target and [EOT] embeddings. We then propose a soft-weighted regularization for this matrix, which explicitly suppresses the corresponding negative target information from the [EOT] embeddings. In the second step, to further improve results, we apply inference-time text embedding optimization which consists of optimizing the whole embeddings (processed in the first step) with respect to two losses. The first loss, called negative target prompt suppression, weakens the attention maps of the negative target further suppressing negative target generation. This may lead to the unexpected suppression of the positive target (see Appendix D. Fig. 13 (the third row)). To overcome this, we propose a positive target prompt preservation loss that strengthens the attention maps of the positive target. Finally, the combination of our proposed regularization of the text embedding matrix and the inference-time embedding optimization leads to improved negative target content removal during image generation.
|
| 53 |
+
|
| 54 |
+
In summary, our work makes the following contributions: (I) Our analysis shows that the [EOT] embeddings contain significant, redundant and duplicated semantic information of the whole input prompt (the whole embeddings). This needs to be taken into account when removing negative target information. Therefore, we propose soft-weighted regularization to eliminate the negative target information from the [EOT] embeddings. (II) To further suppress the negative target generation, and encourage the positive target content, we propose inference-time text embedding optimization. Ablation results confirm that this step significantly improves final results. (III) Through extensive experiments, we show the effectiveness of our method to correctly remove the negative target information without detrimental effects on the generation of the positive target content. Our code is available in https://github.com/sen-mao/SuppressEOT.
|
| 55 |
+
|
| 56 |
+
# 2 RELATED WORK
|
| 57 |
+
|
| 58 |
+
Text-to-Image Generation. Text-to-image synthesis aims to synthesize highly realistic images which are semantically consistent with the text descriptions. More recently, text-to-image models
|
| 59 |
+
|
| 60 |
+
(Saharia et al., 2022; Ramesh et al., 2022; Rombach et al., 2021) have obtained impressive performance in image generation. With powerful image generation capabilities, diffusion models allow users to provide a text prompt and generate images of unprecedented quality. Furthermore, a series of recent works investigated knowledge transfer on diffusion models (Kawar et al., 2022; Ruiz et al., 2022; Valevski et al., 2022; Kumari et al., 2022) with one or a few images. In this paper, we focus on the Stable Diffusion (SD) model without finetuning, and address the failure case in which the generated subjects do not correspond to the input text prompts.
|
| 61 |
+
|
| 62 |
+
Diffusion-Based Image Generation. Most recent works explore the ability to control or edit a generated image with extra conditional information in addition to text. This includes label-to-image, layout-to-image, and (reference) image-to-image generation. Specifically, label-to-image translation (Avrahami et al., 2022a;b; Nichol et al., 2021) aims to synthesize highly realistic images conditioned on semantic segmentation information as well as text. P2P (Hertz et al., 2022) proposes a mask-free editing method. Similar to label-to-image translation, both layout-to-image (Li et al., 2023b; Zhang & Agrawala, 2023) and (reference) image-to-image (Brooks et al., 2022; Parmar et al., 2023) generation aim to learn a mapping from an input image map to the output image. GLIGEN (Li et al., 2023b) boosts the controllability of the generated image by inserting bounding boxes with object categories. Some works investigate diffusion-based inversion: (Dhariwal & Nichol, 2021) shows that a given real image can be reconstructed by DDIM (Song et al., 2020) sampling, and recent works investigate either the text embeddings of the conditional input (Gal et al., 2022; Li et al., 2023a; Wang et al., 2023) or the null-text optimization of the unconditional input (i.e., Null-Text Inversion (Mokady et al., 2022)).
|
| 63 |
+
|
| 64 |
+
Diffusion-Based Semantic Erasure. Current approaches (Gandikota et al., 2023; Kumari et al., 2023; Zhang et al., 2023) have noted the importance of erasure, including the erasure of copyright, artistic style, nudity, etc. ESD (Gandikota et al., 2023) utilizes negative guidance to steer the fine-tuning of a pre-trained model, aiming to obtain a model that erases specific styles or objects. (Kumari et al., 2023) fine-tunes the model using two prompts, with and without erasure terms, such that the model distribution matches the erasure prompt. Inst-Inpaint (Yildirim et al., 2023) is an inpainting framework that trains a diffusion model to map source images to target images conditioned on text prompts. However, these works fine-tune the SD model, which can result in catastrophic neglect, i.e., unexpected suppression of content from the input prompt. In this paper, we aim to remove unwanted subjects from output images without further training or fine-tuning the SD model.
|
| 65 |
+
|
| 66 |
+
# 3 METHOD
|
| 67 |
+
|
| 68 |
+
We aim to suppress the negative target generation in diffusion models. To achieve this goal, we focus on manipulating the text embeddings, which essentially control the subject generation. Naively eliminating a target text embedding fails to exclude the corresponding object from the output (Fig. 2a (the second and third columns)). We conduct a comprehensive analysis that shows this failure is caused by the appended [EOT] embeddings (see Sec. 3.2). Our method consists of two main steps. In the first step, we propose soft-weighted regularization to largely reduce the negative target text information from the [EOT] embeddings (Sec. 3.3). In the second step, we apply inference-time text embedding optimization which consists of optimizing the whole text embeddings (processed in the first step) with respect to two losses. The first loss, called the negative target prompt suppression loss, aims to weaken the attention map of the negative target to guide the update of the whole text embeddings, thus further suppressing the subject generation of the negative target. To prevent undesired side effects, namely the unexpected suppression from the positive target in the output (see Appendix D. Fig. 13 (the third row)), we propose the positive target prompt preservation loss. This strengthens the attention map of the positive target. The inference-time text embedding optimization is presented in Sec. 3.4. In Sec. 3.1, we provide a simple introduction to the SD model, although our method is not limited to a specific diffusion model.
|
| 69 |
+
|
| 70 |
+
# 3.1 PRELIMINARY: DIFFUSION MODEL
|
| 71 |
+
|
| 72 |
+
SD first trains an encoder $E$ and a decoder $D$. The encoder maps the image $x$ into the latent representation $z_0 = E(x)$, and the decoder maps the latent representation $z_0$ back to the image
|
| 73 |
+
|
| 74 |
+

|
| 75 |
+
Figure 2: Analysis of [EOT] embeddings. (a) [EOT] embeddings contain significant information as can be seen when zeroed out. (b) when performing WNNM (Gu et al., 2014), we find that [EOT] embeddings have redundant semantic information. (c) distance matrix between all text embeddings. Note that each [EOT] embedding contains similar semantic information and they have near zero distance.
|
| 76 |
+
|
| 77 |
+
$\hat{\pmb{x}} = D(\pmb{z}_0)$ . SD trains a UNet-based denoiser network $\epsilon_{\theta}$ to predict noise $\pmb{\epsilon}$ , following the objective:
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
\min_{\theta} \; \mathbb{E}_{\boldsymbol{z}_0, \epsilon \sim \mathcal{N}(0, I), t \sim [1, T]} \left\| \epsilon - \epsilon_{\theta}(\boldsymbol{z}_t, t, \boldsymbol{c}) \right\|_2^2, \tag{1}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
where the encoded text embeddings $c$ are extracted by a pre-trained CLIP text encoder $\Gamma$ given a conditioning prompt $p$: $c = \Gamma(p)$; $z_{t}$ is a noisy sample at timestep $t \sim [1, T]$, and $T$ is the number of timesteps. The SD model introduces cross-attention layers to incorporate the prompt. We can extract the internal cross-attention maps $A$, which are high-dimensional tensors that bind pixels to tokens extracted from the prompt text.
|
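As a concrete illustration of Eq. (1), the following is a minimal PyTorch sketch of one training step; `add_noise` stands in for the forward process $q(z_t \mid z_0)$, whose noise schedule is omitted here, and all names are illustrative assumptions.

```python
import torch

def diffusion_training_loss(eps_theta, add_noise, z0, c, T=1000):
    # Sample a timestep and Gaussian noise, corrupt the latent, and
    # regress the denoiser's prediction onto the true noise (Eq. 1).
    t = torch.randint(1, T + 1, (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    z_t = add_noise(z0, eps, t)  # forward process q(z_t | z_0); schedule omitted
    return ((eps - eps_theta(z_t, t, c)) ** 2).mean()
```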
| 84 |
+
|
| 85 |
+
# 3.2 ANALYSIS OF [EOT] EMBEDDINGS
|
| 86 |
+
|
| 87 |
+
The text encoder $\Gamma$ maps input prompt $p$ into text embeddings $c = \Gamma(p) \in \mathbb{R}^{M \times N}$ (i.e., $M = 768$, $N = 77$ in the SD model). This works by prepending a Start of Text ([SOT]) symbol to the input prompt $p$ and appending $N - |p| - 1$ End of Text ([EOT]) padding symbols at the end, to obtain $N$ symbols in total. We define text embeddings $c = \{c^{SOT}, c_0^P, \dots, c_{|p|-1}^P, c_{0}^{EOT}, \dots, c_{N-|p|-2}^{EOT}\}$. Below, we explore several aspects of the [EOT] embeddings.
|
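The padding structure can be inspected directly with the Hugging Face `transformers` CLIP text encoder used by SD v1.x; this is a minimal sketch, and the checkpoint name is the commonly used one, assumed here for illustration.

```python
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a man without glasses"
# Pad to the fixed length N=77: [SOT], the |p| prompt tokens, then [EOT] padding.
tokens = tokenizer(prompt, padding="max_length", max_length=77,
                   return_tensors="pt")
c = text_encoder(tokens.input_ids).last_hidden_state  # shape (1, 77, 768)
```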
| 88 |
+
|
| 89 |
+
What semantic information do [EOT] embeddings contain? We observe that [EOT] embeddings carry significant semantic information. For example, when requesting an image with the prompt "a man without glasses", SD synthesizes the subject including the negative target "glasses" (Fig. 2a (the first column)). When zeroing out the token embedding of "glasses" from the text embeddings $c$, SD fails to discard "glasses" (Fig. 2a (the second and third columns)). Similarly, zeroing out all [EOT] embeddings still generates the "glasses" subject (Fig. 2a (the fourth and fifth columns)). Finally, when zeroing out both the "glasses" and [EOT] token embeddings, we successfully remove "glasses" from the generated image (Fig. 2a (the sixth and seventh columns)). These results suggest that the [EOT] embeddings contain significant information about the input prompt. Note that naively zeroing them out often leads to unexpected changes (Fig. 2a (the seventh column)).
|
| 90 |
+
|
| 91 |
+
How much information do the whole [EOT] embeddings contain? We experimentally observe that the [EOT] embeddings have a low-rank property $^{4}$, indicating that they contain redundant semantic information. The weighted nuclear norm minimization (WNNM) (Gu et al., 2014) is an effective
|
| 92 |
+
|
| 93 |
+

|
| 94 |
+
Figure 3: Overview of the proposed method. (a) We devise a negative target embedding matrix $\chi$ : $\chi = [c^{NE}, c_{0}^{EOT}, \dots, c_{N - |p| - 2}^{EOT}]$ . We perform SVD for the embedding matrix $\chi = U\Sigma V^T$ . We introduce a soft-weight regularization for each largest eigenvalue. Then we recover the embedding matrix $\hat{\chi} = U\hat{\Sigma} V^T$ . (b) We propose inference-time text embedding optimization (ITO). We align the attention maps of both $c^{PE}$ and $\hat{c}^{PE}$ , and widen the ones of both $c^{NE}$ and $\hat{c}^{NE}$ .
|
| 95 |
+
|
| 96 |
+
low-rank analysis method. We leverage WNNM to analyze the [EOT] embeddings. Specifically, we construct an [EOT] embedding matrix $\Psi = [c_0^{EOT}, c_1^{EOT}, \dots, c_{N - |\pmb{p}| - 2}^{EOT}]$, and perform WNNM as follows: $\mathcal{D}_w(\Psi) = \mathbf{U}\mathcal{D}_w(\Sigma)\mathbf{V}^T$, where $\Psi = \mathbf{U}\Sigma \mathbf{V}^T$ is the Singular Value Decomposition (SVD) of $\Psi$, and $\mathcal{D}_w(\Sigma)$ is the generalized soft-thresholding operator with the weight vector $\pmb{w}$, i.e., $\mathcal{D}_w(\Sigma)_{ii} = \mathrm{soft}(\Sigma_{ii}, w_i) = \max (\Sigma_{ii} - w_i, 0)$. The singular values satisfy $\sigma_0 \geq \dots \geq \sigma_{N - |\pmb{p}| - 2}$ and the weights satisfy $0 \leq w_0 \leq \dots \leq w_{N - |\pmb{p}| - 2}$.
|
| 97 |
+
|
| 98 |
+
To verify the low-rank property of the [EOT] embeddings, WNNM keeps the top-$K$ largest singular values of $\Sigma$, zeros out the small singular values, and finally reconstructs $\hat{\Psi} = \left[\hat{c}_0^{EOT}, \hat{c}_1^{EOT}, \dots, \hat{c}_{N - |p| - 2}^{EOT}\right]$. We use $\mathrm{Rank}(\hat{\Psi})$ to denote the rank of $\hat{\Psi}$, and explore the impact of different $\mathrm{Rank}(\hat{\Psi})$ values on the generated image. For example, as shown in Fig. 2b, with the prompt "White and black long coated puppy" (here $|p| = 6$), we use PSNR and SSIM metrics to compare the modified image against the SD model's output. Setting $\mathrm{Rank}(\hat{\Psi}) = 0$, i.e., zeroing all [EOT] embeddings, the generated image preserves similar semantic information as when using all [EOT] embeddings. As $\mathrm{Rank}(\hat{\Psi})$ increases, the generated image gets closer to the SD model's output. Visually, the generated image already looks similar to that of the SD model with $\mathrm{Rank}(\hat{\Psi}) = 4$, and acceptable metric values (PSNR = 40.288, SSIM = 0.994) are achieved with $\mathrm{Rank}(\hat{\Psi}) = 9$ (Fig. 2b (middle)). These results indicate that the [EOT] embeddings have a low-rank property and contain redundant semantic information.
|
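A minimal NumPy sketch of this rank-truncation probe (names are illustrative; the full WNNM operator additionally soft-thresholds each singular value by a weight $w_i$):

```python
import numpy as np

def truncate_rank(Psi: np.ndarray, k: int) -> np.ndarray:
    # Keep only the top-k singular values of the [EOT] embedding matrix
    # and reconstruct, i.e., the hard-thresholding special case of WNNM.
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    s[k:] = 0.0
    return (U * s) @ Vt  # broadcasting scales the columns of U by s
```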
| 99 |
+
|
| 100 |
+
Semantic alignment of the [EOT] embeddings. There are a total of $76 - |p|$ [EOT] embeddings. However, we find that the various [EOT] embeddings are highly correlated, and they typically contain the semantic information of the input prompt. This phenomenon is demonstrated both qualitatively and quantitatively in Fig. 2c. For example, we input the prompt "A man with a beard wearing glasses and a beanie in a blue shirt". We randomly select one [EOT] embedding to replace the input text embeddings, as in Fig. 2c (left) $^{5}$. The generated images have similar semantic information (Fig. 2c (right)). This conclusion is also supported by the pairwise distances between the [EOT] embeddings (Fig. 2c (middle)): most [EOT] embeddings have near-zero distance to one another. In conclusion, we need to remove the negative target information from all $76 - |p|$ [EOT] embeddings.
|
| 101 |
+
|
| 102 |
+
# 3.3 TEXT EMBEDDING-BASED SEMANTIC SUPPRESSION
|
| 103 |
+
|
| 104 |
+
Our goal is to suppress negative target information during image generation. Based on the aforementioned analysis, we must eliminate the negative target information from the [EOT] embeddings. To achieve this goal, we introduce two strategies, which we refer to as soft-weighted regularization and inference-time text embedding optimization. For the former, we devise a negative target embedding matrix, and propose a new method to regularize the negative target information. The
|
| 105 |
+
|
| 106 |
+
inference-time text embedding optimization aims to further suppress the negative target generation of the target prompt, and encourages the generation of the positive target. We give an overview of the two strategies in Fig. 3.
|
| 107 |
+
|
| 108 |
+
Soft-weighted Regularization. We propose to use Singular Value Decomposition (SVD) to extract the negative target information (e.g., glasses) from the text embeddings. Let $c = \{c^{SOT}, c_0^P, \dots, c_{|p|-1}^P, c_0^{EOT}, \dots, c_{N-|p|-2}^{EOT}\}$ be the text embeddings from the CLIP text encoder. As shown in Fig. 3 (left), we split the embeddings $c_i^P (i = 0,1,\dots,|p|-1)$ into the negative target embedding set $c^{NE}$ and the positive target embedding set $c^{PE}$. Thus we have $c = \{c^{SOT}, c^{PE}, c^{NE}, c_0^{EOT}, \dots, c_{N-|p|-2}^{EOT}\}$. We construct a negative target embedding matrix $\chi$: $\chi = [c^{NE}, c_0^{EOT}, \dots, c_{N-|p|-2}^{EOT}]$. We perform SVD: $\chi = U\Sigma V^T$, where $\Sigma = \text{diag}(\sigma_0, \sigma_1, \dots, \sigma_{n_0})$ with singular values $\sigma_0 \geq \sigma_1 \geq \dots \geq \sigma_{n_0}$ and $n_0 = \min(M, N-|p|-1)$. Intuitively, the matrix $\chi$ mainly contains the information we expect to suppress. After performing SVD, we assume that the largest singular values correspond to the suppressed information (the negative target). Then, to suppress the negative target information, we introduce soft-weighted regularization for each singular value<sup>6</sup>:
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
\hat{\sigma} = e^{-\sigma} \cdot \sigma. \tag{2}
|
| 112 |
+
$$
|
| 113 |
+
|
| 114 |
+
We then recover the embedding matrix $\hat{\chi} = \pmb{U}\hat{\Sigma}\pmb{V}^T$ , here $\hat{\Sigma} = \text{diag}(\hat{\sigma}_0, \hat{\sigma}_1, \dots, \hat{\sigma}_{n_0})$ . Note that the recovered structure is $\hat{\chi} = \left[\hat{c}^{NE}, \hat{c}_{0}^{EOT}, \dots, \hat{c}_{N - |\pmb{p}| - 2}^{EOT}\right]$ , and $\hat{c} = \{c^{SOT}, c^{PE}, \hat{c}^{NE}, \hat{c}_{0}^{EOT}, \dots, \hat{c}_{N - |\pmb{p}| - 2}^{EOT}\}$ .
|
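A minimal NumPy sketch of soft-weighted regularization (Eq. 2); the function name is illustrative:

```python
import numpy as np

def soft_weighted_regularization(chi: np.ndarray) -> np.ndarray:
    # chi: negative target embedding matrix [c^NE, c_0^EOT, ..., c_{N-|p|-2}^EOT].
    U, s, Vt = np.linalg.svd(chi, full_matrices=False)
    s_hat = np.exp(-s) * s   # Eq. (2): the largest singular values are damped most
    return (U * s_hat) @ Vt  # recovered matrix chi_hat
```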
| 115 |
+
|
| 116 |
+
We consider a special case where we reset the top-K or bottom-K singular values to 0. As shown in Fig. 4, we are able to remove the negative target prompt (e.g., glasses or beard) when setting the top-K (here, $\mathrm{K} = 2$) singular values to 0, while the negative target prompt information is preserved when the bottom-K singular values are set to 0 (here, $\mathrm{K} = 70$). This supports our assumption that the main singular values of $\chi$ correspond to the negative target information.
|
| 117 |
+
|
| 118 |
+
# 3.4 INFERENCE-TIME TEXT EMBEDDING OPTIMIZATION
|
| 119 |
+
|
| 120 |
+
As illustrated in Fig. 3 (right), at a specific timestep $t$ during the diffusion process $T \rightarrow 1$, we obtain the diffusion network output $\epsilon_{\theta}(\widetilde{z}_t, t, c)$ and the corresponding attention maps $(\boldsymbol{A}_t^{PE}, \boldsymbol{A}_t^{NE})$, where $c = \{c^{SOT}, c^{PE}, c^{NE}, c_{0}^{EOT}, \dots, c_{N - |p| - 2}^{EOT}\}$. The attention maps $\boldsymbol{A}_t^{PE}$ correspond to $c^{PE}$, while $\boldsymbol{A}_t^{NE}$ correspond to $c^{NE}$, which we aim to suppress. After soft-weighted regularization, we have the new text embeddings $\hat{c} = \{c^{SOT}, c^{PE}, \hat{c}^{NE}, \hat{c}_{0}^{EOT}, \dots, \hat{c}_{N - |p| - 2}^{EOT}\}$. Similarly, we obtain the attention maps $(\hat{\boldsymbol{A}}_t^{PE}, \hat{\boldsymbol{A}}_t^{NE})$.
|
| 121 |
+
|
| 122 |
+
Here, we aim to further suppress the negative target generation and encourage the positive target information. We propose two attention losses to regularize the attention maps, and modify the text embeddings $\hat{c}$ to guide the attention maps to focus on the particular regions corresponding to the positive target prompt. We introduce a positive target prompt preservation loss:
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
\mathcal{L}_{pl} = \left\| \hat{\boldsymbol{A}}_{t}^{PE} - \boldsymbol{A}_{t}^{PE} \right\|^{2}. \tag{3}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
That is, the loss attempts to strengthen the attention maps of the positive target prompt at the timestep $t$ . To further suppress generation for the negative target prompt, we propose the negative target prompt suppression loss:
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
\mathcal{L}_{nl} = -\left\| \hat{\boldsymbol{A}}_{t}^{NE} - \boldsymbol{A}_{t}^{NE} \right\|^{2}. \tag{4}
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
Full objective. The full objective function of our model is:
|
| 135 |
+
|
| 136 |
+
$$
|
| 137 |
+
\mathcal{L} = \lambda_{pl} \mathcal{L}_{pl} + \lambda_{nl} \mathcal{L}_{nl}, \tag{5}
|
| 138 |
+
$$
|
| 139 |
+
|
| 140 |
+
Table 1: Comparison with baselines. The best results are in bold, and the second best results are underlined.
|
| 141 |
+
|
| 142 |
+
<table><tr><td rowspan="3">Method</td><td colspan="3">Real-image editing</td><td colspan="10">Generated-image editing</td></tr><tr><td colspan="3">Random negative target</td><td colspan="3">Random negative target</td><td colspan="3">Negative target: Car</td><td colspan="3">Negative target: Tyler Edlin</td><td>Negative target: Van Gogh</td></tr><tr><td>Clipscore↓</td><td>IFID↑</td><td>DetScore↓</td><td>Clipscore↓</td><td>IFID↑</td><td>DetScore↓</td><td>Clipscore↓</td><td>IFID↑</td><td>DetScore↓</td><td>Clipscore↓</td><td>IFID↑</td><td>DetScore↓</td><td>IFID↑</td></tr><tr><td>Real image or SD (Generated image)</td><td>0.7986</td><td>0</td><td>0.3381</td><td>0.8225</td><td>0</td><td>0.4509</td><td>0.8654</td><td>0</td><td>0.6643</td><td>0.7414</td><td>0</td><td>0.8770</td><td>0</td></tr><tr><td>Negative prompt</td><td>0.7983</td><td>175.8</td><td>0.2402</td><td>0.7619</td><td>169.0</td><td>0.1408</td><td>0.8458</td><td>151.7</td><td>0.5130</td><td>0.7437</td><td>233.9</td><td>0.8039</td><td>242.1</td></tr><tr><td>P2P (Hertz et al., 2022)</td><td>0.7666</td><td>92.53</td><td>0.1758</td><td>0.8118</td><td>103.3</td><td>0.3391</td><td>0.8638</td><td>21.7</td><td>0.6343</td><td>0.7470</td><td>86.3</td><td>0.8849</td><td>139.7</td></tr><tr><td>ESD (Gandikota et al., 2023)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.7986</td><td>165.7</td><td>0.2223</td><td>0.6954</td><td>256.5</td><td>0.7292</td><td>267.5</td></tr><tr><td>Concept-ablation (Kumari et al., 2023)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.7642</td><td>179.3</td><td>0.0935</td><td>0.7411</td><td>211.4</td><td>0.8290</td><td>219.9</td></tr><tr><td>Forget-Me-Not (Zhang et al., 2023)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.8701</td><td>158.7</td><td>0.5867</td><td>0.7495</td><td>227.9</td><td>0.8391</td><td>203.5</td></tr><tr><td>Inst-Inpaint (Yildirim et al., 2023)</td><td>0.7327</td><td>135.5</td><td>0.1125</td><td>0.7602</td><td>150.4</td><td>0.1744</td><td>0.8009</td><td>126.9</td><td>0.2361</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SEGA (Brack et al., 2023)</td><td>-</td><td>-</td><td>-</td><td>0.7960</td><td>172.2</td><td>0.3005</td><td>0.8001</td><td>168.8</td><td>0.4767</td><td>0.7678</td><td>209.9</td><td>0.8730</td><td>175.0</td></tr><tr><td>Ours</td><td>0.6857</td><td>166.3</td><td>0.0384</td><td>0.6647</td><td>176.4</td><td>0.1321</td><td>0.7426</td><td>206.8</td><td>0.0419</td><td>0.7402</td><td>217.7</td><td>0.6448</td><td>307.5</td></tr></table>
|
| 143 |
+
|
| 144 |
+
where $\lambda_{pl} = 1$ and $\lambda_{nl} = 0.5$ are used to balance the effect of preservation and suppression. We use this loss to update the text embeddings $\hat{c}$ .
|
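A minimal PyTorch sketch of one inner optimization iteration (names and the learning rate are illustrative assumptions; in the full method, the attention maps are recomputed through the denoiser at each iteration so that `attn_pe_hat`/`attn_ne_hat` depend on `c_hat`):

```python
import torch

def ito_step(attn_pe, attn_ne, attn_pe_hat, attn_ne_hat, c_hat,
             lr=0.1, lam_pl=1.0, lam_nl=0.5):
    # attn_*: cross-attention maps for the positive (PE) / negative (NE)
    # target tokens under the original (c) and regularized (c_hat) embeddings.
    loss_pl = ((attn_pe_hat - attn_pe) ** 2).sum()   # Eq. (3): preserve positive
    loss_nl = -((attn_ne_hat - attn_ne) ** 2).sum()  # Eq. (4): suppress negative
    loss = lam_pl * loss_pl + lam_nl * loss_nl       # Eq. (5)
    grad, = torch.autograd.grad(loss, c_hat)
    return (c_hat - lr * grad).detach().requires_grad_(True)
```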
| 145 |
+
|
| 146 |
+
For real image editing, we first utilize the text embeddings $c$ to apply Null-Text (Mokady et al., 2022) inversion, mapping a given real image into a latent representation. Then we use the proposed soft-weighted regularization to suppress the negative target information in $c$, resulting in $\hat{c}$. Next, we apply inference-time text embedding optimization to update $\hat{c}_t$ during inference, resulting in the final edited image. Our full algorithm is presented in Algorithm 1. See Appendix C for more details on negative target suppression when generating with the SD model without a reference real image.
|
| 147 |
+
|
| 148 |
+
# Algorithm 1: Our algorithm
|
| 149 |
+
|
| 150 |
+
Input: Text embeddings $c = \Gamma(p)$ and a real image $\mathcal{I}$.
|
| 151 |
+
Output: Edited image $\hat{\mathcal{I}}$ .
|
| 152 |
+
|
| 153 |
+
$\widetilde{z}_T = \mathrm{Inversion}(E(\mathcal{I}), c)$; // e.g., Null-text inversion
$\hat{c} \gets \mathrm{SWR}(c)$ (Eq. 2); // soft-weighted regularization
|
| 154 |
+
for $t = T,T - 1,\dots ,1$ do
|
| 155 |
+
$\hat{c}_t = \hat{c};$
|
| 156 |
+
// ITO
|
| 157 |
+
for ite $= 0,\ldots ,9$ do
|
| 158 |
+
$A_t^{PE}, A_t^{NE} \gets \epsilon_{\theta}(\widetilde{z}_t, t, c)$;
$\hat{A}_t^{PE}, \hat{A}_t^{NE} \gets \epsilon_{\theta}(\widetilde{z}_t, t, \hat{c}_t)$;
$\mathcal{L} \gets \lambda_{pl}\mathcal{L}_{pl} + \lambda_{nl}\mathcal{L}_{nl}$ (Eqs. 3-5);
$\hat{c}_t \gets \hat{c}_t - \eta \nabla_{\hat{c}_t}\mathcal{L}$;
|
| 159 |
+
end
|
| 160 |
+
$\widetilde{z}_{t-1}, \_, \_ \gets \epsilon_{\theta}(\widetilde{z}_t, t, \hat{c}_t)$
|
| 161 |
+
end
|
| 162 |
+
|
| 163 |
+
Return Edited image $\hat{\mathcal{I}} = D(\tilde{z}_0)$
|
| 164 |
+
![](images) [Figure 4 panels]
Figure 4: Effect of resetting top-$K$ or bottom-$K$ singular values to 0. Main singular values correspond to the target information that we expect to be suppressed.
![](images) [additional figure panels]
| 182 |
+
# 4 EXPERIMENTS
|
| 183 |
+
|
| 184 |
+
Baseline Implementations. We compare with the following baselines: Negative prompt, ESD (Gandikota et al., 2023), Concept-ablation (Kumari et al., 2023), Forget-Me-Not (Zhang et al., 2023), Inst-Inpaint (Yildirim et al., 2023) and SEGA (Brack et al., 2023). We use P2P (Hertz et al., 2022) with Attention Re-weighting.
|
| 185 |
+
|
| 186 |
+
Evaluation datasets. We evaluate the proposed method from two perspectives: generated-image editing and real-image editing. The former suppresses the negative target in an image generated by the SD model from a text prompt, while the latter edits a real input image given a text prompt. Similar to recent editing-related works (Mokady et al., 2022; Gandikota et al., 2023; Patashnik et al., 2023), we use nearly 100 images for evaluation. For generated-image negative target suppression, we randomly select 100 captions provided in COCO's validation set (Chen et al., 2015) as prompts. The Tyler Edlin and Van Gogh related data (prompts and seeds) are obtained from the official code of ESD (Gandikota et al., 2023). For real-image negative target suppression, we randomly select 100 images and their corresponding prompts from the Unsplash and COCO datasets. We also evaluate our approach on the GQA-Inpaint dataset, which contains 18,883 unique source-target-prompt pairs for testing. See Appendix A for more details on experiments involving this dataset. We show the optimization details and more results in Appendices A, D and E, respectively.
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
Figure 5: (Left) We detect the negative target in the edited images and show the DetScore below. (Middle) Real image negative target suppression results. Inst-Inpaint fills the erased area with unrealistic pixels (the red dotted line frame); our method exploits surrounding content information. (Right) User study.
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
Figure 6: Real image (Left) and generated image (Middle and Right) negative target suppression results. (Middle) We are able to suppress the negative target, without further finetuning the SD model. (Right) Examples of negative target suppression.
|
| 193 |
+
|
| 194 |
+
Metrics. Clipscore (Hessel et al., 2021) evaluates the quality of a pair consisting of a negative prompt and an edited image. We also employ the widely used Fréchet Inception Distance (FID) (Heusel et al., 2017). To evaluate the suppression of the target prompt information after editing, we use the inverted FID (IFID), which measures the similarity between two sets; for this metric, larger is better. We also propose the DetScore metric, based on MMDetection (Chen et al., 2019) with GLIP (Li et al., 2022): we detect the negative target object in the edited image, so successful editing should lead to a low DetScore (see Fig. 5 and Appendix A for more detail). Following Inst-Inpaint (Yildirim et al., 2023), we use FID and CLIP Accuracy to evaluate the accuracy of the removal operation on the GQA-Inpaint dataset.
|
| 195 |
+
|
| 196 |
+
For real-image negative target suppression, as reported in Table 1, we achieve the best scores in both Clipscore and DetScore (the second and fourth columns), and a comparable result in IFID.
|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
Figure 7: Additional applications. Our method can be applied to image restoration tasks, such as shadow, crack, and rain removal. We can also strengthen object generation (columns 6-9).
|
| 200 |
+
|
| 201 |
+
Negative prompt has the best IFID score. However, it often changes the structure and style of the image (Fig. 6 (left, the second row)). In contrast, our method achieves a better balance between preservation and suppression (Fig. 6 (left, the last row)). For generated-image negative target suppression, we have the best performance for both random and specific negative targets, except for removing Tyler Edlin's style, for which ESD obtains the best scores. However, ESD requires finetuning the SD model, resulting in catastrophic neglect. Our advantage is further substantiated by the visualized results (Fig. 6).
|
| 202 |
+
|
| 203 |
+
As shown in Fig. 5 (middle) and Table 2, we achieve superior suppression results and higher CLIP Accuracy scores on the GQA-Inpaint dataset. Inst-Inpaint achieves the best FID score (Table 2 (the third column)) primarily because its results (Fig. 5 (the second row, the sixth column)) closely resemble the ground truth (GT). However, the GT images contain unrealistic pixels. Our method yields more photo-realistic results. These results demonstrate that the proposed method is effective in suppressing the negative target. See Appendix. A for more experimental details.
|
| 204 |
+
|
| 205 |
+
User study. As shown in Fig. 5 (Right), we conduct a user study. We ask users to select the figure in which the negative target is more accurately suppressed. We performed septuplet comparisons (forced choice) with 20 users (20 septuplets per user). The results demonstrate that our method outperforms the other methods. See Appendix E for more details.
|
| 206 |
+
|
| 207 |
+
Ablation analysis. We conduct an ablation study of the proposed approach and report the quantitative results in Table 3. Using soft-weighted regularization (SWR) alone cannot completely remove objects from the image. The results indicate that using both SWR and inference-time text embedding optimization leads to the best scores. The visualized results are presented in Appendix D.
|
| 208 |
+
|
| 209 |
+
Additional applications. As shown in Fig. 7 (the first to the fifth columns), we perform experiments on a variety of image restoration tasks, including shadow removal, crack removal and rain removal. Interestingly, our method can also be used to remove these undesired image artifacts. Instead of extracting the negative target embedding, we can also strengthen the added prompt and [EOT] embeddings. As shown in Fig. 7 (the sixth to the ninth columns), our method can be successfully adapted to strengthen image content, obtaining results similar to methods like GLIGEN (Li et al., 2023b) and Attend-and-Excite (Chefer et al., 2023) (see Appendix F for a complete explanation and more results).
|
| 210 |
+
|
| 211 |
+
Table 2: Quantitative comparison on the GQA-Inpaint dataset for the real image negative target suppression task. Table 3: Ablation study on the effectiveness of both soft-weighted regularization and inference-time text embedding optimization.
|
| 212 |
+
|
| 213 |
+
<table><tr><td>Methods</td><td>Paired data</td><td>FID ↓</td><td>CLIP Acc ↑</td><td>CLIP Acc (top5) ↑</td></tr><tr><td>X-Decoder</td><td>✓</td><td>6.86</td><td>69.9</td><td>46.5</td></tr><tr><td>Inst-Inpaint</td><td>✓</td><td>5.50</td><td>80.5</td><td>60.4</td></tr><tr><td>Ours</td><td>✗</td><td>13.87</td><td>92.8</td><td>83.3</td></tr></table>
|
| 214 |
+
|
| 215 |
+
<table><tr><td></td><td>Clipscore↓</td><td>IFID↑</td><td>DetScore↓</td></tr><tr><td>SD</td><td>0.8225</td><td>0</td><td>0.4509</td></tr><tr><td>SWR</td><td>0.7996</td><td>85.9</td><td>0.3668</td></tr><tr><td>SWR+Lpl</td><td>0.8015</td><td>100.2</td><td>0.3331</td></tr><tr><td>SWR+Lpl+Lnl</td><td>0.6647</td><td>176.4</td><td>0.1321</td></tr></table>
|
| 216 |
+
|
| 217 |
+
# 5 CONCLUSIONS AND LIMITATIONS
|
| 218 |
+
|
| 219 |
+
We observe that diffusion models often fail to suppress the generation of negative target information in the input prompt. We explore the corresponding text embeddings and find that [EOT] embeddings
|
| 220 |
+
|
| 221 |
+
contain significant, redundant and duplicated semantic information. To suppress the generation of negative target information, we make two contributions: soft-weighted regularization and inference-time text embedding optimization. The former suppresses the negative target information in the text embedding matrix. The latter encourages the positive target to be preserved while further removing the negative target information. Limitations: Currently, the test-time optimization takes around half a minute, making the proposed method unfit for applications that require fast results. However, we believe a dedicated engineering effort can cut down this time significantly.
|
| 222 |
+
|
| 223 |
+
# ACKNOWLEDGEMENTS
|
| 224 |
+
|
| 225 |
+
This work was supported by projects TED2021-132513B-I00 and PID2022-143257NB-I00, funded by MCIN/AEI/10.13039/501100011033, by the European Union NextGenerationEU/PRTR and by FEDER. Computation was supported by the Supercomputing Center of Nankai University.
|
| 226 |
+
|
| 227 |
+
# REFERENCES
|
| 228 |
+
|
| 229 |
+
Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. arXiv preprint arXiv:2206.02779, 2022a.
|
| 230 |
+
Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18208-18218, 2022b.
|
| 231 |
+
Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, and Kristian Kersting. Sega: Instructing diffusion using semantic dimensions. arXiv preprint arXiv:2301.12247, 2023.
|
| 232 |
+
Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022.
|
| 233 |
+
Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. arXiv preprint arXiv:2301.13826, 2023.
|
| 234 |
+
Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.
|
| 235 |
+
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
|
| 236 |
+
Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.
|
| 237 |
+
Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022.
|
| 238 |
+
Rohit Gandikota, Joanna Materzyńska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. arXiv preprint arXiv:2303.07345, 2023.
|
| 239 |
+
Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2862-2869, 2014.
|
| 240 |
+
|
| 241 |
+
Ligong Han, Song Wen, Qi Chen, Zhixing Zhang, Kunpeng Song, Mengwei Ren, Ruijiang Gao, Yuxiao Chen, Di Liu, Qilong Zhangli, et al. Improving negative-prompt inversion via proximal guidance. arXiv preprint arXiv:2306.05414, 2023.
|
| 242 |
+
Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
|
| 243 |
+
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. CLIPScore: a reference-free evaluation metric for image captioning. In EMNLP, 2021.
|
| 244 |
+
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.
|
| 245 |
+
Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. ArXiv, abs/2210.09276, 2022.
|
| 246 |
+
Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. arXiv preprint arXiv:2212.04488, 2022.
|
| 247 |
+
Nupur Kumari, Bingliang Zhang, Sheng-Yu Wang, Eli Shechtman, Richard Zhang, and Jun-Yan Zhu. Ablating concepts in text-to-image diffusion models. arXiv preprint arXiv:2303.13516, 2023.
|
| 248 |
+
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10965-10975, 2022.
|
| 249 |
+
Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Stylediffusion: Prompt-embedding inversion for text-based editing. arXiv preprint arXiv:2303.15649, 2023a.
|
| 250 |
+
Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. arXiv preprint arXiv:2301.07093, 2023b.
|
| 251 |
+
Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073, 2021.
|
| 252 |
+
Daiki Miyake, Akihiro Iohara, Yu Saito, and Toshiyuki Tanaka. Negative-prompt inversion: Fast image inversion for editing with text-guided diffusion models. arXiv preprint arXiv:2305.16807, 2023.
|
| 253 |
+
Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. arXiv preprint arXiv:2211.09794, 2022.
|
| 254 |
+
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
|
| 255 |
+
Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027, 2023.
|
| 256 |
+
Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, and Daniel Cohen-Or. Localizing object-level shape variations with text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
|
| 257 |
+
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
|
| 258 |
+
|
| 259 |
+
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.
|
| 260 |
+
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242, 2022.
|
| 261 |
+
Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
|
| 262 |
+
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2020.
|
| 263 |
+
Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1921-1930, 2023.
|
| 264 |
+
Dani Valevski, Matan Kalman, Yossi Matias, and Yaniv Leviathan. Unitune: Text-driven image editing by fine tuning an image generation model on a single image. arXiv preprint arXiv:2210.09477, 2022.
|
| 265 |
+
Kai Wang, Fei Yang, Shiqi Yang, Muhammad Atif Butt, and Joost van de Weijer. Dynamic prompt learning: Addressing cross-attention leakage for text-based image editing. In Proc. NeurIPS, 2023.
|
| 266 |
+
Ahmet Burak Yildirim, Vedat Baday, Erkut Erdem, Aykut Erdem, and Aysegul Dundar. Inst-inpaint: Instructing to remove objects with diffusion models, 2023.
|
| 267 |
+
Yu Zeng, Zhe Lin, Huchuan Lu, and Vishal M Patel. Cr-fill: Generative image inpainting with auxiliary contextual reconstruction. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 14164-14173, 2021.
|
| 268 |
+
Eric Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, and Humphrey Shi. Forget-me-not: Learning to forget in text-to-image diffusion models. arXiv preprint arXiv:2211.08332, 2023.
|
| 269 |
+
Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023.
|
| 270 |
+
|
| 271 |
+
# A APPENDIX: IMPLEMENTATION DETAILS
|
| 272 |
+
|
| 273 |
+
Configuration. We suppress semantic information by optimizing the whole text embeddings at inference time, which takes as little as 35 seconds. No extra network parameters are required in our optimization process. We mainly use the Stable Diffusion v1.4 pre-trained model $^{8}$. All of our experiments are conducted on an RTX 3090 GPU (24GB VRAM).
|
| 274 |
+
|
| 275 |
+
Early stop. Recent works (Hertz et al., 2022; Chefer et al., 2023) demonstrate that the spatial location of each subject is decided in the early steps. We therefore validate our method at different steps of inference. Fig. 8 (left) shows that our method suffers from artifacts after 20 timesteps. In this paper, at inference time we apply the proposed method during timesteps $0 \rightarrow 20$, and for the remaining timesteps we perform the original image generation as done in the SD model.
|
| 276 |
+
|
| 277 |
+
Inner iterations. Fig. 8 (right) shows the generation at different iteration counts within each timestep. We observe that the output images undergo unexpected changes after 10 iterations, so we set the iteration number to 10.
|
| 278 |
+
|
| 279 |
+

|
| 280 |
+
Figure 8: (Left) We stop optimizing at step 20 and keep the original model operating for the rest of the steps. (Right) The synthesized images with different iterations. We observe that we have better performance when setting iteration to 10.
|
| 281 |
+
|
| 282 |
+
Inaccurate labels. Fig. 9 shows that the collected man training images contain glasses, but often do not contain the glasses label.
|
| 283 |
+
|
| 284 |
+
IFID. We use the official FID code to compute the similarity/distance between two distributions of images, namely the edited images and the ground truth (GT) images. This measurement assesses the overall distribution rather than a single image. Ideally, we want to suppress only the content associated with the negative target in the image while leaving the content related to the positive target unaffected. We evaluate the effectiveness of suppression by comparing the FID values of the image dataset before and after suppression; a higher FID indicates more successful suppression (hence the name inverted FID, IFID). However, we experimentally observed that many suppression methods (e.g., Negative prompt) can inadvertently impact the positive target while suppressing the negative target. Therefore, we use IFID as a secondary metric, and Clipscore and DetScore as the primary metrics.
|
| 285 |
+
|
| 286 |
+
DetScore. We introduce the DetScore metric. It uses MMDetection (Chen et al., 2019) with GLIP (Li et al., 2022) to detect the negative prompt object in generated and real images (e.g., the negative prompt object "laptop" in the prompt "A laptop on sofa" in Fig. 5 (left)). We refer to the prediction score as DetScore. We set the prediction score threshold to 0.7, and our method achieves the best value in quantitative evaluation on both generated and real images (see Table 1 (the fourth, seventh and tenth columns)).
|
| 287 |
+
|
| 288 |
+
Generated-image editing experiment details. The proposed method focuses on content suppression based on text embeddings, so we compare with various baselines on different
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
|
| 292 |
+
# This Man
|
| 293 |
+
|
| 294 |
+

|
| 295 |
+
|
| 296 |
+

|
| 297 |
+
|
| 298 |
+
This is not the boy I fell in love with. This is not the boy who would make me laugh until I thought I'd pee, who made a mix tape of his favorite metal songs. This is not the boy I'd stay up all night talking to or 'cruse' with. This is not the boy who first told me he loved me in a bathroom. This is the man I've chosen to share my life with, to raise a family with, to have and to hold, for better or worse, in sickness and in health as long as we both shall live. There are days I'm not sure who this man is, what he is thinking, where we are going. There are days I struggle to recall the boy I fell in love with. I struggle to understand this man, his goals, his vision. Whatever may come-I love this man.
|
| 299 |
+
|
| 300 |
+

|
| 301 |
+
|
| 302 |
+

|
| 303 |
+
|
| 304 |
+

|
| 305 |
+
Oleh SAMSUDIN ADLAWI (143)
|
| 306 |
+
|
| 307 |
+

|
| 308 |
+
Steve Davison Blue
|
| 309 |
+
Figure 9: We find the collected man training images contain glasses, but often do not contain the glasses label.
|
| 310 |
+
![](images) [Figure 9, additional example panels]
![](images) [Figure 10 panels. Top prompts include "Peasant Woman Binding Sheaves by Vincent van Gogh"; bottom prompt: "A car and traffic light on a city street". Columns: SD, Negative prompt, P2P, ESD, Concept-ablation, Forget-Me-Not, Inst-Inpaint, SEGA, Ours]
| 374 |
+
Figure 10: (Top) Comparisons with various baselines for generated images in the style of Van Gogh and Tyler Edlin. (Bottom) Comparisons with various baselines for generated car-related images.
|
| 397 |
+
types of generated images. (1) We compare our method with various baselines for generating images in the styles of Van Gogh and Tyler Edlin (see Fig. 10 (Top) and Table 1 (the eleventh to the fourteenth columns)). The data related to the Van Gogh and Tyler Edlin styles are sourced from the official code of ESD (Gandikota et al., 2023); this dataset comprises 50 prompts for the Van Gogh style and 40 prompts for the Tyler Edlin style. (2) To generate car-related images, we randomly select 50 car-related captions from COCO's validation set as prompts for input into SD. Additionally, we use multiple seeds for the same prompts. We chose to conduct experiments on car-related images because all baselines can effectively erase cars from the images, whereas the removal of other content is not universally supported across all baselines. As shown in Table 1 (the eighth to the tenth columns), our method achieves the best values on the three evaluation metrics compared with all the baselines. Qualitative comparisons to various baselines are presented in Fig. 10 (Bottom). (3) For the other generated images used in the experiments (see Fig. 6 (the fourth to the sixth columns) and Table 1 (the fifth to the seventh columns)), we randomly select 100 captions provided in COCO's validation set (Chen et al., 2015) as prompts for the SD model.
|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
Figure 11: As an example, the instruction used in Inst-Inpaint is "Remove the airplane at the center", while our prompt is "The airplane at the center". GT is obtained using the image inpainting method CRFill (Zeng et al., 2021).
|
| 401 |
+
|
| 402 |
+
GQA-Inpaint dataset experiment details. Inst-Inpaint reports FID and CLIP Accuracy metrics for verification on the GQA-Inpaint dataset. FID compares the distributions of ground truth (GT) images and generated images to assess the quality of images produced by a generative model. In the evaluation of Inst-Inpaint, the target image from the GQA-Inpaint dataset serves as the ground truth image when calculating FID. In Table 2 (the third column), Inst-Inpaint achieves the best FID score on the GQA-Inpaint dataset, primarily because the erasure results produced by Inst-Inpaint (Fig. 5 (the second row, the sixth column)) closely resemble the GT images (Fig. 5 (the second row, the fifth column)). Inst-Inpaint introduces CLIP Accuracy as a metric to assess the accuracy of the removal operation. For CLIP Accuracy, we use the official implementation of Inst-Inpaint. Inst-Inpaint uses CLIP as a zero-shot classifier to predict the semantic labels of image regions based on bounding boxes. It compares the Top1 and Top5 predictions between the source image and the inpainted image, counting a success when the source image class is not in the Top1 and Top5 predictions of the inpainted image. CLIP Accuracy is defined as the percentage of successes. In Table 2 (the fourth and fifth columns), ours achieves the highest CLIP Accuracy scores for both Top1 and Top5 predictions on the GQA-Inpaint dataset. This result indicates the superior accuracy of our removal process.
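As a minimal sketch of the success criterion behind CLIP Accuracy (our paraphrase of the protocol, not Inst-Inpaint's actual code), where `clip_zero_shot_logits` is a hypothetical helper that returns class logits for a bounding-box crop:

```python
def removal_success(inpainted_crop, class_names, src_class, k=1):
    # Success: the source class is absent from the Top-k predictions of the inpainted crop.
    logits = clip_zero_shot_logits(inpainted_crop, class_names)  # hypothetical CLIP helper
    topk = logits.topk(k).indices.tolist()
    return class_names.index(src_class) not in topk

# CLIP Accuracy = percentage of (source, inpainted) pairs for which removal_success is True.
```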
|
| 403 |
+
|
| 404 |
+
Inst-Inpaint requires obtaining the target image corresponding to the source image as paired data for training. It extracts segmentation masks for each object from the source image and uses them to remove objects from the source image using the inpainting method CRFill (Zeng et al., 2021). The resulting target image is used as GT (e.g., Fig. 11 (the second column)).
|
| 405 |
+
|
| 406 |
+
There are 18,883 pairs of test data in the GQA-Inpaint dataset, each consisting of a source image, a target image, and a prompt. Inst-Inpaint attempts to remove objects from the source image based on the provided prompt as an instruction (e.g., "Remove the airplane at the center" in Fig. 11 (the third column)). We suppress the noun immediately following "remove" in the instruction (e.g., "airplane") and form our input prompt from the remaining part by deleting the word "remove" at the beginning of the instruction (e.g., "The airplane at the center" in Fig. 11 (the fourth column)).
|
| 407 |
+
|
| 408 |
+
Baseline Implementations. For the comparisons in section 4, we use the official implementation of ESD (Gandikota et al., 2023)<sup>9</sup>, Concept-ablation (Kumari et al., 2023)<sup>10</sup>, Forget-Me-Not (Zhang et al., 2023)<sup>11</sup>, Inst-Inpaint (Yildirim et al., 2023)<sup>12</sup> and SEGA (Brack et al., 2023)<sup>13</sup>. We use P2P (Hertz et al., 2022)<sup>14</sup> with Attention Re-weighting to weaken the extent of content in the resulting images.
|
| 409 |
+
|
| 410 |
+
Failure cases. Fig. 12 shows some failure cases.
|
| 411 |
+
|
| 412 |
+
$^{9}$ https://github.com/rohitgandikota/erasing
|
| 413 |
+
$^{10}$ https://github.com/nupurkmr9/concept-ablation
|
| 414 |
+
$^{11}$ https://github.com/SHI-Labs/Forget-Me-Not
|
| 415 |
+
$^{12}$ https://github.com/abyildirim/inst-inpaint
|
| 416 |
+
$^{13}$ https://github.com/ml-research/semantic-image-editing
|
| 417 |
+
$^{14}$ https://github.com/google/prompt-to-prompt
|
| 418 |
+
|
| 419 |
+

|
| 420 |
+
Figure 12: Failure cases.
|
| 421 |
+
|
| 422 |
+
# B APPENDIX: EQ. 2 IN SOFT-WEIGHTED REGULARIZATION.
|
| 423 |
+
|
| 424 |
+
We take inspiration from WNNM, a method for image denoising, which demonstrates that singular values have a clear physical meaning and should be treated differently. WNNM considers that the noise in an image mainly resides in the bottom-$K$ singular values. Each singular value $\sigma$ of an image patch is updated using the formula $\sigma -\frac{\lambda}{(\sigma + \epsilon)}$ and set to 0 when the updated singular value becomes negative. The weight $\frac{\lambda}{(\sigma + \epsilon)}$ ensures that components corresponding to smaller singular values undergo more shrinkage, where $\lambda$ is a positive constant used to scale the singular values and $\epsilon$ is a small positive constant used to avoid division by zero. In this paper, based on our observation, the content of the expected suppressed embedding $c^{NE}$ mainly resides in the top-$K$ singular values of the constructed negative target embedding matrix $\chi = [c^{NE},c_{0}^{EOT},\dots ,c_{N - |p| - 2}^{EOT}]$. Therefore, we utilize the formula $e^{-\sigma}* \sigma$ to ensure that the components corresponding to larger singular values undergo more shrinkage.
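To make Eq. 2 concrete, below is a minimal PyTorch sketch of soft-weighted regularization (not the official implementation). It assumes the negative target embedding `c_ne` (shape `(d,)`) and the [EOT] embeddings `c_eot` (shape `(n, d)`) have already been sliced out of the CLIP text encoder output; the function name is ours.

```python
import torch

def soft_weighted_regularization(c_ne: torch.Tensor, c_eot: torch.Tensor):
    """Eq. 2: shrink the singular values of chi = [c^NE, c_0^EOT, ..., c_{N-|p|-2}^EOT]."""
    chi = torch.cat([c_ne.unsqueeze(0), c_eot], dim=0)         # (n + 1, d)
    U, sigma, Vh = torch.linalg.svd(chi, full_matrices=False)
    sigma_hat = torch.exp(-sigma) * sigma                      # larger sigma -> more shrinkage
    chi_hat = U @ torch.diag(sigma_hat) @ Vh                   # reconstruct the matrix
    return chi_hat[0], chi_hat[1:]                             # suppressed c^NE and [EOT] rows

# Illustrative usage with random stand-in embeddings:
c_ne_hat, c_eot_hat = soft_weighted_regularization(torch.randn(768), torch.randn(19, 768))
```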
|
| 425 |
+
|
| 426 |
+
# C APPENDIX: ALGORITHM DETAIL OF GENERATED IMAGE.
|
| 427 |
+
|
| 428 |
+
Algorithm 2: Our algorithm

Require: A text embedding $c = \Gamma(p)$ and a noise vector $\widetilde{z}_T$
Output: Edited image $\hat{\mathcal{I}}$
$\hat{c} \gets \mathrm{SWR}(c)$ (Eq. 2); // Soft-weighted Regularization
for $t = T, T-1, \dots, 1$ do
  $\hat{c}_t = \hat{c}$;
  // Inference-time text embedding optimization
  for $ite = 0, \dots, Ite - 1$ do
    $\_, A_t^{PE}, A_t^{NE} \gets \epsilon_{\theta}(\widetilde{z}_t, t, c)$;
    $\_, \hat{A}_t^{PE}, \hat{A}_t^{NE} \gets \epsilon_{\theta}(\widetilde{z}_t, t, \hat{c}_t)$ (Eqs. 3-6);
    $\mathcal{L} \gets \lambda_{pl}\mathcal{L}_{pl} + \lambda_{nl}\mathcal{L}_{nl}$;
    $\hat{c}_t \gets \hat{c}_t - \eta \nabla_{\hat{c}_t}\mathcal{L}$;
  end
  $\widetilde{z}_{t-1}, \_, \_ \gets \epsilon_{\theta}(\widetilde{z}_t, t, \hat{c}_t)$;
end
$\hat{\mathcal{I}} = D(\widetilde{z}_0)$
Return Edited image $\hat{\mathcal{I}}$
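A minimal PyTorch sketch of this loop follows. `unet_with_attn` (a wrapper around the SD UNet that also returns the cross-attention maps of the positive-target (PE) and negative-target (NE) tokens) and `scheduler_step` (one denoising step) are hypothetical stand-ins, not real diffusers APIs.

```python
import torch

def optimize_and_denoise(z_t, c, c_hat, timesteps, n_iters=10, lr=0.1,
                         lambda_pl=1.0, lambda_nl=1.0):
    for t in timesteps:                                       # t = T, ..., 1
        c_t = c_hat.clone().requires_grad_(True)
        opt = torch.optim.SGD([c_t], lr=lr)
        for _ in range(n_iters):                              # inference-time optimization
            with torch.no_grad():                             # reference maps from the original c
                _, A_pe, A_ne = unet_with_attn(z_t, t, c)     # hypothetical wrapper
            _, A_pe_hat, A_ne_hat = unet_with_attn(z_t, t, c_t)
            loss = (lambda_pl * (A_pe_hat - A_pe).pow(2).mean()     # L_pl: preserve PE attention
                    - lambda_nl * (A_ne_hat - A_ne).pow(2).mean())  # L_nl: push NE attention away
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                                 # denoise with the optimized c_t
            eps, _, _ = unet_with_attn(z_t, t, c_t)
            z_t = scheduler_step(eps, t, z_t)                 # hypothetical: z_{t-1}
    return z_t                                                # z_0; decode with the VAE decoder D
```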
|
| 442 |
+
|
| 443 |
+
# D APPENDIX: ABLATION ANALYSIS
|
| 444 |
+
|
| 445 |
+
Verifying the alignment loss. As shown in Fig. 13, $\mathcal{L}_{pl}$ mainly preserves the regions that we do not want to suppress. In addition, we employ the SSIM metric to assess the influence of $\mathcal{L}_{pl}$. Adding
|
| 446 |
+
|
| 447 |
+

|
| 448 |
+
Figure 13: The regions that are not expected to be suppressed are structurally altered without $\mathcal{L}_{pl}$ (third row). Our method removes the subject while mainly preserving the rest of the regions (fourth row).
|
| 449 |
+
|
| 450 |
+
$\mathcal{L}_{pl}$ raises SSIM from 0.407 ($\mathrm{SWR} + \mathcal{L}_{nl}$) to 0.552 ($\mathrm{SWR} + \mathcal{L}_{nl} + \mathcal{L}_{pl}$), indicating that $\mathcal{L}_{pl}$ helps preserve the rest of the regions. $\mathrm{SWR} + \mathcal{L}_{nl}$, while capable of removing objects (DetScore = 0.0692), tends to change the original image structure and style (IFID = 242.3).
|
| 451 |
+
|
| 452 |
+
Variant of soft-weighted regularization. We also explore another way to regulate the target text embedding. We directly zero out the Top-K singular values of $\boldsymbol{\Sigma}$ (here, $\boldsymbol{\Sigma} = \text{diag}(\sigma_0, \sigma_1, \dots, \sigma_{n_0})$ with $\chi = U\Sigma V^T$ and $\chi = [c^{NE}, c_{0}^{EOT}, \dots, c_{N - |\pmb{p}| - 2}^{EOT}]$), and reconstruct $\hat{\chi}$, which is fed into the SD model to generate the image. Although directly zeroing out the Top-K singular values helps suppress the generation from the input prompt, it suffers from unexpected results (Fig. 14 (the third to the fifth columns)).
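For comparison, a sketch of this hard variant; the stand-in tensor and the choice of `K` are ours, not prescribed values:

```python
import torch

chi = torch.randn(20, 768)              # stand-in for [c^NE, c^EOT_0, ..., c^EOT_{N-|p|-2}]
K = 2                                   # number of leading singular values to drop (a free choice)
U, sigma, Vh = torch.linalg.svd(chi, full_matrices=False)
sigma[:K] = 0.0                         # hard variant: zero the Top-K singular values
chi_hat = U @ torch.diag(sigma) @ Vh    # reconstructed embeddings, fed to the SD model
```

Unlike the soft shrinkage $e^{-\sigma} * \sigma$, this hard cut removes the dominant components entirely, which matches the over-suppression artifacts seen in Fig. 14.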
|
| 453 |
+
|
| 454 |
+
Attention map to zero. Recent works (Hertz et al., 2022; Chefer et al., 2023; Parmar et al., 2023) explore the attention map to conduct various tasks. In this paper, we also try zeroing out the attention map corresponding to the target prompt, which we define as $\text{attn2zero}$. As shown in Fig. 15, the $\text{attn2zero}$ method fails to suppress the target prompt in output images.
|
| 455 |
+
|
| 456 |
+
Analysis of our method in long sentences. We use an object detection method to investigate the behavior of glasses when zeroing out both the "glasses" and [EOT] embeddings in long sentences. We first randomly generate 1000 images using SD with the prompt $p^{src}$, "A man without glasses", while also generating a version that zeros out both the "glasses" and [EOT] embeddings. We use MMDetection with GLIP and the prompt "glasses" to detect the probability of glasses being present in the generated images and obtain the prediction score for "glasses". The average prediction scores of MMDetection for the two versions mentioned above on the 1000 images are 0.819 and 0.084 (see Table 4 (third row, first and second column)), respectively, which proves that when using the prompt $p^{src}$, "A man without glasses", zeroing out the text embeddings of both "glasses" and [EOT] results in the disappearance of "glasses" in almost all generated images. It should be noted that the prediction
|
| 457 |
+
|
| 458 |
+

|
| 459 |
+
A man without glasses
|
| 460 |
+
Figure 14: Variant of soft-weighted regularization. We zero out the Top-K singular values of $\pmb{\Sigma}$ ($\pmb{\Sigma} = \text{diag}(\sigma_0, \sigma_1, \dots, \sigma_{n_0})$). We experimentally observe that naively zeroing out the singular values suppresses the target prompt, but in some cases it leads to unwanted changes and unexpected results (the third to fifth columns).
|
| 461 |
+
|
| 462 |
+

|
| 463 |
+
Figure 15: We set the attention map of the suppressed subject (e.g., glasses) to 0. We find that this fails to remove the subject (third column). Ours successfully removes the subject (second column).
|
| 464 |
+
|
| 465 |
+
score of MMDetection does not indicate that $81.9\%$ of the 1000 images contain glasses. Instead, it represents the probability that the image is detected as containing glasses.
|
| 466 |
+
|
| 467 |
+
To investigate the behavior of glasses with long sentences, we use ChatGPT to generate description words of lengths 8, 16, and 32 after the prompt $p^{src}$ to form new prompts denoted as $p^{src + 8ws}$, $p^{src + 16ws}$, and $p^{src + 32ws}$, respectively. As shown in Table 4, when zeroing out both the "glasses" and [EOT] embeddings, it is harder to drop glasses in long sentences than in short ones. This is because, compared to short sentences, the other embeddings (except "glasses" and [EOT]) contain more glasses information. However, we observe that zeroing out both the "glasses" and [EOT] embeddings works when most of the words in the prompt correspond to objects in the image, even when the sentence is long (e.g., "A man with a beard wearing glasses and a hat in blue shirt"). Therefore, our method requires a concise prompt that mainly describes the object, avoiding lengthy abstract descriptions.
|
| 468 |
+
|
| 469 |
+
Table 4: The average prediction score of MMDetection with GLIP using the prompt "glasses".
|
| 470 |
+
|
| 471 |
+
<table><tr><td rowspan="2">Method</td><td>SD</td><td colspan="4">Zeroing out both "glasses" and [EOT] embeddings</td></tr><tr><td>psrc</td><td>psrc</td><td>psrc+8ws</td><td>psrc+16ws</td><td>psrc+32ws</td></tr><tr><td>DetScore↓</td><td>0.819</td><td>0.084</td><td>0.393</td><td>0.455</td><td>0.427</td></tr></table>
|
| 472 |
+
|
| 473 |
+
Different suppression levels for soft-weighted regularization. We observe that the negative target disappears (e.g., glasses in Fig. 2a (the sixth and seventh columns)) once the negative target information diminishes below a certain level. We perform an analysis experiment to validate this conclusion. We use $\gamma$ to control the suppression level in soft-weighted regularization via $\hat{\sigma} = e^{-\gamma \sigma} * \sigma$ (cf. Eq. 2). When $\gamma = 0$, $\hat{\sigma} = \sigma$ and the singular values are unchanged. When $\gamma = 1$, $\hat{\sigma} = e^{-\sigma} * \sigma$, which is exactly the Eq. 2 we use. When $\gamma \to \infty$, $\hat{\sigma} = \lim_{\gamma \to \infty} e^{-\gamma \sigma} * \sigma = 0$, which is equivalent to zeroing out both the "glasses" and [EOT] embeddings in
|
| 474 |
+
|
| 475 |
+
Fig. 2a (the sixth and seventh columns). As shown in Fig. 16, as $\gamma$ increases, the singular values are penalized more and more strongly. Once $\gamma$ increases to the point where the glasses content in both the "glasses" and [EOT] embeddings drops below a certain level, the glasses are erased.
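A small sketch of this $\gamma$-controlled variant; the stand-in singular values are illustrative:

```python
import torch

sigma = torch.linalg.svdvals(torch.randn(20, 768))     # stand-in singular values of chi
for gamma in (0.0, 0.5, 1.0, 2.0, 5.0):                # gamma = 1 recovers Eq. 2
    sigma_hat = torch.exp(-gamma * sigma) * sigma      # shrinkage grows with gamma; gamma -> inf gives 0
```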
|
| 476 |
+
|
| 477 |
+

|
| 478 |
+
A man without glasses
|
| 479 |
+
Figure 16: Different suppression levels for soft-weighted regularization.
|
| 480 |
+
|
| 481 |
+
Robustness to diverse input prompts. As shown in Fig. 17, we showcase the robustness of our method to diverse input prompts by effectively suppressing the content in an image using multiple prompts. It is important to emphasize that the suppressed content must be explicitly specified in the input prompt to enable our prompt-based content suppression.
|
| 482 |
+
|
| 483 |
+

|
| 484 |
+
Figure 17: We can suppress the content using diverse input prompts.
|
| 485 |
+
|
| 486 |
+
Evaluating the attenuation factor. We experimentally observe that employing an attenuation factor (e.g., 0.1) for the negative target embedding matrix impacts the positive target (see Fig. 20). Hence, using an attenuation factor leads to unexpected changes to other subjects as well as to the target subject. This is because the [EOT] embeddings contain significant information about the input prompt, including both the negative target and the positive target (see Sec. 3.2). Furthermore, the factor needs to be carefully selected for each image to achieve satisfactory suppression results.
|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
Figure 18: Additional reference-guided negative target generation results. Comparisons with various baselines for real image and the target prompt.
|
| 490 |
+
|
| 491 |
+
[EOT] embedding in text prompts of various lengths. We observe that the [EOT] embedding contains small yet useful semantic information, as demonstrated in Fig. 2c of our main paper, where we randomly select one [EOT] embedding to replace the input text embeddings and the images generated after this replacement retain similar semantic information. To further evaluate whether the [EOT] embedding contains useful semantic information in text prompts of various lengths, we replace the input text embeddings with not just one [EOT] embedding but multiple. We use part of the [EOT] embeddings when there are more of them than input text embeddings (short sentence), and tile multiple copies of the whole [EOT] embedding block when there are fewer (long sentence).
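A sketch of this replacement probe (shapes follow SD's CLIP text encoder, which outputs 77 token embeddings; the tensors and `n_text` are illustrative stand-ins):

```python
import torch

emb = torch.randn(77, 768)          # stand-in for the CLIP text encoder output
n_text = 9                          # number of prompt tokens (illustrative)

eot = emb[n_text + 1:]              # [EOT] embeddings (position 0 is the start token)
if eot.shape[0] >= n_text:          # short prompt: use only part of the [EOT] block
    replacement = eot[:n_text]
else:                               # long prompt: tile copies of the whole [EOT] block
    reps = -(-n_text // eot.shape[0])            # ceil(n_text / len(eot))
    replacement = eot.repeat(reps, 1)[:n_text]

emb_replaced = emb.clone()
emb_replaced[1:n_text + 1] = replacement   # fed to the SD UNet in place of the prompt tokens
```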
|
| 492 |
+
|
| 493 |
+
In more detail, we first randomly choose 50 prompts from the prompt sets mentioned in Sec. 4. These text prompts include various syntactical structures, such as "A living area with a television and a table", "A black and white cat relaxing inside a laptop" and "There is a homemade pizza on a cutting board". We add description words with lengths 8, 16, 32 and 56 after the initial text prompt $\mathbf{p}^{src}$ to obtain longer sentences, denoted as $\mathbf{p}^{src + 8ws}$, $\mathbf{p}^{src + 16ws}$, $\mathbf{p}^{src + 32ws}$, and $\mathbf{p}^{src + 56ws}$, respectively. For instance, when $\mathbf{p}^{src}$ is "A living area with a television and a table", $\mathbf{p}^{src + 8ws}$ would be extended to "A living area with a television and a table, highly detailed and precision with extreme detail description".
|
| 494 |
+
|
| 495 |
+

|
| 496 |
+
Figure 19: Additional latent-guided negative target generation results. Examples of our method and the baselines for generated image. We are able to suppress the target prompt, without further finetuning the SD model.
|
| 497 |
+
|
| 498 |
+
We use Clipscore to evaluate how well the generated images match the given prompt. In this case, we test our model under prompts of various lengths ($\mathbf{p}^{src}$, $\mathbf{p}^{src+8ws}$, $\mathbf{p}^{src+16ws}$, $\mathbf{p}^{src+32ws}$, and $\mathbf{p}^{src+56ws}$) (see Table 5 (the second and third rows)). As shown in Table 5, the images generated from the [EOT]-embedding-replacement prompts contain semantic information similar to that of the initial prompts. The degradation in Clipscore is small (less than 0.11), indicating that the [EOT] embedding also contains semantic information. Fig. 21 shows some more qualitative results.
|
| 499 |
+
|
| 500 |
+
Related work also considers the [EOT] embedding for other tasks. For example, when conducting image-to-image translation, P2P (Hertz et al., 2022)
|
| 501 |
+
|
| 502 |
+

|
| 503 |
+
Figure 20: SWR with an attenuation factor and SVD. Note how the usage of an attenuation factor leads to undesired changes in the hat of the man (the second column).
|
| 504 |
+
|
| 505 |
+
Table 5: Comparison between the original tokens and their replacement version, evaluated with Clipscore.
|
| 506 |
+
|
| 507 |
+
<table><tr><td>Method</td><td>psrc</td><td>psrc+8ws</td><td>psrc+16ws</td><td>psrc+32ws</td><td>psrc+56ws</td></tr><tr><td>SD</td><td>0.8208</td><td>0.8173</td><td>0.8162</td><td>0.8102</td><td>0.8058</td></tr><tr><td>SD w/ replacement</td><td>0.7674</td><td>0.7505</td><td>0.7479</td><td>0.7264</td><td>0.7035</td></tr></table>
|
| 508 |
+
|
| 509 |
+

|
| 510 |
+
Figure 21: Both SD and its w/ replacement results.
|
| 511 |
+
|
| 512 |
+
swaps the attention of the whole embedding sequence, including the attention of both the input text embeddings and the [EOT] embeddings.
|
| 513 |
+
|
| 514 |
+
Taking a simple mean of the [EOT] embedding. We extract the semantic component by taking a simple Mean of the Padding Embedding ([EOT] embedding), referred to as MPE. We evaluate the proposed method (i.e., SVD) against MPE. We suppress the "glasses" subject in 1000 randomly generated images with the prompt "A man without glasses". Then we use MMDetection to detect the probability of glasses in the generated images. Finally, we report the prediction score (DetScore).
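The two extraction strategies side by side, as a sketch that reuses `soft_weighted_regularization` from the Appendix B sketch with stand-in embeddings:

```python
import torch

c_ne = torch.randn(768)       # stand-in negative target embedding
c_eot = torch.randn(19, 768)  # stand-in [EOT] embeddings

# MPE baseline: a simple mean of the [EOT] embeddings.
mpe = c_eot.mean(dim=0)

# Ours: SVD-based soft-weighted regularization (Appendix B sketch), which shrinks
# the dominant singular components of [c^NE, c^EOT] jointly instead of averaging.
c_ne_hat, c_eot_hat = soft_weighted_regularization(c_ne, c_eot)
```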
|
| 515 |
+
|
| 516 |
+
As reported in Table 6 (the third and fourth columns), our method obtains a 0.1065 MMDetection score, while MPE obtains 0.6266. This finding suggests that simply averaging the [EOT] embedding often fails to extract the main semantic component. Furthermore, even when we additionally zero the "glasses" token embedding on top of MPE, it still struggles to remove the "glasses" information (0.4892 MMDetection). Fig. 22 qualitatively shows more results.
|
| 517 |
+
|
| 518 |
+
Inference-time optimization with value regulation. We propose inference-time embedding optimization to further suppress the negative target generation and encourage the positive target content, following soft-weighted regularization. This optimization method involves updating the whole text embedding, which is then transferred to both the key and value components in the cross-attention layer. Therefore, our method implicitly changes the value component in the cross-attention layer.
|
| 519 |
+
|
| 520 |
+
Table 6: Comparison between ours and MPE. We report DetScore.
|
| 521 |
+
|
| 522 |
+
<table><tr><td>Method</td><td>SD</td><td>Ours</td><td>MPE</td><td>MPE + zeroing embedding</td></tr><tr><td>DetScore ↓</td><td>0.8052</td><td>0.1065</td><td>0.6266</td><td>0.4892</td></tr></table>
|
| 523 |
+
|
| 524 |
+

|
| 525 |
+
SD (0.758)
|
| 526 |
+
|
| 527 |
+

|
| 528 |
+
Ours (0.000)
|
| 529 |
+
|
| 530 |
+

|
| 531 |
+
MPE (0.763)
|
| 532 |
+
Figure 22: The visualization and DetScore when using a mean of the [EOT] embedding.
|
| 533 |
+
|
| 534 |
+

|
| 535 |
+
MPE and zeroing the "glasses" embedding (0.591)
|
| 536 |
+
|
| 537 |
+
Furthermore, similar to the proposed two attention losses, we attempt to use two value losses to regulate the value component in the cross-attention layer:
|
| 538 |
+
|
| 539 |
+
$$
|
| 540 |
+
\mathcal{L}_{vl} = \lambda_{pl}\mathcal{L}_{pl} + \lambda_{nl}\mathcal{L}_{nl},
|
| 541 |
+
$$
|
| 542 |
+
|
| 543 |
+
$$
|
| 544 |
+
\mathcal{L}_{pl} = \left\| \hat{\boldsymbol{V}}_{t}^{PE} - \boldsymbol{V}_{t}^{PE} \right\|^{2}, \tag{6}
|
| 545 |
+
$$
|
| 546 |
+
|
| 547 |
+
$$
|
| 548 |
+
\mathcal{L}_{nl} = -\left\| \hat{\boldsymbol{V}}_{t}^{NE} - \boldsymbol{V}_{t}^{NE} \right\|^{2},
|
| 549 |
+
$$
|
| 550 |
+
|
| 551 |
+
where the hyper-parameters $\lambda_{pl}$ and $\lambda_{nl}$ are used to balance the effects of preservation and suppression on the value component. When utilizing this value loss, we find that it is hard to generate high-quality images (Fig. 23 (the third and sixth columns)). This result indicates that directly optimizing the value embedding does not work. The potential reason is that it also influences the positive target, since each token embedding contains information from the other token embeddings after the CLIP text encoder.
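A minimal sketch of these value losses; the tensors below are random stand-ins for the value projections $V_t^{PE}$ / $V_t^{NE}$ of the positive/negative target tokens in a cross-attention layer:

```python
import torch

V_pe = torch.randn(8, 64)                        # value projections under the original embedding
V_ne = torch.randn(8, 64)
V_pe_hat = V_pe.clone().requires_grad_(True)     # value projections under the optimized embedding
V_ne_hat = V_ne.clone().requires_grad_(True)
lambda_pl, lambda_nl = 1.0, 1.0

loss_pl = (V_pe_hat - V_pe).pow(2).sum()         # keep the positive-target values
loss_nl = -(V_ne_hat - V_ne).pow(2).sum()        # push the negative-target values away
loss_vl = lambda_pl * loss_pl + lambda_nl * loss_nl
```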
|
| 552 |
+
|
| 553 |
+

|
| 554 |
+
Figure 23: Results of the attention loss and the value loss in the inference-time embedding optimization, for a generated image (left) and a real image (right).
|
| 555 |
+
|
| 556 |
+
# E APPENDIX: ADDITIONAL RESULTS
|
| 557 |
+
|
| 558 |
+
User study. The study participants were volunteers from our college. The questionnaire consisted of 20 questions, each presenting the original image generated by SD, as well as the results of various baselines and our method. Users are tasked with selecting an image in which the target subject (i.e., a car) is more accurately suppressed compared to the original image. Each question in the questionnaire presents eight options, including baselines (Negative prompt, P2P, ESD, Concept-ablation, Forget-Me-Not, Inst-Inpaint and SEGA) and our method, from which users were instructed to choose one. A total of 20 users participated, resulting in a combined total of 400 samples (20 questions $\times$ 1 option $\times$ 20 users), with 159 samples $(39.75\%)$ favoring our method (see Fig. 5
|
| 559 |
+
|
| 560 |
+
(Right)). In the results of the user study, the values for Ours, Negative Prompt, P2P, ESD, Concept-Ablation, Forget-Me-Not, Inst-Inpaint, and SEGA are 0.3975, 0.0475, 0.03, 0.1625, 0.0525, 0.01, 0.285, and 0.015, respectively.
|
| 561 |
+
|
| 562 |
+
Additional results of our method. Fig. 18 shows additional real-image editing results, and Fig. 19 shows additional generated-image editing results. It should be noted that the generated images shown in Fig. 19 (the first to fourth columns, i.e., "Girl without earring", "Woman without mask", "Girl not wearing jewelry" and "A car without flowers") are not used for the quantitative evaluation metrics in Table 1 (the fifth to the seventh columns), due to the occasional failure of Clipscore (Hessel et al., 2021) to recognize negative words.
|
| 563 |
+
|
| 564 |
+
Real image results in mask-based methods. Mask-based removal methods work well for isolated objects. However, they tend to fail for objects that are closely related to their surroundings. Compared to mask-based methods, our prompt-based method can automatically complete regions of removed content based on surrounding content and works equally well when removed content is closely related to surrounding content. For example, in Fig. 24, the prompt "A man with a beard wearing glasses and a hat in blue shirt" and the corresponding input image show that "beard", "glasses", and "hat" are closely related to the man (Left). Our method can successfully remove "beard", "glasses", and "hat", and fill in the removed area based on the context of the "man" (Middle), while the mask-based removal method appears very aggressive (Right).
|
| 565 |
+
|
| 566 |
+
A man with a beard wearing glasses and a hat in blue shirt
|
| 567 |
+
|
| 568 |
+

|
| 569 |
+
Real image
|
| 570 |
+
Figure 24: Ours can successfully remove "beard", "glasses", and "hat" and fill in the removed area based on the context of the "man" (Middle), while the mask-based method (e.g., PlaygroundAI) fails (Right). The mask-based method requires user-specified masks that define the erased areas during inference.
|
| 571 |
+
|
| 572 |
+

|
| 573 |
+
|
| 574 |
+

|
| 575 |
+
Ours
|
| 576 |
+
|
| 577 |
+

|
| 578 |
+
|
| 579 |
+

|
| 580 |
+
|
| 581 |
+

|
| 582 |
+
mask-based method
|
| 583 |
+
|
| 584 |
+

|
| 585 |
+
|
| 586 |
+
Real image results with various inversion methods. Our method can be combined with various real-image inversion techniques, including Null-text, the Textual inversion variant described in the appendix of Null-text ("Textual inversion with a pivot"), StyleDiffusion (Li et al., 2023a), NPI (Miyake et al., 2023) and ProxNPI (Han et al., 2023) (see Fig. 25).
|
| 587 |
+
|
| 588 |
+
A woman wearing sunglasses and scarf
|
| 589 |
+
|
| 590 |
+

|
| 591 |
+
|
| 592 |
+

|
| 593 |
+
|
| 594 |
+

|
| 595 |
+
|
| 596 |
+

|
| 597 |
+
|
| 598 |
+

|
| 599 |
+
|
| 600 |
+

|
| 601 |
+
|
| 602 |
+

|
| 603 |
+
Real image
|
| 604 |
+
Figure 25: Our method can combine various real image inversion techniques.
|
| 605 |
+
|
| 606 |
+

|
| 607 |
+
Ours+Textual inversion
|
| 608 |
+
|
| 609 |
+

|
| 610 |
+
Ours+Null-text
|
| 611 |
+
|
| 612 |
+

|
| 613 |
+
Ours+StyleDiffusion
|
| 614 |
+
|
| 615 |
+

|
| 616 |
+
Ours+NPI
|
| 617 |
+
|
| 618 |
+

|
| 619 |
+
Ours+ProxNPI
|
| 620 |
+
|
| 621 |
+
Implementation on the DeepFloyd-IF diffusion model. We use DeepFloyd-IF, which is based on the T5 transformer, to extract text embeddings with the prompt "a man without glasses" for generation. The generated output still includes the subject with "glasses" (see Fig. 26 (Up)), although the T5 text encoder used in DeepFloyd-IF has far more parameters than the CLIP text encoder used in SD (T5: 4762.31M vs. CLIP: 123.06M). Our method also works very well on the DeepFloyd-IF diffusion model (see Fig. 26 (Bottom)).
|
| 622 |
+
|
| 623 |
+

|
| 624 |
+
Figure 26: (Up) Results from DeepFloyd-IF still generate the man wearing "glasses". (Bottom) Implementation of our method on DeepFloyd-IF.
|
| 625 |
+
|
| 626 |
+
"A man without glasses" results on other diffusion models. When we use other diffusion models for image generation with the prompt "A man without glasses" as input, the generated images still show the presence of "glasses" (see Fig. 27 (Top)). Our method can also be implemented in other versions of the StableDiffusion model, including StableDiffusion $1.5^{15}$ and StableDiffusion $2.1^{16}$ (see Fig. 27 (Bottom)).
|
| 627 |
+
|
| 628 |
+

|
| 629 |
+
Figure 27: (Top) Results from StableDiffusion 1.5, StableDiffusion 2.1, Ideogram and Midjourney still generate the man wearing "glasses". (Bottom) Our method's implementation on StableDiffusion 1.5 and StableDiffusion 2.1.
|
| 630 |
+
|
| 631 |
+
# F APPENDIX: ADDITIONAL APPLICATIONS
|
| 632 |
+
|
| 633 |
+
Additional cracks removal and rain removal results. As shown in Fig. 28, we present additional results for both cracks removal and rain removal: (Top) additional results for cracks removal; (Middle) additional results for a synthetic rainy image; (Bottom) additional results for a real-world rainy image.
|
| 634 |
+
|
| 635 |
+
Attend-and-Excite similar results (Generating subjects for generated image). Attend-and-Excitet (Chefer et al., 2023) find that the SD model sometimes encounters failure in generating one or more subjects from the input prompt (see Fig. 29 (the first, third, and fifth columns)). They refine the cross-attention map to attend to subject tokens and excite activations. The Eq. 2 used in soft-weighted regularization utilizes the weight $e^{-\sigma}$ to ensure that the components corresponding to larger singular values undergo more shrinkage, as we assume that the main singular values are corresponding to the suppressed information. We make a simple modification to the weight $e^{-\sigma}$ in
|
| 636 |
+
|
| 637 |
+

|
| 638 |
+
Figure 28: (Top) Cracks removal results. (Middle) Rain removal for synthetic rainy image. (Bottom) Rain removal for real-world rainy image.
|
| 639 |
+
|
| 640 |
+
Eq. 2 by using $\beta \cdot e^{\alpha \sigma}$ to ensure that the components corresponding to larger singular values are strengthened more (i.e., $\hat{\sigma} = \beta \cdot e^{\alpha \sigma} * \sigma$), where $\beta = 1.2$ and $\alpha = 0.001$. This straightforward modification, merely involving an update of the text embeddings, addresses situations where the SD model fails to generate subjects (see Fig. 29 (the second, fourth, and sixth columns)).
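A one-line sketch of this strengthening variant, with stand-in singular values:

```python
import torch

sigma = torch.linalg.svdvals(torch.randn(20, 768))   # stand-in singular values of chi
beta, alpha = 1.2, 0.001
sigma_hat = beta * torch.exp(alpha * sigma) * sigma  # larger sigma is amplified more
```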
|
| 641 |
+
|
| 642 |
+
GLIGEN similar results (Adding subjects for real image). GLIGEN (Li et al., 2023b) enables grounded inpainting on real images, allowing users to integrate reference images into the real image. We can achieve results similar to real-image grounded inpainting using only the prompt (see Fig. 30 (the second, third, fifth, and sixth columns)). In detail, we add the prompt (blue underline) of the desired subject to the prompt describing the real image and then adopt the same strategy as in the previous subsection (Attend-and-Excite similar results).
|
| 643 |
+
|
| 644 |
+
Replacing a subject in the real image with another. Subject replacement is a common task in various image editing methods (Meng et al., 2021; Parmar et al., 2023; Mokady et al., 2022; Li et al., 2023a; Tumanyan et al., 2023). We can edit an image by replacing one subject with another using only the prompt (see Fig. 31 (the second, fourth, and sixth columns)). We replace the text of the
|
| 645 |
+
|
| 646 |
+

|
| 647 |
+
Figure 29: Attend-and-Excite similar results (Generating subjects for generated image).
|
| 648 |
+
|
| 649 |
+

|
| 650 |
+
Figure 30: GLIGEN similar results (Adding subjects for real image).
|
| 651 |
+
|
| 652 |
+
edited subject in the source prompt with the desired one to create the target prompt. Subsequently, we invert the real image into a latent code using the target prompt. We then apply the same strategy as in the previous subsection (Attend-and-Excite similar results) to obtain the edited image. For example, we can replace the "toothbrush" in the "Girl holding toothbrush" image with a "pen". The DetScore with "toothbrush" of the source image is 0.790, and the Clipscore with "pen" of the target image is 0.728. We find that the brace is also replaced with the pen, as the cross-attention map for "toothbrush" couples the toothbrush and the brace.
|
| 653 |
+
|
| 654 |
+

|
| 655 |
+
Figure 31: Replacing a subject in the real image with another.
|
2402.05xxx/2402.05375/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4c0e036693522842a080f8ffe075e9aba8222800047988b37ccef5badf269f8b
|
| 3 |
+
size 3437751
|
2402.05xxx/2402.05375/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_content_list.json
ADDED
|
@@ -0,0 +1,1815 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "TASK-CUSTOMIZED MASKED AUTOENCODER VIA MIXTURE OF CLUSTER-CONDITIONAL EXPERTS",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
171,
|
| 8 |
+
99,
|
| 9 |
+
823,
|
| 10 |
+
146
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zhili Liu $^{1,2*}$ , Kai Chen $^{1*}$ , Jianhua Han $^{2}$ , Lanqing Hong $^{2}$ , Hang Xu $^{2}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
183,
|
| 19 |
+
167,
|
| 20 |
+
661,
|
| 21 |
+
184
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Zhenguo Li $^{2}$ , James T. Kwok $^{1}$",
|
| 28 |
+
"bbox": [
|
| 29 |
+
183,
|
| 30 |
+
184,
|
| 31 |
+
395,
|
| 32 |
+
198
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "list",
|
| 38 |
+
"sub_type": "text",
|
| 39 |
+
"list_items": [
|
| 40 |
+
"<sup>1</sup> Department of Computer Science and Engineering, Hong Kong University of Science and Technology",
|
| 41 |
+
"$^{2}$ Huawei Noah's Ark Lab"
|
| 42 |
+
],
|
| 43 |
+
"bbox": [
|
| 44 |
+
183,
|
| 45 |
+
199,
|
| 46 |
+
861,
|
| 47 |
+
227
|
| 48 |
+
],
|
| 49 |
+
"page_idx": 0
|
| 50 |
+
},
|
| 51 |
+
{
|
| 52 |
+
"type": "text",
|
| 53 |
+
"text": "{zhili.liu, kai.chen}@connect.ust.hk, {hanjianhua4, honglanqing, xuHang, li.zhenguo}@huawei.com jamesk@cse.ust.hk",
|
| 54 |
+
"bbox": [
|
| 55 |
+
183,
|
| 56 |
+
228,
|
| 57 |
+
750,
|
| 58 |
+
271
|
| 59 |
+
],
|
| 60 |
+
"page_idx": 0
|
| 61 |
+
},
|
| 62 |
+
{
|
| 63 |
+
"type": "text",
|
| 64 |
+
"text": "ABSTRACT",
|
| 65 |
+
"text_level": 1,
|
| 66 |
+
"bbox": [
|
| 67 |
+
450,
|
| 68 |
+
292,
|
| 69 |
+
545,
|
| 70 |
+
306
|
| 71 |
+
],
|
| 72 |
+
"page_idx": 0
|
| 73 |
+
},
|
| 74 |
+
{
|
| 75 |
+
"type": "text",
|
| 76 |
+
"text": "Masked Autoencoder (MAE) is a prevailing self-supervised learning method that achieves promising results in model pre-training. However, when the various downstream tasks have data distributions different from the pre-training data, the semantically irrelevant pre-training information might result in negative transfer, impeding MAE's scalability. To address this issue, we propose a novel MAE-based pre-training paradigm, Mixture of Cluster-conditional Experts (MoCE), which can be trained once but provides customized pre-training models for diverse downstream tasks. Different from the mixture of experts (MoE), our MoCE trains each expert only with semantically relevant images by using cluster-conditional gates. Thus, each downstream task can be allocated to its customized model pretrained with data most similar to the downstream data. Experiments on a collection of 11 downstream tasks show that MoCE outperforms the vanilla MAE by $2.45\\%$ on average. It also obtains new state-of-the-art self-supervised learning results on detection and segmentation.",
|
| 77 |
+
"bbox": [
|
| 78 |
+
228,
|
| 79 |
+
324,
|
| 80 |
+
767,
|
| 81 |
+
518
|
| 82 |
+
],
|
| 83 |
+
"page_idx": 0
|
| 84 |
+
},
|
| 85 |
+
{
|
| 86 |
+
"type": "text",
|
| 87 |
+
"text": "1 INTRODUCTION",
|
| 88 |
+
"text_level": 1,
|
| 89 |
+
"bbox": [
|
| 90 |
+
173,
|
| 91 |
+
546,
|
| 92 |
+
336,
|
| 93 |
+
561
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "text",
|
| 99 |
+
"text": "Self-supervised learning (SSL), which learns effective transferable representations without human annotations, has become a prevailing model pre-training paradigm (He et al., 2020; Chen et al., 2021a; Bao et al., 2022). Currently, the most prevalent SSL method is the Masked Autoencoder (MAE) (He et al., 2022), which constructs supervision signals from raw image data by masking random input patches and then reconstructing the missing pixels. This simple strategy has proved efficient in the training of large-scale models. For example, ViT (Dosovitskiy et al., 2021) shows impressive performance on popular benchmarks such as the ImageNet $^{1}$ (Deng et al., 2009). However, does MAE really scale well for various downstream tasks (Deng et al., 2009; Lin et al., 2014; Zhou et al., 2019; Han et al., 2021; Li et al., 2022a)?",
|
| 100 |
+
"bbox": [
|
| 101 |
+
169,
|
| 102 |
+
578,
|
| 103 |
+
823,
|
| 104 |
+
705
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 0
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "Preliminary studies (in Section 3.1) show that the MAE indeed suffers from negative transfer (Liu et al., 2022) when transferring to downstream tasks with very different semantics. Figure 1(a) shows that on 9 of 11 downstream tasks, an MAE pre-trained on the full ImageNet data is outperformed by the one that is pre-trained on only the semantically relevant data subsets. Hence, using pre-training data that are semantically irrelevant can hurt transfer performance.",
|
| 111 |
+
"bbox": [
|
| 112 |
+
169,
|
| 113 |
+
710,
|
| 114 |
+
823,
|
| 115 |
+
781
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "text",
|
| 121 |
+
"text": "The above observation motivates the need for task-customized pre-training. A promising model for this is the Mixture of Experts (MoE) (Shazeer et al., 2017; Riquelme et al., 2021), which uses a multi-expert architecture to provide customized models for different input tokens. However, unlike supervised pre-training, self-supervised learning lacks semantic labels, and thus the experts differ more on low-level information than semantics, referring to Figure 1(b). Experiments in Section 4.2 show that a naive adoption of MoE to the MAE has inferior performance. Since various downstream tasks contain different semantics, semantic-related experts may be preferred.",
|
| 122 |
+
"bbox": [
|
| 123 |
+
169,
|
| 124 |
+
787,
|
| 125 |
+
826,
|
| 126 |
+
886
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "header",
|
| 132 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 133 |
+
"bbox": [
|
| 134 |
+
171,
|
| 135 |
+
32,
|
| 136 |
+
478,
|
| 137 |
+
47
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "page_footnote",
|
| 143 |
+
"text": "* Equal contribution.",
|
| 144 |
+
"bbox": [
|
| 145 |
+
192,
|
| 146 |
+
896,
|
| 147 |
+
315,
|
| 148 |
+
909
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "page_footnote",
|
| 154 |
+
"text": "<sup>1</sup>We refer to ImageNet-1K as ImageNet if not specified in this paper.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
192,
|
| 157 |
+
909,
|
| 158 |
+
598,
|
| 159 |
+
922
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 0
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "aside_text",
|
| 165 |
+
"text": "arXiv:2402.05382v1 [cs.CV] 8 Feb 2024",
|
| 166 |
+
"bbox": [
|
| 167 |
+
22,
|
| 168 |
+
268,
|
| 169 |
+
58,
|
| 170 |
+
700
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 0
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "page_number",
|
| 176 |
+
"text": "1",
|
| 177 |
+
"bbox": [
|
| 178 |
+
493,
|
| 179 |
+
948,
|
| 180 |
+
504,
|
| 181 |
+
959
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 0
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "image",
|
| 187 |
+
"img_path": "images/70039001948c6e50fddae03c36bcc83adb0f7cfba508fc721e4579087bb88bd3.jpg",
|
| 188 |
+
"image_caption": [
|
| 189 |
+
"(a) Negative transfer phenomenon on MAE."
|
| 190 |
+
],
|
| 191 |
+
"image_footnote": [],
|
| 192 |
+
"bbox": [
|
| 193 |
+
196,
|
| 194 |
+
119,
|
| 195 |
+
509,
|
| 196 |
+
273
|
| 197 |
+
],
|
| 198 |
+
"page_idx": 1
|
| 199 |
+
},
|
| 200 |
+
{
|
| 201 |
+
"type": "image",
|
| 202 |
+
"img_path": "images/fa4c2e72e388293cc1d715884d92f8d34f92da647863836acdceadff50aaa013.jpg",
|
| 203 |
+
"image_caption": [],
|
| 204 |
+
"image_footnote": [],
|
| 205 |
+
"bbox": [
|
| 206 |
+
563,
|
| 207 |
+
125,
|
| 208 |
+
687,
|
| 209 |
+
196
|
| 210 |
+
],
|
| 211 |
+
"page_idx": 1
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"type": "image",
|
| 215 |
+
"img_path": "images/a9ba000b0a8de61208e123721c0d7a87c2ca4ee136b92cd20f55a5c99376caf2.jpg",
|
| 216 |
+
"image_caption": [
|
| 217 |
+
"Figure 1: (a) Transfer performance of MAEs pre-trained on Split-A (blue), Split-B (red) and full ImageNet data (white). Only two of the eleven downstream tasks benefit from using the full ImageNet data for pre-training (more details in Section 3.1). (b) TokenMoE uses pixel RGB values as reconstruction targets. Thus, tokens with similar pixel values tend to be routed to the same expert, leading to two types of mistakes: (i) same semantics but routed to different experts, (ii) different semantics but routed to the same expert."
|
| 218 |
+
],
|
| 219 |
+
"image_footnote": [],
|
| 220 |
+
"bbox": [
|
| 221 |
+
563,
|
| 222 |
+
207,
|
| 223 |
+
687,
|
| 224 |
+
270
|
| 225 |
+
],
|
| 226 |
+
"page_idx": 1
|
| 227 |
+
},
|
| 228 |
+
{
|
| 229 |
+
"type": "image",
|
| 230 |
+
"img_path": "images/4102af891f3770b12c9850765da8c0b3545bc33b7744c030a833457a04ee1084.jpg",
|
| 231 |
+
"image_caption": [],
|
| 232 |
+
"image_footnote": [],
|
| 233 |
+
"bbox": [
|
| 234 |
+
697,
|
| 235 |
+
125,
|
| 236 |
+
805,
|
| 237 |
+
196
|
| 238 |
+
],
|
| 239 |
+
"page_idx": 1
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"type": "image",
|
| 243 |
+
"img_path": "images/c256325b1cd1ab32cf4b5b44c9da95f0fd291deac1c68e06e675b47f2c203ba7.jpg",
|
| 244 |
+
"image_caption": [],
|
| 245 |
+
"image_footnote": [],
|
| 246 |
+
"bbox": [
|
| 247 |
+
694,
|
| 248 |
+
202,
|
| 249 |
+
807,
|
| 250 |
+
233
|
| 251 |
+
],
|
| 252 |
+
"page_idx": 1
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"type": "image",
|
| 256 |
+
"img_path": "images/dd29b18a7ced5c1eac55f840a57cd590d1aa38ffd9015851c30a3d8b571fb790.jpg",
|
| 257 |
+
"image_caption": [
|
| 258 |
+
"(b) Problem with TokenMoE."
|
| 259 |
+
],
|
| 260 |
+
"image_footnote": [],
|
| 261 |
+
"bbox": [
|
| 262 |
+
694,
|
| 263 |
+
238,
|
| 264 |
+
807,
|
| 265 |
+
268
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"text": "In this paper, we propose the Mixture of Cluster-conditional Expert (MoCE), a novel paradigm to achieve task-customized self-supervised pre-training by data clustering and explicitly training each expert with images of similar semantics. The MoCE procedure has three stages. First, we cluster the whole dataset by using a pre-trained, dense MAE model. We then construct the MoCE with a multi-expert structure. Each expert is trained using clusters selected by routing tokens based on cluster embedding (instead of token embedding). To stabilize training and enhance confidence of the gate results, a regularization loss is proposed. Finally, with the arrival of a downstream task, we propose a search procedure to select the closest cluster. Empirically, the proposed MoCE shows superior performance over MAE on a collection of 11 downstream tasks. Besides, one can use only a MoCE sub-model on deployment, thus saving inference time and model capacity.",
|
| 272 |
+
"bbox": [
|
| 273 |
+
169,
|
| 274 |
+
419,
|
| 275 |
+
826,
|
| 276 |
+
559
|
| 277 |
+
],
|
| 278 |
+
"page_idx": 1
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"text": "To summarize, our main contributions are:",
|
| 283 |
+
"bbox": [
|
| 284 |
+
171,
|
| 285 |
+
565,
|
| 286 |
+
455,
|
| 287 |
+
579
|
| 288 |
+
],
|
| 289 |
+
"page_idx": 1
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "list",
|
| 293 |
+
"sub_type": "text",
|
| 294 |
+
"list_items": [
|
| 295 |
+
"1. We systematically analyze the negative transfer phenomenon of MAE, and show that naively adopting the MoE to MAE cannot improve transfer performance of downstream tasks.",
|
| 296 |
+
"2. We propose the MoCE, which trains each expert with semantics-aware clusters so that similar clusters can be routed to the same expert.",
|
| 297 |
+
"3. We demonstrate effectiveness of the proposed MoCE on a collection of 11 downstream tasks, and achieve up to $2.45\\%$ performance improvement in Top-1 accuracy. State-of-the-art self-supervised results are also achieved on the detection and segmentation tasks. To the best of our knowledge, this is the first work that achieves state-of-the-art transfer performance by training vision MoE models with ImageNet under the SSL setting."
|
| 298 |
+
],
|
| 299 |
+
"bbox": [
|
| 300 |
+
207,
|
| 301 |
+
592,
|
| 302 |
+
826,
|
| 303 |
+
744
|
| 304 |
+
],
|
| 305 |
+
"page_idx": 1
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"type": "text",
|
| 309 |
+
"text": "2 RELATED WORK",
|
| 310 |
+
"text_level": 1,
|
| 311 |
+
"bbox": [
|
| 312 |
+
171,
|
| 313 |
+
766,
|
| 314 |
+
344,
|
| 315 |
+
781
|
| 316 |
+
],
|
| 317 |
+
"page_idx": 1
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"type": "text",
|
| 321 |
+
"text": "Self-supervised Learning. Previous works mainly focus on the design of pretext tasks with image transformations (Doersch et al., 2015; Gidaris et al., 2018), inpainting (Pathak et al., 2016), colorization (Zhang et al., 2016), contrastive learning (Chen et al., 2020; He et al., 2020; Grill et al., 2020; Caron et al., 2020; Radford et al., 2021b; Yao et al., 2022b), and for specific downstream tasks (Wang et al., 2020; Xie et al., 2020; 2021a; Chen et al., 2021a; Yao et al., 2022a). Motivated by the design of BERT (Devlin et al., 2018), masked image modeling (MIM) is recently proposed to learn by reconstructing masked images. BEiT (Bao et al., 2022) is the pioneering work that predicts visual tokens generated by a pre-trained tokenizer (Radford et al., 2021a). SimMIM (Xie et al., 2021c) simplifies the framework by directly utilizing the pixel RGB values as reconstruction targets.",
|
| 322 |
+
"bbox": [
|
| 323 |
+
169,
|
| 324 |
+
797,
|
| 325 |
+
826,
|
| 326 |
+
925
|
| 327 |
+
],
|
| 328 |
+
"page_idx": 1
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"type": "header",
|
| 332 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 333 |
+
"bbox": [
|
| 334 |
+
171,
|
| 335 |
+
32,
|
| 336 |
+
478,
|
| 337 |
+
47
|
| 338 |
+
],
|
| 339 |
+
"page_idx": 1
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"type": "page_number",
|
| 343 |
+
"text": "2",
|
| 344 |
+
"bbox": [
|
| 345 |
+
493,
|
| 346 |
+
946,
|
| 347 |
+
504,
|
| 348 |
+
959
|
| 349 |
+
],
|
| 350 |
+
"page_idx": 1
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"type": "text",
|
| 354 |
+
"text": "MAE (He et al., 2022) proposes an asymmetric encoder-decoder architecture for better training efficiency. MixedAE (Chen et al., 2023) further explores image mixing for object-aware pre-training. In this paper, we will focus on the MAE due to its effectiveness and efficiency.",
|
| 355 |
+
"bbox": [
|
| 356 |
+
169,
|
| 357 |
+
103,
|
| 358 |
+
823,
|
| 359 |
+
147
|
| 360 |
+
],
|
| 361 |
+
"page_idx": 2
|
| 362 |
+
},
|
| 363 |
+
{
|
| 364 |
+
"type": "text",
|
| 365 |
+
"text": "While self-supervised learning methods have achieved improved transfer performance, most of them only provide a unified representation to various downstream tasks. This may suffer from negative transfer as demonstrated in Section 3.1. The work most relevant to ours is SDR (Liu et al., 2022), which trains 256 subnets with 256 disjoint ImageNet subsets simultaneously. However, this paper differs from SDR in three ways: (i) the mapping from subsets to subnets in SDR is randomly selected and fixed during pre-training, while MoCE achieves self-adaptive mapping with cluster-conditional gates; (ii) Progressive training is required in SDR, while MoCE enjoys one-time end-to-end training; (iii) During the transfer process, SDR uses brute force to select the best sub-model, while MoCE reuses the clustering module to achieve more efficient selection.",
|
| 366 |
+
"bbox": [
|
| 367 |
+
169,
|
| 368 |
+
152,
|
| 369 |
+
826,
|
| 370 |
+
280
|
| 371 |
+
],
|
| 372 |
+
"page_idx": 2
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"type": "text",
|
| 376 |
+
"text": "Mixture of Experts. The mixture of experts (MoE) has a long history (Jacobs et al., 1991; Jordan & Jacobs, 1994; Shazeer et al., 2017). Recently, it is considered as an effective tool for model scale-up in natural language processing (Lepikhin et al., 2020; Fedus et al., 2021; Yang et al., 2021; Lewis et al., 2021). With the growing interest of the Vision Transformer (Dosovitskiy et al., 2021; Liu et al., 2021; Wang et al., 2021; Xie et al., 2021b), MoE for vision (Riquelme et al., 2021; Wu et al., 2022) is also explored recently. However, there is still no self-supervised MoE model that can be trained on medium-sized datasets such as the ImageNet-1k.",
|
| 377 |
+
"bbox": [
|
| 378 |
+
169,
|
| 379 |
+
294,
|
| 380 |
+
823,
|
| 381 |
+
393
|
| 382 |
+
],
|
| 383 |
+
"page_idx": 2
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"type": "text",
|
| 387 |
+
"text": "Kudugunta et al. (2021); Ma et al. (2018) regard the MoE as a multi-task learning model, and use it for multi-language translation and recommendation systems, respectively. In this paper, we show that for self-supervised learning on images, an additional clustering component is crucial in the learning of a highly performant MoE model. Moreover, while the downstream tasks should follow the pre-training task in (Kudugunta et al., 2021; Ma et al., 2018), the MoCE can be used with any downstream task due to its unsupervised pre-training. Puigcerver et al. (2020) shares a similar setting with us, but their model is pre-trained in a supervised learning manner. Moreover, their mapping between experts and data is pre-defined and fixed during training, while that for the MoCE is learned dynamically and achieves better performance.",
|
| 388 |
+
"bbox": [
|
| 389 |
+
169,
|
| 390 |
+
398,
|
| 391 |
+
826,
|
| 392 |
+
526
|
| 393 |
+
],
|
| 394 |
+
"page_idx": 2
|
| 395 |
+
},
|
| 396 |
+
{
|
| 397 |
+
"type": "text",
|
| 398 |
+
"text": "Multi-Task Learning aims to learn a model that is appropriate for multiple tasks. Hard-parameter sharing, which uses a shared backbone with multi-heads for the different tasks, has been shown to be effective on time series, language and graph data (Liu et al., 2019; Hu et al., 2019; McDermott et al., 2021). Gao et al. (2021) claims that the network design may further benefit from the use of task relationships, and trains masks for different tasks. However, they require the task information be available during model training, which is not possible for downstream tasks in SSL pre-training.",
|
| 399 |
+
"bbox": [
|
| 400 |
+
169,
|
| 401 |
+
540,
|
| 402 |
+
826,
|
| 403 |
+
625
|
| 404 |
+
],
|
| 405 |
+
"page_idx": 2
|
| 406 |
+
},
|
| 407 |
+
{
|
| 408 |
+
"type": "text",
|
| 409 |
+
"text": "3 PROPOSED METHOD",
|
| 410 |
+
"text_level": 1,
|
| 411 |
+
"bbox": [
|
| 412 |
+
171,
|
| 413 |
+
645,
|
| 414 |
+
377,
|
| 415 |
+
660
|
| 416 |
+
],
|
| 417 |
+
"page_idx": 2
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"type": "text",
|
| 421 |
+
"text": "In this section, we first empirically demonstrate the negative transfer phenomenon in MAE (Section 3.1). We then discuss the limitations of adopting TokenMoE (Riquelme et al., 2021) with MAE (Section 3.2), and propose the Mixture of Cluster-conditional Experts (MoCE), a novel paradigm achieving customized pre-training for various downstream tasks (Section 3.3).",
|
| 422 |
+
"bbox": [
|
| 423 |
+
169,
|
| 424 |
+
676,
|
| 425 |
+
823,
|
| 426 |
+
734
|
| 427 |
+
],
|
| 428 |
+
"page_idx": 2
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"type": "text",
|
| 432 |
+
"text": "3.1 NEGATIVE TRANSFER IN MASKED AUTOENCODER",
|
| 433 |
+
"text_level": 1,
|
| 434 |
+
"bbox": [
|
| 435 |
+
171,
|
| 436 |
+
751,
|
| 437 |
+
568,
|
| 438 |
+
765
|
| 439 |
+
],
|
| 440 |
+
"page_idx": 2
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"type": "text",
|
| 444 |
+
"text": "In this section, we evaluate the transfer performance of MAE models pre-trained with data of different semantics on various downstream tasks. As in (Huh et al., 2016; Liu et al., 2022), we first split the ImageNet data into two disjoint subsets, Split-A and Split-B, based on the labels' semantic dissimilarities in the WordNet tree (Miller, 1998). Split-A mainly contains inanimate objects (such as cars and airplanes), while Split-B primarily involves organisms (such as plants and animals). We then pre-train MAEs on Split-A, Split-B and the full ImageNet without data annotation, and evaluate the three resulting models on 11 downstream tasks. See more implementation details in Section 4.1.",
|
| 445 |
+
"bbox": [
|
| 446 |
+
169,
|
| 447 |
+
777,
|
| 448 |
+
823,
|
| 449 |
+
876
|
| 450 |
+
],
|
| 451 |
+
"page_idx": 2
|
| 452 |
+
},
|
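The WordNet-based label split described in the extracted text above can be approximated in a few lines. Below is a minimal sketch, assuming `nltk` with the WordNet corpus installed; the synset IDs and the fallback rule are illustrative, not the paper's exact procedure.

```python
# Sketch: partition ImageNet-style synset IDs into Split-A (inanimate
# objects) and Split-B (organisms) via their WordNet hypernym closure.
# Illustrative only; the paper's actual split may differ.
from nltk.corpus import wordnet as wn

ORGANISM = wn.synset('organism.n.01')

def split_of(wnid: str) -> str:
    """Map a synset ID like 'n02084071' to 'A' (inanimate) or 'B' (organism)."""
    syn = wn.synset_from_pos_and_offset('n', int(wnid[1:]))
    ancestors = set(syn.closure(lambda s: s.hypernyms()))
    return 'B' if ORGANISM in ancestors else 'A'

print(split_of('n02084071'))  # dog      -> 'B'
print(split_of('n02690373'))  # airliner -> 'A'
```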
| 453 |
+
{
|
| 454 |
+
"type": "text",
|
| 455 |
+
"text": "As shown in Figure 1(a), the MAE pre-trained with Split-A performs best on Aircraft, Cars, SUN397 and DTD, while the MAE pre-trained with Split-B performs best on Flowers, Food, Pets, Cifar-10 and Cifar-100. Only two of the eleven tasks (Caltech and VOC) benefit from using the full",
|
| 456 |
+
"bbox": [
|
| 457 |
+
169,
|
| 458 |
+
881,
|
| 459 |
+
826,
|
| 460 |
+
925
|
| 461 |
+
],
|
| 462 |
+
"page_idx": 2
|
| 463 |
+
},
|
| 464 |
+
{
|
| 465 |
+
"type": "header",
|
| 466 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 467 |
+
"bbox": [
|
| 468 |
+
171,
|
| 469 |
+
32,
|
| 470 |
+
478,
|
| 471 |
+
47
|
| 472 |
+
],
|
| 473 |
+
"page_idx": 2
|
| 474 |
+
},
|
| 475 |
+
{
|
| 476 |
+
"type": "page_number",
|
| 477 |
+
"text": "3",
|
| 478 |
+
"bbox": [
|
| 479 |
+
493,
|
| 480 |
+
948,
|
| 481 |
+
504,
|
| 482 |
+
959
|
| 483 |
+
],
|
| 484 |
+
"page_idx": 2
|
| 485 |
+
},
|
| 486 |
+
{
|
| 487 |
+
"type": "image",
|
| 488 |
+
"img_path": "images/d62545465c85e08393fb737c528cb43f016a5991379a482d43f80c8f6757b789.jpg",
|
| 489 |
+
"image_caption": [
|
| 490 |
+
"Figure 2: Model design comparison between (a) TokenMoE (Riquelme et al., 2021) and (b) MoCE. Both methods utilize the multi-expert architecture with the main difference about the input of the gating network. MoCE adopts the corresponding cluster embedding of the current token as in Eqn. 4, instead of the token embedding in Eqn. 3.2. Therefore, each expert can be trained by semantically similar images to alleviate the negative transfer phenomenon."
|
| 491 |
+
],
|
| 492 |
+
"image_footnote": [],
|
| 493 |
+
"bbox": [
|
| 494 |
+
174,
|
| 495 |
+
99,
|
| 496 |
+
444,
|
| 497 |
+
219
|
| 498 |
+
],
|
| 499 |
+
"page_idx": 3
|
| 500 |
+
},
|
| 501 |
+
{
|
| 502 |
+
"type": "image",
|
| 503 |
+
"img_path": "images/b3849d47479070c4bb497dfc148013eeebcf2399ba7e447b193c232ed2db485b.jpg",
|
| 504 |
+
"image_caption": [],
|
| 505 |
+
"image_footnote": [],
|
| 506 |
+
"bbox": [
|
| 507 |
+
452,
|
| 508 |
+
99,
|
| 509 |
+
820,
|
| 510 |
+
219
|
| 511 |
+
],
|
| 512 |
+
"page_idx": 3
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"type": "text",
|
| 516 |
+
"text": "ImageNet data. This suggests that for tasks whose semantics are close to inanimate objects, adding pre-training data from Split-B is not useful, and vice versa for tasks whose semantics are close to organisms. To conclude, the introduction of semantically irrelevant pre-training data may impede transfer performance for downstream tasks. This negative transfer phenomenon motivates us to develop an efficient and automatic paradigm for task-customized pre-training.",
|
| 517 |
+
"bbox": [
|
| 518 |
+
169,
|
| 519 |
+
332,
|
| 520 |
+
823,
|
| 521 |
+
402
|
| 522 |
+
],
|
| 523 |
+
"page_idx": 3
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"type": "text",
|
| 527 |
+
"text": "3.2 EXPLORINGTOKENMOE TO MASKED AUTOENCODER",
|
| 528 |
+
"text_level": 1,
|
| 529 |
+
"bbox": [
|
| 530 |
+
171,
|
| 531 |
+
421,
|
| 532 |
+
591,
|
| 533 |
+
435
|
| 534 |
+
],
|
| 535 |
+
"page_idx": 3
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"text": "Overview of TokenMoE. TokenMoE (Riquelme et al., 2021) is a successful customized supervised pre-training model built upon the ViT (Dosovitskiy et al., 2021), which mainly consists of transformer blocks with alternating multi-head self-attention (MSA) and multi-layer perceptron (MLP). Specifically, the TokenMoE converts several transformer blocks to Mixture of Expert (MoE) blocks by expanding the MLP layer $N$ times, each of them is considered as an expert (denoted as $E_{i}(\\cdot)$ , $i = 1,2,\\dots,N$ ). Conditional computation on the $N$ experts is controlled by a gate, which is a linear layer whose input is the token embedding $\\pmb{x}$ , and the output is the top- $K$ probabilities on the experts: $G(\\pmb{x}) = TopK(\\sigma(\\pmb{W}_{g}\\pmb{x} + \\epsilon))$ , where $K$ is the number of experts to be activated, $\\pmb{W}_{g}$ is the gate parameter, $\\sigma$ is the softmax function, and $\\epsilon \\sim \\mathcal{N}(0,\\frac{1}{N})$ . $TopK(\\cdot)$ returns the $K$ largest entries of $\\sigma(\\pmb{W}_{g}\\pmb{x} + \\epsilon)$ unchanged but set the others to zero. Thus, each token is routed to its corresponding experts. The final output is represented as",
|
| 540 |
+
"bbox": [
|
| 541 |
+
169,
|
| 542 |
+
448,
|
| 543 |
+
823,
|
| 544 |
+
603
|
| 545 |
+
],
|
| 546 |
+
"page_idx": 3
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "equation",
|
| 550 |
+
"text": "\n$$\n\\mathbf {y} = \\sum_ {i = 1} ^ {N} [ G (\\boldsymbol {x}) ] _ {i} E _ {i} (\\boldsymbol {x}). \\tag {1}\n$$\n",
|
| 551 |
+
"text_format": "latex",
|
| 552 |
+
"bbox": [
|
| 553 |
+
416,
|
| 554 |
+
613,
|
| 555 |
+
823,
|
| 556 |
+
655
|
| 557 |
+
],
|
| 558 |
+
"page_idx": 3
|
| 559 |
+
},
|
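To make the gating rule and Eqn. 1 concrete, here is a minimal PyTorch sketch; the module names, shapes, and the dense (non-dispatched) evaluation are our own simplifications, not the authors' released code.

```python
# Sketch of TokenMoE-style noisy top-K gating and the mixture in Eqn. 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    def __init__(self, dim, num_experts=8, k=1):
        super().__init__()
        self.w_g = nn.Linear(dim, num_experts, bias=False)  # W_g
        self.num_experts, self.k = num_experts, k

    def forward(self, x):                        # x: (tokens, dim)
        logits = self.w_g(x)
        if self.training:                        # eps ~ N(0, 1/N)
            logits = logits + torch.randn_like(logits) / self.num_experts ** 0.5
        probs = F.softmax(logits, dim=-1)
        topk, idx = probs.topk(self.k, dim=-1)
        return torch.zeros_like(probs).scatter_(-1, idx, topk)  # sparse G(x)

class MoEBlock(nn.Module):
    def __init__(self, dim, hidden, num_experts=8, k=1):
        super().__init__()
        self.gate = TopKGate(dim, num_experts, k)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts))

    def forward(self, x):                        # dense evaluation for clarity;
        g = self.gate(x)                         # a real MoE dispatches sparsely
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (t, d, N)
        return torch.einsum('tdn,tn->td', outs, g)  # y = sum_i [G(x)]_i E_i(x)
```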
| 560 |
+
{
|
| 561 |
+
"type": "text",
|
| 562 |
+
"text": "As in (Riquelme et al., 2021), importance loss and load loss are also used to enforce a balanced use of the experts. Unless otherwise specified, we set $K = 1$ and $N = 8$ in all our experiments.",
|
| 563 |
+
"bbox": [
|
| 564 |
+
169,
|
| 565 |
+
662,
|
| 566 |
+
823,
|
| 567 |
+
693
|
| 568 |
+
],
|
| 569 |
+
"page_idx": 3
|
| 570 |
+
},
|
| 571 |
+
{
|
| 572 |
+
"type": "text",
|
| 573 |
+
"text": "Limitation of TokenMoE. As will be shown in the experimental results (Table 3), naively adopting TokenMoE to the MAE cannot improve performance, even with intense hyper-parameter tuning and data augmentations (e.g., Repeat Augment (Hoffer et al., 2020) and RandAugment (Cubuk et al., 2020) with larger magnitude). Figure 3(a) shows the routing heatmaps of the pre-trained TokenMoE model. As can be seen, the routing process has little correlation with the ImageNet labels. Moreover, expert 3 is selected most of the time (91.8% of the classes). This degenerates the multi-expert network into a single-expert network. As demonstrated in Figure 1(b), we speculate that this is due to the use of low-level pixel values (instead of semantic class labels in the original TokenMoE) as reconstruction targets. This is also observed in Li et al. (2022b).",
|
| 574 |
+
"bbox": [
|
| 575 |
+
169,
|
| 576 |
+
709,
|
| 577 |
+
823,
|
| 578 |
+
835
|
| 579 |
+
],
|
| 580 |
+
"page_idx": 3
|
| 581 |
+
},
|
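The routing heatmaps discussed above (and in Figure 3) can be built by counting, for every class, the fraction of its tokens routed to each expert. A short sketch with our own variable names:

```python
# Sketch: (num_classes, num_experts) routing heatmap from per-token records.
import numpy as np

def routing_heatmap(class_ids, expert_ids, num_classes, num_experts):
    """class_ids, expert_ids: 1-D integer arrays, one entry per routed token."""
    H = np.zeros((num_classes, num_experts))
    np.add.at(H, (class_ids, expert_ids), 1.0)   # count tokens per (class, expert)
    return H / H.sum(axis=1, keepdims=True)      # rows: per-class proportions
```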
| 582 |
+
{
|
| 583 |
+
"type": "text",
|
| 584 |
+
"text": "3.3 MIXTURE OF CLUSTER-CONDITIONAL EXPERTS",
|
| 585 |
+
"text_level": 1,
|
| 586 |
+
"bbox": [
|
| 587 |
+
171,
|
| 588 |
+
854,
|
| 589 |
+
547,
|
| 590 |
+
868
|
| 591 |
+
],
|
| 592 |
+
"page_idx": 3
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "text",
|
| 596 |
+
"text": "To address the limitations of TokenMoE, we propose the Mixture of Cluster-conditional Experts (MoCE), which trains each expert in a semantic-aware manner. The procedure consists of data clustering, architecture and gate design, and deployment.",
|
| 597 |
+
"bbox": [
|
| 598 |
+
169,
|
| 599 |
+
881,
|
| 600 |
+
823,
|
| 601 |
+
925
|
| 602 |
+
],
|
| 603 |
+
"page_idx": 3
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"type": "header",
|
| 607 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 608 |
+
"bbox": [
|
| 609 |
+
171,
|
| 610 |
+
32,
|
| 611 |
+
478,
|
| 612 |
+
47
|
| 613 |
+
],
|
| 614 |
+
"page_idx": 3
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "page_number",
|
| 618 |
+
"text": "4",
|
| 619 |
+
"bbox": [
|
| 620 |
+
493,
|
| 621 |
+
948,
|
| 622 |
+
503,
|
| 623 |
+
959
|
| 624 |
+
],
|
| 625 |
+
"page_idx": 3
|
| 626 |
+
},
|
| 627 |
+
{
|
| 628 |
+
"type": "text",
|
| 629 |
+
"text": "Data Clustering. To train each expert semantically, a clustering procedure is first performed to simulate the label partitioning in Section 3.1. With a pre-trained MAE model, we collect all the image features $\\pmb{f}_i$ 's (normalized to unit length $\\| \\pmb{f}_i \\| = 1$ ), and represent the feature matrix as $\\pmb{F} = [f_1, f_2, \\dots, f_n] \\in \\mathbb{R}^{d \\times n}$ , where $n$ is the number of images and $d$ is the dimension of the feature. The learnable cluster centroids are represented as $\\pmb{C} = [c_1, c_2, \\dots, c_m] \\in \\mathbb{R}^{d \\times m}$ , (with $\\| c_i \\| = 1$ ) where $m$ is the desired number of clusters. The assignment of feature to clusters is computed as $\\pmb{A} = \\pmb{F}^T \\pmb{C}$ . Following Asano et al. (2019), let $Q \\in \\mathbb{R}^{m \\times n}$ be the posterior distribution of clustering, whose objective is",
|
| 630 |
+
"bbox": [
|
| 631 |
+
169,
|
| 632 |
+
103,
|
| 633 |
+
826,
|
| 634 |
+
218
|
| 635 |
+
],
|
| 636 |
+
"page_idx": 4
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "equation",
|
| 640 |
+
"text": "\n$$\n\\max _ {\\boldsymbol {Q}} \\operatorname {T r} \\left(\\boldsymbol {Q} ^ {T} \\boldsymbol {A}\\right) + \\epsilon H (\\boldsymbol {Q}) \\quad s. t. \\quad \\boldsymbol {Q} \\mathbf {1} _ {n} = \\frac {1}{m} \\mathbf {1} _ {m}, \\quad \\boldsymbol {Q} ^ {T} \\mathbf {1} _ {m} = \\frac {1}{n} \\mathbf {1} _ {n}, \\tag {2}\n$$\n",
|
| 641 |
+
"text_format": "latex",
|
| 642 |
+
"bbox": [
|
| 643 |
+
272,
|
| 644 |
+
223,
|
| 645 |
+
823,
|
| 646 |
+
253
|
| 647 |
+
],
|
| 648 |
+
"page_idx": 4
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"type": "text",
|
| 652 |
+
"text": "where $\\mathbf{1}_m$ is the $m$ -dimensional vector of all ones, $H$ is the entropy function, and the constraints force the clustering results to be balanced. $Q$ and $C$ are optimized iteratively. For a given $C$ , $Q$ is solved by the Sinkhorn-Knopp algorithm (Cuturi, 2013); while for a given $Q$ , $C$ is obtained by minimizing the cross entropy between $Q$ and $A$ with SGD. We take the final $C$ and $Q$ as the cluster centroids and clustering assignments, respectively. The implementation details are in Appendix A.1.",
|
| 653 |
+
"bbox": [
|
| 654 |
+
169,
|
| 655 |
+
262,
|
| 656 |
+
823,
|
| 657 |
+
333
|
| 658 |
+
],
|
| 659 |
+
"page_idx": 4
|
| 660 |
+
},
|
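The optimization in Eqn. 2 admits a fast Sinkhorn-Knopp solution for $Q$. Below is a minimal numpy sketch under our own naming; hyper-parameters such as `eps` and the iteration count are assumptions, not the paper's values.

```python
# Sketch of the Sinkhorn-Knopp step for Eqn. 2: balanced soft assignment Q
# of n unit-norm features F (d x n) to m unit-norm centroids C (d x m).
import numpy as np

def sinkhorn(A, eps=0.05, n_iters=3):
    """A = F.T @ C, shape (n, m); returns Q (m, n) with balanced marginals."""
    Q = np.exp(A.T / eps)            # entropic kernel
    Q /= Q.sum()
    m, n = Q.shape
    for _ in range(n_iters):         # alternate row/column normalization
        Q *= (1.0 / m) / Q.sum(axis=1, keepdims=True)   # rows sum to 1/m
        Q *= (1.0 / n) / Q.sum(axis=0, keepdims=True)   # cols sum to 1/n
    return Q                         # approximately doubly "balanced"

rng = np.random.default_rng(0)
F = rng.normal(size=(128, 1000)); F /= np.linalg.norm(F, axis=0)
C = rng.normal(size=(128, 16));   C /= np.linalg.norm(C, axis=0)
Q = sinkhorn(F.T @ C)
assign = Q.argmax(axis=0)            # hard cluster id per image
```

Given this $Q$, the centroids $C$ are then refined by gradient descent, and the two steps alternate as described in the extracted text.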
| 661 |
+
{
|
| 662 |
+
"type": "text",
|
| 663 |
+
"text": "Architecture. The whole network is trained on the full ImageNet data, with each expert trained by images from selected clusters decided by the MoCE gates' routing results. As on average each data cluster has only a fraction of $1 / K$ of the original sample size, the training time of each expert is also $K$ times shorter than the other parameters with dense modeling (e.g., MSA parameters (Riquelme et al., 2021)), we further adopt a distillation loss $\\mathcal{L}_{distill}$ , which is defined as the $\\ell_2$ distance between the features generated by the whole network and each expert. This loss function can be formulated as",
|
| 664 |
+
"bbox": [
|
| 665 |
+
169,
|
| 666 |
+
348,
|
| 667 |
+
823,
|
| 668 |
+
444
|
| 669 |
+
],
|
| 670 |
+
"page_idx": 4
|
| 671 |
+
},
|
| 672 |
+
{
|
| 673 |
+
"type": "equation",
|
| 674 |
+
"text": "\n$$\n\\min _ {\\boldsymbol {\\theta}} \\sum_ {i = 1} ^ {m} \\mathcal {L} _ {M A E} \\left(D _ {i}; \\boldsymbol {\\theta} _ {i}\\right) + \\mathcal {L} _ {\\text {d i s t i l l}}, \\tag {3}\n$$\n",
|
| 675 |
+
"text_format": "latex",
|
| 676 |
+
"bbox": [
|
| 677 |
+
380,
|
| 678 |
+
445,
|
| 679 |
+
823,
|
| 680 |
+
484
|
| 681 |
+
],
|
| 682 |
+
"page_idx": 4
|
| 683 |
+
},
|
| 684 |
+
{
|
| 685 |
+
"type": "text",
|
| 686 |
+
"text": "where $D_{i}$ is the $i$ th cluster, $\\theta_{i}$ is the parameter used for training $D_{i}$ , and $\\mathcal{L}_{MAE}(D_i;\\theta_i)$ is the reconstruction loss for masked image modeling. $\\theta_{i}$ consists of several experts in different layers, as explained in the following.",
|
| 687 |
+
"bbox": [
|
| 688 |
+
169,
|
| 689 |
+
488,
|
| 690 |
+
823,
|
| 691 |
+
532
|
| 692 |
+
],
|
| 693 |
+
"page_idx": 4
|
| 694 |
+
},
|
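A hedged sketch of a training step implementing Eqn. 3; the `model` interface returning the reconstruction loss together with full-network and expert features is our assumption, not the released code.

```python
# Sketch of the MoCE objective (Eqn. 3): per-cluster MAE reconstruction plus
# feature distillation from the full network to the activated expert path.
import torch.nn.functional as F

def moce_step(model, imgs, cluster_ids, distill_weight=1.0):
    # Hypothetical interface: returns the masked-reconstruction loss and the
    # features produced by the whole network and by the routed experts only.
    loss_mae, feat_full, feat_expert = model(imgs, cluster_ids)
    loss_distill = F.mse_loss(feat_expert, feat_full.detach())  # l2 distance
    return loss_mae + distill_weight * loss_distill
```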
| 695 |
+
{
|
| 696 |
+
"type": "text",
|
| 697 |
+
"text": "Gate Design. As in the TokenMoE, we replace several MLP layers in the ViT with layers equipped with MoCE gates. In TokenMoE, routings of the tokens to experts are considered separately. In MoCE, we route tokens from images of the same cluster to the same expert. The MoCE gate output can thus be written as",
|
| 698 |
+
"bbox": [
|
| 699 |
+
169,
|
| 700 |
+
547,
|
| 701 |
+
823,
|
| 702 |
+
603
|
| 703 |
+
],
|
| 704 |
+
"page_idx": 4
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"type": "equation",
|
| 708 |
+
"text": "\n$$\nG (\\mathbf {x}) = \\operatorname {T o p} K \\left(\\sigma \\left(\\boldsymbol {W} _ {g} \\cdot \\boldsymbol {C} _ {[ \\mathbf {x} ]} + \\epsilon\\right)\\right), \\tag {4}\n$$\n",
|
| 709 |
+
"text_format": "latex",
|
| 710 |
+
"bbox": [
|
| 711 |
+
380,
|
| 712 |
+
604,
|
| 713 |
+
823,
|
| 714 |
+
622
|
| 715 |
+
],
|
| 716 |
+
"page_idx": 4
|
| 717 |
+
},
|
| 718 |
+
{
|
| 719 |
+
"type": "text",
|
| 720 |
+
"text": "where $W_{g}$ is the gate parameter, and $C_{[\\mathbf{x}]}$ is the embedding of the cluster that $\\mathbf{x}$ belongs to. Empirically, we find that the confidence of $G(\\mathbf{x})$ (the max entry) is low and consequently, the mapping between clusters and experts varies a lot during pre-training. Inspired by the importance and load losses (Riquelme et al., 2021), we add the following loss $\\mathcal{L}_{imbalance}$ to enhance the confidence of the gates. Since it makes $G(\\mathbf{x})$ shaper, we call it imbalance loss.",
|
| 721 |
+
"bbox": [
|
| 722 |
+
169,
|
| 723 |
+
625,
|
| 724 |
+
823,
|
| 725 |
+
698
|
| 726 |
+
],
|
| 727 |
+
"page_idx": 4
|
| 728 |
+
},
|
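The only change from the token-level gate to Eqn. 4 is the gate input; a sketch under our own naming, in which every token of an image inherits the decision made for the image's cluster centroid:

```python
# Sketch of the cluster-conditional gate (Eqn. 4); illustrative only.
import torch

def moce_gate(w_g, centroids, cluster_ids, k=1, training=True):
    """w_g: (N, dim) gate weights; centroids: (m, dim); cluster_ids: (batch,)."""
    logits = centroids[cluster_ids] @ w_g.T               # (batch, N)
    if training:                                          # eps ~ N(0, 1/N)
        logits = logits + torch.randn_like(logits) / logits.shape[-1] ** 0.5
    probs = torch.softmax(logits, dim=-1)
    topk, idx = probs.topk(k, dim=-1)
    return torch.zeros_like(probs).scatter_(-1, idx, topk)  # one gate per image
```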
| 729 |
+
{
|
| 730 |
+
"type": "equation",
|
| 731 |
+
"text": "\n$$\n\\mathcal {L} _ {\\text {i m b a l a n c e}} = - \\sum_ {i = 1} ^ {n} \\left(\\frac {\\operatorname {s t d} (G (\\mathbf {x}) _ {i})}{\\operatorname {m e a n} (G (\\mathbf {x}) _ {i})}\\right) ^ {2}, \\tag {5}\n$$\n",
|
| 732 |
+
"text_format": "latex",
|
| 733 |
+
"bbox": [
|
| 734 |
+
359,
|
| 735 |
+
705,
|
| 736 |
+
823,
|
| 737 |
+
747
|
| 738 |
+
],
|
| 739 |
+
"page_idx": 4
|
| 740 |
+
},
|
| 741 |
+
{
|
| 742 |
+
"type": "text",
|
| 743 |
+
"text": "For practical implementation, the loss is calculated over the samples in a batch. The imbalance loss penalizes on the negative variance of the gate confidence.",
|
| 744 |
+
"bbox": [
|
| 745 |
+
169,
|
| 746 |
+
753,
|
| 747 |
+
823,
|
| 748 |
+
782
|
| 749 |
+
],
|
| 750 |
+
"page_idx": 4
|
| 751 |
+
},
|
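Under one plausible reading of Eqn. 5 (std and mean taken over the expert dimension, summed over the batch; this axis convention is our assumption), the imbalance loss is a one-liner:

```python
# Sketch of the imbalance loss (Eqn. 5); G: (batch, num_experts) gate
# probabilities. The choice of std/mean axes is our assumption.
import torch

def imbalance_loss(G, eps=1e-8):
    cv2 = (G.std(dim=-1) / (G.mean(dim=-1) + eps)) ** 2  # (std/mean)^2 per sample
    return -cv2.sum()  # minimizing rewards high relative variance, i.e. sharp gates
```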
| 752 |
+
{
|
| 753 |
+
"type": "text",
|
| 754 |
+
"text": "**Deployment.** On deployment, customized experts are selected from MoCE, and fine-tuned for each downstream task. As shown in Section 3.1, we prefer to use the experts that is pre-trained from data whose semantics is closest to that of the downstream task. This can be obtained by reusing the data clustering module. Specifically, we feed images for the downstream task through the pre-trained MAE model and collect all the image features as $\\boldsymbol{F}_{task}$ . The assignment of downstream images to the clusters is then computed as $\\boldsymbol{A}_{task} = \\boldsymbol{F}_{task}^T\\boldsymbol{C}$ . We select the largest cluster with assigned downstream images, and use the corresponding experts (a sub-model of the whole MoCE model) for deployment. In the case when only one expert is activated at each MoCE layer ( $K = 1$ ), a regular ViT model is needed for downstream fine-tuning, which is much more efficient than MoE.",
|
| 755 |
+
"bbox": [
|
| 756 |
+
169,
|
| 757 |
+
797,
|
| 758 |
+
825,
|
| 759 |
+
925
|
| 760 |
+
],
|
| 761 |
+
"page_idx": 4
|
| 762 |
+
},
|
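Deployment-time expert selection reuses the clustering module; a sketch under the same naming as the clustering snippet above:

```python
# Sketch: pick the cluster (and hence the experts) for a downstream task.
import numpy as np

def select_cluster(F_task, C):
    """F_task: (d, n_task) unit-norm features; C: (d, m) centroids."""
    A_task = F_task.T @ C                          # similarities, (n_task, m)
    votes = np.bincount(A_task.argmax(axis=1), minlength=C.shape[1])
    return int(votes.argmax())                     # largest assigned cluster

# With K = 1, the experts gated by this cluster form a plain ViT sub-model,
# which is then fine-tuned on the downstream task.
```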
| 763 |
+
{
|
| 764 |
+
"type": "header",
|
| 765 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 766 |
+
"bbox": [
|
| 767 |
+
171,
|
| 768 |
+
32,
|
| 769 |
+
478,
|
| 770 |
+
47
|
| 771 |
+
],
|
| 772 |
+
"page_idx": 4
|
| 773 |
+
},
|
| 774 |
+
{
|
| 775 |
+
"type": "page_number",
|
| 776 |
+
"text": "5",
|
| 777 |
+
"bbox": [
|
| 778 |
+
493,
|
| 779 |
+
948,
|
| 780 |
+
503,
|
| 781 |
+
959
|
| 782 |
+
],
|
| 783 |
+
"page_idx": 4
|
| 784 |
+
},
|
| 785 |
+
{
|
| 786 |
+
"type": "table",
|
| 787 |
+
"img_path": "images/2d5509e4fd6789cb111d12d35311ff164e0e74020cf3b587f68dc4c70543bf5e.jpg",
|
| 788 |
+
"table_caption": [
|
| 789 |
+
"Table 1: Transfer accuracy (%) of self-supervised learning models on 11 downstream tasks."
|
| 790 |
+
],
|
| 791 |
+
"table_footnote": [],
|
| 792 |
+
"table_body": "<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td colspan=\"13\">ResNet-50</td></tr><tr><td>BYOL</td><td>82.39</td><td>90.12</td><td>87.33</td><td>96.28</td><td>82.15</td><td>74.57</td><td>95.96</td><td>82.13</td><td>88.52</td><td>64.41</td><td>83.97</td><td>84.35</td></tr><tr><td>DeepCluster-v2</td><td>78.75</td><td>90.51</td><td>86.33</td><td>96.48</td><td>82.28</td><td>75.43</td><td>96.16</td><td>83.68</td><td>90.33</td><td>66.68</td><td>81.37</td><td>84.36</td></tr><tr><td colspan=\"13\">Vision Transformer</td></tr><tr><td>Supervised</td><td>76.55</td><td>89.98</td><td>86.19</td><td>96.79</td><td>83.96</td><td>75.09</td><td>93.94</td><td>85.17</td><td>92.54</td><td>64.54</td><td>87.22</td><td>84.72</td></tr><tr><td>DINO</td><td>66.50</td><td>91.65</td><td>76.37</td><td>98.12</td><td>86.69</td><td>75.73</td><td>96.40</td><td>93.77</td><td>93.97</td><td>59.33</td><td>86.62</td><td>84.10</td></tr><tr><td>MoCo v3</td><td>76.29</td><td>91.64</td><td>85.18</td><td>97.99</td><td>86.98</td><td>72.64</td><td>95.33</td><td>83.94</td><td>92.35</td><td>65.54</td><td>84.21</td><td>84.74</td></tr><tr><td>BEiT</td><td>53.16</td><td>79.02</td><td>68.11</td><td>94.34</td><td>73.54</td><td>68.04</td><td>91.33</td><td>79.59</td><td>84.02</td><td>56.13</td><td>65.65</td><td>73.90</td></tr><tr><td>MAE</td><td>72.38</td><td>90.47</td><td>83.51</td><td>95.69</td><td>68.40</td><td>75.48</td><td>96.10</td><td>79.98</td><td>92.35</td><td>62.43</td><td>84.79</td><td>81.96</td></tr><tr><td>MAE*</td><td>72.71</td><td>91.24</td><td>84.47</td><td>96.15</td><td>77.33</td><td>75.05</td><td>96.25</td><td>80.49</td><td>92.78</td><td>62.46</td><td>85.02</td><td>83.09</td></tr><tr><td>MoCE (Ours)</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>\\( 85.54^{+2.45} \\)</td></tr></table>",
|
| 793 |
+
"bbox": [
|
| 794 |
+
200,
|
| 795 |
+
130,
|
| 796 |
+
808,
|
| 797 |
+
308
|
| 798 |
+
],
|
| 799 |
+
"page_idx": 5
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "table",
|
| 803 |
+
"img_path": "images/309b2f1e58a17b2629d483a980cd34c6a821f057159df5112bb4d8bbce3a595a.jpg",
|
| 804 |
+
"table_caption": [
|
| 805 |
+
"Table 2: Transfer accuracy (%) on detection and segmentation."
|
| 806 |
+
],
|
| 807 |
+
"table_footnote": [],
|
| 808 |
+
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td>ADE20K</td><td colspan=\"6\">COCO</td></tr><tr><td>mIoU</td><td>APbb</td><td>APbb50</td><td>APbb75</td><td>APmk</td><td>APmk50</td><td>APmk75</td></tr><tr><td>Supervised</td><td>46.9</td><td>48.8</td><td>68.7</td><td>52.7</td><td>42.5</td><td>65.9</td><td>45.5</td></tr><tr><td>DINO</td><td>46.9</td><td>49.5</td><td>69.1</td><td>53.6</td><td>42.9</td><td>66.0</td><td>46.3</td></tr><tr><td>MoCo v3</td><td>46.8</td><td>47.2</td><td>66.9</td><td>50.8</td><td>41.1</td><td>63.6</td><td>44.1</td></tr><tr><td>BEiT</td><td>45.6</td><td>40.8</td><td>59.4</td><td>44.1</td><td>36.0</td><td>56.8</td><td>38.2</td></tr><tr><td>MAE</td><td>48.1</td><td>50.6</td><td>69.4</td><td>55.0</td><td>43.8</td><td>66.6</td><td>47.5</td></tr><tr><td>MoCE</td><td>48.3</td><td>51.1</td><td>69.8</td><td>55.4</td><td>44.2</td><td>67.0</td><td>48.1</td></tr></table>",
|
| 809 |
+
"bbox": [
|
| 810 |
+
282,
|
| 811 |
+
352,
|
| 812 |
+
718,
|
| 813 |
+
465
|
| 814 |
+
],
|
| 815 |
+
"page_idx": 5
|
| 816 |
+
},
|
| 817 |
+
{
|
| 818 |
+
"type": "text",
|
| 819 |
+
"text": "4 EXPERIMENTS",
|
| 820 |
+
"text_level": 1,
|
| 821 |
+
"bbox": [
|
| 822 |
+
171,
|
| 823 |
+
488,
|
| 824 |
+
328,
|
| 825 |
+
503
|
| 826 |
+
],
|
| 827 |
+
"page_idx": 5
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "text",
|
| 831 |
+
"text": "In this section, we first introduce the setup of pre-training and fine-tuning stage of MoCE in Sec. 4.1. Then we demonstrate the effectiveness of MoCE by evaluating the pre-trained models on a collection of 11 downstream tasks with detailed analysis of our MoCE superior to vanilla MAE and TokenMoE in Sec. 4.2. Finally we take ablation studies on the key components of MoCE in Sec. 4.3.",
|
| 832 |
+
"bbox": [
|
| 833 |
+
169,
|
| 834 |
+
521,
|
| 835 |
+
823,
|
| 836 |
+
578
|
| 837 |
+
],
|
| 838 |
+
"page_idx": 5
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "text",
|
| 842 |
+
"text": "4.1 SETUP",
|
| 843 |
+
"text_level": 1,
|
| 844 |
+
"bbox": [
|
| 845 |
+
171,
|
| 846 |
+
595,
|
| 847 |
+
263,
|
| 848 |
+
609
|
| 849 |
+
],
|
| 850 |
+
"page_idx": 5
|
| 851 |
+
},
|
| 852 |
+
{
|
| 853 |
+
"type": "text",
|
| 854 |
+
"text": "For all experiments, we replace two MLP layers with MoCE layers in the original ViT-B (Dosovitskiy et al., 2021). Following Wu et al. (2022), layers with the greatest gradient magnitude are selected (which are the last two MLP layers in our experiments). Unless otherwise specified, the number of experts is 8 and the number of clusters is 256. Our model utilizes the officially released 1600-epoch pre-trained MAE model $^{2}$ and continues to train for an extra 200 epochs. Each expert is initialized by the corresponding dense model with a small weight perturbation. The training procedure mainly follows that of MAE, except that we multiply the base learning rate by 0.1. All regularization loss weight is set to 0.01 by default.",
|
| 855 |
+
"bbox": [
|
| 856 |
+
169,
|
| 857 |
+
623,
|
| 858 |
+
823,
|
| 859 |
+
736
|
| 860 |
+
],
|
| 861 |
+
"page_idx": 5
|
| 862 |
+
},
|
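The expert initialization described above can be sketched as copying the pre-trained dense MLP and adding a small perturbation; the noise scale below is our assumption, as the text does not specify it.

```python
# Sketch: initialize each expert from the pre-trained dense MLP plus a small
# random perturbation (noise_std=0.01 is an assumption, not the paper's value).
import copy
import torch

def init_experts_from_dense(dense_mlp, num_experts=8, noise_std=0.01):
    experts = torch.nn.ModuleList()
    for _ in range(num_experts):
        expert = copy.deepcopy(dense_mlp)       # start from the dense weights
        with torch.no_grad():
            for p in expert.parameters():
                p.add_(noise_std * torch.randn_like(p))  # break symmetry
        experts.append(expert)
    return experts
```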
| 863 |
+
{
|
| 864 |
+
"type": "text",
|
| 865 |
+
"text": "To ensure a fair comparison with the vision transformer on downstream classification tasks, we mainly follow the hyper-parameter settings in (Dosovitskiy et al., 2021; Riquelme et al., 2021) and the benchmark settings in (Ericsson et al., 2021). The proposed model is compared with various self-supervised models, including DINO (Caron et al., 2021), MoCo v3 (Chen et al., 2021b), BEiT (Bao et al., 2022), and the highly-performant ResNet-50 models of BYOL (Grill et al., 2020) and DeepCluster-v2 (Caron et al., 2018). We also compare with the supervised pre-trained model DeiT (Touvron et al., 2021). To make a fair comparison of training time, we continue to train a 1600-epoch pre-trained MAE for 200 epochs with total ImageNet as our baseline, and is denoted as MAE* in Table 1. For detection and segmentation tasks, following Bao et al. (2022), we perform experiments on ADE20K (Zhou et al., 2019) and COCO (Lin et al., 2014). We utilize the officially released checkpoints for all baselines. Details are in Appendix A.5.",
|
| 866 |
+
"bbox": [
|
| 867 |
+
169,
|
| 868 |
+
742,
|
| 869 |
+
826,
|
| 870 |
+
896
|
| 871 |
+
],
|
| 872 |
+
"page_idx": 5
|
| 873 |
+
},
|
| 874 |
+
{
|
| 875 |
+
"type": "header",
|
| 876 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 877 |
+
"bbox": [
|
| 878 |
+
173,
|
| 879 |
+
32,
|
| 880 |
+
478,
|
| 881 |
+
47
|
| 882 |
+
],
|
| 883 |
+
"page_idx": 5
|
| 884 |
+
},
|
| 885 |
+
{
|
| 886 |
+
"type": "page_footnote",
|
| 887 |
+
"text": "$^{2}$ https://github.com/facebookresearch/mae",
|
| 888 |
+
"bbox": [
|
| 889 |
+
191,
|
| 890 |
+
909,
|
| 891 |
+
444,
|
| 892 |
+
924
|
| 893 |
+
],
|
| 894 |
+
"page_idx": 5
|
| 895 |
+
},
|
| 896 |
+
{
|
| 897 |
+
"type": "page_number",
|
| 898 |
+
"text": "6",
|
| 899 |
+
"bbox": [
|
| 900 |
+
493,
|
| 901 |
+
948,
|
| 902 |
+
504,
|
| 903 |
+
959
|
| 904 |
+
],
|
| 905 |
+
"page_idx": 5
|
| 906 |
+
},
|
| 907 |
+
{
|
| 908 |
+
"type": "table",
|
| 909 |
+
"img_path": "images/263b28f135f00fbac46bced84160c65b56d381cf2258070177223e6cf35556fc.jpg",
|
| 910 |
+
"table_caption": [
|
| 911 |
+
"Table 3: Transfer accuracy of MAE, TokenMoE, SDR and MoCE. SDR(ViT) is our reimplementation of SDR under ViT. We observe that TokenMoE cannot outperform vanilla MAE, while SDR(ViT) achieves better performance, which is further outperformed by MoCE."
|
| 912 |
+
],
|
| 913 |
+
"table_footnote": [],
|
| 914 |
+
"table_body": "<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE*</td><td>72.71</td><td>91.24</td><td>84.47</td><td>96.15</td><td>77.33</td><td>75.05</td><td>96.25</td><td>80.49</td><td>92.78</td><td>62.46</td><td>85.02</td><td>83.09</td></tr><tr><td>TokenMoE</td><td>70.51</td><td>89.70</td><td>81.40</td><td>95.18</td><td>76.44</td><td>73.67</td><td>95.09</td><td>77.45</td><td>90.71</td><td>61.12</td><td>80.15</td><td>81.04</td></tr><tr><td>SDR</td><td>75.77</td><td>89.73</td><td>86.65</td><td>95.31</td><td>83.60</td><td>73.62</td><td>95.53</td><td>84.77</td><td>91.25</td><td>64.64</td><td>83.51</td><td>84.03</td></tr><tr><td>SDR(ViT)</td><td>76.57</td><td>90.04</td><td>86.95</td><td>96.92</td><td>81.42</td><td>73.09</td><td>96.14</td><td>82.90</td><td>92.65</td><td>64.40</td><td>85.37</td><td>84.22</td></tr><tr><td>MoCE</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>85.54</td></tr></table>",
|
| 915 |
+
"bbox": [
|
| 916 |
+
189,
|
| 917 |
+
159,
|
| 918 |
+
816,
|
| 919 |
+
258
|
| 920 |
+
],
|
| 921 |
+
"page_idx": 6
|
| 922 |
+
},
|
| 923 |
+
{
|
| 924 |
+
"type": "text",
|
| 925 |
+
"text": "4.2 RESULTS",
|
| 926 |
+
"text_level": 1,
|
| 927 |
+
"bbox": [
|
| 928 |
+
174,
|
| 929 |
+
270,
|
| 930 |
+
277,
|
| 931 |
+
282
|
| 932 |
+
],
|
| 933 |
+
"page_idx": 6
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "text",
|
| 937 |
+
"text": "Transfer Results. The classification transfer performance of various self-supervised models are shown in Table 1. As can be seen, MoCE achieves a $2.45\\%$ improvement over MAE* and reaches the state-of-the-art averaged accuracy, demonstrating the effectiveness of the task-customized pretraining paradigm. On fine-grained datasets such as Aircraft, Cars and Food, MoCE outperforms the baseline model by a large margin. This is because these fine-grained tasks are similar to only a subset of the pre-training dataset. Hence, MoCE can alleviate negative transfer by using the model that is trained by the cluster most similar to the particular downstream task. On the other hand, MoCE shows only limited improvement on tasks such as Caltech, Cifar-10 and VOC. These tasks are more general and contain images covering the various semantics in the pre-training dataset, and thus negative transfer does not exist.",
|
| 938 |
+
"bbox": [
|
| 939 |
+
174,
|
| 940 |
+
297,
|
| 941 |
+
823,
|
| 942 |
+
436
|
| 943 |
+
],
|
| 944 |
+
"page_idx": 6
|
| 945 |
+
},
|
| 946 |
+
{
|
| 947 |
+
"type": "text",
|
| 948 |
+
"text": "Table 2 shows the transfer performance on the detection and segmentation tasks. As can be seen, MoCE outperforms MAE and the other baselines (including the supervised one), and achieves state-of-the-art results.",
|
| 949 |
+
"bbox": [
|
| 950 |
+
174,
|
| 951 |
+
444,
|
| 952 |
+
823,
|
| 953 |
+
484
|
| 954 |
+
],
|
| 955 |
+
"page_idx": 6
|
| 956 |
+
},
|
| 957 |
+
{
|
| 958 |
+
"type": "text",
|
| 959 |
+
"text": "Comparison between MoCE, TokenMoE, MAE and SDR. In this experiment, we compare MoCE with the following models: (i) MAE, (ii) TokenMoE, (iii) SDR (Liu et al., 2022), a task-customized model that aims at alleviating negative transfer, and (iv) SDR(ViT), which re-implements SDR with the ViT architecture. Table 3 shows the transfer accuracy on 11 downstream tasks. As can be seen, TokenMoE performs even worse than MAE, suggesting that naively adopting MoE to MAE is not desirable. Both MoCE and SDR(ViT) outperform MAE, demonstrating the effectiveness of task-customized methods for alleviating negative transfer. MoCE further outperforms SDR(ViT), indicating the importance of self-adaptive routing.",
|
| 960 |
+
"bbox": [
|
| 961 |
+
174,
|
| 962 |
+
506,
|
| 963 |
+
823,
|
| 964 |
+
617
|
| 965 |
+
],
|
| 966 |
+
"page_idx": 6
|
| 967 |
+
},
|
| 968 |
+
{
|
| 969 |
+
"type": "text",
|
| 970 |
+
"text": "Figure 3(d) shows the peak signal-to-noise ratio (PSNR) (Sara et al., 2019), which reflects the generation quality of these autoencoder models. MoCE exhibits improvement over TokenMoE and MAE on most datasets. We also provide the comparisons in the case of a fair parameter count, large architectures, and training from scratch in the Appendix A.2, A.3 and A.4, respectively.",
|
| 971 |
+
"bbox": [
|
| 972 |
+
174,
|
| 973 |
+
625,
|
| 974 |
+
823,
|
| 975 |
+
679
|
| 976 |
+
],
|
| 977 |
+
"page_idx": 6
|
| 978 |
+
},
|
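For reference, the PSNR metric used in Figure 3(d) follows its standard definition; a short sketch assuming images scaled to [0, 1]:

```python
# Standard PSNR between a reconstruction and its target, images in [0, 1].
import torch

def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```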
| 979 |
+
{
|
| 980 |
+
"type": "text",
|
| 981 |
+
"text": "Analysis on experts. Figure 3(a) and Figure 3(c) show the routing heatmaps for TokenMoE and MoCE, respectively. As can be seen, routing of the TokenMoE experts has little correlation with semantics. On the other hand, each MoCE expert is trained by several classes, showing a more balanced assignment of images to experts. This verifies that the improvement of MoCE is due to more effective learning of the experts. Moreover, notice that the importance loss and load balance loss (Riquelme et al., 2021) are applied and indeed work as \"expected\" because they are only applied with respect to patch tokens instead of semantic classes. On the other hand, MoCE can balance the experts both at the token level and semantic level.",
|
| 982 |
+
"bbox": [
|
| 983 |
+
174,
|
| 984 |
+
700,
|
| 985 |
+
823,
|
| 986 |
+
811
|
| 987 |
+
],
|
| 988 |
+
"page_idx": 6
|
| 989 |
+
},
|
| 990 |
+
{
|
| 991 |
+
"type": "text",
|
| 992 |
+
"text": "Figure 3(b) shows example pre-training samples for 3 random MoCE experts. Note that expert 1 is mostly trained by images containing clothes, experts 2 is pre-trained mostly by bird images, while expert 3 is pre-trained mostly by dog images.",
|
| 993 |
+
"bbox": [
|
| 994 |
+
174,
|
| 995 |
+
819,
|
| 996 |
+
823,
|
| 997 |
+
859
|
| 998 |
+
],
|
| 999 |
+
"page_idx": 6
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "text",
|
| 1003 |
+
"text": "Next, we show that each expert is trained by samples with similar semantics. Following (Mikolov et al., 2013), we select the label set used by each expert, and then compute the $\\ell_2$ distances between CLIP embeddings (Radford et al., 2021a) of labels used by the same expert and by different experts. The average distance between labels used by the same expert is 0.84, while that between labels",
|
| 1004 |
+
"bbox": [
|
| 1005 |
+
174,
|
| 1006 |
+
868,
|
| 1007 |
+
823,
|
| 1008 |
+
922
|
| 1009 |
+
],
|
| 1010 |
+
"page_idx": 6
|
| 1011 |
+
},
|
| 1012 |
+
{
|
| 1013 |
+
"type": "header",
|
| 1014 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1015 |
+
"bbox": [
|
| 1016 |
+
173,
|
| 1017 |
+
32,
|
| 1018 |
+
477,
|
| 1019 |
+
46
|
| 1020 |
+
],
|
| 1021 |
+
"page_idx": 6
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "page_number",
|
| 1025 |
+
"text": "7",
|
| 1026 |
+
"bbox": [
|
| 1027 |
+
493,
|
| 1028 |
+
949,
|
| 1029 |
+
503,
|
| 1030 |
+
958
|
| 1031 |
+
],
|
| 1032 |
+
"page_idx": 6
|
| 1033 |
+
},
|
| 1034 |
+
{
|
| 1035 |
+
"type": "image",
|
| 1036 |
+
"img_path": "images/8c1bc9468779f04b03bcdbfff898292ce5f7bc24bc32881ede2c13d67f89e9bc.jpg",
|
| 1037 |
+
"image_caption": [
|
| 1038 |
+
"(a) Routing heatmap for TokenMoE experts."
|
| 1039 |
+
],
|
| 1040 |
+
"image_footnote": [],
|
| 1041 |
+
"bbox": [
|
| 1042 |
+
240,
|
| 1043 |
+
109,
|
| 1044 |
+
421,
|
| 1045 |
+
212
|
| 1046 |
+
],
|
| 1047 |
+
"page_idx": 7
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "image",
|
| 1051 |
+
"img_path": "images/fdb2db91d3a19e1971d41c6ec1a9b7bb0f6e298c8fb68c43c0c877af4de0b66f.jpg",
|
| 1052 |
+
"image_caption": [
|
| 1053 |
+
"(b) Examples pre-training samples for expert 1 (top), expert 2 (middle), and expert 3 (bottom)."
|
| 1054 |
+
],
|
| 1055 |
+
"image_footnote": [],
|
| 1056 |
+
"bbox": [
|
| 1057 |
+
555,
|
| 1058 |
+
108,
|
| 1059 |
+
785,
|
| 1060 |
+
215
|
| 1061 |
+
],
|
| 1062 |
+
"page_idx": 7
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "image",
|
| 1066 |
+
"img_path": "images/861dc522af64658fffb9912b1e98c819c2acfbf515fa88c9e6c407a035385047.jpg",
|
| 1067 |
+
"image_caption": [
|
| 1068 |
+
"(c) Routing heatmap for MoCE experts."
|
| 1069 |
+
],
|
| 1070 |
+
"image_footnote": [],
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
240,
|
| 1073 |
+
263,
|
| 1074 |
+
421,
|
| 1075 |
+
367
|
| 1076 |
+
],
|
| 1077 |
+
"page_idx": 7
|
| 1078 |
+
},
|
| 1079 |
+
{
|
| 1080 |
+
"type": "image",
|
| 1081 |
+
"img_path": "images/0f6a59e75d0e87eafd66bf3e36181ab6ba2ef4fc11f3ec67c7de147e22754e20.jpg",
|
| 1082 |
+
"image_caption": [
|
| 1083 |
+
"(d) Relative PSNR improvement over MAE.",
|
| 1084 |
+
"Figure 3: (a),(c): Routing heatmaps for experts in TokenMoE and MoCE. The x-axis is the expert ID, and the y-axis is the ImageNet semantic label ID. Darker green means a higher proportion of tokens belonging to the corresponding class are allocated to the expert. The label is sorted differently in each figure to make it readable. (b): Example samples from the pre-training dataset of 3 MoCE experts. (d): Relative PSNR improvement of TokenMoE and MoCE over MAE for each downstream task."
|
| 1085 |
+
],
|
| 1086 |
+
"image_footnote": [],
|
| 1087 |
+
"bbox": [
|
| 1088 |
+
537,
|
| 1089 |
+
260,
|
| 1090 |
+
803,
|
| 1091 |
+
371
|
| 1092 |
+
],
|
| 1093 |
+
"page_idx": 7
|
| 1094 |
+
},
|
| 1095 |
+
{
|
| 1096 |
+
"type": "text",
|
| 1097 |
+
"text": "used by different experts is 0.92, indicating that the MoCE gate automatically aggregates labels with similar semantics to each expert, thus benefiting downstream transfer.",
|
| 1098 |
+
"bbox": [
|
| 1099 |
+
169,
|
| 1100 |
+
508,
|
| 1101 |
+
823,
|
| 1102 |
+
539
|
| 1103 |
+
],
|
| 1104 |
+
"page_idx": 7
|
| 1105 |
+
},
|
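The intra- vs. inter-expert distance computation above can be sketched with the open-source CLIP package; the prompt template and normalization are our assumptions.

```python
# Sketch: semantic coherence of experts via CLIP text embeddings.
# Requires: pip install git+https://github.com/openai/CLIP.git
import clip
import torch

model, _ = clip.load("ViT-B/32")

def embed(labels):
    tokens = clip.tokenize([f"a photo of a {l}" for l in labels])
    with torch.no_grad():
        z = model.encode_text(tokens).float()
    return z / z.norm(dim=-1, keepdim=True)

def mean_pairwise_dist(z):
    return torch.pdist(z).mean()      # mean l2 distance over all row pairs

# expert_labels: expert_id -> list of class names routed to that expert;
# compare mean_pairwise_dist within one expert's labels against distances
# between embeddings drawn from different experts.
```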
| 1106 |
+
{
|
| 1107 |
+
"type": "text",
|
| 1108 |
+
"text": "Training and testing efficiency. Table 4 compares the efficiencies of MAE, TokenMoE and MoCE during training and testing. As can be seen, all of them have similar FLOPs. However, TokenMoE needs to use the whole model during both training and testing, while MoCE only needs to use a single expert and thus halves the required number of parameters when testing. In addition, the training and testing speeds are improved by respectively $18\\%$ and $37\\%$ , which is attributed to the reduction of token shuffle operations as tokens in the same image do not need to be split and are dispatched to the same expert, significantly reducing the communication overhead.",
|
| 1109 |
+
"bbox": [
|
| 1110 |
+
169,
|
| 1111 |
+
551,
|
| 1112 |
+
823,
|
| 1113 |
+
650
|
| 1114 |
+
],
|
| 1115 |
+
"page_idx": 7
|
| 1116 |
+
},
|
| 1117 |
+
{
|
| 1118 |
+
"type": "text",
|
| 1119 |
+
"text": "4.3 ABLATION",
|
| 1120 |
+
"text_level": 1,
|
| 1121 |
+
"bbox": [
|
| 1122 |
+
171,
|
| 1123 |
+
666,
|
| 1124 |
+
290,
|
| 1125 |
+
679
|
| 1126 |
+
],
|
| 1127 |
+
"page_idx": 7
|
| 1128 |
+
},
|
| 1129 |
+
{
|
| 1130 |
+
"type": "text",
|
| 1131 |
+
"text": "Search method. When a downstream task arrives, it is expensive to fine-tune all experts to choose the best one. To find the task-customized expert ( $K = 1$ ), we compare the method proposed in Section 3.3 with (i) early stop, (ii) KNN (Liu et al., 2022), (iii) LogME (You et al., 2021). The experiment is performed on the task with the most samples (Food), the task with the least samples (Flowers), and the one with a medium number of samples (Aircraft). For comparison, we additionally show the performance of the best and worst experts based on an exhaustive search. As can be seen in Table 5, MoCE performs stably among different sizes of the dataset, and the search cost is negligible as we only need to infer the downstream task once and feed it to the clustering module. This illustrates another advantage of combining clustering and pre-training in a single paradigm.",
|
| 1132 |
+
"bbox": [
|
| 1133 |
+
169,
|
| 1134 |
+
691,
|
| 1135 |
+
823,
|
| 1136 |
+
819
|
| 1137 |
+
],
|
| 1138 |
+
"page_idx": 7
|
| 1139 |
+
},
|
| 1140 |
+
{
|
| 1141 |
+
"type": "text",
|
| 1142 |
+
"text": "MoCE Architectures. In this experiment, we study the different architecture hyper-parameters of MoCE in three aspects. First, we vary the number of experts in each MoCE layer. As can be seen from Table 6, using more experts leads to consistent improvement on the accuracy.",
|
| 1143 |
+
"bbox": [
|
| 1144 |
+
169,
|
| 1145 |
+
832,
|
| 1146 |
+
823,
|
| 1147 |
+
875
|
| 1148 |
+
],
|
| 1149 |
+
"page_idx": 7
|
| 1150 |
+
},
|
| 1151 |
+
{
|
| 1152 |
+
"type": "text",
|
| 1153 |
+
"text": "Next, we vary the location of the MoCE layers. As mentioned in Section 4.1, we select the MoCE layers based on the gradient magnitudes. In the experiments, MoCE selects the 11th and 12th MLP layers. On the other hand, TokenMoE chooses the last 2 even-numbered (i.e., 10th and 12th) MLP",
|
| 1154 |
+
"bbox": [
|
| 1155 |
+
169,
|
| 1156 |
+
881,
|
| 1157 |
+
825,
|
| 1158 |
+
925
|
| 1159 |
+
],
|
| 1160 |
+
"page_idx": 7
|
| 1161 |
+
},
|
| 1162 |
+
{
|
| 1163 |
+
"type": "header",
|
| 1164 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1165 |
+
"bbox": [
|
| 1166 |
+
171,
|
| 1167 |
+
32,
|
| 1168 |
+
478,
|
| 1169 |
+
47
|
| 1170 |
+
],
|
| 1171 |
+
"page_idx": 7
|
| 1172 |
+
},
|
| 1173 |
+
{
|
| 1174 |
+
"type": "page_number",
|
| 1175 |
+
"text": "8",
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
493,
|
| 1178 |
+
948,
|
| 1179 |
+
503,
|
| 1180 |
+
959
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 7
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "table",
|
| 1186 |
+
"img_path": "images/043709207c79e4265d0a39d5aeb8ad50a4362ad528e1e905fc760b68d28255af.jpg",
|
| 1187 |
+
"table_caption": [
|
| 1188 |
+
"Table 4: Efficiency during training (top) and testing (bottom)."
|
| 1189 |
+
],
|
| 1190 |
+
"table_footnote": [],
|
| 1191 |
+
"table_body": "<table><tr><td></td><td>MAE</td><td>TokenMoE</td><td>MoCE</td></tr><tr><td>Params (M)</td><td>111.91</td><td>178.03</td><td>178.03</td></tr><tr><td>FLOPs (G)</td><td>9.80</td><td>9.81</td><td>9.81</td></tr><tr><td>Speed↑</td><td>1.41x</td><td>1x</td><td>1.18x</td></tr><tr><td># Params (M)</td><td>85.88</td><td>152.00</td><td>85.88</td></tr><tr><td>FLOPs (G)</td><td>16.88</td><td>16.88</td><td>16.88</td></tr><tr><td>Speed↑</td><td>1.37x</td><td>1x</td><td>1.37x</td></tr></table>",
|
| 1192 |
+
"bbox": [
|
| 1193 |
+
173,
|
| 1194 |
+
143,
|
| 1195 |
+
467,
|
| 1196 |
+
265
|
| 1197 |
+
],
|
| 1198 |
+
"page_idx": 8
|
| 1199 |
+
},
|
| 1200 |
+
{
|
| 1201 |
+
"type": "table",
|
| 1202 |
+
"img_path": "images/1fe9b317654a00f8ddc315545a7adefceed09dd267e28f2406f0b29265951e6b.jpg",
|
| 1203 |
+
"table_caption": [
|
| 1204 |
+
"Table 5: The search cost (in GPU hours) for different expert search algorithms."
|
| 1205 |
+
],
|
| 1206 |
+
"table_footnote": [],
|
| 1207 |
+
"table_body": "<table><tr><td></td><td>Aircraft</td><td>Flowers</td><td>Food</td><td>GPU hours</td></tr><tr><td>Best</td><td>79.92</td><td>97.96</td><td>86.24</td><td>288</td></tr><tr><td>Worst</td><td>69.84</td><td>94.97</td><td>81.51</td><td>288</td></tr><tr><td>Early stop</td><td>77.00</td><td>96.83</td><td>85.33</td><td>144</td></tr><tr><td>KNN</td><td>71.40</td><td>95.10</td><td>83.32</td><td>5</td></tr><tr><td>LogME</td><td>73.84</td><td>96.54</td><td>85.11</td><td>5</td></tr><tr><td>MoCE</td><td>78.73</td><td>96.94</td><td>86.24</td><td>1</td></tr></table>",
|
| 1208 |
+
"bbox": [
|
| 1209 |
+
483,
|
| 1210 |
+
143,
|
| 1211 |
+
797,
|
| 1212 |
+
266
|
| 1213 |
+
],
|
| 1214 |
+
"page_idx": 8
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "table",
|
| 1218 |
+
"img_path": "images/726fa5a1c53f29fcee79add98389f65f2ed1a1e65f01176a5cee85559973f152.jpg",
|
| 1219 |
+
"table_caption": [
|
| 1220 |
+
"Table 6: Accuracies with different numbers of experts in a MoCE layer. (default setting used is in bold)."
|
| 1221 |
+
],
|
| 1222 |
+
"table_footnote": [],
|
| 1223 |
+
"table_body": "<table><tr><td># experts</td><td>Acc (%)</td></tr><tr><td>1</td><td>83.09</td></tr><tr><td>2</td><td>83.01</td></tr><tr><td>4</td><td>84.22</td></tr><tr><td>8</td><td>85.54</td></tr></table>",
|
| 1224 |
+
"bbox": [
|
| 1225 |
+
192,
|
| 1226 |
+
345,
|
| 1227 |
+
352,
|
| 1228 |
+
431
|
| 1229 |
+
],
|
| 1230 |
+
"page_idx": 8
|
| 1231 |
+
},
|
| 1232 |
+
{
|
| 1233 |
+
"type": "table",
|
| 1234 |
+
"img_path": "images/320480c8a28b69c30ffe8245bd63f3db52047858cda2dbc0e61369f206931b69.jpg",
|
| 1235 |
+
"table_caption": [
|
| 1236 |
+
"Table 7: Accuracies with different numbers and locations of the MoCE layers (default setting used is in bold)."
|
| 1237 |
+
],
|
| 1238 |
+
"table_footnote": [],
|
| 1239 |
+
"table_body": "<table><tr><td># MoCE layers</td><td>Acc (%)</td></tr><tr><td>1</td><td>83.09</td></tr><tr><td>2 (10th & 12th)</td><td>83.08</td></tr><tr><td>2 (11th & 12th)</td><td>85.54</td></tr><tr><td>4</td><td>85.59</td></tr></table>",
|
| 1240 |
+
"bbox": [
|
| 1241 |
+
388,
|
| 1242 |
+
345,
|
| 1243 |
+
596,
|
| 1244 |
+
431
|
| 1245 |
+
],
|
| 1246 |
+
"page_idx": 8
|
| 1247 |
+
},
|
| 1248 |
+
{
|
| 1249 |
+
"type": "table",
|
| 1250 |
+
"img_path": "images/2cfbd94f99ba1d3c7e0dd7b07e013085bcb9d1690e3e816a546b1442b9fdf0a5.jpg",
|
| 1251 |
+
"table_caption": [
|
| 1252 |
+
"Table 8: Accuracies with different numbers of clusters (default setting used is in bold)."
|
| 1253 |
+
],
|
| 1254 |
+
"table_footnote": [],
|
| 1255 |
+
"table_body": "<table><tr><td># clusters</td><td>Acc (%)</td></tr><tr><td>16</td><td>82.00</td></tr><tr><td>64</td><td>84.02</td></tr><tr><td>256</td><td>85.54</td></tr><tr><td>512</td><td>85.33</td></tr></table>",
|
| 1256 |
+
"bbox": [
|
| 1257 |
+
632,
|
| 1258 |
+
347,
|
| 1259 |
+
794,
|
| 1260 |
+
434
|
| 1261 |
+
],
|
| 1262 |
+
"page_idx": 8
|
| 1263 |
+
},
|
| 1264 |
+
{
|
| 1265 |
+
"type": "text",
|
| 1266 |
+
"text": "layers. Furthermore, we exhibit the performance with only 1 and 4 MoCE layers, which are also selected based on the gradient magnitudes. As shown in Table 7, we notice that it is essential to choose the right layer to be MoCE layer. Adding more MoCE layers shows little improvement.",
|
| 1267 |
+
"bbox": [
|
| 1268 |
+
169,
|
| 1269 |
+
459,
|
| 1270 |
+
823,
|
| 1271 |
+
502
|
| 1272 |
+
],
|
| 1273 |
+
"page_idx": 8
|
| 1274 |
+
},
|
| 1275 |
+
{
|
| 1276 |
+
"type": "text",
|
| 1277 |
+
"text": "We also train MoCE with different numbers of clusters. As shown in Table 8, the accuracy increases up to 256 clusters, and then begins to drop. We hypothesize that with a moderate number of clusters, MoCE can produce a variety of task-customized models. With even more clusters, the number of experts become the bottleneck and performance starts to saturate.",
|
| 1278 |
+
"bbox": [
|
| 1279 |
+
169,
|
| 1280 |
+
508,
|
| 1281 |
+
826,
|
| 1282 |
+
565
|
| 1283 |
+
],
|
| 1284 |
+
"page_idx": 8
|
| 1285 |
+
},
|
| 1286 |
+
{
|
| 1287 |
+
"type": "text",
|
| 1288 |
+
"text": "5 CONCLUSION",
|
| 1289 |
+
"text_level": 1,
|
| 1290 |
+
"bbox": [
|
| 1291 |
+
171,
|
| 1292 |
+
585,
|
| 1293 |
+
320,
|
| 1294 |
+
599
|
| 1295 |
+
],
|
| 1296 |
+
"page_idx": 8
|
| 1297 |
+
},
|
| 1298 |
+
{
|
| 1299 |
+
"type": "text",
|
| 1300 |
+
"text": "In this work, we first show that the negative transfer phenomenon exists in the prevailing self-supervised learning method MAE through extensive experiments. It will impede the scalability of MAE as more pre-training data may instead degenerate the downstream performance. In order to tackle the problem, we introduce Mixture of Expert to MAE as the multi-experts design can equip MAE with different ability that aids transfer. However, different from supervised pre-training, TokenMoE suffers from the fact that the gate shows no correlation to the semantics and the transfer ability is not improved. Based on this, we propose MoCE to explicitly train each expert with different clusters through the MoCE gate design and several losses to stabilize the training process. A search algorithm for selecting the best model for transfer is also proposed based on the clustering priors. Extensive experiments show that MoCE trains each expert with meaningful semantics and achieves state-of-the-art transfer performance on a collection of 11 downstream tasks and both detection and segmentation tasks. It is the first work that successfully trains a self-supervised learning MoE model on ImageNet only. We hope such a design will motivate more research on the self-supervised MoE models.",
|
| 1301 |
+
"bbox": [
|
| 1302 |
+
169,
|
| 1303 |
+
614,
|
| 1304 |
+
826,
|
| 1305 |
+
811
|
| 1306 |
+
],
|
| 1307 |
+
"page_idx": 8
|
| 1308 |
+
},
|
| 1309 |
+
{
|
| 1310 |
+
"type": "text",
|
| 1311 |
+
"text": "ACKNOWLEDGMENTS",
|
| 1312 |
+
"text_level": 1,
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
171,
|
| 1315 |
+
825,
|
| 1316 |
+
328,
|
| 1317 |
+
839
|
| 1318 |
+
],
|
| 1319 |
+
"page_idx": 8
|
| 1320 |
+
},
|
| 1321 |
+
{
|
| 1322 |
+
"type": "text",
|
| 1323 |
+
"text": "We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
169,
|
| 1326 |
+
849,
|
| 1327 |
+
823,
|
| 1328 |
+
878
|
| 1329 |
+
],
|
| 1330 |
+
"page_idx": 8
|
| 1331 |
+
},
|
| 1332 |
+
{
|
| 1333 |
+
"type": "header",
|
| 1334 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
171,
|
| 1337 |
+
32,
|
| 1338 |
+
478,
|
| 1339 |
+
47
|
| 1340 |
+
],
|
| 1341 |
+
"page_idx": 8
|
| 1342 |
+
},
|
| 1343 |
+
{
|
| 1344 |
+
"type": "page_number",
|
| 1345 |
+
"text": "9",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
493,
|
| 1348 |
+
948,
|
| 1349 |
+
504,
|
| 1350 |
+
959
|
| 1351 |
+
],
|
| 1352 |
+
"page_idx": 8
|
| 1353 |
+
},
|
| 1354 |
+
{
|
| 1355 |
+
"type": "text",
|
| 1356 |
+
"text": "REFERENCES",
|
| 1357 |
+
"text_level": 1,
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
174,
|
| 1360 |
+
102,
|
| 1361 |
+
287,
|
| 1362 |
+
117
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 9
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "list",
|
| 1368 |
+
"sub_type": "ref_text",
|
| 1369 |
+
"list_items": [
|
| 1370 |
+
"Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. Preprint arXiv:1911.05371, 2019.",
|
| 1371 |
+
"Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, and Cihang Xie. Masked autoencoders enable efficient knowledge distillers. Preprint arXiv:2208.122561, 2022.",
|
| 1372 |
+
"Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. In ICLR, 2022.",
|
| 1373 |
+
"Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: delving into high quality object detection. In CVPR, 2018.",
|
| 1374 |
+
"Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, pp. 132-149, 2018.",
|
| 1375 |
+
"Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS, 33:9912-9924, 2020.",
|
| 1376 |
+
"Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, pp. 9650-9660, 2021.",
|
| 1377 |
+
"Kai Chen, Lanqing Hong, Hang Xu, Zhenguo Li, and Dit-Yan Yeung. Multisiam: Self-supervised multi-instance siamese representation learning for autonomous driving. In ICCV, pp. 7546-7554, 2021a.",
|
| 1378 |
+
"Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, and Dit-Yan Yeung. Mixed autoencoder for self-supervised visual representation learning. In CVPR, 2023.",
|
| 1379 |
+
"Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020.",
|
| 1380 |
+
"Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, pp. 9640-9649, 2021b.",
|
| 1381 |
+
"Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. In ICLR, 2020.",
|
| 1382 |
+
"Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. Randaugment: Practical automated data augmentation with a reduced search space. In NeurIPS, volume 33, pp. 18613-18624. Curran Associates, Inc., 2020.",
|
| 1383 |
+
"Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS, 26, 2013.",
|
| 1384 |
+
"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.",
|
| 1385 |
+
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. Preprint arXiv:1810.04805, 2018.",
|
| 1386 |
+
"Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV, pp. 1422-1430, 2015.",
|
| 1387 |
+
"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.",
|
| 1388 |
+
"Linus Ericsson, Henry Gouk, and Timothy M Hospedales. How well do self-supervised models transfer? In CVPR, pp. 5414-5423, 2021."
|
| 1389 |
+
],
|
| 1390 |
+
"bbox": [
|
| 1391 |
+
171,
|
| 1392 |
+
125,
|
| 1393 |
+
825,
|
| 1394 |
+
924
|
| 1395 |
+
],
|
| 1396 |
+
"page_idx": 9
|
| 1397 |
+
},
|
| 1398 |
+
{
|
| 1399 |
+
"type": "header",
|
| 1400 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1401 |
+
"bbox": [
|
| 1402 |
+
171,
|
| 1403 |
+
32,
|
| 1404 |
+
478,
|
| 1405 |
+
47
|
| 1406 |
+
],
|
| 1407 |
+
"page_idx": 9
|
| 1408 |
+
},
|
| 1409 |
+
{
|
| 1410 |
+
"type": "page_number",
|
| 1411 |
+
"text": "10",
|
| 1412 |
+
"bbox": [
|
| 1413 |
+
490,
|
| 1414 |
+
946,
|
| 1415 |
+
509,
|
| 1416 |
+
960
|
| 1417 |
+
],
|
| 1418 |
+
"page_idx": 9
|
| 1419 |
+
},
|
| 1420 |
+
{
|
| 1421 |
+
"type": "list",
|
| 1422 |
+
"sub_type": "ref_text",
|
| 1423 |
+
"list_items": [
|
| 1424 |
+
"William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, 2021.",
|
| 1425 |
+
"Dehong Gao, Wenjing Yang, Huiling Zhou, Yi Wei, Yi Hu, and Hao Wang. Network clustering for multi-task learning. Preprint arXiv:2101.09018, 2021.",
|
| 1426 |
+
"Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.",
|
| 1427 |
+
"Jean-Bastien Grill, Florian Strub, Florent Alché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. In NeurlPS, 2020.",
|
| 1428 |
+
"Jianhua Han, Xiwen Liang, Hang Xu, Kai Chen, Lanqing Hong, Jiageng Mao, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Xiaodan Liang, and Chunjing Xu. Soda10m: A large-scale 2d self/semi-supervised object detection dataset for autonomous driving. Preprint arXiv:2106.11118, 2021.",
|
| 1429 |
+
"Kaiming He, Georgia Gkioxari, Piotr Dolkar, and Ross Girshick. Mask r-cnn. In ICCV, 2017.",
|
| 1430 |
+
"Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9729-9738, 2020.",
|
| 1431 |
+
"Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, pp. 16000-16009, 2022.",
|
| 1432 |
+
"Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, and Daniel Soudry. Augment your batch: Improving generalization through instance repetition. In CVPR, June 2020.",
|
| 1433 |
+
"Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. Preprint arXiv:1905.122651, 2019.",
|
| 1434 |
+
"Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? Preprint arXiv:1608.08614, 2016.",
|
| 1435 |
+
"Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79-87, 1991.",
|
| 1436 |
+
"Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181-214, 1994.",
|
| 1437 |
+
"Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. Beyond distillation: Task-level mixture-of-experts for efficient inference. Preprint arXiv:2110.03742, 2021.",
|
| 1438 |
+
"Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan First, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. Preprint arXiv:2006.16668, 2020.",
|
| 1439 |
+
"Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. Base layers: Simplifying training of large, sparse models. In ICML, pp. 6265-6274. PMLR, 2021.",
|
| 1440 |
+
"Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, et al. Coda: A real-world road corner case dataset for object detection in autonomous driving. Preprint arXiv:2203.07724, 2022a.",
|
| 1441 |
+
"Xiaotong Li, Yixiao Ge, Kun Yi, Zixuan Hu, Ying Shan, and Ling-Yu Duan. mc-beit: Multi-choice discretization for image bert pre-training. Preprint arXiv:2203.15371, 2022b.",
|
| 1442 |
+
"Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dóllár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.",
|
| 1443 |
+
"Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. Preprint arXiv:1901.11504, 2019."
|
| 1444 |
+
],
|
| 1445 |
+
"bbox": [
|
| 1446 |
+
171,
|
| 1447 |
+
102,
|
| 1448 |
+
825,
|
| 1449 |
+
924
|
| 1450 |
+
],
|
| 1451 |
+
"page_idx": 10
|
| 1452 |
+
},
|
| 1453 |
+
{
|
| 1454 |
+
"type": "header",
|
| 1455 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1456 |
+
"bbox": [
|
| 1457 |
+
171,
|
| 1458 |
+
32,
|
| 1459 |
+
478,
|
| 1460 |
+
47
|
| 1461 |
+
],
|
| 1462 |
+
"page_idx": 10
|
| 1463 |
+
},
|
| 1464 |
+
{
|
| 1465 |
+
"type": "page_number",
|
| 1466 |
+
"text": "11",
|
| 1467 |
+
"bbox": [
|
| 1468 |
+
490,
|
| 1469 |
+
948,
|
| 1470 |
+
506,
|
| 1471 |
+
959
|
| 1472 |
+
],
|
| 1473 |
+
"page_idx": 10
|
| 1474 |
+
},
|
| 1475 |
+
{
|
| 1476 |
+
"type": "list",
|
| 1477 |
+
"sub_type": "ref_text",
|
| 1478 |
+
"list_items": [
|
| 1479 |
+
"Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, pp. 10012-10022, 2021.",
|
| 1480 |
+
"Zhili Liu, Jianhua Han, Kai Chen, Lanqing Hong, Hang Xu, Chunjing Xu, and Zhenguo Li. Taskcustomized self-supervised pre-training with scalable dynamic routing. AAAI, 55:65, 2022.",
|
| 1481 |
+
"Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. Preprint arXiv:1711.05101, 2017.",
|
| 1482 |
+
"Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H Chi. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 1930-1939, 2018.",
|
| 1483 |
+
"Matthew McDermott, Bret Nestor, Evan Kim, Wancong Zhang, Anna Goldenberg, Peter Szolovits, and Marzyeh Ghassemi. A comprehensive ehr timeseries pre-training benchmark. In Proceedings of the Conference on Health, Inference, and Learning, pp. 257-278, 2021.",
|
| 1484 |
+
"Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. Preprint arXiv:1301.3781, 2013.",
|
| 1485 |
+
"George A Miller. WordNet: An electronic lexical database. MIT press, 1998.",
|
| 1486 |
+
"Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.",
|
| 1487 |
+
"Joan Puigcerver, Carlos Riquelme Ruiz, Basil Mustafa, Cedric Reenggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. In ICLR, 2020.",
|
| 1488 |
+
"A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, and J. Clark. Learning transferable visual models from natural language supervision. In ICML, 2021a.",
|
| 1489 |
+
"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pp. 8748-8763. PMLR, 2021b.",
|
| 1490 |
+
"Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. NeurIPS, 34:8583-8595, 2021.",
|
| 1491 |
+
"Umme Sara, Morium Akter, and Mohammad Shorif Uddin. Image quality assessment through fsim, ssim, mse and psnr—a comparative study. Journal of Computer and Communications, 7(3):8-18, 2019.",
|
| 1492 |
+
"Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In ICLR, 2017.",
|
| 1493 |
+
"Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herve Jegou. Training data-efficient image transformers & distillation through attention. In ICML, volume 139, pp. 10347-10357, July 2021.",
|
| 1494 |
+
"Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, pp. 568-578, 2021.",
|
| 1495 |
+
"Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. Preprint arXiv:2011.09157, 2020.",
|
| 1496 |
+
"Lemeng Wu, Mengchen Liu, Yinpeng Chen, Dongdong Chen, Xiyang Dai, and Lu Yuan. Residual mixture of experts. Preprint arXiv:2204.09636, 2022."
|
| 1497 |
+
],
|
| 1498 |
+
"bbox": [
|
| 1499 |
+
171,
|
| 1500 |
+
102,
|
| 1501 |
+
825,
|
| 1502 |
+
924
|
| 1503 |
+
],
|
| 1504 |
+
"page_idx": 11
|
| 1505 |
+
},
|
| 1506 |
+
{
|
| 1507 |
+
"type": "header",
|
| 1508 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1509 |
+
"bbox": [
|
| 1510 |
+
171,
|
| 1511 |
+
32,
|
| 1512 |
+
478,
|
| 1513 |
+
47
|
| 1514 |
+
],
|
| 1515 |
+
"page_idx": 11
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "page_number",
|
| 1519 |
+
"text": "12",
|
| 1520 |
+
"bbox": [
|
| 1521 |
+
490,
|
| 1522 |
+
946,
|
| 1523 |
+
508,
|
| 1524 |
+
959
|
| 1525 |
+
],
|
| 1526 |
+
"page_idx": 11
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "list",
|
| 1530 |
+
"sub_type": "ref_text",
|
| 1531 |
+
"list_items": [
|
| 1532 |
+
"Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Zhenguo Li, and Ping Luo. Detco: Unsupervised contrastive learning for object detection. Preprint arXiv:2102.04803, 2021a.",
|
| 1533 |
+
"Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS, 2021b.",
|
| 1534 |
+
"Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. Preprint arXiv:2011.10043, 2020.",
|
| 1535 |
+
"Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. \nSimmim: A simple framework for masked image modeling. Preprint arXiv:2111.09886, 2021c.",
|
| 1536 |
+
"An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. M6-t: Exploring sparse expert models and beyond. Preprint arXiv:2105.15082, 2021.",
|
| 1537 |
+
"Lewei Yao, Jianhua Han, Youpeng Wen, Xiaodan Liang, Dan Xu, Wei Zhang, Zhenguo Li, Chunjing Xu, and Hang Xu. Detclip: Dictionary-enriched visual-concept paralleled pre-training for open-world detection. In NeurIPS, 2022a.",
|
| 1538 |
+
"Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. Filip: Fine-grained interactive language-image pre-training. In ICLR, 2022b.",
|
| 1539 |
+
"Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. Logme: Practical assessment of pre-trained models for transfer learning. In ICML, pp. 12133-12143. PMLR, 2021.",
|
| 1540 |
+
"Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016.",
|
| 1541 |
+
"Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. In IJCV, 2019.",
|
| 1542 |
+
"Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer. Preprint arXiv:2111.07832, 2021."
|
| 1543 |
+
],
|
| 1544 |
+
"bbox": [
|
| 1545 |
+
171,
|
| 1546 |
+
102,
|
| 1547 |
+
825,
|
| 1548 |
+
566
|
| 1549 |
+
],
|
| 1550 |
+
"page_idx": 12
|
| 1551 |
+
},
|
| 1552 |
+
{
|
| 1553 |
+
"type": "header",
|
| 1554 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1555 |
+
"bbox": [
|
| 1556 |
+
171,
|
| 1557 |
+
32,
|
| 1558 |
+
478,
|
| 1559 |
+
47
|
| 1560 |
+
],
|
| 1561 |
+
"page_idx": 12
|
| 1562 |
+
},
|
| 1563 |
+
{
|
| 1564 |
+
"type": "page_number",
|
| 1565 |
+
"text": "13",
|
| 1566 |
+
"bbox": [
|
| 1567 |
+
490,
|
| 1568 |
+
946,
|
| 1569 |
+
508,
|
| 1570 |
+
959
|
| 1571 |
+
],
|
| 1572 |
+
"page_idx": 12
|
| 1573 |
+
},
|
| 1574 |
+
{
|
| 1575 |
+
"type": "text",
|
| 1576 |
+
"text": "A APPENDIX",
|
| 1577 |
+
"text_level": 1,
|
| 1578 |
+
"bbox": [
|
| 1579 |
+
171,
|
| 1580 |
+
102,
|
| 1581 |
+
299,
|
| 1582 |
+
118
|
| 1583 |
+
],
|
| 1584 |
+
"page_idx": 13
|
| 1585 |
+
},
|
| 1586 |
+
{
|
| 1587 |
+
"type": "text",
|
| 1588 |
+
"text": "A.1 DETAILS FOR CLUSTERING.",
|
| 1589 |
+
"text_level": 1,
|
| 1590 |
+
"bbox": [
|
| 1591 |
+
171,
|
| 1592 |
+
133,
|
| 1593 |
+
410,
|
| 1594 |
+
148
|
| 1595 |
+
],
|
| 1596 |
+
"page_idx": 13
|
| 1597 |
+
},
|
| 1598 |
+
{
|
| 1599 |
+
"type": "text",
|
| 1600 |
+
"text": "For data clustering, the features are computed by inferring the pre-train MAE, and the matrix $\\mathbf{Q}$ and $\\mathbf{C}$ are solved by the Sinkhorn-Knopp algorithm and SGD optimizer iteratively. For the Sinkhorn-Knopp algorithm, we set the iteration number as 3. The learning rate of SGD is set to 0.1, the momentum is 0.9 and weight decay is set to 0.9 for the sparse assignment of cluster results. We train 10 epochs in total and it costs 3 minutes and 20 seconds on average for a single GPU.",
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
169,
|
| 1603 |
+
159,
|
| 1604 |
+
826,
|
| 1605 |
+
231
|
| 1606 |
+
],
|
| 1607 |
+
"page_idx": 13
|
| 1608 |
+
},
|
| 1609 |
+
{
|
| 1610 |
+
"type": "text",
|
| 1611 |
+
"text": "A.2 COMPARISON UNDER FAIR PARAMETER COUNTS.",
|
| 1612 |
+
"text_level": 1,
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
171,
|
| 1615 |
+
247,
|
| 1616 |
+
555,
|
| 1617 |
+
262
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 13
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "text",
|
| 1623 |
+
"text": "The setting used in our work focuses on a fair comparison of FLOPs, referring to Table 4 in the main paper. Since TokenMoE and MoCE always activate only one expert throughout the whole pre-training and fine-tuning procedure, the FLOPs value is maintained close to MAE. Apart from this criterion, we further provide the comparison on equal parameter counts. As shown in Table 9, we train MAE under the same parameter count as the whole model of MoCE, and MoCE still outperforms MAE consistently.",
|
| 1624 |
+
"bbox": [
|
| 1625 |
+
169,
|
| 1626 |
+
272,
|
| 1627 |
+
823,
|
| 1628 |
+
358
|
| 1629 |
+
],
|
| 1630 |
+
"page_idx": 13
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "table",
|
| 1634 |
+
"img_path": "images/5786be322277eca58ace4011f2ced01a975678404f26014759aac8bf998b41c9.jpg",
|
| 1635 |
+
"table_caption": [
|
| 1636 |
+
"Table 9: Comparison of MAE and MoCE under equal parameter counts. We train MAE with a larger model that shares the same parameter count as the whole model of MoCE."
|
| 1637 |
+
],
|
| 1638 |
+
"table_footnote": [],
|
| 1639 |
+
"table_body": "<table><tr><td></td><td># Params</td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE</td><td>178.03</td><td>74.43</td><td>90.30</td><td>85.50</td><td>96.90</td><td>83.80</td><td>74.84</td><td>96.30</td><td>81.86</td><td>92.97</td><td>62.98</td><td>85.51</td><td>84.13</td></tr><tr><td>MoCE</td><td>178.03</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>85.54</td></tr></table>",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
176,
|
| 1642 |
+
410,
|
| 1643 |
+
823,
|
| 1644 |
+
463
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 13
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "text",
|
| 1650 |
+
"text": "A.3 MOCE FOR LARGER ARCHITECTURE.",
|
| 1651 |
+
"text_level": 1,
|
| 1652 |
+
"bbox": [
|
| 1653 |
+
171,
|
| 1654 |
+
486,
|
| 1655 |
+
477,
|
| 1656 |
+
500
|
| 1657 |
+
],
|
| 1658 |
+
"page_idx": 13
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "text",
|
| 1662 |
+
"text": "Here we provide the analysis on the larger architecture(ViT-L, $2.57\\times$ larger than ViT-B) to explore the scalability of MoCE. We first demonstrate that negative transfer still exists for larger architecture by training MAE with total ImageNet, Split-A and Split-B following the same setting in Sec. 3.1. As shown in the first three rows of Table. 10, a similar phenomenon is observed that MAE-L trained by Split-A performs better in Aircraft, Cars, DTD and SUN while Split-B in Flowers, Food, and Pets. On the other hand, MoCE-L can still alleviate the problem and therefore transfers better. We believe that the negative transfer phenomenon mainly exists when a common pre-trained model is used for various downstream tasks, due to the inevitable semantic gaps between the pre-training and downstream datasets, rather than the architecture.",
|
| 1663 |
+
"bbox": [
|
| 1664 |
+
169,
|
| 1665 |
+
512,
|
| 1666 |
+
823,
|
| 1667 |
+
638
|
| 1668 |
+
],
|
| 1669 |
+
"page_idx": 13
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "table",
|
| 1673 |
+
"img_path": "images/4fde99d7cd858ce3d1b7ed9d895f02c31c37f57f3640b8f21d52988cf3d4aa74.jpg",
|
| 1674 |
+
"table_caption": [
|
| 1675 |
+
"Table 10: Comparison of MAE and MoCE on ViT-L. We also train MAE with 2 subsets of ImageNet, namely Split-A and Split-B, following the same setting mentioned in Sec. 3.1. This table shows that negative transfer still exists on larger architectures, while MoCE can alleviate this problem and achieve better transfer results."
|
| 1676 |
+
],
|
| 1677 |
+
"table_footnote": [],
|
| 1678 |
+
"table_body": "<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE-L (full set)</td><td>74.30</td><td>93.97</td><td>88.60</td><td>97.85</td><td>82.47</td><td>77.61</td><td>96.67</td><td>81.22</td><td>93.97</td><td>67.99</td><td>88.30</td><td>85.72</td></tr><tr><td>MAE-L (Split-A)</td><td>79.70</td><td>91.59</td><td>89.33</td><td>96.97</td><td>80.38</td><td>78.67</td><td>95.44</td><td>82.97</td><td>92.49</td><td>68.73</td><td>82.41</td><td>85.33</td></tr><tr><td>MAE-L (Split-B)</td><td>73.42</td><td>90.80</td><td>86.00</td><td>96.18</td><td>78.73</td><td>77.34</td><td>96.75</td><td>83.63</td><td>94.92</td><td>66.06</td><td>85.85</td><td>84.52</td></tr><tr><td>MoCE-L</td><td>87.04</td><td>94.86</td><td>90.72</td><td>98.29</td><td>87.49</td><td>76.65</td><td>97.38</td><td>88.21</td><td>95.89</td><td>69.49</td><td>89.13</td><td>88.65+2.93</td></tr></table>",
|
| 1679 |
+
"bbox": [
|
| 1680 |
+
179,
|
| 1681 |
+
715,
|
| 1682 |
+
823,
|
| 1683 |
+
789
|
| 1684 |
+
],
|
| 1685 |
+
"page_idx": 13
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "text",
|
| 1689 |
+
"text": "A.4 PERFORMANCE OF MOCE WITHOUT PRE-TRAINING.",
|
| 1690 |
+
"text_level": 1,
|
| 1691 |
+
"bbox": [
|
| 1692 |
+
171,
|
| 1693 |
+
814,
|
| 1694 |
+
581,
|
| 1695 |
+
829
|
| 1696 |
+
],
|
| 1697 |
+
"page_idx": 13
|
| 1698 |
+
},
|
| 1699 |
+
{
|
| 1700 |
+
"type": "text",
|
| 1701 |
+
"text": "We provide results of MoCE trained from scratch for 200 epochs and 1600 epochs in Table 11. In this experiment, for clustering, we first pre-train MAE for 50 epochs and perform clustering. We then train MoCE from scratch for 200 epochs and 1600 epochs based on the clustering results. Although it is a common practice to utilize pre-trained dense models as initialization to accelerate pre-training (Wu et al., 2022; Bai et al., 2022), MoCE still outperforms MAE consistently in various downstream tasks when trained from scratch.",
|
| 1702 |
+
"bbox": [
|
| 1703 |
+
169,
|
| 1704 |
+
839,
|
| 1705 |
+
823,
|
| 1706 |
+
924
|
| 1707 |
+
],
|
| 1708 |
+
"page_idx": 13
|
| 1709 |
+
},
|
| 1710 |
+
{
|
| 1711 |
+
"type": "header",
|
| 1712 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
171,
|
| 1715 |
+
32,
|
| 1716 |
+
478,
|
| 1717 |
+
47
|
| 1718 |
+
],
|
| 1719 |
+
"page_idx": 13
|
| 1720 |
+
},
|
| 1721 |
+
{
|
| 1722 |
+
"type": "page_number",
|
| 1723 |
+
"text": "14",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
490,
|
| 1726 |
+
946,
|
| 1727 |
+
508,
|
| 1728 |
+
959
|
| 1729 |
+
],
|
| 1730 |
+
"page_idx": 13
|
| 1731 |
+
},
|
| 1732 |
+
{
|
| 1733 |
+
"type": "table",
|
| 1734 |
+
"img_path": "images/d8c6dc593df449f7e5d46549b921ae443435032616a558e94f2f58a3a5db5f4d.jpg",
|
| 1735 |
+
"table_caption": [
|
| 1736 |
+
"Table 11: Comparison of MAE and MoCE both training from scratch for 200 epochs (first two rows) and 1600 epochs (last two rows)."
|
| 1737 |
+
],
|
| 1738 |
+
"table_footnote": [],
|
| 1739 |
+
"table_body": "<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE</td><td>64.73</td><td>85.91</td><td>77.10</td><td>92.92</td><td>72.50</td><td>73.30</td><td>93.11</td><td>73.14</td><td>88.70</td><td>57.84</td><td>73.27</td><td>77.50</td></tr><tr><td>MoCE</td><td>71.16</td><td>90.55</td><td>82.46</td><td>96.06</td><td>76.56</td><td>74.57</td><td>95.70</td><td>79.67</td><td>92.58</td><td>62.20</td><td>84.25</td><td>82.34</td></tr><tr><td>MAE</td><td>72.38</td><td>90.47</td><td>83.51</td><td>95.69</td><td>68.40</td><td>75.48</td><td>96.10</td><td>79.98</td><td>92.35</td><td>62.43</td><td>84.79</td><td>81.96</td></tr><tr><td>MoCE</td><td>78.75</td><td>91.64</td><td>87.04</td><td>97.15</td><td>83.12</td><td>73.62</td><td>96.08</td><td>83.84</td><td>93.06</td><td>65.49</td><td>85.81</td><td>85.05</td></tr></table>",
|
| 1740 |
+
"bbox": [
|
| 1741 |
+
215,
|
| 1742 |
+
141,
|
| 1743 |
+
797,
|
| 1744 |
+
227
|
| 1745 |
+
],
|
| 1746 |
+
"page_idx": 14
|
| 1747 |
+
},
|
| 1748 |
+
{
|
| 1749 |
+
"type": "text",
|
| 1750 |
+
"text": "A.5 EVALUATION DETAILS FOR DOWNSSTREAM TASKS.",
|
| 1751 |
+
"text_level": 1,
|
| 1752 |
+
"bbox": [
|
| 1753 |
+
171,
|
| 1754 |
+
251,
|
| 1755 |
+
562,
|
| 1756 |
+
265
|
| 1757 |
+
],
|
| 1758 |
+
"page_idx": 14
|
| 1759 |
+
},
|
| 1760 |
+
{
|
| 1761 |
+
"type": "text",
|
| 1762 |
+
"text": "Classification. We mainly follow the settings of Ericsson et al. (2021). to make a fair comparison. Specifically, all models are trained by SGD with a momentum of 0.9. Weight decay is set to be 0 and the learning rate is searched among [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1]. Each model is fine-tuned for 2500 steps with cosine learning rate decay, a batch size of 64, and $224 \\times 224$ resolution. We fine-tune each model 3 times and report the average performance. We find such a setting generates a stable result.",
|
| 1763 |
+
"bbox": [
|
| 1764 |
+
169,
|
| 1765 |
+
276,
|
| 1766 |
+
826,
|
| 1767 |
+
361
|
| 1768 |
+
],
|
| 1769 |
+
"page_idx": 14
|
| 1770 |
+
},
|
| 1771 |
+
{
|
| 1772 |
+
"type": "text",
|
| 1773 |
+
"text": "Semantic segmentation. We evaluate MoCE on the semantic segmentation task that aims to predict the class for each pixel in the image. We report the metric of mean Intersection of Union (mIoU) averaged over all semantic categories in ADE20K (Zhou et al., 2019). We choose the best expert by applying ADE20K images to our clustering module and selecting the cluster that contains the most images. We use Adam (Loshchilov & Hutter, 2017) as the optimizer. The learning rate is set to 1e-3 with layer-wise learning rate decay (Clark et al., 2020) to be 0.65. We conduct fine-tuning for 160K steps. The batch size is 16. The detailed hyper-parameters can refer to Bao et al. (2022).",
|
| 1774 |
+
"bbox": [
|
| 1775 |
+
169,
|
| 1776 |
+
376,
|
| 1777 |
+
823,
|
| 1778 |
+
474
|
| 1779 |
+
],
|
| 1780 |
+
"page_idx": 14
|
| 1781 |
+
},
|
| 1782 |
+
{
|
| 1783 |
+
"type": "text",
|
| 1784 |
+
"text": "Detection and Instance segmentation are also evaluated on COCO (Lin et al., 2014). We follow the same deployment method as the one used in the semantic segmentation task to choose the best expert. Following iBOT (Zhou et al., 2021) we adopt the Cascade Mask R-CNN (Cai & Vasconcelos, 2018; He et al., 2017) and the multi-scale training. The shorter side is randomly resized between 480 and 800 while the longer one is no longer than 1333. The batch size is 16, and the initial learning rate is 1e-4. The layer-wise learning rate decay ratio (Clark et al., 2020) is set to 0.75. We train the model for 12 epochs and decrease the learning rate by $10\\mathrm{x}$ at epoch 9 and 11.",
|
| 1785 |
+
"bbox": [
|
| 1786 |
+
169,
|
| 1787 |
+
489,
|
| 1788 |
+
823,
|
| 1789 |
+
587
|
| 1790 |
+
],
|
| 1791 |
+
"page_idx": 14
|
| 1792 |
+
},
|
| 1793 |
+
{
|
| 1794 |
+
"type": "header",
|
| 1795 |
+
"text": "Published as a conference paper at ICLR 2023",
|
| 1796 |
+
"bbox": [
|
| 1797 |
+
171,
|
| 1798 |
+
32,
|
| 1799 |
+
478,
|
| 1800 |
+
47
|
| 1801 |
+
],
|
| 1802 |
+
"page_idx": 14
|
| 1803 |
+
},
|
| 1804 |
+
{
|
| 1805 |
+
"type": "page_number",
|
| 1806 |
+
"text": "15",
|
| 1807 |
+
"bbox": [
|
| 1808 |
+
490,
|
| 1809 |
+
946,
|
| 1810 |
+
508,
|
| 1811 |
+
959
|
| 1812 |
+
],
|
| 1813 |
+
"page_idx": 14
|
| 1814 |
+
}
|
| 1815 |
+
]
|
2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2402.05xxx/2402.05382/769b324c-0be5-4ea3-b101-152fbb504e72_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:93b72cc5d54dc79606bcb566bb98abb308abfc9d59593cd144fbafd5efd0d7ff
|
| 3 |
+
size 3765117
|
2402.05xxx/2402.05382/full.md
ADDED
|
@@ -0,0 +1,332 @@
| 1 |
+
# TASK-CUSTOMIZED MASKED AUTOENCODER VIA MIXTURE OF CLUSTER-CONDITIONAL EXPERTS
|
| 2 |
+
|
| 3 |
+
Zhili Liu $^{1,2*}$ , Kai Chen $^{1*}$ , Jianhua Han $^{2}$ , Lanqing Hong $^{2}$ , Hang Xu $^{2}$
|
| 4 |
+
|
| 5 |
+
Zhenguo Li $^{2}$ , James T. Kwok $^{1}$
|
| 6 |
+
|
| 7 |
+
<sup>1</sup> Department of Computer Science and Engineering, Hong Kong University of Science and Technology
|
| 8 |
+
$^{2}$ Huawei Noah's Ark Lab
|
| 9 |
+
|
| 10 |
+
{zhili.liu, kai.chen}@connect.ust.hk, {hanjianhua4, honglanqing, xuHang, li.zhenguo}@huawei.com jamesk@cse.ust.hk
|
| 11 |
+
|
| 12 |
+
# ABSTRACT
|
| 13 |
+
|
| 14 |
+
Masked Autoencoder (MAE) is a prevailing self-supervised learning method that achieves promising results in model pre-training. However, when the various downstream tasks have data distributions different from the pre-training data, the semantically irrelevant pre-training information might result in negative transfer, impeding MAE's scalability. To address this issue, we propose a novel MAE-based pre-training paradigm, Mixture of Cluster-conditional Experts (MoCE), which can be trained once but provides customized pre-training models for diverse downstream tasks. Different from the mixture of experts (MoE), our MoCE trains each expert only with semantically relevant images by using cluster-conditional gates. Thus, each downstream task can be allocated to its customized model pretrained with data most similar to the downstream data. Experiments on a collection of 11 downstream tasks show that MoCE outperforms the vanilla MAE by $2.45\%$ on average. It also obtains new state-of-the-art self-supervised learning results on detection and segmentation.
|
| 15 |
+
|
| 16 |
+
# 1 INTRODUCTION
|
| 17 |
+
|
| 18 |
+
Self-supervised learning (SSL), which learns effective transferable representations without human annotations, has become a prevailing model pre-training paradigm (He et al., 2020; Chen et al., 2021a; Bao et al., 2022). Currently, the most prevalent SSL method is the Masked Autoencoder (MAE) (He et al., 2022), which constructs supervision signals from raw image data by masking random input patches and then reconstructing the missing pixels. This simple strategy has proved efficient in the training of large-scale models. For example, ViT (Dosovitskiy et al., 2021) shows impressive performance on popular benchmarks such as the ImageNet $^{1}$ (Deng et al., 2009). However, does MAE really scale well for various downstream tasks (Deng et al., 2009; Lin et al., 2014; Zhou et al., 2019; Han et al., 2021; Li et al., 2022a)?
|
| 19 |
+
|
| 20 |
+
Preliminary studies (in Section 3.1) show that the MAE indeed suffers from negative transfer (Liu et al., 2022) when transferring to downstream tasks with very different semantics. Figure 1(a) shows that on 9 of 11 downstream tasks, an MAE pre-trained on the full ImageNet data is outperformed by the one that is pre-trained on only the semantically relevant data subsets. Hence, using pre-training data that are semantically irrelevant can hurt transfer performance.
|
| 21 |
+
|
| 22 |
+
The above observation motivates the need for task-customized pre-training. A promising model for this is the Mixture of Experts (MoE) (Shazeer et al., 2017; Riquelme et al., 2021), which uses a multi-expert architecture to provide customized models for different input tokens. However, unlike supervised pre-training, self-supervised learning lacks semantic labels, and thus the experts differ more on low-level information than semantics, referring to Figure 1(b). Experiments in Section 4.2 show that a naive adoption of MoE to the MAE has inferior performance. Since various downstream tasks contain different semantics, semantic-related experts may be preferred.
|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
(a) Negative transfer phenomenon on MAE.
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
Figure 1: (a) Transfer performance of MAEs pre-trained on Split-A (blue), Split-B (red) and full ImageNet data (white). Only two of the eleven downstream tasks benefit from using the full ImageNet data for pre-training (more details in Section 3.1). (b) TokenMoE uses pixel RGB values as reconstruction targets. Thus, tokens with similar pixel values tend to be routed to the same expert, leading to two types of mistakes: (i) same semantics but routed to different experts, (ii) different semantics but routed to the same expert.
|
| 31 |
+
|
| 32 |
+

|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
|
| 36 |
+

|
| 37 |
+
(b) Problem with TokenMoE.
|
| 38 |
+
|
| 39 |
+
In this paper, we propose the Mixture of Cluster-conditional Experts (MoCE), a novel paradigm that achieves task-customized self-supervised pre-training by clustering the data and explicitly training each expert with images of similar semantics. The MoCE procedure has three stages. First, we cluster the whole dataset using a pre-trained dense MAE model. We then construct the MoCE with a multi-expert structure, where each expert is trained on clusters selected by routing tokens based on the cluster embedding (instead of the token embedding). To stabilize training and enhance the confidence of the gating results, a regularization loss is proposed. Finally, when a downstream task arrives, a search procedure selects the closest cluster. Empirically, the proposed MoCE shows superior performance over MAE on a collection of 11 downstream tasks. Moreover, only a MoCE sub-model is needed at deployment, saving inference time and model capacity.
|
| 40 |
+
|
| 41 |
+
To summarize, our main contributions are:
|
| 42 |
+
|
| 43 |
+
1. We systematically analyze the negative transfer phenomenon of MAE, and show that naively adopting the MoE to MAE cannot improve transfer performance of downstream tasks.
|
| 44 |
+
2. We propose the MoCE, which trains each expert with semantics-aware clusters so that similar clusters can be routed to the same expert.
|
| 45 |
+
3. We demonstrate effectiveness of the proposed MoCE on a collection of 11 downstream tasks, and achieve up to $2.45\%$ performance improvement in Top-1 accuracy. State-of-the-art self-supervised results are also achieved on the detection and segmentation tasks. To the best of our knowledge, this is the first work that achieves state-of-the-art transfer performance by training vision MoE models with ImageNet under the SSL setting.
|
| 46 |
+
|
| 47 |
+
# 2 RELATED WORK
|
| 48 |
+
|
| 49 |
+
Self-supervised Learning. Previous works mainly focus on the design of pretext tasks with image transformations (Doersch et al., 2015; Gidaris et al., 2018), inpainting (Pathak et al., 2016), colorization (Zhang et al., 2016), contrastive learning (Chen et al., 2020; He et al., 2020; Grill et al., 2020; Caron et al., 2020; Radford et al., 2021b; Yao et al., 2022b), and for specific downstream tasks (Wang et al., 2020; Xie et al., 2020; 2021a; Chen et al., 2021a; Yao et al., 2022a). Motivated by the design of BERT (Devlin et al., 2018), masked image modeling (MIM) is recently proposed to learn by reconstructing masked images. BEiT (Bao et al., 2022) is the pioneering work that predicts visual tokens generated by a pre-trained tokenizer (Radford et al., 2021a). SimMIM (Xie et al., 2021c) simplifies the framework by directly utilizing the pixel RGB values as reconstruction targets.
|
| 50 |
+
|
| 51 |
+
MAE (He et al., 2022) proposes an asymmetric encoder-decoder architecture for better training efficiency. MixedAE (Chen et al., 2023) further explores image mixing for object-aware pre-training. In this paper, we will focus on the MAE due to its effectiveness and efficiency.
|
| 52 |
+
|
| 53 |
+
While self-supervised learning methods have achieved improved transfer performance, most of them only provide a unified representation to various downstream tasks. This may suffer from negative transfer as demonstrated in Section 3.1. The work most relevant to ours is SDR (Liu et al., 2022), which trains 256 subnets with 256 disjoint ImageNet subsets simultaneously. However, this paper differs from SDR in three ways: (i) the mapping from subsets to subnets in SDR is randomly selected and fixed during pre-training, while MoCE achieves self-adaptive mapping with cluster-conditional gates; (ii) Progressive training is required in SDR, while MoCE enjoys one-time end-to-end training; (iii) During the transfer process, SDR uses brute force to select the best sub-model, while MoCE reuses the clustering module to achieve more efficient selection.
|
| 54 |
+
|
| 55 |
+
Mixture of Experts. The mixture of experts (MoE) has a long history (Jacobs et al., 1991; Jordan & Jacobs, 1994; Shazeer et al., 2017). Recently, it is considered as an effective tool for model scale-up in natural language processing (Lepikhin et al., 2020; Fedus et al., 2021; Yang et al., 2021; Lewis et al., 2021). With the growing interest of the Vision Transformer (Dosovitskiy et al., 2021; Liu et al., 2021; Wang et al., 2021; Xie et al., 2021b), MoE for vision (Riquelme et al., 2021; Wu et al., 2022) is also explored recently. However, there is still no self-supervised MoE model that can be trained on medium-sized datasets such as the ImageNet-1k.
|
| 56 |
+
|
| 57 |
+
Kudugunta et al. (2021); Ma et al. (2018) regard the MoE as a multi-task learning model, and use it for multi-language translation and recommendation systems, respectively. In this paper, we show that for self-supervised learning on images, an additional clustering component is crucial in the learning of a highly performant MoE model. Moreover, while the downstream tasks should follow the pre-training task in (Kudugunta et al., 2021; Ma et al., 2018), the MoCE can be used with any downstream task due to its unsupervised pre-training. Puigcerver et al. (2020) shares a similar setting with us, but their model is pre-trained in a supervised learning manner. Moreover, their mapping between experts and data is pre-defined and fixed during training, while that for the MoCE is learned dynamically and achieves better performance.
|
| 58 |
+
|
| 59 |
+
Multi-Task Learning aims to learn a model that is appropriate for multiple tasks. Hard-parameter sharing, which uses a shared backbone with multi-heads for the different tasks, has been shown to be effective on time series, language and graph data (Liu et al., 2019; Hu et al., 2019; McDermott et al., 2021). Gao et al. (2021) claims that the network design may further benefit from the use of task relationships, and trains masks for different tasks. However, they require the task information be available during model training, which is not possible for downstream tasks in SSL pre-training.
|
| 60 |
+
|
| 61 |
+
# 3 PROPOSED METHOD
|
| 62 |
+
|
| 63 |
+
In this section, we first empirically demonstrate the negative transfer phenomenon in MAE (Section 3.1). We then discuss the limitations of adopting TokenMoE (Riquelme et al., 2021) with MAE (Section 3.2), and propose the Mixture of Cluster-conditional Experts (MoCE), a novel paradigm achieving customized pre-training for various downstream tasks (Section 3.3).
|
| 64 |
+
|
| 65 |
+
# 3.1 NEGATIVE TRANSFER IN MASKED AUTOENCODER
|
| 66 |
+
|
| 67 |
+
In this section, we evaluate the transfer performance of MAE models pre-trained with data of different semantics on various downstream tasks. As in (Huh et al., 2016; Liu et al., 2022), we first split the ImageNet data into two disjoint subsets, Split-A and Split-B, based on the labels' semantic dissimilarities in the WordNet tree (Miller, 1998). Split-A mainly contains inanimate objects (such as cars and airplanes), while Split-B primarily involves organisms (such as plants and animals). We then pre-train MAEs on Split-A, Split-B and the full ImageNet without data annotation, and evaluate the three resulting models on 11 downstream tasks. See more implementation details in Section 4.1.
|
| 68 |
+
|
| 69 |
+
As shown in Figure 1(a), the MAE pre-trained with Split-A performs best on Aircraft, Cars, SUN397 and DTD, while the MAE pre-trained with Split-B performs best on Flowers, Food, Pets, Cifar-10 and Cifar-100. Only two of the eleven tasks (Caltech and VOC) benefit from using the full
|
| 70 |
+
|
| 71 |
+

|
| 72 |
+
Figure 2: Model design comparison between (a) TokenMoE (Riquelme et al., 2021) and (b) MoCE. Both methods utilize the multi-expert architecture, with the main difference being the input of the gating network: MoCE adopts the cluster embedding corresponding to the current token, as in Eqn. 4, instead of the token embedding as in Section 3.2. Therefore, each expert can be trained on semantically similar images to alleviate the negative transfer phenomenon.
|
| 73 |
+
|
| 74 |
+

|
| 75 |
+
|
| 76 |
+
ImageNet data. This suggests that for tasks whose semantics are close to inanimate objects, adding pre-training data from Split-B is not useful, and vice versa for tasks whose semantics are close to organisms. To conclude, the introduction of semantically irrelevant pre-training data may impede transfer performance for downstream tasks. This negative transfer phenomenon motivates us to develop an efficient and automatic paradigm for task-customized pre-training.
|
| 77 |
+
|
| 78 |
+
# 3.2 EXPLORING TOKENMOE FOR MASKED AUTOENCODER
|
| 79 |
+
|
| 80 |
+
Overview of TokenMoE. TokenMoE (Riquelme et al., 2021) is a successful customized supervised pre-training model built upon the ViT (Dosovitskiy et al., 2021), which mainly consists of transformer blocks with alternating multi-head self-attention (MSA) and multi-layer perceptron (MLP) layers. Specifically, TokenMoE converts several transformer blocks to Mixture of Experts (MoE) blocks by expanding the MLP layer $N$ times, each of which is considered an expert (denoted $E_{i}(\cdot)$ , $i = 1,2,\dots,N$ ). Conditional computation on the $N$ experts is controlled by a gate, a linear layer whose input is the token embedding $\pmb{x}$ and whose output is the top- $K$ probabilities over the experts: $G(\pmb{x}) = TopK(\sigma(\pmb{W}_{g}\pmb{x} + \epsilon))$ , where $K$ is the number of experts to be activated, $\pmb{W}_{g}$ is the gate parameter, $\sigma$ is the softmax function, and $\epsilon \sim \mathcal{N}(0,\frac{1}{N})$ . $TopK(\cdot)$ returns the $K$ largest entries of $\sigma(\pmb{W}_{g}\pmb{x} + \epsilon)$ unchanged but sets the others to zero. Thus, each token is routed to its corresponding experts. The final output is represented as
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
\mathbf {y} = \sum_ {i = 1} ^ {N} [ G (\boldsymbol {x}) ] _ {i} E _ {i} (\boldsymbol {x}). \tag {1}
|
| 84 |
+
$$
|
| 85 |
+
|
| 86 |
+
As in (Riquelme et al., 2021), importance loss and load loss are also used to enforce a balanced use of the experts. Unless otherwise specified, we set $K = 1$ and $N = 8$ in all our experiments.
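For concreteness, the gating computation can be sketched as follows. This is a minimal, illustrative PyTorch sketch of the noisy top-$K$ gate and Eqn. (1); the function and variable names are ours, and details such as expert capacity limits and the auxiliary importance/load losses of Riquelme et al. (2021) are omitted.

```python
import torch
import torch.nn.functional as F

def token_moe_gate(x, W_g, K=1):
    """Noisy top-K gate: softmax over noisy logits, keep the K largest
    probabilities and zero out the rest (a sketch of the gate in Sec. 3.2)."""
    N = W_g.shape[0]                                      # number of experts
    logits = W_g @ x + torch.randn(N) * (1.0 / N) ** 0.5  # eps ~ N(0, 1/N)
    probs = F.softmax(logits, dim=-1)
    vals, idx = probs.topk(K)
    return torch.zeros_like(probs).scatter_(-1, idx, vals)

def token_moe_layer(x, experts, W_g, K=1):
    """y = sum_i [G(x)]_i * E_i(x), as in Eqn. (1)."""
    gates = token_moe_gate(x, W_g, K)
    return sum(g * expert(x) for g, expert in zip(gates, experts) if g > 0)
```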
|
| 87 |
+
|
| 88 |
+
Limitation of TokenMoE. As will be shown in the experimental results (Table 3), naively applying TokenMoE to the MAE cannot improve performance, even with intensive hyper-parameter tuning and data augmentations (e.g., Repeated Augmentation (Hoffer et al., 2020) and RandAugment (Cubuk et al., 2020) with larger magnitude). Figure 3(a) shows the routing heatmaps of the pre-trained TokenMoE model. As can be seen, the routing process has little correlation with the ImageNet labels. Moreover, expert 3 is selected most of the time (for 91.8% of the classes), which degenerates the multi-expert network into a single-expert network. As illustrated in Figure 1(b), we speculate that this is due to the use of low-level pixel values (instead of the semantic class labels used in the original TokenMoE) as reconstruction targets. This is also observed by Li et al. (2022b).
|
| 89 |
+
|
| 90 |
+
# 3.3 MIXTURE OF CLUSTER-CONDITIONAL EXPERTS
|
| 91 |
+
|
| 92 |
+
To address the limitations of TokenMoE, we propose the Mixture of Cluster-conditional Experts (MoCE), which trains each expert in a semantic-aware manner. The procedure consists of data clustering, architecture and gate design, and deployment.
|
| 93 |
+
|
| 94 |
+
Data Clustering. To train each expert semantically, a clustering procedure is first performed to simulate the label partitioning in Section 3.1. With a pre-trained MAE model, we collect all the image features $\pmb{f}_i$ (normalized to unit length, $\| \pmb{f}_i \| = 1$ ), and form the feature matrix $\pmb{F} = [f_1, f_2, \dots, f_n] \in \mathbb{R}^{d \times n}$ , where $n$ is the number of images and $d$ is the feature dimension. The learnable cluster centroids are $\pmb{C} = [c_1, c_2, \dots, c_m] \in \mathbb{R}^{d \times m}$ (with $\| c_i \| = 1$ ), where $m$ is the desired number of clusters. The assignment of features to clusters is computed as $\pmb{A} = \pmb{F}^T \pmb{C}$ . Following Asano et al. (2019), let $Q \in \mathbb{R}^{m \times n}$ be the posterior distribution of the clustering, whose objective is
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
\max _ {\boldsymbol {Q}} \operatorname {T r} \left(\boldsymbol {Q} ^ {T} \boldsymbol {A}\right) + \epsilon H (\boldsymbol {Q}) \quad s. t. \quad \boldsymbol {Q} \mathbf {1} _ {n} = \frac {1}{m} \mathbf {1} _ {m}, \quad \boldsymbol {Q} ^ {T} \mathbf {1} _ {m} = \frac {1}{n} \mathbf {1} _ {n}, \tag {2}
|
| 98 |
+
$$
|
| 99 |
+
|
| 100 |
+
where $\mathbf{1}_m$ is the $m$ -dimensional vector of all ones, $H$ is the entropy function, and the constraints force the clustering results to be balanced. $Q$ and $C$ are optimized iteratively. For a given $C$ , $Q$ is solved by the Sinkhorn-Knopp algorithm (Cuturi, 2013); while for a given $Q$ , $C$ is obtained by minimizing the cross entropy between $Q$ and $A$ with SGD. We take the final $C$ and $Q$ as the cluster centroids and clustering assignments, respectively. The implementation details are in Appendix A.1.
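A minimal sketch of this alternating optimization is given below. It assumes unit-normalized features and follows the hyper-parameters reported in Appendix A.1 (3 Sinkhorn iterations, SGD with learning rate 0.1 and momentum 0.9, 10 epochs); the entropy weight `eps` and the use of full-batch updates are our simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def sinkhorn(A, eps=0.05, n_iters=3):
    """Solve Eqn. (2) for Q given scores A = F^T C (shape n x m): rescale
    exp(A/eps) so cluster marginals are 1/m and image marginals are 1/n."""
    Q = torch.exp(A.T / eps)                     # (m, n)
    Q = Q / Q.sum()
    m, n = Q.shape
    for _ in range(n_iters):
        Q = Q / Q.sum(dim=1, keepdim=True) / m   # rows (clusters) -> 1/m
        Q = Q / Q.sum(dim=0, keepdim=True) / n   # columns (images) -> 1/n
    return Q

def fit_clusters(feats, m, epochs=10, lr=0.1):
    """Alternate between solving Q (Sinkhorn-Knopp, C fixed) and updating
    the centroids C by SGD on the cross-entropy between Q and A."""
    d, n = feats.shape                           # feats = F, columns unit-norm
    C = F.normalize(torch.randn(d, m), dim=0).requires_grad_()
    opt = torch.optim.SGD([C], lr=lr, momentum=0.9)
    for _ in range(epochs):
        with torch.no_grad():
            Q = sinkhorn(feats.T @ C)            # posterior assignments (m, n)
        log_p = torch.log_softmax(feats.T @ C, dim=1)   # (n, m)
        loss = -(Q.T * log_p).sum()              # cross-entropy between Q and A
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            C.data = F.normalize(C.data, dim=0)  # re-project to ||c_i|| = 1
    return C.detach(), Q
```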
|
| 101 |
+
|
| 102 |
+
Architecture. The whole network is trained on the full ImageNet data, with each expert trained on images from the clusters selected by the MoCE gates' routing results. As each data cluster contains on average only a $1 / K$ fraction of the original sample size, each expert is trained with correspondingly fewer samples than the densely modeled parameters (e.g., the MSA parameters (Riquelme et al., 2021)). We therefore further adopt a distillation loss $\mathcal{L}_{distill}$ , defined as the $\ell_2$ distance between the features generated by the whole network and by each expert. The overall loss function can be formulated as
|
| 103 |
+
|
| 104 |
+
$$
|
| 105 |
+
\min _ {\boldsymbol {\theta}} \sum_ {i = 1} ^ {m} \mathcal {L} _ {M A E} \left(D _ {i}; \boldsymbol {\theta} _ {i}\right) + \mathcal {L} _ {\text {d i s t i l l}}, \tag {3}
|
| 106 |
+
$$
|
| 107 |
+
|
| 108 |
+
where $D_{i}$ is the $i$ th cluster, $\theta_{i}$ is the parameter used for training $D_{i}$ , and $\mathcal{L}_{MAE}(D_i;\theta_i)$ is the reconstruction loss for masked image modeling. $\theta_{i}$ consists of several experts in different layers, as explained in the following.
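As a sketch, the per-batch objective of Eqn. (3) might be assembled as below; treating the two feature tensors and the distillation weight as given inputs is our assumption, since the paper does not spell out these implementation details.

```python
import torch

def moce_loss(recon_losses, feats_full, feats_expert, distill_weight=1.0):
    """Eqn. (3) for one batch: sum of per-cluster MAE reconstruction
    losses plus the l2 distance between features of the whole network
    and of the activated expert (the distillation term)."""
    l_mae = torch.stack(recon_losses).sum()
    l_distill = (feats_full - feats_expert).pow(2).sum(dim=-1).sqrt().mean()
    return l_mae + distill_weight * l_distill
```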
|
| 109 |
+
|
| 110 |
+
Gate Design. As in TokenMoE, we replace several MLP layers in the ViT with layers equipped with MoCE gates. In TokenMoE, each token is routed to the experts independently. In MoCE, tokens from images of the same cluster are routed to the same expert. The MoCE gate output can thus be written as
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
G (\mathbf {x}) = \operatorname {T o p} K \left(\sigma \left(\boldsymbol {W} _ {g} \cdot \boldsymbol {C} _ {[ \mathbf {x} ]} + \epsilon\right)\right), \tag {4}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
where $W_{g}$ is the gate parameter, and $C_{[\mathbf{x}]}$ is the embedding of the cluster that $\mathbf{x}$ belongs to. Empirically, we find that the confidence of $G(\mathbf{x})$ (its max entry) is low, and consequently the mapping between clusters and experts varies a lot during pre-training. Inspired by the importance and load losses (Riquelme et al., 2021), we add the following loss $\mathcal{L}_{imbalance}$ to enhance the confidence of the gates. Since it makes $G(\mathbf{x})$ sharper, we call it the imbalance loss:
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
\mathcal {L} _ {\text {i m b a l a n c e}} = - \sum_ {i = 1} ^ {n} \left(\frac {\operatorname {s t d} (G (\mathbf {x}) _ {i})}{\operatorname {m e a n} (G (\mathbf {x}) _ {i})}\right) ^ {2}, \tag {5}
|
| 120 |
+
$$
|
| 121 |
+
|
| 122 |
+
In practical implementation, the loss is computed over the samples in a batch. The imbalance loss is the negative squared coefficient of variation of the gate outputs; minimizing it thus encourages high-variance, confident routing decisions.
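Below is a minimal sketch of the cluster-conditional gate of Eqn. (4) and the imbalance loss of Eqn. (5); the batch layout and the noise handling are our assumptions.

```python
import torch
import torch.nn.functional as F

def moce_gate(cluster_ids, C, W_g, K=1):
    """Eqn. (4): the gate input is the cluster embedding C[:, cluster_id]
    of each image, so all tokens of images from one cluster share the
    same routing decision."""
    N = W_g.shape[0]
    cluster_emb = C[:, cluster_ids].T                   # (batch, d)
    noise = torch.randn(len(cluster_ids), N) * (1.0 / N) ** 0.5
    probs = F.softmax(cluster_emb @ W_g.T + noise, dim=-1)
    vals, idx = probs.topk(K, dim=-1)
    return torch.zeros_like(probs).scatter_(-1, idx, vals)

def imbalance_loss(gates):
    """Eqn. (5): negative squared coefficient of variation of the gate
    outputs, computed per sample and summed over the batch; minimizing
    it sharpens the routing."""
    cv = gates.std(dim=-1) / gates.mean(dim=-1)
    return -(cv ** 2).sum()
```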
|
| 123 |
+
|
| 124 |
+
**Deployment.** On deployment, customized experts are selected from MoCE and fine-tuned for each downstream task. As shown in Section 3.1, we prefer the experts pre-trained on data whose semantics are closest to those of the downstream task. These can be obtained by reusing the data clustering module. Specifically, we feed the images of the downstream task through the pre-trained MAE model and collect all the image features as $\boldsymbol{F}_{task}$ . The assignment of downstream images to the clusters is then computed as $\boldsymbol{A}_{task} = \boldsymbol{F}_{task}^T\boldsymbol{C}$ . We select the cluster with the most assigned downstream images, and use the corresponding experts (a sub-model of the whole MoCE model) for deployment. When only one expert is activated at each MoCE layer ( $K = 1$ ), only a regular ViT model is needed for downstream fine-tuning, which is much more efficient than a full MoE.
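This selection step can be sketched in a few lines; the function name and the argmax-based hard assignment are our assumptions, consistent with the description above.

```python
import torch

def select_cluster_for_task(feats_task, C):
    """Deployment: compute A_task = F_task^T C, hard-assign each
    downstream image to its nearest centroid, and return the cluster
    receiving the most images; its experts form the deployed sub-model."""
    A_task = feats_task.T @ C                     # (n_task, m)
    assign = A_task.argmax(dim=1)                 # nearest centroid per image
    counts = torch.bincount(assign, minlength=C.shape[1])
    return int(counts.argmax())
```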
|
| 125 |
+
|
| 126 |
+
Table 1: Transfer accuracy (%) of self-supervised learning models on 11 downstream tasks.
|
| 127 |
+
|
| 128 |
+
<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td colspan="13">ResNet-50</td></tr><tr><td>BYOL</td><td>82.39</td><td>90.12</td><td>87.33</td><td>96.28</td><td>82.15</td><td>74.57</td><td>95.96</td><td>82.13</td><td>88.52</td><td>64.41</td><td>83.97</td><td>84.35</td></tr><tr><td>DeepCluster-v2</td><td>78.75</td><td>90.51</td><td>86.33</td><td>96.48</td><td>82.28</td><td>75.43</td><td>96.16</td><td>83.68</td><td>90.33</td><td>66.68</td><td>81.37</td><td>84.36</td></tr><tr><td colspan="13">Vision Transformer</td></tr><tr><td>Supervised</td><td>76.55</td><td>89.98</td><td>86.19</td><td>96.79</td><td>83.96</td><td>75.09</td><td>93.94</td><td>85.17</td><td>92.54</td><td>64.54</td><td>87.22</td><td>84.72</td></tr><tr><td>DINO</td><td>66.50</td><td>91.65</td><td>76.37</td><td>98.12</td><td>86.69</td><td>75.73</td><td>96.40</td><td>93.77</td><td>93.97</td><td>59.33</td><td>86.62</td><td>84.10</td></tr><tr><td>MoCo v3</td><td>76.29</td><td>91.64</td><td>85.18</td><td>97.99</td><td>86.98</td><td>72.64</td><td>95.33</td><td>83.94</td><td>92.35</td><td>65.54</td><td>84.21</td><td>84.74</td></tr><tr><td>BEiT</td><td>53.16</td><td>79.02</td><td>68.11</td><td>94.34</td><td>73.54</td><td>68.04</td><td>91.33</td><td>79.59</td><td>84.02</td><td>56.13</td><td>65.65</td><td>73.90</td></tr><tr><td>MAE</td><td>72.38</td><td>90.47</td><td>83.51</td><td>95.69</td><td>68.40</td><td>75.48</td><td>96.10</td><td>79.98</td><td>92.35</td><td>62.43</td><td>84.79</td><td>81.96</td></tr><tr><td>MAE*</td><td>72.71</td><td>91.24</td><td>84.47</td><td>96.15</td><td>77.33</td><td>75.05</td><td>96.25</td><td>80.49</td><td>92.78</td><td>62.46</td><td>85.02</td><td>83.09</td></tr><tr><td>MoCE (Ours)</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>\( 85.54^{+2.45} \)</td></tr></table>
|
| 129 |
+
|
| 130 |
+
Table 2: Transfer accuracy (%) on detection and segmentation.
|
| 131 |
+
|
| 132 |
+
<table><tr><td rowspan="2">Method</td><td>ADE20K</td><td colspan="6">COCO</td></tr><tr><td>mIoU</td><td>APbb</td><td>APbb50</td><td>APbb75</td><td>APmk</td><td>APmk50</td><td>APmk75</td></tr><tr><td>Supervised</td><td>46.9</td><td>48.8</td><td>68.7</td><td>52.7</td><td>42.5</td><td>65.9</td><td>45.5</td></tr><tr><td>DINO</td><td>46.9</td><td>49.5</td><td>69.1</td><td>53.6</td><td>42.9</td><td>66.0</td><td>46.3</td></tr><tr><td>MoCo v3</td><td>46.8</td><td>47.2</td><td>66.9</td><td>50.8</td><td>41.1</td><td>63.6</td><td>44.1</td></tr><tr><td>BEiT</td><td>45.6</td><td>40.8</td><td>59.4</td><td>44.1</td><td>36.0</td><td>56.8</td><td>38.2</td></tr><tr><td>MAE</td><td>48.1</td><td>50.6</td><td>69.4</td><td>55.0</td><td>43.8</td><td>66.6</td><td>47.5</td></tr><tr><td>MoCE</td><td>48.3</td><td>51.1</td><td>69.8</td><td>55.4</td><td>44.2</td><td>67.0</td><td>48.1</td></tr></table>
|
| 133 |
+
|
| 134 |
+
# 4 EXPERIMENTS
|
| 135 |
+
|
| 136 |
+
In this section, we first introduce the setup of the pre-training and fine-tuning stages of MoCE in Sec. 4.1. We then demonstrate the effectiveness of MoCE by evaluating the pre-trained models on a collection of 11 downstream tasks, with a detailed analysis of why MoCE is superior to vanilla MAE and TokenMoE, in Sec. 4.2. Finally, we conduct ablation studies on the key components of MoCE in Sec. 4.3.
|
| 137 |
+
|
| 138 |
+
# 4.1 SETUP
|
| 139 |
+
|
| 140 |
+
For all experiments, we replace two MLP layers with MoCE layers in the original ViT-B (Dosovitskiy et al., 2021). Following Wu et al. (2022), the layers with the greatest gradient magnitudes are selected (the last two MLP layers in our experiments). Unless otherwise specified, the number of experts is 8 and the number of clusters is 256. Our model utilizes the officially released 1600-epoch pre-trained MAE model $^{2}$ and continues training for an extra 200 epochs. Each expert is initialized from the corresponding dense model with a small weight perturbation. The training procedure mainly follows that of MAE, except that we multiply the base learning rate by 0.1. All regularization loss weights are set to 0.01 by default.
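The expert initialization described above can be sketched as follows; the perturbation scale `noise_std` is our assumption, as the paper only states that a small weight perturbation is used.

```python
import copy
import torch

def init_experts_from_dense(dense_mlp: torch.nn.Module, num_experts=8, noise_std=1e-3):
    """Clone the pre-trained dense MLP into num_experts experts, each
    perturbed by small Gaussian noise so the experts can diverge during
    continued pre-training."""
    experts = torch.nn.ModuleList()
    for _ in range(num_experts):
        expert = copy.deepcopy(dense_mlp)
        with torch.no_grad():
            for p in expert.parameters():
                p.add_(noise_std * torch.randn_like(p))
        experts.append(expert)
    return experts
```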
|
| 141 |
+
|
| 142 |
+
To ensure a fair comparison with the vision transformer on downstream classification tasks, we mainly follow the hyper-parameter settings in (Dosovitskiy et al., 2021; Riquelme et al., 2021) and the benchmark settings in (Ericsson et al., 2021). The proposed model is compared with various self-supervised models, including DINO (Caron et al., 2021), MoCo v3 (Chen et al., 2021b), BEiT (Bao et al., 2022), and the highly performant ResNet-50 models of BYOL (Grill et al., 2020) and DeepCluster-v2 (Caron et al., 2018). We also compare with the supervised pre-trained model DeiT (Touvron et al., 2021). To make a fair comparison of training time, we continue to train a 1600-epoch pre-trained MAE for 200 epochs on the full ImageNet as our baseline, denoted MAE* in Table 1. For the detection and segmentation tasks, following Bao et al. (2022), we perform experiments on ADE20K (Zhou et al., 2019) and COCO (Lin et al., 2014). We utilize the officially released checkpoints for all baselines. Details are in Appendix A.5.
|
| 143 |
+
|
| 144 |
+
Table 3: Transfer accuracy of MAE, TokenMoE, SDR and MoCE. SDR(ViT) is our reimplementation of SDR under ViT. We observe that TokenMoE cannot outperform vanilla MAE, while SDR(ViT) achieves better performance, which is further outperformed by MoCE.
|
| 145 |
+
|
| 146 |
+
<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE*</td><td>72.71</td><td>91.24</td><td>84.47</td><td>96.15</td><td>77.33</td><td>75.05</td><td>96.25</td><td>80.49</td><td>92.78</td><td>62.46</td><td>85.02</td><td>83.09</td></tr><tr><td>TokenMoE</td><td>70.51</td><td>89.70</td><td>81.40</td><td>95.18</td><td>76.44</td><td>73.67</td><td>95.09</td><td>77.45</td><td>90.71</td><td>61.12</td><td>80.15</td><td>81.04</td></tr><tr><td>SDR</td><td>75.77</td><td>89.73</td><td>86.65</td><td>95.31</td><td>83.60</td><td>73.62</td><td>95.53</td><td>84.77</td><td>91.25</td><td>64.64</td><td>83.51</td><td>84.03</td></tr><tr><td>SDR(ViT)</td><td>76.57</td><td>90.04</td><td>86.95</td><td>96.92</td><td>81.42</td><td>73.09</td><td>96.14</td><td>82.90</td><td>92.65</td><td>64.40</td><td>85.37</td><td>84.22</td></tr><tr><td>MoCE</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>85.54</td></tr></table>
|
| 147 |
+
|
| 148 |
+
# 4.2 RESULTS
|
| 149 |
+
|
| 150 |
+
Transfer Results. The classification transfer performance of various self-supervised models is shown in Table 1. As can be seen, MoCE achieves a $2.45\%$ improvement over MAE* and reaches the state-of-the-art average accuracy, demonstrating the effectiveness of the task-customized pre-training paradigm. On fine-grained datasets such as Aircraft, Cars and Food, MoCE outperforms the baseline model by a large margin. This is because these fine-grained tasks are similar to only a subset of the pre-training dataset; MoCE can thus alleviate negative transfer by using the model trained on the cluster most similar to the particular downstream task. On the other hand, MoCE shows only limited improvement on tasks such as Caltech, Cifar-10 and VOC. These tasks are more general and contain images covering the various semantics of the pre-training dataset, so negative transfer barely exists.
|
| 151 |
+
|
| 152 |
+
Table 2 shows the transfer performance on the detection and segmentation tasks. As can be seen, MoCE outperforms MAE and the other baselines (including the supervised one), and achieves state-of-the-art results.
|
| 153 |
+
|
| 154 |
+
Comparison between MoCE, TokenMoE, MAE and SDR. In this experiment, we compare MoCE with the following models: (i) MAE, (ii) TokenMoE, (iii) SDR (Liu et al., 2022), a task-customized model that aims at alleviating negative transfer, and (iv) SDR(ViT), which re-implements SDR with the ViT architecture. Table 3 shows the transfer accuracy on 11 downstream tasks. As can be seen, TokenMoE performs even worse than MAE, suggesting that naively applying MoE to MAE is not effective. Both MoCE and SDR(ViT) outperform MAE, demonstrating the effectiveness of task-customized methods for alleviating negative transfer. MoCE further outperforms SDR(ViT), indicating the importance of self-adaptive routing.

Figure 3(d) shows the peak signal-to-noise ratio (PSNR) (Sara et al., 2019), which reflects the generation quality of these autoencoder models. MoCE improves over TokenMoE and MAE on most datasets. We also provide comparisons under a fair parameter count, with larger architectures, and with training from scratch in Appendices A.2, A.3 and A.4, respectively.

Analysis on experts. Figures 3(a) and 3(c) show the routing heatmaps for TokenMoE and MoCE, respectively. As can be seen, the routing of the TokenMoE experts has little correlation with semantics. In contrast, each MoCE expert is trained on a number of classes, showing a more balanced assignment of images to experts. This verifies that the improvement of MoCE comes from more effective learning of the experts. Note that the importance and load-balance losses (Riquelme et al., 2021) are applied to TokenMoE and do work as "expected": they only balance patch tokens, not semantic classes. MoCE, on the other hand, balances the experts at both the token level and the semantic level.

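To make the heatmap computation concrete, the following is a minimal sketch (our own illustration, not code from the paper) of how the class-by-expert routing proportions in Figures 3(a) and 3(c) can be tallied; the tensor names are assumptions.

```python
import torch

def routing_heatmap(expert_choice: torch.Tensor, token_class: torch.Tensor,
                    num_experts: int, num_classes: int) -> torch.Tensor:
    """Proportion of each class's tokens routed to each expert.

    expert_choice: (num_tokens,) top-1 expert index chosen for every token.
    token_class:   (num_tokens,) ImageNet label of the image each token comes from.
    Returns a (num_classes, num_experts) matrix whose rows sum to 1.
    """
    flat = token_class * num_experts + expert_choice          # joint (class, expert) index
    counts = torch.bincount(flat, minlength=num_classes * num_experts)
    heat = counts.reshape(num_classes, num_experts).float()
    return heat / heat.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize per class
```
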
Figure 3(b) shows example pre-training samples for 3 random MoCE experts. Note that expert 1 is mostly trained on images containing clothes, expert 2 mostly on bird images, and expert 3 mostly on dog images.

Next, we show that each expert is trained on samples with similar semantics. Following Mikolov et al. (2013), we select the label set used by each expert, and then compute the $\ell_2$ distances between the CLIP embeddings (Radford et al., 2021a) of labels used by the same expert and by different experts. The average distance between labels used by the same expert is 0.84, while that between labels


|
| 165 |
+
(a) Routing heatmap for TokenMoE experts.
|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
(b) Examples pre-training samples for expert 1 (top), expert 2 (middle), and expert 3 (bottom).
|
| 169 |
+
|
| 170 |
+

|
| 171 |
+
(c) Routing heatmap for MoCE experts.
|
| 172 |
+
|
| 173 |
+

|
| 174 |
+
(d) Relative PSNR improvement over MAE.
|
| 175 |
+
Figure 3: (a),(c): Routing heatmaps for experts in TokenMoE and MoCE. The x-axis is the expert ID, and the y-axis is the ImageNet semantic label ID. Darker green means a higher proportion of tokens belonging to the corresponding class are allocated to the expert. The label is sorted differently in each figure to make it readable. (b): Example samples from the pre-training dataset of 3 MoCE experts. (d): Relative PSNR improvement of TokenMoE and MoCE over MAE for each downstream task.
|
| 176 |
+
|
| 177 |
+
used by different experts is 0.92, indicating that the MoCE gate automatically aggregates labels with similar semantics to each expert, thus benefiting downstream transfer.

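A minimal sketch of this measurement follows, assuming a Hugging Face CLIP text encoder and hypothetical per-expert label sets; the checkpoint name and label lists are illustrative stand-ins, not the paper's exact setup.

```python
import itertools
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical label sets for three experts (assumptions for illustration).
expert_labels = {
    1: ["jersey", "sweatshirt", "cardigan"],
    2: ["robin", "jay", "goldfinch"],
    3: ["beagle", "pug", "whippet"],
}

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed(labels):
    inputs = processor(text=labels, return_tensors="pt", padding=True)
    return torch.nn.functional.normalize(model.get_text_features(**inputs), dim=-1)

emb = {e: embed(names) for e, names in expert_labels.items()}

def mean_l2(a, b, same=False):
    d = torch.cdist(a, b)                     # pairwise l2 distances
    if same:                                  # drop the zero diagonal for intra-expert pairs
        n = a.shape[0]
        return (d.sum() / (n * (n - 1))).item()
    return d.mean().item()

intra = [mean_l2(v, v, same=True) for v in emb.values()]
inter = [mean_l2(emb[a], emb[b]) for a, b in itertools.combinations(emb, 2)]
print(f"intra-expert: {sum(intra)/len(intra):.2f}, inter-expert: {sum(inter)/len(inter):.2f}")
```
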
Training and testing efficiency. Table 4 compares the efficiency of MAE, TokenMoE and MoCE during training and testing. As can be seen, all of them have similar FLOPs. However, TokenMoE needs the whole model during both training and testing, while MoCE only needs a single expert and thus halves the required number of parameters at test time. In addition, the training and testing speeds are improved by $18\%$ and $37\%$, respectively. This is attributed to the reduction of token shuffle operations: tokens in the same image do not need to be split and are dispatched to the same expert, significantly reducing the communication overhead.

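To see why image-level dispatch removes the shuffle, consider the following simplified sketch of a cluster-conditional MoE layer: all tokens of an image follow one expert chosen from the image's cluster, so no per-token all-to-all communication is required. This is our own simplification (the gate parameterization and names are assumptions), not the released implementation.

```python
import torch
import torch.nn as nn

class ClusterConditionalMoELayer(nn.Module):
    """Sketch of an MoE MLP layer whose gate conditions on the image's
    cluster, so every token of an image is dispatched to the same expert."""

    def __init__(self, dim: int, num_experts: int, num_clusters: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.cluster_emb = nn.Embedding(num_clusters, dim)  # assumed gate input
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor, cluster_id: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) patch tokens; cluster_id: (B,) cluster of each image.
        expert_id = self.gate(self.cluster_emb(cluster_id)).argmax(dim=-1)  # (B,)
        out = torch.empty_like(x)
        for e in expert_id.unique():              # whole images, not tokens, are grouped
            sel = expert_id == e
            out[sel] = self.experts[int(e)](x[sel])
        return out
```
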
# 4.3 ABLATION

Search method. When a downstream task arrives, it is expensive to fine-tune all experts to choose the best one. To find the task-customized expert ($K = 1$), we compare the method proposed in Section 3.3 with (i) early stop, (ii) KNN (Liu et al., 2022), and (iii) LogME (You et al., 2021). The experiment is performed on the task with the most samples (Food), the task with the fewest samples (Flowers), and one with a medium number of samples (Aircraft). For comparison, we additionally show the performance of the best and worst experts found by exhaustive search. As can be seen in Table 5, MoCE performs stably across datasets of different sizes, and the search cost is negligible: we only need to run inference on the downstream dataset once and feed the features to the clustering module. This illustrates another advantage of combining clustering and pre-training in a single paradigm.

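A sketch of this search step under our reading of the paper: embed the downstream images once with the pre-trained encoder, assign each to its nearest cluster prototype, and take the expert associated with the majority cluster. The tensor names and the cosine-similarity assignment are assumptions.

```python
import torch

@torch.no_grad()
def select_expert(features: torch.Tensor, prototypes: torch.Tensor,
                  cluster_to_expert: torch.Tensor) -> int:
    """features:  (N, D) downstream image embeddings from the pre-trained MAE.
    prototypes:   (C, D) cluster centers learned during pre-training.
    cluster_to_expert: (C,) expert index each cluster is routed to."""
    f = torch.nn.functional.normalize(features, dim=-1)
    p = torch.nn.functional.normalize(prototypes, dim=-1)
    assign = (f @ p.T).argmax(dim=-1)                     # nearest cluster per image
    majority = torch.bincount(assign, minlength=p.shape[0]).argmax()
    return int(cluster_to_expert[majority])               # fine-tune only this expert
```
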
MoCE Architectures. In this experiment, we study the architecture hyper-parameters of MoCE in three aspects. First, we vary the number of experts in each MoCE layer. As can be seen from Table 6, using more experts generally improves accuracy.

Next, we vary the location of the MoCE layers. As mentioned in Section 4.1, we select the MoCE layers based on the gradient magnitudes. In the experiments, MoCE selects the 11th and 12th MLP layers, whereas TokenMoE chooses the last 2 even-numbered (i.e., 10th and 12th) MLP

Table 4: Efficiency during training (top three rows) and testing (bottom three rows).

<table><tr><td></td><td>MAE</td><td>TokenMoE</td><td>MoCE</td></tr><tr><td># Params (M)</td><td>111.91</td><td>178.03</td><td>178.03</td></tr><tr><td>FLOPs (G)</td><td>9.80</td><td>9.81</td><td>9.81</td></tr><tr><td>Speed↑</td><td>1.41x</td><td>1x</td><td>1.18x</td></tr><tr><td># Params (M)</td><td>85.88</td><td>152.00</td><td>85.88</td></tr><tr><td>FLOPs (G)</td><td>16.88</td><td>16.88</td><td>16.88</td></tr><tr><td>Speed↑</td><td>1.37x</td><td>1x</td><td>1.37x</td></tr></table>

Table 5: Transfer accuracy and search cost (in GPU hours) of different expert search algorithms.

<table><tr><td></td><td>Aircraft</td><td>Flowers</td><td>Food</td><td>GPU hours</td></tr><tr><td>Best</td><td>79.92</td><td>97.96</td><td>86.24</td><td>288</td></tr><tr><td>Worst</td><td>69.84</td><td>94.97</td><td>81.51</td><td>288</td></tr><tr><td>Early stop</td><td>77.00</td><td>96.83</td><td>85.33</td><td>144</td></tr><tr><td>KNN</td><td>71.40</td><td>95.10</td><td>83.32</td><td>5</td></tr><tr><td>LogME</td><td>73.84</td><td>96.54</td><td>85.11</td><td>5</td></tr><tr><td>MoCE</td><td>78.73</td><td>96.94</td><td>86.24</td><td>1</td></tr></table>

Table 6: Accuracies with different numbers of experts in a MoCE layer (the default setting is in bold).

<table><tr><td># experts</td><td>Acc (%)</td></tr><tr><td>1</td><td>83.09</td></tr><tr><td>2</td><td>83.01</td></tr><tr><td>4</td><td>84.22</td></tr><tr><td><b>8</b></td><td><b>85.54</b></td></tr></table>

Table 7: Accuracies with different numbers and locations of the MoCE layers (the default setting is in bold).

<table><tr><td># MoCE layers</td><td>Acc (%)</td></tr><tr><td>1</td><td>83.09</td></tr><tr><td>2 (10th & 12th)</td><td>83.08</td></tr><tr><td><b>2 (11th & 12th)</b></td><td><b>85.54</b></td></tr><tr><td>4</td><td>85.59</td></tr></table>

Table 8: Accuracies with different numbers of clusters (the default setting is in bold).

<table><tr><td># clusters</td><td>Acc (%)</td></tr><tr><td>16</td><td>82.00</td></tr><tr><td>64</td><td>84.02</td></tr><tr><td><b>256</b></td><td><b>85.54</b></td></tr><tr><td>512</td><td>85.33</td></tr></table>

layers. Furthermore, we also report the performance with only 1 and 4 MoCE layers, again selected based on the gradient magnitudes. As shown in Table 7, choosing the right layers to be MoCE layers is essential, while adding more MoCE layers brings little further improvement.

We also train MoCE with different numbers of clusters. As shown in Table 8, the accuracy increases up to 256 clusters and then begins to drop. We hypothesize that with a moderate number of clusters, MoCE can produce a variety of task-customized models; with even more clusters, the number of experts becomes the bottleneck and performance saturates.

# 5 CONCLUSION

In this work, we first show through extensive experiments that the negative transfer phenomenon exists in MAE, a prevailing self-supervised learning method. This impedes the scalability of MAE, as more pre-training data may in fact degrade downstream performance. To tackle the problem, we introduce Mixture-of-Experts into MAE, as the multi-expert design can equip MAE with the diverse abilities that aid transfer. However, unlike in supervised pre-training, TokenMoE suffers from a gate that shows no correlation with semantics, and its transfer ability is not improved. Based on this observation, we propose MoCE, which explicitly trains each expert on a different cluster through the MoCE gate design and several losses that stabilize training. A search algorithm that selects the best model for transfer based on the clustering priors is also proposed. Extensive experiments show that MoCE trains each expert on meaningful semantics and achieves state-of-the-art transfer performance on a collection of 11 downstream classification tasks as well as detection and segmentation tasks. To the best of our knowledge, this is the first work that successfully trains a self-supervised MoE model on ImageNet alone. We hope this design will motivate more research on self-supervised MoE models.

# ACKNOWLEDGMENTS

We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.

# REFERENCES

Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. Preprint arXiv:1911.05371, 2019.

Yutong Bai, Zeyu Wang, Junfei Xiao, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, and Cihang Xie. Masked autoencoders enable efficient knowledge distillers. Preprint arXiv:2208.12256, 2022.

Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In ICLR, 2022.

Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In CVPR, 2018.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, pp. 132-149, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS, 33:9912-9924, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, pp. 9650-9660, 2021.

Kai Chen, Lanqing Hong, Hang Xu, Zhenguo Li, and Dit-Yan Yeung. Multisiam: Self-supervised multi-instance siamese representation learning for autonomous driving. In ICCV, pp. 7546-7554, 2021a.

Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, and Dit-Yan Yeung. Mixed autoencoder for self-supervised visual representation learning. In CVPR, 2023.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020.

Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, pp. 9640-9649, 2021b.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. Electra: Pre-training text encoders as discriminators rather than generators. In ICLR, 2020.

Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. Randaugment: Practical automated data augmentation with a reduced search space. In NeurIPS, volume 33, pp. 18613-18624, 2020.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. NeurIPS, 26, 2013.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. Preprint arXiv:1810.04805, 2018.

Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, pp. 1422-1430, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.

Linus Ericsson, Henry Gouk, and Timothy M. Hospedales. How well do self-supervised models transfer? In CVPR, pp. 5414-5423, 2021.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Preprint arXiv:2101.03961, 2021.

Dehong Gao, Wenjing Yang, Huiling Zhou, Yi Wei, Yi Hu, and Hao Wang. Network clustering for multi-task learning. Preprint arXiv:2101.09018, 2021.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. In NeurIPS, 2020.

Jianhua Han, Xiwen Liang, Hang Xu, Kai Chen, Lanqing Hong, Jiageng Mao, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Xiaodan Liang, and Chunjing Xu. Soda10m: A large-scale 2d self/semi-supervised object detection dataset for autonomous driving. Preprint arXiv:2106.11118, 2021.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9729-9738, 2020.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, pp. 16000-16009, 2022.

Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, and Daniel Soudry. Augment your batch: Improving generalization through instance repetition. In CVPR, June 2020.

Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. Preprint arXiv:1905.12265, 2019.

Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes imagenet good for transfer learning? Preprint arXiv:1608.08614, 2016.

Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87, 1991.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181-214, 1994.

Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. Beyond distillation: Task-level mixture-of-experts for efficient inference. Preprint arXiv:2110.03742, 2021.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. Preprint arXiv:2006.16668, 2020.

Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. Base layers: Simplifying training of large, sparse models. In ICML, pp. 6265-6274. PMLR, 2021.

Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, et al. Coda: A real-world road corner case dataset for object detection in autonomous driving. Preprint arXiv:2203.07724, 2022a.

Xiaotong Li, Yixiao Ge, Kun Yi, Zixuan Hu, Ying Shan, and Ling-Yu Duan. mc-beit: Multi-choice discretization for image bert pre-training. Preprint arXiv:2203.15371, 2022b.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. Preprint arXiv:1901.11504, 2019.

Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, pp. 10012-10022, 2021.

Zhili Liu, Jianhua Han, Kai Chen, Lanqing Hong, Hang Xu, Chunjing Xu, and Zhenguo Li. Task-customized self-supervised pre-training with scalable dynamic routing. In AAAI, 2022.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. Preprint arXiv:1711.05101, 2017.

Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H. Chi. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1930-1939, 2018.

Matthew McDermott, Bret Nestor, Evan Kim, Wancong Zhang, Anna Goldenberg, Peter Szolovits, and Marzyeh Ghassemi. A comprehensive EHR timeseries pre-training benchmark. In Proceedings of the Conference on Health, Inference, and Learning, pp. 257-278, 2021.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. Preprint arXiv:1301.3781, 2013.

George A. Miller. WordNet: An electronic lexical database. MIT Press, 1998.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.

Joan Puigcerver, Carlos Riquelme Ruiz, Basil Mustafa, Cédric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. In ICLR, 2020.

A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, and J. Clark. Learning transferable visual models from natural language supervision. In ICML, 2021a.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pp. 8748-8763. PMLR, 2021b.

Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. NeurIPS, 34:8583-8595, 2021.

Umme Sara, Morium Akter, and Mohammad Shorif Uddin. Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study. Journal of Computer and Communications, 7(3):8-18, 2019.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In ICLR, 2017.

Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, volume 139, pp. 10347-10357, July 2021.

Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, pp. 568-578, 2021.

Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. Preprint arXiv:2011.09157, 2020.

Lemeng Wu, Mengchen Liu, Yinpeng Chen, Dongdong Chen, Xiyang Dai, and Lu Yuan. Residual mixture of experts. Preprint arXiv:2204.09636, 2022.

Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Zhenguo Li, and Ping Luo. Detco: Unsupervised contrastive learning for object detection. Preprint arXiv:2102.04803, 2021a.

Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS, 2021b.

Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. Preprint arXiv:2011.10043, 2020.

Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. Preprint arXiv:2111.09886, 2021c.

An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. M6-t: Exploring sparse expert models and beyond. Preprint arXiv:2105.15082, 2021.

Lewei Yao, Jianhua Han, Youpeng Wen, Xiaodan Liang, Dan Xu, Wei Zhang, Zhenguo Li, Chunjing Xu, and Hang Xu. Detclip: Dictionary-enriched visual-concept paralleled pre-training for open-world detection. In NeurIPS, 2022a.

Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. Filip: Fine-grained interactive language-image pre-training. In ICLR, 2022b.

Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. Logme: Practical assessment of pre-trained models for transfer learning. In ICML, pp. 12133-12143. PMLR, 2021.

Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In ECCV, 2016.

Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. IJCV, 2019.

Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer. Preprint arXiv:2111.07832, 2021.

# A APPENDIX

# A.1 DETAILS FOR CLUSTERING.

For data clustering, the features are computed by running inference with the pre-trained MAE, and the matrices $\mathbf{Q}$ and $\mathbf{C}$ are solved iteratively with the Sinkhorn-Knopp algorithm and an SGD optimizer. For the Sinkhorn-Knopp algorithm, we set the number of iterations to 3. The learning rate of SGD is set to 0.1, the momentum to 0.9, and the weight decay to 0.9 to encourage sparse cluster assignments. We train for 10 epochs in total, which costs 3 minutes and 20 seconds on average on a single GPU.

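For concreteness, a minimal SwAV-style Sinkhorn-Knopp sketch that produces a balanced assignment $\mathbf{Q}$ from feature-prototype scores is given below; the 3 iterations follow the setting above, while the temperature value is an assumption.

```python
import torch

@torch.no_grad()
def sinkhorn(scores: torch.Tensor, n_iters: int = 3, eps: float = 0.05) -> torch.Tensor:
    """scores: (N, C) dot products between features and cluster prototypes.
    Returns a soft assignment Q of shape (N, C) with balanced cluster mass."""
    Q = torch.exp(scores / eps).T          # (C, N)
    Q /= Q.sum()
    C, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)    # rows: equal total mass per cluster
        Q /= C
        Q /= Q.sum(dim=0, keepdim=True)    # columns: unit mass per sample
        Q /= N
    return (Q * N).T                       # each row of the result sums to 1
```
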
# A.2 COMPARISON UNDER FAIR PARAMETER COUNTS.

The setting used in our work focuses on a fair comparison of FLOPs (see Table 4 in the main paper). Since TokenMoE and MoCE always activate only one expert throughout the whole pre-training and fine-tuning procedure, their FLOPs remain close to those of MAE. Beyond this criterion, we further provide a comparison under equal parameter counts. As shown in Table 9, we train MAE with the same parameter count as the whole MoCE model, and MoCE still outperforms MAE consistently.

Table 9: Comparison of MAE and MoCE under equal parameter counts. We train a larger MAE that shares the same parameter count as the whole MoCE model.

<table><tr><td></td><td># Params (M)</td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE</td><td>178.03</td><td>74.43</td><td>90.30</td><td>85.50</td><td>96.90</td><td>83.80</td><td>74.84</td><td>96.30</td><td>81.86</td><td>92.97</td><td>62.98</td><td>85.51</td><td>84.13</td></tr><tr><td>MoCE</td><td>178.03</td><td>78.73</td><td>90.61</td><td>88.56</td><td>97.79</td><td>84.68</td><td>74.04</td><td>96.94</td><td>86.24</td><td>93.07</td><td>65.05</td><td>85.26</td><td>85.54</td></tr></table>

# A.3 MOCE FOR LARGER ARCHITECTURE.

Here we analyze a larger architecture (ViT-L, $2.57\times$ larger than ViT-B) to explore the scalability of MoCE. We first demonstrate that negative transfer still exists for the larger architecture by training MAE on the full ImageNet, Split-A and Split-B, following the same setting as in Sec. 3.1. As shown in the first three rows of Table 10, a similar phenomenon is observed: MAE-L trained on Split-A performs better on Aircraft, Cars, DTD and SUN, while the Split-B model performs better on Flowers, Food and Pets. MoCE-L, on the other hand, can still alleviate the problem and therefore transfers better. We believe that the negative transfer phenomenon arises mainly when a common pre-trained model is used for various downstream tasks, due to the inevitable semantic gaps between the pre-training and downstream datasets, rather than from the architecture.

Table 10: Comparison of MAE and MoCE on ViT-L. We also train MAE on 2 subsets of ImageNet, namely Split-A and Split-B, following the same setting as in Sec. 3.1. The table shows that negative transfer still exists with larger architectures, while MoCE alleviates this problem and achieves better transfer results.

<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE-L (full set)</td><td>74.30</td><td>93.97</td><td>88.60</td><td>97.85</td><td>82.47</td><td>77.61</td><td>96.67</td><td>81.22</td><td>93.97</td><td>67.99</td><td>88.30</td><td>85.72</td></tr><tr><td>MAE-L (Split-A)</td><td>79.70</td><td>91.59</td><td>89.33</td><td>96.97</td><td>80.38</td><td>78.67</td><td>95.44</td><td>82.97</td><td>92.49</td><td>68.73</td><td>82.41</td><td>85.33</td></tr><tr><td>MAE-L (Split-B)</td><td>73.42</td><td>90.80</td><td>86.00</td><td>96.18</td><td>78.73</td><td>77.34</td><td>96.75</td><td>83.63</td><td>94.92</td><td>66.06</td><td>85.85</td><td>84.52</td></tr><tr><td>MoCE-L</td><td>87.04</td><td>94.86</td><td>90.72</td><td>98.29</td><td>87.49</td><td>76.65</td><td>97.38</td><td>88.21</td><td>95.89</td><td>69.49</td><td>89.13</td><td>88.65 (+2.93)</td></tr></table>

# A.4 PERFORMANCE OF MOCE WITHOUT PRE-TRAINING.

We provide results of MoCE trained from scratch for 200 epochs and 1600 epochs in Table 11. In this experiment, we first pre-train MAE for 50 epochs and perform clustering; we then train MoCE from scratch for 200 or 1600 epochs based on the clustering results. Although it is common practice to initialize from pre-trained dense models to accelerate pre-training (Wu et al., 2022; Bai et al., 2022), MoCE still consistently outperforms MAE on various downstream tasks when trained from scratch.

Table 11: Comparison of MAE and MoCE, both trained from scratch for 200 epochs (first two rows) and 1600 epochs (last two rows).

<table><tr><td></td><td>Aircraft</td><td>Caltech</td><td>Cars</td><td>C10</td><td>C100</td><td>DTD</td><td>Flowers</td><td>Food</td><td>Pets</td><td>SUN</td><td>VOC</td><td>Avg.</td></tr><tr><td>MAE</td><td>64.73</td><td>85.91</td><td>77.10</td><td>92.92</td><td>72.50</td><td>73.30</td><td>93.11</td><td>73.14</td><td>88.70</td><td>57.84</td><td>73.27</td><td>77.50</td></tr><tr><td>MoCE</td><td>71.16</td><td>90.55</td><td>82.46</td><td>96.06</td><td>76.56</td><td>74.57</td><td>95.70</td><td>79.67</td><td>92.58</td><td>62.20</td><td>84.25</td><td>82.34</td></tr><tr><td>MAE</td><td>72.38</td><td>90.47</td><td>83.51</td><td>95.69</td><td>68.40</td><td>75.48</td><td>96.10</td><td>79.98</td><td>92.35</td><td>62.43</td><td>84.79</td><td>81.96</td></tr><tr><td>MoCE</td><td>78.75</td><td>91.64</td><td>87.04</td><td>97.15</td><td>83.12</td><td>73.62</td><td>96.08</td><td>83.84</td><td>93.06</td><td>65.49</td><td>85.81</td><td>85.05</td></tr></table>

# A.5 EVALUATION DETAILS FOR DOWNSTREAM TASKS.

Classification. We mainly follow the settings of Ericsson et al. (2021) to make a fair comparison. Specifically, all models are trained by SGD with a momentum of 0.9. Weight decay is set to 0, and the learning rate is searched among [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1]. Each model is fine-tuned for 2500 steps with cosine learning rate decay, a batch size of 64, and $224 \times 224$ resolution. We fine-tune each model 3 times and report the average performance. We find that this setting produces stable results.

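Expressed as PyTorch configuration, the protocol above corresponds roughly to the following sketch (the model and data loading are omitted; only the stated optimizer, learning-rate grid, and schedule are encoded).

```python
import torch

LR_GRID = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1]
TOTAL_STEPS = 2500  # fine-tuning length per run

def make_optimizer_and_scheduler(model: torch.nn.Module, lr: float):
    # SGD with momentum 0.9 and no weight decay, cosine decay over 2500 steps.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=0.0)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=TOTAL_STEPS)
    return opt, sched
```
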
Semantic segmentation. We evaluate MoCE on the semantic segmentation task, which aims to predict the class of each pixel in the image. We report the mean Intersection over Union (mIoU) averaged over all semantic categories of ADE20K (Zhou et al., 2019). We choose the best expert by feeding the ADE20K images to our clustering module and selecting the cluster that contains the most images. We use AdamW (Loshchilov & Hutter, 2017) as the optimizer. The learning rate is set to 1e-3 with a layer-wise learning rate decay (Clark et al., 2020) of 0.65. We fine-tune for 160K steps with a batch size of 16. For the detailed hyper-parameters, refer to Bao et al. (2022).

Detection and instance segmentation are also evaluated on COCO (Lin et al., 2014). We follow the same deployment method as the one used in the semantic segmentation task to choose the best expert. Following iBOT (Zhou et al., 2021), we adopt Cascade Mask R-CNN (Cai & Vasconcelos, 2018; He et al., 2017) and multi-scale training. The shorter side is randomly resized between 480 and 800, while the longer side is capped at 1333. The batch size is 16, and the initial learning rate is 1e-4. The layer-wise learning rate decay ratio (Clark et al., 2020) is set to 0.75. We train the model for 12 epochs and decrease the learning rate by $10\mathrm{x}$ at epochs 9 and 11.
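
The 12-epoch schedule with 10x decays at epochs 9 and 11 maps onto a standard step scheduler; below is a sketch with a placeholder parameter (the actual detector, layer-wise decay, and data pipeline are not reproduced here).

```python
import torch

# Placeholder parameter standing in for the detector's weights.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[9, 11], gamma=0.1)

for epoch in range(12):
    # ... one fine-tuning epoch over COCO would run here ...
    scheduler.step()
```
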
2402.05xxx/2402.05382/images.zip ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40e490efeac78d3a4685eb1647a3e34521005d6154b69065d9e058647bcbcaa8
size 547659

2402.05xxx/2402.05382/layout.json ADDED

The diff for this file is too large to render. See raw diff

2402.05xxx/2402.05388/4a5d066f-517b-40fb-a726-5553033aca85_content_list.json ADDED