SlowGuess committed
Commit 93a0596 · verified · 1 Parent(s): b6a0716

Add Batch 71c5004e-2da9-4129-874e-94f782cdb6f9

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. .gitattributes +63 -0
  2. 2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_content_list.json +1716 -0
  3. 2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_model.json +0 -0
  4. 2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_origin.pdf +3 -0
  5. 2502.09xxx/2502.09533/full.md +335 -0
  6. 2502.09xxx/2502.09533/images.zip +3 -0
  7. 2502.09xxx/2502.09533/layout.json +0 -0
  8. 2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_content_list.json +0 -0
  9. 2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_model.json +0 -0
  10. 2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_origin.pdf +3 -0
  11. 2502.09xxx/2502.09560/full.md +0 -0
  12. 2502.09xxx/2502.09560/images.zip +3 -0
  13. 2502.09xxx/2502.09560/layout.json +0 -0
  14. 2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_content_list.json +1748 -0
  15. 2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_model.json +0 -0
  16. 2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_origin.pdf +3 -0
  17. 2502.09xxx/2502.09565/full.md +366 -0
  18. 2502.09xxx/2502.09565/images.zip +3 -0
  19. 2502.09xxx/2502.09565/layout.json +0 -0
  20. 2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_content_list.json +0 -0
  21. 2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_model.json +0 -0
  22. 2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_origin.pdf +3 -0
  23. 2502.09xxx/2502.09597/full.md +0 -0
  24. 2502.09xxx/2502.09597/images.zip +3 -0
  25. 2502.09xxx/2502.09597/layout.json +0 -0
  26. 2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_content_list.json +0 -0
  27. 2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_model.json +0 -0
  28. 2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_origin.pdf +3 -0
  29. 2502.09xxx/2502.09601/full.md +395 -0
  30. 2502.09xxx/2502.09601/images.zip +3 -0
  31. 2502.09xxx/2502.09601/layout.json +0 -0
  32. 2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_content_list.json +0 -0
  33. 2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_model.json +0 -0
  34. 2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_origin.pdf +3 -0
  35. 2502.09xxx/2502.09604/full.md +484 -0
  36. 2502.09xxx/2502.09604/images.zip +3 -0
  37. 2502.09xxx/2502.09604/layout.json +0 -0
  38. 2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_content_list.json +0 -0
  39. 2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_model.json +0 -0
  40. 2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_origin.pdf +3 -0
  41. 2502.09xxx/2502.09615/full.md +449 -0
  42. 2502.09xxx/2502.09615/images.zip +3 -0
  43. 2502.09xxx/2502.09615/layout.json +0 -0
  44. 2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_content_list.json +0 -0
  45. 2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_model.json +0 -0
  46. 2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_origin.pdf +3 -0
  47. 2502.09xxx/2502.09620/full.md +423 -0
  48. 2502.09xxx/2502.09620/images.zip +3 -0
  49. 2502.09xxx/2502.09620/layout.json +0 -0
  50. 2502.09xxx/2502.09621/e78f0acc-7014-41e7-aad6-9f1853dd8aec_content_list.json +0 -0
.gitattributes CHANGED
@@ -4528,3 +4528,66 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
4528
  2502.12xxx/2502.12213/23663d45-49ae-44f0-b8da-bc28a0f398e1_origin.pdf filter=lfs diff=lfs merge=lfs -text
4529
  2502.12xxx/2502.12215/17901061-b7f5-431f-bdc4-b765cb55ca10_origin.pdf filter=lfs diff=lfs merge=lfs -text
4530
  2502.13xxx/2502.13165/88ba2a5f-68b7-4f11-b360-bf441d2278cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
4531
+ 2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_origin.pdf filter=lfs diff=lfs merge=lfs -text
4532
+ 2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_origin.pdf filter=lfs diff=lfs merge=lfs -text
4533
+ 2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_origin.pdf filter=lfs diff=lfs merge=lfs -text
4534
+ 2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_origin.pdf filter=lfs diff=lfs merge=lfs -text
4535
+ 2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_origin.pdf filter=lfs diff=lfs merge=lfs -text
4536
+ 2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_origin.pdf filter=lfs diff=lfs merge=lfs -text
4537
+ 2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4538
+ 2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_origin.pdf filter=lfs diff=lfs merge=lfs -text
4539
+ 2502.09xxx/2502.09621/e78f0acc-7014-41e7-aad6-9f1853dd8aec_origin.pdf filter=lfs diff=lfs merge=lfs -text
4540
+ 2502.09xxx/2502.09622/9eb2b38f-442b-4076-ae76-2439e05b7377_origin.pdf filter=lfs diff=lfs merge=lfs -text
4541
+ 2502.09xxx/2502.09696/d9b18375-0732-4f5e-97ff-ab549227179c_origin.pdf filter=lfs diff=lfs merge=lfs -text
4542
+ 2502.09xxx/2502.09747/1a38756f-0c58-46b8-9628-9a46b558cf19_origin.pdf filter=lfs diff=lfs merge=lfs -text
4543
+ 2502.09xxx/2502.09768/94b012dd-325e-4ec9-abce-9f142d524508_origin.pdf filter=lfs diff=lfs merge=lfs -text
4544
+ 2502.09xxx/2502.09783/8f57b78d-664f-48bc-b2ed-b209b16e9c29_origin.pdf filter=lfs diff=lfs merge=lfs -text
4545
+ 2502.09xxx/2502.09838/2ccdc191-5076-4835-907c-e4ddb552ff32_origin.pdf filter=lfs diff=lfs merge=lfs -text
4546
+ 2502.09xxx/2502.09858/5711fed8-a762-4ea0-87a4-4cab6629c6e4_origin.pdf filter=lfs diff=lfs merge=lfs -text
4547
+ 2502.09xxx/2502.09886/ce9370a0-1e53-4681-a81e-5b0c733cfaf6_origin.pdf filter=lfs diff=lfs merge=lfs -text
4548
+ 2502.09xxx/2502.09891/516e0c2a-f422-4ff9-8926-d86928d1fbc2_origin.pdf filter=lfs diff=lfs merge=lfs -text
4549
+ 2502.09xxx/2502.09956/6aa82b52-9548-488a-b9db-b7cffc4b901f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4550
+ 2502.09xxx/2502.09977/d661b1d5-5062-4aa0-b773-3bfeea06f2d6_origin.pdf filter=lfs diff=lfs merge=lfs -text
4551
+ 2502.09xxx/2502.09980/20a5e2f3-8bef-473f-845e-a592163ee88c_origin.pdf filter=lfs diff=lfs merge=lfs -text
4552
+ 2502.09xxx/2502.09990/35bd8beb-2cf2-4119-a57d-5c99332ed753_origin.pdf filter=lfs diff=lfs merge=lfs -text
4553
+ 2502.09xxx/2502.09992/2baf1ef0-6fd5-4aef-be7f-a053a9a1dfa9_origin.pdf filter=lfs diff=lfs merge=lfs -text
4554
+ 2502.09xxx/2502.09994/dafcc100-ac27-401a-a045-48143c95ac90_origin.pdf filter=lfs diff=lfs merge=lfs -text
4555
+ 2502.10xxx/2502.10038/bad23e5a-9baf-4410-9e8a-c89f5f7b18b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
4556
+ 2502.10xxx/2502.10050/0140e20d-7b51-414e-b5b3-7829b2d878d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
4557
+ 2502.10xxx/2502.10059/95e37970-83f3-4dae-bc8c-73be65d84717_origin.pdf filter=lfs diff=lfs merge=lfs -text
4558
+ 2502.10xxx/2502.10080/80e015a5-75f6-4294-bcaa-16177fb7932c_origin.pdf filter=lfs diff=lfs merge=lfs -text
4559
+ 2502.10xxx/2502.10248/15dee681-c641-4f31-830f-cbf3db91538e_origin.pdf filter=lfs diff=lfs merge=lfs -text
4560
+ 2502.10xxx/2502.10297/80ca533b-8f17-48ff-9e0d-44af7c8e6479_origin.pdf filter=lfs diff=lfs merge=lfs -text
4561
+ 2502.10xxx/2502.10325/ab082c8e-5348-4fd9-b329-e85c7a98350f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4562
+ 2502.10xxx/2502.10341/5fa617ca-960b-4d42-a0c7-6df819c87f17_origin.pdf filter=lfs diff=lfs merge=lfs -text
4563
+ 2502.10xxx/2502.10362/60dee984-6c4b-4054-b33c-f6b545ff30ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
4564
+ 2502.10xxx/2502.10363/45412580-4e05-4141-a427-fb5369f8a138_origin.pdf filter=lfs diff=lfs merge=lfs -text
4565
+ 2502.10xxx/2502.10389/c6948a8c-c154-43be-b2a5-f6005b5becb9_origin.pdf filter=lfs diff=lfs merge=lfs -text
4566
+ 2502.10xxx/2502.10391/a7906a70-88c5-4afb-a7b1-66d69645a1f3_origin.pdf filter=lfs diff=lfs merge=lfs -text
4567
+ 2502.10xxx/2502.10515/62d1fc28-7b4f-4b7d-8ca6-7c01fc915f25_origin.pdf filter=lfs diff=lfs merge=lfs -text
4568
+ 2502.10xxx/2502.10517/79c828b9-e6ba-463c-92be-fe06de3158f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
4569
+ 2502.10xxx/2502.10645/532715a2-3773-41cd-be13-e0b6f9d222ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
4570
+ 2502.10xxx/2502.10707/d71950ad-cdd2-4fe4-bab1-211a7db67f79_origin.pdf filter=lfs diff=lfs merge=lfs -text
4571
+ 2502.10xxx/2502.10708/f70a14c1-3406-4031-9486-bf4f13692542_origin.pdf filter=lfs diff=lfs merge=lfs -text
4572
+ 2502.10xxx/2502.10709/f1f7b7c8-7711-469c-a746-70e300b16f06_origin.pdf filter=lfs diff=lfs merge=lfs -text
4573
+ 2502.10xxx/2502.10721/f81c4819-a3b4-4877-8832-9dbe5af2d0d0_origin.pdf filter=lfs diff=lfs merge=lfs -text
4574
+ 2502.10xxx/2502.10731/d1d09ac9-2cf2-4ffc-b85f-ab38fc4fd82b_origin.pdf filter=lfs diff=lfs merge=lfs -text
4575
+ 2502.10xxx/2502.10794/e4e1766f-5e44-4473-aa8c-d1d114cf9ede_origin.pdf filter=lfs diff=lfs merge=lfs -text
4576
+ 2502.10xxx/2502.10810/f0a4787c-0022-46b9-984b-867dbf84a70e_origin.pdf filter=lfs diff=lfs merge=lfs -text
4577
+ 2502.10xxx/2502.10818/7e234d2b-80e1-4c18-8b89-55c8e924a80f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4578
+ 2502.10xxx/2502.10833/48d05e3b-cd26-44cb-9e9c-c0ced00dc0b4_origin.pdf filter=lfs diff=lfs merge=lfs -text
4579
+ 2502.10xxx/2502.10841/bb25f017-e70f-4482-9924-c4a49a75cc15_origin.pdf filter=lfs diff=lfs merge=lfs -text
4580
+ 2502.10xxx/2502.10843/45d54f6c-741f-4a97-8c7e-922fdd45619d_origin.pdf filter=lfs diff=lfs merge=lfs -text
4581
+ 2502.10xxx/2502.10844/4c9ce074-2a20-44ac-9b67-888011488f1e_origin.pdf filter=lfs diff=lfs merge=lfs -text
4582
+ 2502.10xxx/2502.10860/714e2197-7fd8-463a-900e-f08553043346_origin.pdf filter=lfs diff=lfs merge=lfs -text
4583
+ 2502.10xxx/2502.10867/a05fa346-2ab1-47b1-a273-8788dab628da_origin.pdf filter=lfs diff=lfs merge=lfs -text
4584
+ 2502.10xxx/2502.10884/eadbee98-e9ae-49ab-bdef-845ce3ca8a94_origin.pdf filter=lfs diff=lfs merge=lfs -text
4585
+ 2502.11xxx/2502.11051/977fcd99-1ad3-4979-a948-b06c4ad1735d_origin.pdf filter=lfs diff=lfs merge=lfs -text
4586
+ 2502.11xxx/2502.11054/0172d82b-177d-4ce6-b70c-62b783d22f28_origin.pdf filter=lfs diff=lfs merge=lfs -text
4587
+ 2502.11xxx/2502.11057/01f86fa9-7060-4c26-8bb3-1c4a913e8180_origin.pdf filter=lfs diff=lfs merge=lfs -text
4588
+ 2502.11xxx/2502.11079/187f02b3-919a-4d9f-b98d-43b92464c825_origin.pdf filter=lfs diff=lfs merge=lfs -text
4589
+ 2502.12xxx/2502.12176/6abbac6e-5b3e-4d95-9b0f-5d75d96a3387_origin.pdf filter=lfs diff=lfs merge=lfs -text
4590
+ 2502.12xxx/2502.12191/9be9f0d6-8f70-4d7e-8ccf-9f5f04518fda_origin.pdf filter=lfs diff=lfs merge=lfs -text
4591
+ 2502.12xxx/2502.12197/e33714b6-0629-43fa-9bd2-f84e28b9d164_origin.pdf filter=lfs diff=lfs merge=lfs -text
4592
+ 2502.12xxx/2502.12202/dc3be134-69a4-4e11-ac99-9bdcdf432c18_origin.pdf filter=lfs diff=lfs merge=lfs -text
4593
+ 2502.14xxx/2502.14881/11a16747-a941-457f-85f3-1533c7761815_origin.pdf filter=lfs diff=lfs merge=lfs -text
2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_content_list.json ADDED
@@ -0,0 +1,1716 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 253,
8
+ 109,
9
+ 718,
10
+ 151
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Fei Shen 1 Cong Wang 2 Junyao Gao 3 Qin Guo 4 Jisheng Dang 5 Jinhui Tang 1 Tat-Seng Chua 6",
17
+ "bbox": [
18
+ 135,
19
+ 195,
20
+ 831,
21
+ 212
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Abstract",
28
+ "text_level": 1,
29
+ "bbox": [
30
+ 241,
31
+ 239,
32
+ 318,
33
+ 255
34
+ ],
35
+ "page_idx": 0
36
+ },
37
+ {
38
+ "type": "text",
39
+ "text": "Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over extended generations. To address these, we introduce the Motion-priors Conditional Diffusion Model (MCDM), which utilizes both archived and current clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion-prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate predictions of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also release the TalkingFace-Wild dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation. Code, models, and datasets will be publicly available.",
40
+ "bbox": [
41
+ 117,
42
+ 263,
43
+ 444,
44
+ 672
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "1. Introduction",
51
+ "text_level": 1,
52
+ "bbox": [
53
+ 86,
54
+ 705,
55
+ 217,
56
+ 720
57
+ ],
58
+ "page_idx": 0
59
+ },
60
+ {
61
+ "type": "text",
62
+ "text": "TalkingFace generation (Tan et al., 2024; Peng et al., 2024; Zhou et al., 2021; Ye et al., 2024; Ji et al., 2021; Tan et al., 2023; Kim et al., 2018; Liang et al., 2022; Ye et al., 2023; Pumarola et al., 2018; Vougioukas et al., 2020) aims to create realistic and expressive videos from a reference face",
63
+ "bbox": [
64
+ 84,
65
+ 729,
66
+ 475,
67
+ 806
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "$^{1}$ Nanjing University of Science and Technology $^{2}$ Nanjing University $^{3}$ Tongji University $^{4}$ Peking University $^{5}$ Sun Yat-sen University $^{6}$ National University of Singapore. Correspondence to: Jinhui Tang, Contact <feishen@njust.edu.cn>.",
74
+ "bbox": [
75
+ 84,
76
+ 815,
77
+ 475,
78
+ 868
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "and audio, with applications in virtual avatars, gaming, and filmmaking. However, the complexity of facial movements, including head, lip, and expression motions, presents challenges, along with the need to maintain identity consistency across extended sequences.",
85
+ "bbox": [
86
+ 496,
87
+ 239,
88
+ 887,
89
+ 316
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Early methods (Vougioukas et al., 2020; Wang et al., 2021b; Hong et al., 2022a; Chan et al., 2022; Guo et al., 2024) use GANs (Goodfellow et al., 2014; Mirza & Osindero, 2014) to synthesize facial motions onto a reference image through a two-step process: decoupling motion features from audio and mapping them onto intermediate representations like facial landmarks (Yang et al., 2023), 3DMM (Sun et al., 2023), or HeadNeRF (Hong et al., 2022b). Despite their promise, GAN-based methods suffer from training instability and inaccuracies in motion extraction, often leading to artifacts like blurriness and flickering that compromise video realism. Recent diffusion models (Wei et al., 2024; Stypułkowski et al., 2024; Tian et al., 2024; Guo et al., 2024; Zheng et al., 2024; Jiang et al., 2024) have improved TalkingFace generation by enhancing video realism through multi-step denoising that preserves conditional input information. These methods typically use a Reference UNet (Hu, 2024) to encode identity features and integrate audio via cross-attention. However, reliance on static audio features and weak correlations between audio and motion complicate the decoupling of identity and motion cues, often resulting in artifacts like motion distortion and flickering, especially in long-term generation.",
96
+ "bbox": [
97
+ 496,
98
+ 323,
99
+ 888,
100
+ 671
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "While some methods (Wang et al., 2024; Ma et al., 2024; Yang et al., 2024) improve long-term stability by introducing motion constraints like facial landmarks and emotion tags, these constraints often overly bind poses to the reference image, limiting expression diversity. Models trained with driven landmark fail to learn natural audio-driven motion patterns, reducing audio-visual synergy. Additionally, static emotion tags cannot capture dynamic shifts, leading to rigid, inauthentic animations over extended sequences. Besides, some approaches (Xu et al., 2024; Chen et al., 2024) inject brief motion reference frames, usually fewer than five over 0.2 seconds, which is insufficient to establish coherent motion, resulting in random, less dynamic movements.",
107
+ "bbox": [
108
+ 495,
109
+ 678,
110
+ 888,
111
+ 875
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "In this paper, we propose the Motion-priors Conditional",
118
+ "bbox": [
119
+ 496,
120
+ 882,
121
+ 883,
122
+ 897
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "aside_text",
128
+ "text": "arXiv:2502.09533v1 [cs.CV] 13 Feb 2025",
129
+ "bbox": [
130
+ 22,
131
+ 263,
132
+ 57,
133
+ 705
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "page_number",
139
+ "text": "1",
140
+ "bbox": [
141
+ 480,
142
+ 922,
143
+ 491,
144
+ 934
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "Diffusion Model (MCDM) to address the challenges in achieving long-term consistency in TalkingFace generation. The MCDM comprises three key modules: the archived-clip motion-prior, the present-clip motion-prior diffusion model, and a memory-efficient temporal attention mechanism. Unlike conventional reference UNet-based identity learning, the archived-clip motion-prior introduces historical frames along with a reference frame via frame-aligned attention, enhancing identity representation and creating a cohesive facial context over extended sequences. Then, the present-clip motion-prior diffusion model leverages multimodal causality and temporal interactions to effectively decouple and predict motion states, including head, lip, and expression movements, ensuring a clear separation between identity and motion features and promoting temporal consistency across frames. To support long-term stability, we devise a memory-efficient temporal attention that dynamically stores and updates historical motion features, integrating them with current motion cues via a memory update mechanism. This structure reduces error accumulation often observed in diffusion-based long-term TalkingFace generation, enabling more stable and consistent outputs. Additionally, we present the TalkingFace-Wild dataset, a high-quality, multilingual video dataset with over 200 hours of footage in 10 languages, offering a valuable resource for further research in TalkingFace generation. Our main contributions are summarized as follows:",
151
+ "bbox": [
152
+ 88,
153
+ 85,
154
+ 475,
155
+ 489
156
+ ],
157
+ "page_idx": 1
158
+ },
159
+ {
160
+ "type": "list",
161
+ "sub_type": "text",
162
+ "list_items": [
163
+ "- We propose MCDM to enhance robust identity consistency and support temporal consistency in long-term TalkingFace generation.",
164
+ "- We develop the archived-clip motion-prior module to enhance identity representation and construct a comprehensive facial context from historical frames.",
165
+ "- We devise the present-clip motion-prior diffusion model to decouple current identity and motion features via multimodal causality and temporal interactions.",
166
+ "- We present a memory-efficient temporal attention to dynamically update and integrate historical motion features with current ones, reducing error accumulation.",
167
+ "- We release the TalkingFace-Wild dataset, covering 10 languages and over 200 hours of video for advancing TalkingFace research."
168
+ ],
169
+ "bbox": [
170
+ 102,
171
+ 511,
172
+ 473,
173
+ 787
174
+ ],
175
+ "page_idx": 1
176
+ },
177
+ {
178
+ "type": "text",
179
+ "text": "2. Related Work",
180
+ "text_level": 1,
181
+ "bbox": [
182
+ 84,
183
+ 819,
184
+ 225,
185
+ 834
186
+ ],
187
+ "page_idx": 1
188
+ },
189
+ {
190
+ "type": "text",
191
+ "text": "GAN-Based Methods. GAN-based approaches (Kim et al., 2018; Zhou et al., 2020; Pumarola et al., 2018; Vougioukas et al., 2020; Zhang et al., 2023; Wang et al., 2021b; Hong et al., 2022a; Chan et al., 2022; Guo et al., 2024) for",
192
+ "bbox": [
193
+ 84,
194
+ 844,
195
+ 475,
196
+ 905
197
+ ],
198
+ "page_idx": 1
199
+ },
200
+ {
201
+ "type": "text",
202
+ "text": "TalkingFace generation extract motion features from audio or visual inputs and map them to intermediate representations such as facial landmarks (Yang et al., 2023), 3DMM (Sun et al., 2023), or HeadNeRF (Hong et al., 2022b). MakeItTalk (Zhou et al., 2020) employs LSTMs to predict landmarks from audio, followed by a warp-based GAN for video synthesis. GANimation (Pumarola et al., 2018) models facial motion via continuous manifolds, enhancing expression dynamics. SadTalker (Zhang et al., 2023) integrates ExpNet and PoseVAE to refine motion representations within the FaceVid2Vid (Wang et al., 2021b) framework. DaGAN (Hong et al., 2022a) introduces self-supervised geometric learning to capture dense 3D motion fields. While effective, GAN-based methods suffer from adversarial training instability and motion inaccuracies, often resulting in artifacts that degrade realism.",
203
+ "bbox": [
204
+ 495,
205
+ 84,
206
+ 887,
207
+ 325
208
+ ],
209
+ "page_idx": 1
210
+ },
211
+ {
212
+ "type": "text",
213
+ "text": "Diffusion-Based Methods. Diffusion models (Rombach et al., 2022) have gained traction in TalkingFace generation, producing high-quality, diverse outputs. AntiPortrait (Wei et al., 2024) maps audio to 3D facial structures, generating temporally coherent videos with expressive detail. MegActor- $\\Sigma$ (Wang et al., 2024) synchronizes lip movements, expressions, and head poses using a reference UNet (Hu, 2024) and facial loss functions to enhance fidelity. Hallo (Xu et al., 2024) and EchoMimic (Chen et al., 2024) leverage limited motion reference frames to improve expression diversity and pose alignment. However, reliance on short-term frame histories (2-4 frames) compromises long-term motion consistency, while increased frame dependencies escalate computational costs. Additionally, static audio features and restricted references fail to capture natural motion variations, leading to artifacts such as motion distortion and rigid expressions in extended sequences.",
214
+ "bbox": [
215
+ 495,
216
+ 333,
217
+ 887,
218
+ 590
219
+ ],
220
+ "page_idx": 1
221
+ },
222
+ {
223
+ "type": "text",
224
+ "text": "Unlike prior work, our approach introduces motion priors from both archived and present clips to enhance long-term motion prediction and identity consistency. By leveraging historical frames and memory-efficient temporal attention, MCDM improves motion continuity while maintaining realism in TalkingFace generation.",
225
+ "bbox": [
226
+ 495,
227
+ 598,
228
+ 887,
229
+ 688
230
+ ],
231
+ "page_idx": 1
232
+ },
233
+ {
234
+ "type": "text",
235
+ "text": "3. Method",
236
+ "text_level": 1,
237
+ "bbox": [
238
+ 496,
239
+ 708,
240
+ 588,
241
+ 723
242
+ ],
243
+ "page_idx": 1
244
+ },
245
+ {
246
+ "type": "text",
247
+ "text": "Task Definition. Given a reference image, audio, and optional facial landmarks, TalkingFace generation aims to produce temporally coherent and realistic videos. The key challenges include maintaining consistent identity over time, achieving natural head movements, and ensuring expressive and precise lip alignment with audio cues. However, existing methods often encounter limitations such as error accumulation, inconsistent identity preservation, suboptimal audio-lip synchronization, and rigid expressions.",
248
+ "bbox": [
249
+ 495,
250
+ 733,
251
+ 887,
252
+ 869
253
+ ],
254
+ "page_idx": 1
255
+ },
256
+ {
257
+ "type": "header",
258
+ "text": "Motion-Prior Conditional Diffusion Model",
259
+ "bbox": [
260
+ 349,
261
+ 56,
262
+ 620,
263
+ 69
264
+ ],
265
+ "page_idx": 1
266
+ },
267
+ {
268
+ "type": "page_number",
269
+ "text": "2",
270
+ "bbox": [
271
+ 480,
272
+ 922,
273
+ 491,
274
+ 934
275
+ ],
276
+ "page_idx": 1
277
+ },
278
+ {
279
+ "type": "image",
280
+ "img_path": "images/8a8e05f8d72436e995c0ea01becb7479be5be810d07722def043cf5a0df34fbd.jpg",
281
+ "image_caption": [
282
+ "Figure 1. Our MCDM architecture. On the upper, the archived-clip motion-prior leverages frame-aligned attention with archived-clip, enhancing identity coherence over extended sequences. On the right, the present-clip motion-prior diffusion model uses multimodal causality and temporal interactions to decouple and predict motion states, covering head, lip, and expression movements while maintaining a clear separation of identity and motion features."
283
+ ],
284
+ "image_footnote": [],
285
+ "bbox": [
286
+ 89,
287
+ 84,
288
+ 614,
289
+ 300
290
+ ],
291
+ "page_idx": 2
292
+ },
293
+ {
294
+ "type": "image",
295
+ "img_path": "images/f45a544d472e7ccf50b45b6eb40697973bb9a007b0e2f5231b28f15ee4ad1b16.jpg",
296
+ "image_caption": [],
297
+ "image_footnote": [],
298
+ "bbox": [
299
+ 617,
300
+ 85,
301
+ 879,
302
+ 300
303
+ ],
304
+ "page_idx": 2
305
+ },
306
+ {
307
+ "type": "text",
308
+ "text": "3.1. Overall Framework",
309
+ "text_level": 1,
310
+ "bbox": [
311
+ 84,
312
+ 380,
313
+ 259,
314
+ 393
315
+ ],
316
+ "page_idx": 2
317
+ },
318
+ {
319
+ "type": "text",
320
+ "text": "To address the above challenges, we introduce MCDM, a framework centered on a denoising UNet resembling Stable Diffusion v1.5 (SD v1.5)<sup>1</sup>, tailored to denoise multi-frame noisy latent inputs under conditional guidance. As illustrated in Figure 1, unlike standard UNet architectures, each Transformer block in MCDM incorporates four attention layers. The first layer, a self-attention, mirrors that in SD v1.5. The second and third layers are parallel cross attention (spatial-wise), designed for distinct interactions: the archived-clip reference attention layer, which integrates motion priors from archived clip encoded by the archived-clip motion-prior module (Section 3.2), and the present-clip reference attention, which engages with present clip priors from the present-clip motion-prior diffusion model (Section 3.3). The fourth layer, the memory-efficient temporal attention (Section 3.4), is a temporal-wise self attention that dynamically updates and merges archived motion features with current motion features, effectively mitigating error accumulation.",
321
+ "bbox": [
322
+ 83,
323
+ 404,
324
+ 475,
325
+ 690
326
+ ],
327
+ "page_idx": 2
328
+ },
329
+ {
330
+ "type": "text",
331
+ "text": "3.2. Archived-Clip Motion-Prior Module",
332
+ "text_level": 1,
333
+ "bbox": [
334
+ 84,
335
+ 707,
336
+ 374,
337
+ 722
338
+ ],
339
+ "page_idx": 2
340
+ },
341
+ {
342
+ "type": "text",
343
+ "text": "Motivation. Existing methods typically use the past $2 - 4$ frames to guide the denoising network for generating temporally consistent videos. However, this limited history frame is insufficient for maintaining long-term consistency, and incorporating more frames exponentially increases computational demand, making it impractical for real-world applications. To overcome these limitations, we propose an archived-clip motion prior that integrates long-term historical frames and a reference frame into the denoising UNet",
344
+ "bbox": [
345
+ 84,
346
+ 729,
347
+ 475,
348
+ 867
349
+ ],
350
+ "page_idx": 2
351
+ },
352
+ {
353
+ "type": "text",
354
+ "text": "via conditional frame-aligned attention, enhancing identity representation and establishing motion context.",
355
+ "bbox": [
356
+ 496,
357
+ 380,
358
+ 885,
359
+ 411
360
+ ],
361
+ "page_idx": 2
362
+ },
363
+ {
364
+ "type": "text",
365
+ "text": "Architecture. As illustrated in Figure 1, the archived-clip motion-prior consists of two frozen VAE encoders, two learnable patchy layers, and a frame-aligned attention mechanism. Given a reference frame $X_{\\mathrm{ref}} \\in \\mathbb{R}^{b \\times 1 \\times c \\times h \\times w}$ and a archived clip $X_{\\mathrm{arch}} \\in \\mathbb{R}^{b \\times a \\times c \\times h \\times w}$ , where $b$ , $c$ , $h$ , $w$ , and $a$ represent the batch size, channels, height, width, and the number of archived frames, respectively. First, the frozen VAE encoder extracts latent features from both the reference and archived frames, resulting in $f_x \\in \\mathbb{R}^{b \\times 1 \\times 4 \\times \\frac{h}{8} \\times \\frac{w}{8}}$ and $f_a \\in \\mathbb{R}^{b \\times a \\times 4 \\times \\frac{h}{8} \\times \\frac{w}{8}}$ , respectively. Next, the learnable patchy layers, consisting of 2D convolutions followed by flattening operations, transform these latent features into tokens, yielding $F_x \\in \\mathbb{R}^{b \\times 1 \\times m \\times d}$ and $F_a \\in \\mathbb{R}^{b \\times a \\times m \\times d}$ , where $m$ and $d$ denote the token length and embedding dimension.",
366
+ "bbox": [
367
+ 496,
368
+ 417,
369
+ 887,
370
+ 645
371
+ ],
372
+ "page_idx": 2
373
+ },
374
+ {
375
+ "type": "text",
376
+ "text": "In the frame-aligned attention, we adopt a frame-wise computation approach to improve efficiency and adaptability for long temporal sequences. For each archived frame $i \\in [1,a]$ , the Key $K_{i}$ is derived from the reference tokens $F_{x}$ , while the Value $V_{i}$ is derived from the tokens of the corresponding archived frame $F_{a}^{i}$ :",
377
+ "bbox": [
378
+ 495,
379
+ 652,
380
+ 887,
381
+ 743
382
+ ],
383
+ "page_idx": 2
384
+ },
385
+ {
386
+ "type": "equation",
387
+ "text": "\n$$\nK _ {i} = F _ {x} \\mathbf {W} _ {K}, \\quad V _ {i} = F _ {a _ {i}} \\mathbf {W} _ {V}, \\tag {1}\n$$\n",
388
+ "text_format": "latex",
389
+ "bbox": [
390
+ 583,
391
+ 753,
392
+ 885,
393
+ 771
394
+ ],
395
+ "page_idx": 2
396
+ },
397
+ {
398
+ "type": "text",
399
+ "text": "where $\\mathbf{W}_K\\in \\mathbb{R}^{d\\times d}$ and $\\mathbf{W}_V\\in \\mathbb{R}^{d\\times d}$ are learnable projection matrices for the Key and Value. The attention for each frame $i$ is then computed as:",
400
+ "bbox": [
401
+ 495,
402
+ 777,
403
+ 887,
404
+ 824
405
+ ],
406
+ "page_idx": 2
407
+ },
408
+ {
409
+ "type": "equation",
410
+ "text": "\n$$\n\\operatorname {A t t e n t i o n} (Q, K _ {i}, V _ {i}) = \\operatorname {S o f t m a x} \\left(\\frac {Q K _ {i} ^ {\\top}}{\\sqrt {d}}\\right) V _ {i}, \\tag {2}\n$$\n",
411
+ "text_format": "latex",
412
+ "bbox": [
413
+ 535,
414
+ 833,
415
+ 885,
416
+ 868
417
+ ],
418
+ "page_idx": 2
419
+ },
420
+ {
421
+ "type": "text",
422
+ "text": "where $Q \\in \\mathbb{R}^{n \\times d}$ represents a learnable query tokens, with $n$ denoting the number of queries. Aggregating the outputs",
423
+ "bbox": [
424
+ 496,
425
+ 875,
426
+ 885,
427
+ 906
428
+ ],
429
+ "page_idx": 2
430
+ },
431
+ {
432
+ "type": "header",
433
+ "text": "Motion-Prior Conditional Diffusion Model",
434
+ "bbox": [
435
+ 349,
436
+ 56,
437
+ 620,
438
+ 69
439
+ ],
440
+ "page_idx": 2
441
+ },
442
+ {
443
+ "type": "page_footnote",
444
+ "text": "<sup>1</sup>https://huggingface.co/runwayml/stable-diffusion-v1-5",
445
+ "bbox": [
446
+ 106,
447
+ 875,
448
+ 439,
449
+ 891
450
+ ],
451
+ "page_idx": 2
452
+ },
453
+ {
454
+ "type": "page_number",
455
+ "text": "3",
456
+ "bbox": [
457
+ 480,
458
+ 922,
459
+ 491,
460
+ 934
461
+ ],
462
+ "page_idx": 2
463
+ },
464
+ {
465
+ "type": "text",
466
+ "text": "across all frames yields the final output $F_{ac}\\in \\mathbb{R}^{b\\times a\\times n\\times d}$ where each frame's attended tokens reflect both the static reference and dynamic temporal information.",
467
+ "bbox": [
468
+ 84,
469
+ 83,
470
+ 478,
471
+ 131
472
+ ],
473
+ "page_idx": 3
474
+ },
475
+ {
476
+ "type": "text",
477
+ "text": "3.3. Present-Clip Motion-Prior Diffusion Model",
478
+ "text_level": 1,
479
+ "bbox": [
480
+ 84,
481
+ 146,
482
+ 421,
483
+ 162
484
+ ],
485
+ "page_idx": 3
486
+ },
487
+ {
488
+ "type": "text",
489
+ "text": "Motivation. Motion information is typically driven either by landmark signals from a driving video or directly by audio cues. The landmark-driven approach guides reference image movements but limits the natural diversity of head motions and expressions. In contrast, audio-driven methods rely solely on audio cues, often lacking sufficient guidance for realistic head movement. To address these limitations, we propose the present-clip motion-prior diffusion model, which first predicts motion states, including head, lip, and expressions motions, rather than directly generating TalkingFace videos.",
490
+ "bbox": [
491
+ 84,
492
+ 169,
493
+ 475,
494
+ 335
495
+ ],
496
+ "page_idx": 3
497
+ },
498
+ {
499
+ "type": "text",
500
+ "text": "Architecture. We aim to predict motion in head, lip, and expressions lip movements, conditioned on audio and image tokens. As shown in Figure 1 (right), we begin by extracting feature tokens from the audio encoder, image encoder, head encoder, lip encoder, and express encoder.",
501
+ "bbox": [
502
+ 84,
503
+ 343,
504
+ 473,
505
+ 420
506
+ ],
507
+ "page_idx": 3
508
+ },
509
+ {
510
+ "type": "text",
511
+ "text": "Audio Encoder: Audio sequence tokens are extracted from the input audio via a frozen Wav2Vec model (Baevski et al., 2020).",
512
+ "bbox": [
513
+ 84,
514
+ 426,
515
+ 475,
516
+ 470
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "Image Encoder: Image tokens are extracted from the reference frame using a frozen CLIP (Radford et al., 2021) and are replicated along the temporal dimension to align with audio features.",
523
+ "bbox": [
524
+ 84,
525
+ 479,
526
+ 475,
527
+ 540
528
+ ],
529
+ "page_idx": 3
530
+ },
531
+ {
532
+ "type": "text",
533
+ "text": "Head Encoder: Head tokens are extracted from reference landmark video through a frozen Landmark Guider $^{2}$ ; notably, these tokens are optional, allowing simulation of conditions with or without reference video guidance.",
534
+ "bbox": [
535
+ 84,
536
+ 547,
537
+ 475,
538
+ 609
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "text",
544
+ "text": "- Lip and Express Encoders: Lip and expression tokens are extracted from the target video using a custom-trained encoder. Details of the lip and express encoders are provided in the supplementary material.",
545
+ "bbox": [
546
+ 84,
547
+ 616,
548
+ 473,
549
+ 676
550
+ ],
551
+ "page_idx": 3
552
+ },
553
+ {
554
+ "type": "text",
555
+ "text": "We then pass the audio and image tokens through a feature-wise linear modulation (FiLM) layer (Perez et al., 2018) to adaptively learn multimodal correlation tokens. These tokens, along with the timestep $t$ , and noise-added tokens for head, lip, and expression movements, are prepended to the input sequence. This composite input is fed into an $L$ -layer structure consisting of a multimodal causal transformer block (Peebles & Xie, 2023) and a temporal interaction transformer block (Hu, 2024), with added noise in facial motion tokens acting as the supervision. The training loss $L_{\\mathrm{prior}}$ for the present-clip motion-prior diffusion model $\\epsilon_{\\theta}$ is",
556
+ "bbox": [
557
+ 84,
558
+ 683,
559
+ 475,
560
+ 851
561
+ ],
562
+ "page_idx": 3
563
+ },
564
+ {
565
+ "type": "image",
566
+ "img_path": "images/141683f3dd6f9e15e88a83a35e740a2f1e22d83bf26d85abb1da87f78d073251.jpg",
567
+ "image_caption": [
568
+ "Figure 2. The overview of memory-efficient temporal attention. It can dynamically update and integrate historical motion features with current ones."
569
+ ],
570
+ "image_footnote": [],
571
+ "bbox": [
572
+ 501,
573
+ 80,
574
+ 883,
575
+ 214
576
+ ],
577
+ "page_idx": 3
578
+ },
579
+ {
580
+ "type": "text",
581
+ "text": "defined as:",
582
+ "bbox": [
583
+ 496,
584
+ 273,
585
+ 571,
586
+ 286
587
+ ],
588
+ "page_idx": 3
589
+ },
590
+ {
591
+ "type": "equation",
592
+ "text": "\n$$\nL _ {\\text {p r i o r}} = \\mathbb {E} _ {t, F _ {p}, z _ {t}, \\epsilon ,} \\| \\epsilon - \\epsilon_ {\\theta} (z _ {t}, t, F _ {p}) \\| ^ {2}. \\tag {3}\n$$\n",
593
+ "text_format": "latex",
594
+ "bbox": [
595
+ 557,
596
+ 297,
597
+ 885,
598
+ 318
599
+ ],
600
+ "page_idx": 3
601
+ },
602
+ {
603
+ "type": "text",
604
+ "text": "Without landmark guidance, $F_{p}$ represent multimodal interaction tokens from audio and the reference frame. $z_{t}$ represent noise-added tokens for head, lip, and expression movements at timestep $t$ . With landmark guidance, $F_{p}$ additionally include landmark tokens. $z_{t}$ represent noise-added lip and expression tokens. This design allows flexible conditioning, incorporating landmark guidance when available, while effectively leveraging multimodal interactions for accurate motion state predictions.",
605
+ "bbox": [
606
+ 496,
607
+ 327,
608
+ 888,
609
+ 464
610
+ ],
611
+ "page_idx": 3
612
+ },
613
+ {
614
+ "type": "text",
615
+ "text": "3.4. Memory-Efficient Temporal Attention",
616
+ "text_level": 1,
617
+ "bbox": [
618
+ 496,
619
+ 479,
620
+ 797,
621
+ 494
622
+ ],
623
+ "page_idx": 3
624
+ },
625
+ {
626
+ "type": "text",
627
+ "text": "Motivation. For long-term TalkingFace generation, current methods primarily adopt either fully or semi-autoregressive strategies: the former generates one frame per iteration, while the latter produces a fixed-length clip. However, due to GPU memory limitations, relying on a restricted frame history for extrapolation often results in error accumulation, as limited prior motion information undermines consistency over extended sequences. Therefore, we propose a memory-efficient temporal attention to dynamically update and integrate historical motion features with current ones, reducing error accumulation.",
628
+ "bbox": [
629
+ 495,
630
+ 503,
631
+ 887,
632
+ 669
633
+ ],
634
+ "page_idx": 3
635
+ },
636
+ {
637
+ "type": "text",
638
+ "text": "Architecture.AnimateDiff (Guo et al., 2023) demonstrates that the temporal layer in self-attention ensures smooth temporal continuity and consistency of appearance details across frames. We replace traditional self-attention with fast attention (Choromanski et al., 2020) in the temporal layer to enhance temporal continuity and manage memory efficiently, allowing the accumulation of extensive historical motion information for consistent long-sequence generation. As shown in Figure 2, let $F_{ac}$ and $F_{pc}$ denote the output features of the archived-clip and present-clip motion-prior modules, respectively, and let $Z_{t}$ represent the noisy latent feature at time step $t$ . These features undergo reference attention, yielding refined representations $F_{ac}^{\\mathrm{ref}}$ and $F_{pc}^{\\mathrm{ref}}$ , which capture spatial-domain motion characteristics. $F_{ac}^{\\mathrm{ref}}$ is then input into the motion memory update mechanism, which ag",
639
+ "bbox": [
640
+ 495,
641
+ 676,
642
+ 888,
643
+ 906
644
+ ],
645
+ "page_idx": 3
646
+ },
647
+ {
648
+ "type": "header",
649
+ "text": "Motion-Prior Conditional Diffusion Model",
650
+ "bbox": [
651
+ 349,
652
+ 56,
653
+ 622,
654
+ 69
655
+ ],
656
+ "page_idx": 3
657
+ },
658
+ {
659
+ "type": "page_footnote",
660
+ "text": "$^{2}$ https://github.com/MooreThreads/Moore-AnimateAnyone",
661
+ "bbox": [
662
+ 104,
663
+ 859,
664
+ 460,
665
+ 875
666
+ ],
667
+ "page_idx": 3
668
+ },
669
+ {
670
+ "type": "page_number",
671
+ "text": "4",
672
+ "bbox": [
673
+ 480,
674
+ 922,
675
+ 491,
676
+ 934
677
+ ],
678
+ "page_idx": 3
679
+ },
680
+ {
681
+ "type": "text",
682
+ "text": "gregates motion across frames, producing the update feature $M_{f}$ . The memory update mechanism is defined as follows,",
683
+ "bbox": [
684
+ 84,
685
+ 85,
686
+ 475,
687
+ 116
688
+ ],
689
+ "page_idx": 4
690
+ },
691
+ {
692
+ "type": "text",
693
+ "text": "(1) Initialization: At the first frame, the memory $M_{1}$ is initialized with $F_{ac}^{\\mathrm{ref}}$ since no prior motion information exists:",
694
+ "bbox": [
695
+ 84,
696
+ 122,
697
+ 475,
698
+ 154
699
+ ],
700
+ "page_idx": 4
701
+ },
702
+ {
703
+ "type": "equation",
704
+ "text": "\n$$\nM _ {1} = F _ {a c} ^ {\\text {r e f}}. \\tag {4}\n$$\n",
705
+ "text_format": "latex",
706
+ "bbox": [
707
+ 240,
708
+ 181,
709
+ 475,
710
+ 200
711
+ ],
712
+ "page_idx": 4
713
+ },
714
+ {
715
+ "type": "text",
716
+ "text": "(2) Memory Update: For each frame $f$ , the memory $M_{f}$ is updated by combining the current feature $F_{ac}^{\\mathrm{ref}}$ with the previous memory $M_{f-1}$ as:",
717
+ "bbox": [
718
+ 84,
719
+ 222,
720
+ 473,
721
+ 268
722
+ ],
723
+ "page_idx": 4
724
+ },
725
+ {
726
+ "type": "equation",
727
+ "text": "\n$$\nM _ {f} = \\alpha M _ {f - 1} + (1 - \\alpha) F _ {a c} ^ {\\text {r e f}}, \\tag {5}\n$$\n",
728
+ "text_format": "latex",
729
+ "bbox": [
730
+ 178,
731
+ 296,
732
+ 473,
733
+ 315
734
+ ],
735
+ "page_idx": 4
736
+ },
737
+ {
738
+ "type": "text",
739
+ "text": "where $\\alpha \\in [0,1]$ controls the balance between past and current frames. This fixed memory update mechanism avoids storage bottlenecks of historical information. We then concatenate $F_{pc}^{\\mathrm{ref}}$ with $M_f$ along the temporal dimension, creating $F_m^{\\mathrm{cat}}$ , which integrates past and current motion. $F_m^{\\mathrm{cat}}$ is processed through Fast Attention along the temporal axis to capture dependencies across frames, with the lower half of the resulting feature map used as the output $F_m$ .",
740
+ "bbox": [
741
+ 84,
742
+ 335,
743
+ 475,
744
+ 458
745
+ ],
746
+ "page_idx": 4
747
+ },
748
+ {
749
+ "type": "text",
750
+ "text": "3.5. Training and Inference",
751
+ "text_level": 1,
752
+ "bbox": [
753
+ 84,
754
+ 474,
755
+ 279,
756
+ 489
757
+ ],
758
+ "page_idx": 4
759
+ },
760
+ {
761
+ "type": "text",
762
+ "text": "Training. Our training process is divided into three stages, each with specific learning objectives. Each stage is supervised using standard MSE loss (Rombach et al., 2022).",
763
+ "bbox": [
764
+ 84,
765
+ 498,
766
+ 475,
767
+ 542
768
+ ],
769
+ "page_idx": 4
770
+ },
771
+ {
772
+ "type": "text",
773
+ "text": "Stage1. The archived-clip motion-prior is trained to enhance identity representation and establish a robust facial motion context across extended sequences. The present-clip reference attention and memory-efficient temporal attention modules remain frozen during this stage.",
774
+ "bbox": [
775
+ 84,
776
+ 550,
777
+ 475,
778
+ 626
779
+ ],
780
+ "page_idx": 4
781
+ },
782
+ {
783
+ "type": "text",
784
+ "text": "Stage2. The present-clip motion-prior diffusion model is trained to predict the motion states of facial expressions, lip, and head movements. To simulate scenarios without a driving video, we randomly drop the entire landmark clip.",
785
+ "bbox": [
786
+ 84,
787
+ 633,
788
+ 473,
789
+ 694
790
+ ],
791
+ "page_idx": 4
792
+ },
793
+ {
794
+ "type": "text",
795
+ "text": "Stage3. The full motion-priors conditional diffusion model is trained for generating stable and consistent long-term TalkingFace videos. Only the present-clip reference and memory-efficient temporal attentions are trained.",
796
+ "bbox": [
797
+ 84,
798
+ 700,
799
+ 473,
800
+ 762
801
+ ],
802
+ "page_idx": 4
803
+ },
804
+ {
805
+ "type": "text",
806
+ "text": "Inference. The present-clip motion-prior diffusion model first predicts distinct motion tokens based on the given conditions (either with or without landmark guidance). Landmarks are not used by default unless specified. Subsequently, MCDM utilizes these motion tokens, alongside a single reference image and audio input, to generate the video sequence. For the initial archived clip, we initialize it using the reference image and then progressively update the motion memory to ensure temporal consistency.",
807
+ "bbox": [
808
+ 84,
809
+ 768,
810
+ 475,
811
+ 905
812
+ ],
813
+ "page_idx": 4
814
+ },
815
+ {
816
+ "type": "image",
817
+ "img_path": "images/9e0594185aff3adeb831bb135df096d74964443f72489100838eef30b9f0d1e4.jpg",
818
+ "image_caption": [
819
+ "Table 1. Statistics of our TalkingFace-Wild dataset. We release a TalkingFace dataset that is well-balanced across 10 languages."
820
+ ],
821
+ "image_footnote": [],
822
+ "bbox": [
823
+ 509,
824
+ 77,
825
+ 880,
826
+ 179
827
+ ],
828
+ "page_idx": 4
829
+ },
830
+ {
831
+ "type": "text",
832
+ "text": "4. Experiments",
833
+ "text_level": 1,
834
+ "bbox": [
835
+ 496,
836
+ 243,
837
+ 629,
838
+ 260
839
+ ],
840
+ "page_idx": 4
841
+ },
842
+ {
843
+ "type": "text",
844
+ "text": "4.1. Experimental Settings",
845
+ "text_level": 1,
846
+ "bbox": [
847
+ 496,
848
+ 268,
849
+ 686,
850
+ 284
851
+ ],
852
+ "page_idx": 4
853
+ },
854
+ {
855
+ "type": "text",
856
+ "text": "Datasets. The HDTF dataset (Zhang et al., 2021) comprises 410 videos with over 10,000 unique speech sentences, varied head poses, and movement patterns. Following prior work (Chen et al., 2024; Tian et al., 2024; Xu et al., 2024), we split HDTF into training and testing sets with a 9:1 ratio. The CelebV-HQ dataset (Zhu et al., 2022) includes 35,666 clips (3–20 seconds each) across 15,653 identities, totaling roughly 65 hours. Both datasets present quality issues, such as audio-lip misalignment, facial occlusions, small facial regions, and low resolution. To mitigate these, we developed a custom data processing pipeline for high-quality TalkingFace data, detailed in the following subsection.",
857
+ "bbox": [
858
+ 496,
859
+ 292,
860
+ 887,
861
+ 474
862
+ ],
863
+ "page_idx": 4
864
+ },
865
+ {
866
+ "type": "text",
867
+ "text": "Additionally, mostly methods (Wang et al., 2024; Xu et al., 2024; Jiang et al., 2024) employ proprietary datasets for supplementary training and testing. Similarly, we sourced a variety of TalkingFace videos from YouTube using targeted keyword queries (e.g., \"nationality,\" \"interview,\" \"dialogue\") across different languages and contexts. From Table 1, we collect a new high-quality dataset, TalkingFace-Wild, covering 10 languages and totaling over 200 hours after processing through our data pipeline. To assess the generalization capability of models, we also constructed an open-set test collection of 20 diverse portrait images and 20 audio clips.",
868
+ "bbox": [
869
+ 496,
870
+ 481,
871
+ 888,
872
+ 662
873
+ ],
874
+ "page_idx": 4
875
+ },
876
+ {
877
+ "type": "text",
878
+ "text": "Data Processing. First, we detect scene transitions in raw videos using PySceneDetect<sup>3</sup> and trim each clip to a maximum duration of 30 seconds. Next, we apply face detection (Guo et al., 2021) to exclude videos lacking complete faces or containing multiple faces, using the bounding boxes to extract talking head regions. Third, an image quality assessment model (Su et al., 2020) filters out low-quality and low-resolution clips. Fourth, SyncNet (Prajwal et al., 2020) assesses audio-lip synchronization, discarding clips with misaligned audio. Finally, we manually inspect a subset to verify audio-lip synchronization and overall video quality, ensuring precise filtering. In addition, to ensure a fair comparison, we report results trained independently on each of the previously mentioned datasets.",
879
+ "bbox": [
880
+ 496,
881
+ 670,
882
+ 888,
883
+ 881
884
+ ],
885
+ "page_idx": 4
886
+ },
887
+ {
888
+ "type": "header",
889
+ "text": "Motion-Prior Conditional Diffusion Model",
890
+ "bbox": [
891
+ 349,
892
+ 56,
893
+ 622,
894
+ 69
895
+ ],
896
+ "page_idx": 4
897
+ },
898
+ {
899
+ "type": "page_footnote",
900
+ "text": "<sup>3</sup>https://github.com/Breakthrough/PySceneDetect",
901
+ "bbox": [
902
+ 517,
903
+ 890,
904
+ 813,
905
+ 905
906
+ ],
907
+ "page_idx": 4
908
+ },
909
+ {
910
+ "type": "page_number",
911
+ "text": "5",
912
+ "bbox": [
913
+ 480,
914
+ 922,
915
+ 491,
916
+ 934
917
+ ],
918
+ "page_idx": 4
919
+ },
920
+ {
921
+ "type": "table",
922
+ "img_path": "images/e18fca42014fd159e7eefa6c2e78c886186d0ea5301eda3dda40166dd9e240bc.jpg",
923
+ "table_caption": [
924
+ "Motion-Prior Conditional Diffusion Model"
925
+ ],
926
+ "table_footnote": [],
927
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"6\">HDTF</td><td colspan=\"6\">CelebV-HQ</td></tr><tr><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>Audio2Head</td><td>76.08</td><td>1417.65</td><td>3.16</td><td>17.62</td><td>0.572</td><td>3.81</td><td>127.30</td><td>1882.64</td><td>1.96</td><td>17.36</td><td>0.391</td><td>8.42</td></tr><tr><td>V-Express</td><td>57.14</td><td>1152.29</td><td>5.05</td><td>11.68</td><td>0.706</td><td>1.83</td><td>98.07</td><td>1465.26</td><td>3.71</td><td>13.41</td><td>0.514</td><td>5.18</td></tr><tr><td>AniPortrait</td><td>54.81</td><td>1072.63</td><td>5.40</td><td>11.39</td><td>0.727</td><td>1.95</td><td>94.25</td><td>1260.74</td><td>3.98</td><td>12.88</td><td>0.536</td><td>4.91</td></tr><tr><td>SadTalker</td><td>52.77</td><td>956.24</td><td>5.73</td><td>10.65</td><td>0.736</td><td>1.87</td><td>88.22</td><td>1055.49</td><td>4.05</td><td>11.20</td><td>0.565</td><td>4.66</td></tr><tr><td>Hallo</td><td>37.29</td><td>616.04</td><td>6.33</td><td>8.64</td><td>0.774</td><td>1.67</td><td>72.46</td><td>907.60</td><td>6.48</td><td>8.61</td><td>0.620</td><td>2.93</td></tr><tr><td>EchoMimic</td><td>31.44</td><td>595.17</td><td>6.96</td><td>8.59</td><td>0.782</td><td>1.64</td><td>71.47</td><td>893.28</td><td>6.70</td><td>8.45</td><td>0.637</td><td>2.81</td></tr><tr><td>MegActor-Σ</td><td>31.37</td><td>586.10</td><td>6.87</td><td>8.55</td><td>0.778</td><td>1.62</td><td>70.82</td><td>875.21</td><td>6.77</td><td>8.32</td><td>0.634</td><td>2.74</td></tr><tr><td>MCDM (Ours)</td><td>26.45</td><td>543.28</td><td>7.49</td><td>8.04</td><td>0.824</td><td>1.51</td><td>67.29</td><td>784.53</td><td>7.25</td><td>7.84</td><td>0.662</td><td>2.31</td></tr></table>",
928
+ "bbox": [
929
+ 86,
930
+ 70,
931
+ 885,
932
+ 214
933
+ ],
934
+ "page_idx": 5
935
+ },
936
+ {
937
+ "type": "text",
938
+ "text": "Metrics. We utilize a comprehensive set of metrics to assess the quality of generated videos and audio-lip synchronization. Fréchet Inception Distance (FID) (Heusel et al., 2017) evaluates individual frame quality by comparing feature distributions from a pre-trained model. Fréchet Video Distance (FVD) (Unterthiner et al., 2019) quantifies the distributional distance between real and generated videos, providing an overall assessment of video fidelity. Sync-C and Sync-D (Chung & Zisserman, 2017) evaluate lip synchronization from content and dynamic perspectives, with higher Sync-C and lower Sync-D scores indicating superior alignment with audio. Structural Similarity Index (SSIM) (Wang et al., 2004) measures structural consistency between ground truth and generated videos, while E-FID (Deng et al., 2019) provides a refined image fidelity evaluation based on Inception network features.",
939
+ "bbox": [
940
+ 88,
941
+ 247,
942
+ 473,
943
+ 487
944
+ ],
945
+ "page_idx": 5
946
+ },
947
+ {
948
+ "type": "text",
949
+ "text": "Implementations. The experiments are conducted on a computing platform equipped with 8 NVIDIA V100 GPUs. Training is performed in three stages, with each stage consisting of 30,000 iterations and a batch size of 4. Video data is processed at a resolution of $512 \\times 512$ . The learning rate is fixed at $1 \\times 10^{-5}$ across all stages, and the AdamW optimizer is employed to stabilize training. Each training clip comprised 16 video frames. In the archived-clip motion-prior module, we set $\\alpha = 16$ , $m = 256$ , and $n = 16$ . In the present-clip motion-prior diffusion model, the number of layers $L$ is set to 8, and the weighting factor $\\alpha$ in Eq. 5 is configured to 0.1 to balance the influence of prior motion information. This setup is chosen to optimize long-term identity preservation and enhance motion consistency within generated TalkingFace videos.",
950
+ "bbox": [
951
+ 88,
952
+ 497,
953
+ 473,
954
+ 722
955
+ ],
956
+ "page_idx": 5
957
+ },
958
+ {
959
+ "type": "text",
960
+ "text": "4.2. Main Results",
961
+ "text_level": 1,
962
+ "bbox": [
963
+ 88,
964
+ 739,
965
+ 210,
966
+ 753
967
+ ],
968
+ "page_idx": 5
969
+ },
970
+ {
971
+ "type": "text",
972
+ "text": "We compare our method with several SOTA methods, including Audio2Head (Wang et al., 2021a), V-Express (Wang et al., 2024), AniPortrait (Wei et al., 2024), SadTalker (Zhang et al., 2023), Hallo (Xu et al., 2024), EchoMimic (Chen et al., 2024), and MegActor- $\\Sigma$ (Yang et al., 2024), from quantitative, qualitative, and user study. Unless otherwise specified, all methods do not use landmarks to ensure a fair comparison.",
973
+ "bbox": [
974
+ 88,
975
+ 763,
976
+ 473,
977
+ 883
978
+ ],
979
+ "page_idx": 5
980
+ },
981
+ {
982
+ "type": "table",
983
+ "img_path": "images/6863d406dafc39b4496b6450b0bfd386b0f6f8a32193ec6cf1256232ccf36415.jpg",
984
+ "table_caption": [
985
+ "Table 2. Quantitative comparisons on HDTF and CelebV-HQ. MCDM achieves the top results across all metrics, with best in bold."
986
+ ],
987
+ "table_footnote": [],
988
+ "table_body": "<table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>Audio2Head</td><td>87.21</td><td>1836.25</td><td>2.32</td><td>13.92</td><td>0.613</td><td>3.12</td></tr><tr><td>V-Express</td><td>62.18</td><td>1324.57</td><td>5.45</td><td>9.04</td><td>0.674</td><td>2.81</td></tr><tr><td>AniPortrait</td><td>56.11</td><td>954.91</td><td>6.37</td><td>8.29</td><td>0.706</td><td>2.60</td></tr><tr><td>SadTalker</td><td>52.77</td><td>847.20</td><td>6.94</td><td>7.92</td><td>0.724</td><td>2.49</td></tr><tr><td>Hallo</td><td>51.35</td><td>792.38</td><td>6.85</td><td>7.65</td><td>0.728</td><td>2.35</td></tr><tr><td>EchoMimic</td><td>49.20</td><td>751.44</td><td>7.06</td><td>7.18</td><td>0.737</td><td>2.31</td></tr><tr><td>MegActor-Σ</td><td>48.57</td><td>724.40</td><td>7.22</td><td>7.14</td><td>0.745</td><td>2.29</td></tr><tr><td>MCDM (Ours)</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>",
989
+ "bbox": [
990
+ 501,
991
+ 247,
992
+ 890,
993
+ 375
994
+ ],
995
+ "page_idx": 5
996
+ },
997
+ {
998
+ "type": "text",
999
+ "text": "Table 3. Quantitative comparisons on TalkingFace-Wild. MCDM achieves a significant advantage over other methods.",
1000
+ "bbox": [
1001
+ 501,
1002
+ 380,
1003
+ 885,
1004
+ 407
1005
+ ],
1006
+ "page_idx": 5
1007
+ },
1008
+ {
1009
+ "type": "text",
1010
+ "text": "Quantitative Evaluation. Table 2 presents a quantitative comparison on the HDTF (Zhang et al., 2021) and CelebV-HQ (Zhu et al., 2022), illustrating the overall superior performance of diffusion-based methods compared to GAN-based methods. Our proposed MCDM achieves the best scores across all metrics, outperforming existing diffusion-based approaches. Specifically, MCDM achieves superior lip-sync accuracy, reflected in higher Sync-C and lower Sync-D scores, outperforming methods like EchoMimic (Chen et al., 2024) and MegActor- $\\Sigma$ (Yang et al., 2024), which show notable declines in synchronization quality. MCDM's outstanding SSIM and E-FID scores also highlight its ability to generate visually appealing, temporally consistent content with precise lip synchronization.",
1011
+ "bbox": [
1012
+ 501,
1013
+ 422,
1014
+ 885,
1015
+ 633
1016
+ ],
1017
+ "page_idx": 5
1018
+ },
1019
+ {
1020
+ "type": "text",
1021
+ "text": "Table 3 summarizes the quantitative performance on the proposed TalkingFace-Wild dataset. Consistent with results on HDTF (Zhang et al., 2021) and CelebV-HQ (Zhu et al., 2022), MCDM surpasses all competing SOTA methods across evaluation metrics, demonstrating marked improvements in visual quality and temporal consistency. Achieving the best FID, FVD, and an E-FID of 1.97, MCDM shows strong capability in generating high-fidelity TalkingFace videos under diverse conditions, effectively maintaining temporal coherence across audio, expressions, and lip synchronization.",
1022
+ "bbox": [
1023
+ 501,
1024
+ 641,
1025
+ 885,
1026
+ 805
1027
+ ],
1028
+ "page_idx": 5
1029
+ },
1030
+ {
1031
+ "type": "text",
1032
+ "text": "Qualitative Evaluation. Figure 3 provides a qualitative comparison of our method against other SOTA approaches. Compared to V-Express (Wang et al., 2024) and EchoMimic (Chen et al., 2024), our approach shows superior head and lip synchronization, benefiting from the audiovisual consistency introduced by motion priors.",
1033
+ "bbox": [
1034
+ 501,
1035
+ 815,
1036
+ 885,
1037
+ 904
1038
+ ],
1039
+ "page_idx": 5
1040
+ },
1041
+ {
1042
+ "type": "page_number",
1043
+ "text": "6",
1044
+ "bbox": [
1045
+ 480,
1046
+ 924,
1047
+ 491,
1048
+ 934
1049
+ ],
1050
+ "page_idx": 5
1051
+ },
1052
+ {
1053
+ "type": "image",
1054
+ "img_path": "images/21c37f389b56476ffbb7367ef560e17747ba89feb2e4b964cda5d5cf3be947fa.jpg",
1055
+ "image_caption": [
1056
+ "Error Eye Sync",
1057
+ "Error",
1058
+ "Expression Sync",
1059
+ "Error",
1060
+ "Identity Sync",
1061
+ "Reference",
1062
+ "Image",
1063
1076
+ "Figure 3. Qualitative comparison on HDTF and CelebV-HQ. Our method achieves the best generation results, particularly in identity consistency and motion detail."
1077
+ ],
1078
+ "image_footnote": [],
1079
+ "bbox": [
1080
+ 106,
1081
+ 84,
1082
+ 460,
1083
+ 396
1084
+ ],
1085
+ "page_idx": 6
1086
+ },
1087
+ {
1088
+ "type": "image",
1089
+ "img_path": "images/a5e329a60ed41c6cea3ea9a81fc5bcbfbda7850cc52ad0fad19c383240c19c83.jpg",
1090
+ "image_caption": [],
1091
+ "image_footnote": [],
1092
+ "bbox": [
1093
+ 462,
1094
+ 140,
1095
+ 519,
1096
+ 146
1097
+ ],
1098
+ "page_idx": 6
1099
+ },
1100
+ {
1101
+ "type": "image",
1102
+ "img_path": "images/9b30078b7aab5e4b00084ec39d3259f90625c79e17a8361273bb6c81ceb25e01.jpg",
1103
+ "image_caption": [
1104
+ "Error Lip Sync"
1105
+ ],
1106
+ "image_footnote": [],
1107
+ "bbox": [
1108
+ 522,
1109
+ 84,
1110
+ 859,
1111
+ 395
1112
+ ],
1113
+ "page_idx": 6
1114
+ },
1115
+ {
1116
+ "type": "image",
1117
+ "img_path": "images/33f4dc0ae860d80985bd1abd247f7db15646938bbc3bd9ef3b58df86a7db2b6e.jpg",
1118
+ "image_caption": [
1119
+ "A: Audio2Head",
1120
+ "B: V-Express",
1121
+ "C: AniPortrait",
1122
+ "Figure 4. User study results of identity consistency, motion synchronization, and video quality. Higher values indicate better performance."
1123
+ ],
1124
+ "image_footnote": [],
1125
+ "bbox": [
1126
+ 127,
1127
+ 446,
1128
+ 354,
1129
+ 540
1130
+ ],
1131
+ "page_idx": 6
1132
+ },
1133
+ {
1134
+ "type": "image",
1135
+ "img_path": "images/e9747603bbcab656f71728012b61ddf563a3b79e4a8bf2ef237338ed345a6dc4.jpg",
1136
+ "image_caption": [
1137
+ "D: SadTalker",
1138
+ "E: Hallo",
1139
+ "F: EchoMimic"
1140
+ ],
1141
+ "image_footnote": [],
1142
+ "bbox": [
1143
+ 370,
1144
+ 446,
1145
+ 596,
1146
+ 540
1147
+ ],
1148
+ "page_idx": 6
1149
+ },
1150
+ {
1151
+ "type": "image",
1152
+ "img_path": "images/fa6c272af71b5e3da57bf175e8ace76565e61300068e93821530f373236e83f3.jpg",
1153
+ "image_caption": [
1154
+ "G: MegActor-",
1155
+ "H:Ours"
1156
+ ],
1157
+ "image_footnote": [],
1158
+ "bbox": [
1159
+ 612,
1160
+ 446,
1161
+ 841,
1162
+ 540
1163
+ ],
1164
+ "page_idx": 6
1165
+ },
1166
+ {
1167
+ "type": "text",
1168
+ "text": "Additionally, unlike Hallo (Xu et al., 2024) and MegActor- $\\Sigma$ (Yang et al., 2024), our method accurately captures subtle facial actions, including blinks and expression nuances, through the archived-clip, while better preserving identity consistency. Overall, our approach demonstrates the best visual results.",
1169
+ "bbox": [
1170
+ 84,
1171
+ 582,
1172
+ 475,
1173
+ 672
1174
+ ],
1175
+ "page_idx": 6
1176
+ },
1177
+ {
1178
+ "type": "text",
1179
+ "text": "User Study. The quantitative and qualitative comparisons underscore the substantial advantages of our proposed MCDM in generating consistent TalkingFace videos. To further evaluate video quality, we conduct a user study, focusing on identity consistency, motion synchronization, and overall video quality. We randomly selected 10 cases, shuffled the generated videos from each method, and recruited 20 participants (10 male, 10 female) to provide rank-3 preferences. From Figure 4, our method consistently achieved the highest scores across all metrics in the user preference evaluation. This user study highlights the significant advantage of our approach in user-centric TalkingFace generation.",
1180
+ "bbox": [
1181
+ 84,
1182
+ 681,
1183
+ 475,
1184
+ 863
1185
+ ],
1186
+ "page_idx": 6
1187
+ },
1188
+ {
1189
+ "type": "table",
1190
+ "img_path": "images/2aafaf40e045913db2dcbb542f13dbd5086c0afdd4c84464661f17dad3a66cdc.jpg",
1191
+ "table_caption": [],
1192
+ "table_footnote": [],
1193
+ "table_body": "<table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>w/o Fa</td><td>46.25</td><td>708.93</td><td>7.37</td><td>7.05</td><td>0.749</td><td>2.25</td></tr><tr><td>w/o Fpc</td><td>45.63</td><td>684.20</td><td>7.49</td><td>6.97</td><td>0.758</td><td>2.13</td></tr><tr><td>w/o MTA</td><td>44.27</td><td>671.05</td><td>7.62</td><td>6.84</td><td>0.771</td><td>2.04</td></tr><tr><td>Ours</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>",
1194
+ "bbox": [
1195
+ 500,
1196
+ 580,
1197
+ 883,
1198
+ 659
1199
+ ],
1200
+ "page_idx": 6
1201
+ },
1202
+ {
1203
+ "type": "text",
1204
+ "text": "Table 4. Ablation results on the TalkingFace-Wild dataset.",
1205
+ "bbox": [
1206
+ 496,
1207
+ 665,
1208
+ 846,
1209
+ 679
1210
+ ],
1211
+ "page_idx": 6
1212
+ },
1213
+ {
1214
+ "type": "text",
1215
+ "text": "4.3. Ablation Results",
1216
+ "text_level": 1,
1217
+ "bbox": [
1218
+ 496,
1219
+ 693,
1220
+ 645,
1221
+ 707
1222
+ ],
1223
+ "page_idx": 6
1224
+ },
1225
+ {
1226
+ "text": "We conduct an ablation study to assess the impact of each component in our method. Table 4 shows the results: w/o $F_{a}$ omits the historical frame information, w/o $F_{pc}$ replaces the present-clip motion-prior with an audio attention module that injects audio features directly, and w/o MTA applies a standard temporal attention module in place of the memory-efficient temporal attention.",
1227
+ "text": "We conduct an ablation study to assess the impact of each component in our method. Table 4 shows the results: w/o $F_{a}$ omits historical frame information, w/o $F_{pc}$ adds an audio attention module for audio feature input, and w/o MTA applies a standard temporal attention module.",
1228
+ "bbox": [
1229
+ 495,
1230
+ 715,
1231
+ 885,
1232
+ 792
1233
+ ],
1234
+ "page_idx": 6
1235
+ },
1236
+ {
1237
+ "type": "text",
1238
+ "text": "Archived-Clip Motion-Prior. The results in Table 4 show that removing historical frame information (w/o $F_{a}$ ) significantly degrades performance across all metrics, underscoring the importance of the archived-clip motion-prior. To further assess the effect of $F_{a}$ on long-term generation, we visualize frames 30, 300, 1800, 3600, and 7200 with corresponding SSIM scores, as shown in Figure 5.",
1239
+ "bbox": [
1240
+ 495,
1241
+ 799,
1242
+ 887,
1243
+ 905
1244
+ ],
1245
+ "page_idx": 6
1246
+ },
1247
+ {
1248
+ "type": "header",
1249
+ "text": "Motion-Prior Conditional Diffusion Model",
1250
+ "bbox": [
1251
+ 349,
1252
+ 56,
1253
+ 620,
1254
+ 68
1255
+ ],
1256
+ "page_idx": 6
1257
+ },
1258
+ {
1259
+ "type": "page_number",
1260
+ "text": "7",
1261
+ "bbox": [
1262
+ 480,
1263
+ 922,
1264
+ 491,
1265
+ 934
1266
+ ],
1267
+ "page_idx": 6
1268
+ },
1269
+ {
1270
+ "type": "image",
1271
+ "img_path": "images/dec7c2714f614ffa47ba0661f25a3706fca26ac49784ebfe61b835d30a77bac5.jpg",
1272
+ "image_caption": [
1273
+ "(a) Visualization results of different frames"
1274
+ ],
1275
+ "image_footnote": [],
1276
+ "bbox": [
1277
+ 89,
1278
+ 80,
1279
+ 452,
1280
+ 179
1281
+ ],
1282
+ "page_idx": 7
1283
+ },
1284
+ {
1285
+ "type": "image",
1286
+ "img_path": "images/2e9e1f84ea1dbb6dec024d0f488c13822c09382fe4483a9e539c0d2be8ce541a.jpg",
1287
+ "image_caption": [
1288
+ "(b) SSIM scores of different frames"
1289
+ ],
1290
+ "image_footnote": [],
1291
+ "bbox": [
1292
+ 88,
1293
+ 185,
1294
+ 452,
1295
+ 297
1296
+ ],
1297
+ "page_idx": 7
1298
+ },
1299
+ {
1300
+ "type": "image",
1301
+ "img_path": "images/1db265af102eeddc16fb14b4f4ab77b7396aeba4ef12f3c2aafdd1b8efdb12bd.jpg",
1302
+ "image_caption": [
1303
+ "Figure 5. Visualization results and SSIM scores during long-term generation. We find that $\\mathrm{w} / F_{a}$ offers a distinct advantage in maintaining both identity and contextual consistency.",
1304
+ "(a) Heatmap Results of Lip"
1305
+ ],
1306
+ "image_footnote": [],
1307
+ "bbox": [
1308
+ 86,
1309
+ 357,
1310
+ 472,
1311
+ 430
1312
+ ],
1313
+ "page_idx": 7
1314
+ },
1315
+ {
1316
+ "type": "image",
1317
+ "img_path": "images/d5ce0fbeadcf0e1cac1bd7c7eb05a7d184d0501bfb61afad590d1f7c0e22c27a.jpg",
1318
+ "image_caption": [
1319
+ "(b) Clusters Results of Expression",
1320
+ "Figure 6. Lip heatmap and expression cluster. We find that w/ $F_{p}$ effectively tracks the lip region and conveys expressions."
1321
+ ],
1322
+ "image_footnote": [],
1323
+ "bbox": [
1324
+ 125,
1325
+ 441,
1326
+ 434,
1327
+ 500
1328
+ ],
1329
+ "page_idx": 7
1330
+ },
1331
+ {
1332
+ "type": "text",
1333
+ "text": "Figure 5(a) indicates that without the archived-clip (w/o $F_{a}$ ), identity consistency worsens as frames progress, resulting in visible artifacts and inconsistencies in head, mouth, and expression. In Figure 5(b), the SSIM scores highlight that error accumulation increases with frame count, showing a rapid decline for w/o $F_{a}$ , while w/ $F_{a}$ remains stable at a higher value. These findings validate the effectiveness of the archived-clip motion-prior in preserving both identity and temporal coherence over extended sequences.",
1334
+ "bbox": [
1335
+ 84,
1336
+ 565,
1337
+ 473,
1338
+ 702
1339
+ ],
1340
+ "page_idx": 7
1341
+ },
1342
+ {
1343
+ "type": "text",
1344
+ "text": "Present-Clip Motion-Prior. Similarly, excluding the present-clip motion-prior and injecting audio information directly via audio attention (w/o $F_{pc}$ ) leads to a drop in performance across all metrics. This decline highlights the effectiveness of the present-clip motion-prior in leveraging multimodal causality and temporal interactions to decouple and predict motion states, including expressions, lip movement, and head motion (see Table 4). To further validate this decoupling capability, we visualize heatmaps of the predicted lip tokens, as shown in Figure 6(a), where the present-clip motion-prior accurately localizes and tracks lip motion. For expression decoupling, t-SNE (Van der Maaten & Hinton, 2008) visualization of expression tokens reveals",
1345
+ "bbox": [
1346
+ 84,
1347
+ 709,
1348
+ 475,
1349
+ 905
1350
+ ],
1351
+ "page_idx": 7
1352
+ },
1353
+ {
1354
+ "type": "image",
1355
+ "img_path": "images/fe6ca791fe5bd7219f0d2f69262d5cd59424ab5f5d3be36f376e88f5fc43230b.jpg",
1356
+ "image_caption": [
1357
+ "Figure 7. SSIM results for different $\\alpha$ values. Our method performs comparably well when the $\\alpha$ value is smaller than 0.9."
1358
+ ],
1359
+ "image_footnote": [],
1360
+ "bbox": [
1361
+ 516,
1362
+ 80,
1363
+ 864,
1364
+ 202
1365
+ ],
1366
+ "page_idx": 7
1367
+ },
1368
+ {
1369
+ "type": "table",
1370
+ "img_path": "images/029cfd090e23375df150e67ebbb5c990dc2bed1dc7000ba56dee221411fb726e.jpg",
1371
+ "table_caption": [],
1372
+ "table_footnote": [],
1373
+ "table_body": "<table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>B1</td><td>42.49</td><td>668.24</td><td>7.69</td><td>6.78</td><td>0.771</td><td>2.02</td></tr><tr><td>B2</td><td>47.12</td><td>721.17</td><td>7.30</td><td>6.84</td><td>0.732</td><td>2.29</td></tr><tr><td>Ours</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>",
1374
+ "bbox": [
1375
+ 516,
1376
+ 237,
1377
+ 864,
1378
+ 304
1379
+ ],
1380
+ "page_idx": 7
1381
+ },
1382
+ {
1383
+ "type": "text",
1384
+ "text": "Table 5. More results of variant MCDM.",
1385
+ "bbox": [
1386
+ 566,
1387
+ 304,
1388
+ 815,
1389
+ 316
1390
+ ],
1391
+ "page_idx": 7
1392
+ },
1393
+ {
1394
+ "type": "text",
1395
+ "text": "tighter clustering within each of the eight distinct emotion categories when using the present-clip motion-prior, indicating improved separation of emotional content from audio input.",
1396
+ "bbox": [
1397
+ 496,
1398
+ 323,
1399
+ 887,
1400
+ 383
1401
+ ],
1402
+ "page_idx": 7
1403
+ },
1404
+ {
1405
+ "type": "text",
1406
+ "text": "Memory-Efficient Temporal Attention. Following the standard approach (Hu, 2024), we replace the proposed memory-efficient temporal attention with conventional temporal attention by directly summing $F_{ac}^{ref}$ and $F_{pc}^{ref}$ . As shown in Table 4, this modification significantly degrades performance across all metrics. This drop in quality is primarily due to the absence of an update mechanism, which introduces gaps between the archived clip and the present clip, compromising video smoothness. Next, we analyze the effect of different $\\alpha$ values in Eq. 5, which control the update rate, on the model's SSIM performance, as shown in Figure 7. We observe that as $\\alpha$ increases, SSIM gradually declines. When $\\alpha$ is below 0.9, our approach significantly outperforms the w/o MTA configuration. However, at $\\alpha = 0.9$ , the performance is weaker than w/o MTA, due to the excessive accumulation of historical frame information and a reduced proportion of the present clip. Consequently, we set $\\alpha = 0.1$ as the default value in this paper.",
1407
+ "bbox": [
1408
+ 495,
1409
+ 391,
1410
+ 888,
1411
+ 664
1412
+ ],
1413
+ "page_idx": 7
1414
+ },
1415
+ {
1416
+ "type": "text",
1417
+ "text": "More Results. Table 5 evaluates different design variants. In B1, a Q-Former (Li et al., 2023) replaces the frame-aligned attention, while in B2, a Reference UNet (Hu, 2024) replaces the frozen VAE, omitting archived-clip information. Results show that frame-aligned attention outperforms the Q-Former by effectively capturing temporal context and integrating long-term dependencies. Additionally, using a frozen VAE with a trainable patchify layer proves to be an efficient alternative to the conventional Reference UNet.",
1418
+ "bbox": [
1419
+ 495,
1420
+ 671,
1421
+ 888,
1422
+ 806
1423
+ ],
1424
+ "page_idx": 7
1425
+ },
1426
+ {
1427
+ "type": "text",
1428
+ "text": "5. Conclusion",
1429
+ "text_level": 1,
1430
+ "bbox": [
1431
+ 496,
1432
+ 819,
1433
+ 616,
1434
+ 834
1435
+ ],
1436
+ "page_idx": 7
1437
+ },
1438
+ {
1439
+ "type": "text",
1440
+ "text": "We presented the Motion-priors Conditional Diffusion Model (MCDM) to address the challenges of long-term TalkingFace generation by achieving robust identity consistency and motion continuity. MCDM integrates three",
1441
+ "bbox": [
1442
+ 495,
1443
+ 844,
1444
+ 887,
1445
+ 906
1446
+ ],
1447
+ "page_idx": 7
1448
+ },
1449
+ {
1450
+ "type": "header",
1451
+ "text": "Motion-Prior Conditional Diffusion Model",
1452
+ "bbox": [
1453
+ 349,
1454
+ 56,
1455
+ 620,
1456
+ 69
1457
+ ],
1458
+ "page_idx": 7
1459
+ },
1460
+ {
1461
+ "type": "page_number",
1462
+ "text": "8",
1463
+ "bbox": [
1464
+ 480,
1465
+ 922,
1466
+ 491,
1467
+ 934
1468
+ ],
1469
+ "page_idx": 7
1470
+ },
1471
+ {
1472
+ "type": "text",
1473
+ "text": "key innovations: an archived-clip motion-prior to enhance identity representation, a present-clip motion-prior diffusion model for accurate motion prediction, and a memory-efficient temporal attention to mitigate error accumulation over extended sequences. Additionally, we introduced the TalkingFace-Wild dataset, offering over 200 hours of multilingual video data across diverse scenarios. Experimental results demonstrate the effectiveness of MCDM, setting new benchmarks in long-term TalkingFace generation.",
1474
+ "bbox": [
1475
+ 84,
1476
+ 85,
1477
+ 475,
1478
+ 220
1479
+ ],
1480
+ "page_idx": 8
1481
+ },
1482
+ {
1483
+ "type": "text",
1484
+ "text": "Impact Statement",
1485
+ "text_level": 1,
1486
+ "bbox": [
1487
+ 86,
1488
+ 239,
1489
+ 240,
1490
+ 257
1491
+ ],
1492
+ "page_idx": 8
1493
+ },
1494
+ {
1495
+ "type": "text",
1496
+ "text": "This paper presents the MCDM model, designed to enhance identity and temporal consistency in long-term TalkingFace generation. While MCDM contributes to the advancement of generative modeling, we recognize the potential ethical concerns, including the risks of misuse for creating deceptive content or spreading misinformation. We emphasize the importance of transparency in AI development and support the integration of detection frameworks to mitigate these risks. In alignment with ongoing efforts in responsible AI, we aim to ensure that the benefits of our work are balanced with its ethical implications, promoting safe and constructive applications in society.",
1497
+ "bbox": [
1498
+ 84,
1499
+ 266,
1500
+ 475,
1501
+ 448
1502
+ ],
1503
+ "page_idx": 8
1504
+ },
1505
+ {
1506
+ "type": "text",
1507
+ "text": "References",
1508
+ "text_level": 1,
1509
+ "bbox": [
1510
+ 86,
1511
+ 465,
1512
+ 181,
1513
+ 482
1514
+ ],
1515
+ "page_idx": 8
1516
+ },
1517
+ {
1518
+ "type": "list",
1519
+ "sub_type": "ref_text",
1520
+ "list_items": [
1521
+ "Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449-12460, 2020.",
1522
+ "Chan, E. R., Lin, C. Z., Chan, M. A., Nagano, K., Pan, B., De Mello, S., Gallo, O., Guibas, L. J., Tremblay, J., Khamis, S., et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16123-16133, 2022.",
1523
+ "Chen, Z., Cao, J., Chen, Z., Li, Y., and Ma, C. Echomimic: Lifelike audio-driven portrait animations through editable landmark conditions. arXiv preprint arXiv:2407.08136, 2024.",
1524
+ "Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.",
1525
+ "Chung, J. S. and Zisserman, A. Out of time: automated lip sync in the wild. In Computer Vision-ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 13, pp. 251-263. Springer, 2017.",
1526
+ "Deng, Y., Yang, J., Xu, S., Chen, D., Jia, Y., and Tong, X."
1527
+ ],
1528
+ "bbox": [
1529
+ 86,
1530
+ 489,
1531
+ 475,
1532
+ 906
1533
+ ],
1534
+ "page_idx": 8
1535
+ },
1536
+ {
1537
+ "type": "list",
1538
+ "sub_type": "ref_text",
1539
+ "list_items": [
1540
+ "Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 0-0, 2019.",
1541
+ "Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.",
1542
+ "Guo, J., Deng, J., Lattas, A., and Zafeiriou, S. Sample and computation redistribution for efficient face detection. arXiv preprint arXiv:2105.04714, 2021.",
1543
+ "Guo, J., Zhang, D., Liu, X., Zhong, Z., Zhang, Y., Wan, P., and Zhang, D. Liveportrait: Efficient portrait animation with stitching and retargeting control. arXiv preprint arXiv:2407.03168, 2024.",
1544
+ "Guo, Y., Yang, C., Rao, A., Liang, Z., Wang, Y., Qiao, Y., Agrawala, M., Lin, D., and Dai, B. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023.",
1545
+ "Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.",
1546
+ "Hong, F.-T., Zhang, L., Shen, L., and Xu, D. Depth-aware generative adversarial network for talking head video generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3397-3406, 2022a.",
1547
+ "Hong, Y., Peng, B., Xiao, H., Liu, L., and Zhang, J. Head-nerf: A real-time nerf-based parametric head model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20374-20384, 2022b.",
1548
+ "Hu, L. Animate anyone: Consistent and controllable imaged-to-video synthesis for character animation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8153-8163, 2024.",
1549
+ "Ji, X., Zhou, H., Wang, K., Wu, W., Loy, C. C., Cao, X., and Xu, F. Audio-driven emotional video portraits. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14080-14089, 2021.",
1550
+ "Jiang, J., Liang, C., Yang, J., Lin, G., Zhong, T., and Zheng, Y. Loopy: Taming audio-driven portrait avatar with long-term motion dependency. arXiv preprint arXiv:2409.02634, 2024.",
1551
+ "Kim, H., Garrido, P., Tewari, A., Xu, W., Thies, J., Niessner, M., Pérez, P., Richardt, C., Zollhöfer, M., and Theobalt, C. Deep video portraits. ACM transactions on graphics (TOG), 37(4):1-14, 2018."
1552
+ ],
1553
+ "bbox": [
1554
+ 500,
1555
+ 85,
1556
+ 887,
1557
+ 906
1558
+ ],
1559
+ "page_idx": 8
1560
+ },
1561
+ {
1562
+ "type": "header",
1563
+ "text": "Motion-Prior Conditional Diffusion Model",
1564
+ "bbox": [
1565
+ 349,
1566
+ 56,
1567
+ 620,
1568
+ 69
1569
+ ],
1570
+ "page_idx": 8
1571
+ },
1572
+ {
1573
+ "type": "page_number",
1574
+ "text": "9",
1575
+ "bbox": [
1576
+ 480,
1577
+ 922,
1578
+ 491,
1579
+ 934
1580
+ ],
1581
+ "page_idx": 8
1582
+ },
1583
+ {
1584
+ "type": "list",
1585
+ "sub_type": "ref_text",
1586
+ "list_items": [
1587
+ "Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.",
1588
+ "Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pp. 19730-19742. PMLR, 2023.",
1589
+ "Liang, B., Pan, Y., Guo, Z., Zhou, H., Hong, Z., Han, X., Han, J., Liu, J., Ding, E., and Wang, J. Expressive talking head generation with granular audio-visual control. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3387-3396, 2022.",
1590
+ "Ma, Y., Liu, H., Wang, H., Pan, H., He, Y., Yuan, J., Zeng, A., Cai, C., Shum, H.-Y., Liu, W., et al. Follow-your-emoji: Fine-controllable and expressive freestyle portrait animation. arXiv preprint arXiv:2406.01900, 2024.",
1591
+ "Mirza, M. and Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.",
1592
+ "Peebles, W. and Xie, S. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.",
1593
+ "Peng, Z., Hu, W., Shi, Y., Zhu, X., Zhang, X., Zhao, H., He, J., Liu, H., and Fan, Z. Synctalk: The devil is in the synchronization for talking head synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 666-676, 2024.",
1594
+ "Perez, E., Strub, F., De Vries, H., Dumoulin, V., and Courville, A. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.",
1595
+ "Prajwal, K., Mukhopadhyay, R., Namboodiri, V. P., and Jawahar, C. A lip sync expert is all you need for speech to lip generation in the wild. In Proceedings of the 28th ACM international conference on multimedia, pp. 484-492, 2020.",
1596
+ "Pumarola, A., Agudo, A., Martinez, A. M., Sanfeliu, A., and Moreno-Noguer, F. Ganimation: Anatomically-aware facial animation from a single image. In Proceedings of the European conference on computer vision (ECCV), pp. 818-833, 2018.",
1597
+ "Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021."
1598
+ ],
1599
+ "bbox": [
1600
+ 86,
1601
+ 84,
1602
+ 478,
1603
+ 905
1604
+ ],
1605
+ "page_idx": 9
1606
+ },
1607
+ {
1608
+ "type": "list",
1609
+ "sub_type": "ref_text",
1610
+ "list_items": [
1611
+ "Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022.",
1612
+ "Stypulkowski, M., Vougioukas, K., He, S., Zikeba, M., Petridis, S., and Pantic, M. Diffused heads: Diffusion models beat gans on talking-face generation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 5091-5100, 2024.",
1613
+ "Su, S., Yan, Q., Zhu, Y., Zhang, C., Ge, X., Sun, J., and Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3667-3676, 2020.",
1614
+ "Sun, X., Zhang, L., Zhu, H., Zhang, P., Zhang, B., Ji, X., Zhou, K., Gao, D., Bo, L., and Cao, X. Vividtalk: One-shot audio-driven talking head generation based on 3d hybrid prior. arXiv preprint arXiv:2312.01841, 2023.",
1615
+ "Tan, S., Ji, B., and Pan, Y. Emmn: Emotional motion memory network for audio-driven emotional talking face generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22146-22156, 2023.",
1616
+ "Tan, S., Ji, B., and Pan, Y. Flowvqtalker: High-quality emotional talking face generation through normalizing flow and quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26317-26327, 2024.",
1617
+ "Tian, L., Wang, Q., Zhang, B., and Bo, L. Emo: Emote portrait alive-generating expressive portrait videos with audio2video diffusion model under weak conditions. arXiv preprint arXiv:2402.17485, 2024.",
1618
+ "Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., and Gelly, S. Fvd: A new metric for video generation. 2019.",
1619
+ "Van der Maaten, L. and Hinton, G. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.",
1620
+ "Vougioukas, K., Petridis, S., and Pantic, M. Realistic speech-driven facial animation with gans. International Journal of Computer Vision, 128(5):1398-1413, 2020.",
1621
+ "Wang, C., Tian, K., Zhang, J., Guan, Y., Luo, F., Shen, F., Jiang, Z., Gu, Q., Han, X., and Yang, W. V-express: Conditional dropout for progressive training of portrait video generation. arXiv preprint arXiv:2406.02511, 2024."
1622
+ ],
1623
+ "bbox": [
1624
+ 500,
1625
+ 84,
1626
+ 887,
1627
+ 905
1628
+ ],
1629
+ "page_idx": 9
1630
+ },
1631
+ {
1632
+ "type": "header",
1633
+ "text": "Motion-Prior Conditional Diffusion Model",
1634
+ "bbox": [
1635
+ 349,
1636
+ 56,
1637
+ 622,
1638
+ 69
1639
+ ],
1640
+ "page_idx": 9
1641
+ },
1642
+ {
1643
+ "type": "page_number",
1644
+ "text": "10",
1645
+ "bbox": [
1646
+ 477,
1647
+ 922,
1648
+ 495,
1649
+ 934
1650
+ ],
1651
+ "page_idx": 9
1652
+ },
1653
+ {
1654
+ "type": "list",
1655
+ "sub_type": "ref_text",
1656
+ "list_items": [
1657
+ "Wang, S., Li, L., Ding, Y., Fan, C., and Yu, X. Audio2head: Audio-driven one-shot talking-head generation with natural head motion. arXiv preprint arXiv:2107.09293, 2021a.",
1658
+ "Wang, T.-C., Mallya, A., and Liu, M.-Y. One-shot free-view neural talking-head synthesis for video conferencing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10039-10049, 2021b.",
1659
+ "Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004.",
1660
+ "Wei, H., Yang, Z., and Wang, Z. Aniportrait: Audio-driven synthesis of photorealistic portrait animation. arXiv preprint arXiv:2403.17694, 2024.",
1661
+ "Xu, M., Li, H., Su, Q., Shang, H., Zhang, L., Liu, C., Wang, J., Van Gool, L., Yao, Y., and Zhu, S. Hallo: Hierarchical audio-driven visual synthesis for portrait image animation. arXiv preprint arXiv:2406.08801, 2024.",
1662
+ "Yang, S., Li, H., Wu, J., Jing, M., Li, L., Ji, R., Liang, J., Fan, H., and Wang, J. Megactor-sigma: Unlocking flexible mixed-modal control in portrait animation with diffusion transformer. arXiv preprint arXiv:2408.14975, 2024.",
1663
+ "Yang, Z., Zeng, A., Yuan, C., and Li, Y. Effective whole-body pose estimation with two-stages distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4210-4220, 2023.",
1664
+ "Ye, Z., Jiang, Z., Ren, Y., Liu, J., He, J., and Zhao, Z. Geneface: Generalized and high-fidelity audio-driven 3d talking face synthesis. arXiv preprint arXiv:2301.13430, 2023.",
1665
+ "Ye, Z., Zhong, T., Ren, Y., Yang, J., Li, W., Huang, J., Jiang, Z., He, J., Huang, R., Liu, J., et al. Real3d-portrait: One-shot realistic 3d talking portrait synthesis. arXiv preprint arXiv:2401.08503, 2024.",
1666
+ "Zhang, W., Cun, X., Wang, X., Zhang, Y., Shen, X., Guo, Y., Shan, Y., and Wang, F. Sadtalker: Learning realistic 3d motion coefficients for stylized audio-driven single image talking face animation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8652-8661, 2023.",
1667
+ "Zhang, Z., Li, L., Ding, Y., and Fan, C. Flow-guided one-shot talking face generation with a high-resolution audiovisual dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3661-3670, 2021."
1668
+ ],
1669
+ "bbox": [
1670
+ 86,
1671
+ 84,
1672
+ 475,
1673
+ 904
1674
+ ],
1675
+ "page_idx": 10
1676
+ },
1677
+ {
1678
+ "type": "list",
1679
+ "sub_type": "ref_text",
1680
+ "list_items": [
1681
+ "Zheng, L., Zhang, Y., Guo, H. A., Pan, J., Tan, Z., Lu, J., Tang, C., An, B., and YAN, S. MEMO: Memory-guided and emotion-aware talking video generation, 2024. URL https://openreview.net/forum? id=CpgWRFqxhD.",
1682
+ "Zhou, H., Sun, Y., Wu, W., Loy, C. C., Wang, X., and Liu, Z. Pose-controllable talking face generation by implicitly modularized audio-visual representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4176-4186, 2021.",
1683
+ "Zhou, Y., Han, X., Shechtman, E., Echevarria, J., Kalogerakis, E., and Li, D. Makelttalk: speaker-aware talking-head animation. ACM Transactions On Graphics (TOG), 39(6):1-15, 2020.",
1684
+ "Zhu, H., Wu, W., Zhu, W., Jiang, L., Tang, S., Zhang, L., Liu, Z., and Loy, C. C. Celebv-hq: A large-scale video facial attributes dataset. In European conference on computer vision, pp. 650-667. Springer, 2022."
1685
+ ],
1686
+ "bbox": [
1687
+ 498,
1688
+ 84,
1689
+ 887,
1690
+ 387
1691
+ ],
1692
+ "page_idx": 10
1693
+ },
1694
+ {
1695
+ "type": "header",
1696
+ "text": "Motion-Prior Conditional Diffusion Model",
1697
+ "bbox": [
1698
+ 349,
1699
+ 56,
1700
+ 620,
1701
+ 69
1702
+ ],
1703
+ "page_idx": 10
1704
+ },
1705
+ {
1706
+ "type": "page_number",
1707
+ "text": "11",
1708
+ "bbox": [
1709
+ 477,
1710
+ 922,
1711
+ 493,
1712
+ 934
1713
+ ],
1714
+ "page_idx": 10
1715
+ }
1716
+ ]
2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09533/bd7cfa3b-8bf2-46c9-a84a-a33471eb7c4c_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:445f45cd83a242ca6424a540e213c32bfcfc5493c76439761cf3260f66243ea1
3
+ size 3460424
2502.09xxx/2502.09533/full.md ADDED
@@ -0,0 +1,335 @@
1
+ # Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model
2
+
3
+ Fei Shen 1 Cong Wang 2 Junyao Gao 3 Qin Guo 4 Jisheng Dang 5 Jinhui Tang 1 Tat-Seng Chua 6
4
+
5
+ # Abstract
6
+
7
+ Recent advances in conditional diffusion models have shown promise for generating realistic TalkingFace videos, yet challenges persist in achieving consistent head movement, synchronized facial expressions, and accurate lip synchronization over extended generations. To address these, we introduce the Motion-priors Conditional Diffusion Model (MCDM), which utilizes both archived and current clip motion priors to enhance motion prediction and ensure temporal consistency. The model consists of three key elements: (1) an archived-clip motion-prior that incorporates historical frames and a reference frame to preserve identity and context; (2) a present-clip motion-prior diffusion model that captures multimodal causality for accurate predictions of head movements, lip sync, and expressions; and (3) a memory-efficient temporal attention mechanism that mitigates error accumulation by dynamically storing and updating motion features. We also release the TalkingFace-Wild dataset, a multilingual collection of over 200 hours of footage across 10 languages. Experimental results demonstrate the effectiveness of MCDM in maintaining identity and motion continuity for long-term TalkingFace generation. Code, models, and datasets will be publicly available.
8
+
9
+ # 1. Introduction
10
+
11
+ TalkingFace generation (Tan et al., 2024; Peng et al., 2024; Zhou et al., 2021; Ye et al., 2024; Ji et al., 2021; Tan et al., 2023; Kim et al., 2018; Liang et al., 2022; Ye et al., 2023; Pumarola et al., 2018; Vougioukas et al., 2020) aims to create realistic and expressive videos from a reference face
12
+
13
+ $^{1}$ Nanjing University of Science and Technology $^{2}$ Nanjing University $^{3}$ Tongji University $^{4}$ Peking University $^{5}$ Sun Yat-sen University $^{6}$ National University of Singapore. Correspondence to: Jinhui Tang, Contact <feishen@njust.edu.cn>.
14
+
15
+ and audio, with applications in virtual avatars, gaming, and filmmaking. However, the complexity of facial movements, including head, lip, and expression motions, presents challenges, along with the need to maintain identity consistency across extended sequences.
16
+
17
+ Early methods (Vougioukas et al., 2020; Wang et al., 2021b; Hong et al., 2022a; Chan et al., 2022; Guo et al., 2024) use GANs (Goodfellow et al., 2014; Mirza & Osindero, 2014) to synthesize facial motions onto a reference image through a two-step process: decoupling motion features from audio and mapping them onto intermediate representations like facial landmarks (Yang et al., 2023), 3DMM (Sun et al., 2023), or HeadNeRF (Hong et al., 2022b). Despite their promise, GAN-based methods suffer from training instability and inaccuracies in motion extraction, often leading to artifacts like blurriness and flickering that compromise video realism. Recent diffusion models (Wei et al., 2024; Stypułkowski et al., 2024; Tian et al., 2024; Guo et al., 2024; Zheng et al., 2024; Jiang et al., 2024) have improved TalkingFace generation by enhancing video realism through multi-step denoising that preserves conditional input information. These methods typically use a Reference UNet (Hu, 2024) to encode identity features and integrate audio via cross-attention. However, reliance on static audio features and weak correlations between audio and motion complicate the decoupling of identity and motion cues, often resulting in artifacts like motion distortion and flickering, especially in long-term generation.
18
+
19
+ While some methods (Wang et al., 2024; Ma et al., 2024; Yang et al., 2024) improve long-term stability by introducing motion constraints like facial landmarks and emotion tags, these constraints often overly bind poses to the reference image, limiting expression diversity. Models trained with driven landmark fail to learn natural audio-driven motion patterns, reducing audio-visual synergy. Additionally, static emotion tags cannot capture dynamic shifts, leading to rigid, inauthentic animations over extended sequences. Besides, some approaches (Xu et al., 2024; Chen et al., 2024) inject brief motion reference frames, usually fewer than five over 0.2 seconds, which is insufficient to establish coherent motion, resulting in random, less dynamic movements.
20
+
21
+ In this paper, we propose the Motion-priors Conditional
22
+
23
+ Diffusion Model (MCDM) to address the challenges in achieving long-term consistency in TalkingFace generation. The MCDM comprises three key modules: the archived-clip motion-prior, the present-clip motion-prior diffusion model, and a memory-efficient temporal attention mechanism. Unlike conventional reference UNet-based identity learning, the archived-clip motion-prior introduces historical frames along with a reference frame via frame-aligned attention, enhancing identity representation and creating a cohesive facial context over extended sequences. Then, the present-clip motion-prior diffusion model leverages multimodal causality and temporal interactions to effectively decouple and predict motion states, including head, lip, and expression movements, ensuring a clear separation between identity and motion features and promoting temporal consistency across frames. To support long-term stability, we devise a memory-efficient temporal attention that dynamically stores and updates historical motion features, integrating them with current motion cues via a memory update mechanism. This structure reduces error accumulation often observed in diffusion-based long-term TalkingFace generation, enabling more stable and consistent outputs. Additionally, we present the TalkingFace-Wild dataset, a high-quality, multilingual video dataset with over 200 hours of footage in 10 languages, offering a valuable resource for further research in TalkingFace generation. Our main contributions are summarized as follows:
24
+
25
+ - We propose MCDM to enhance robust identity consistency and support temporal consistency in long-term TalkingFace generation.
26
+ - We develop the archived-clip motion-prior module to enhance identity representation and construct a comprehensive facial context from historical frames.
27
+ - We devise the present-clip motion-prior diffusion model to decouple current identity and motion features via multimodal causality and temporal interactions.
28
+ - We present a memory-efficient temporal attention to dynamically update and integrate historical motion features with current ones, reducing error accumulation.
29
+ - We release the TalkingFace-Wild dataset, covering 10 languages and over 200 hours of video for advancing TalkingFace research.
30
+
31
+ # 2. Related Work
32
+
33
+ GAN-Based Methods. GAN-based approaches (Kim et al., 2018; Zhou et al., 2020; Pumarola et al., 2018; Vougioukas et al., 2020; Zhang et al., 2023; Wang et al., 2021b; Hong et al., 2022a; Chan et al., 2022; Guo et al., 2024) for
34
+
35
+ TalkingFace generation extract motion features from audio or visual inputs and map them to intermediate representations such as facial landmarks (Yang et al., 2023), 3DMM (Sun et al., 2023), or HeadNeRF (Hong et al., 2022b). MakeItTalk (Zhou et al., 2020) employs LSTMs to predict landmarks from audio, followed by a warp-based GAN for video synthesis. GANimation (Pumarola et al., 2018) models facial motion via continuous manifolds, enhancing expression dynamics. SadTalker (Zhang et al., 2023) integrates ExpNet and PoseVAE to refine motion representations within the FaceVid2Vid (Wang et al., 2021b) framework. DaGAN (Hong et al., 2022a) introduces self-supervised geometric learning to capture dense 3D motion fields. While effective, GAN-based methods suffer from adversarial training instability and motion inaccuracies, often resulting in artifacts that degrade realism.
36
+
37
+ Diffusion-Based Methods. Diffusion models (Rombach et al., 2022) have gained traction in TalkingFace generation, producing high-quality, diverse outputs. AniPortrait (Wei et al., 2024) maps audio to 3D facial structures, generating temporally coherent videos with expressive detail. MegActor- $\Sigma$ (Yang et al., 2024) synchronizes lip movements, expressions, and head poses using a reference UNet (Hu, 2024) and facial loss functions to enhance fidelity. Hallo (Xu et al., 2024) and EchoMimic (Chen et al., 2024) leverage limited motion reference frames to improve expression diversity and pose alignment. However, reliance on short-term frame histories (2-4 frames) compromises long-term motion consistency, while increased frame dependencies escalate computational costs. Additionally, static audio features and restricted references fail to capture natural motion variations, leading to artifacts such as motion distortion and rigid expressions in extended sequences.
38
+
39
+ Unlike prior work, our approach introduces motion priors from both archived and present clips to enhance long-term motion prediction and identity consistency. By leveraging historical frames and memory-efficient temporal attention, MCDM improves motion continuity while maintaining realism in TalkingFace generation.
40
+
41
+ # 3. Method
42
+
43
+ Task Definition. Given a reference image, audio, and optional facial landmarks, TalkingFace generation aims to produce temporally coherent and realistic videos. The key challenges include maintaining consistent identity over time, achieving natural head movements, and ensuring expressive and precise lip alignment with audio cues. However, existing methods often encounter limitations such as error accumulation, inconsistent identity preservation, suboptimal audio-lip synchronization, and rigid expressions.
44
+
45
+ ![](images/8a8e05f8d72436e995c0ea01becb7479be5be810d07722def043cf5a0df34fbd.jpg)
46
+ Figure 1. Our MCDM architecture. On the upper, the archived-clip motion-prior leverages frame-aligned attention with archived-clip, enhancing identity coherence over extended sequences. On the right, the present-clip motion-prior diffusion model uses multimodal causality and temporal interactions to decouple and predict motion states, covering head, lip, and expression movements while maintaining a clear separation of identity and motion features.
47
+
48
+ ![](images/f45a544d472e7ccf50b45b6eb40697973bb9a007b0e2f5231b28f15ee4ad1b16.jpg)
49
+
50
+ # 3.1. Overall Framework
51
+
52
+ To address the above challenges, we introduce MCDM, a framework centered on a denoising UNet resembling Stable Diffusion v1.5 (SD v1.5)<sup>1</sup>, tailored to denoise multi-frame noisy latent inputs under conditional guidance. As illustrated in Figure 1, unlike standard UNet architectures, each Transformer block in MCDM incorporates four attention layers. The first layer, a self-attention, mirrors that in SD v1.5. The second and third layers are parallel cross attention (spatial-wise), designed for distinct interactions: the archived-clip reference attention layer, which integrates motion priors from archived clip encoded by the archived-clip motion-prior module (Section 3.2), and the present-clip reference attention, which engages with present clip priors from the present-clip motion-prior diffusion model (Section 3.3). The fourth layer, the memory-efficient temporal attention (Section 3.4), is a temporal-wise self attention that dynamically updates and merges archived motion features with current motion features, effectively mitigating error accumulation.
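+
+ To make the block structure concrete, the following is a minimal sketch of how the four attention layers could be wired inside one MCDM Transformer block. It is an illustration under simplifying assumptions (normalization, feed-forward layers, and the parallel arrangement of the two cross attentions are omitted; module names are invented), not the released implementation.
+
+ ```python
+ # Sketch of one MCDM Transformer block with its four attention layers:
+ # 1) spatial self-attention, 2) archived-clip reference attention,
+ # 3) present-clip reference attention, 4) temporal attention.
+ # The two cross attentions are applied sequentially with residuals here for
+ # simplicity; the text describes them as parallel spatial-wise branches.
+ import torch
+ import torch.nn as nn
+
+ class MCDMBlockSketch(nn.Module):
+     def __init__(self, dim: int, heads: int = 8):
+         super().__init__()
+         self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.archived_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.present_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+     def forward(self, z, f_ac, f_pc):
+         # z: noisy latent tokens of the current clip, (batch*frames, tokens, dim)
+         # f_ac / f_pc: archived-/present-clip motion-prior tokens, same layout
+         z = z + self.self_attn(z, z, z)[0]
+         z = z + self.archived_attn(z, f_ac, f_ac)[0]
+         z = z + self.present_attn(z, f_pc, f_pc)[0]
+         # A plain temporal self-attention stands in for the memory-efficient
+         # temporal attention of Section 3.4 in this sketch.
+         z = z + self.temporal_attn(z, z, z)[0]
+         return z
+ ```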
53
+
54
+ # 3.2. Archived-Clip Motion-Prior Module
55
+
56
+ Motivation. Existing methods typically use the past $2 - 4$ frames to guide the denoising network for generating temporally consistent videos. However, this limited frame history is insufficient for maintaining long-term consistency, and incorporating more frames exponentially increases computational demand, making it impractical for real-world applications. To overcome these limitations, we propose an archived-clip motion prior that integrates long-term historical frames and a reference frame into the denoising UNet
57
+
58
+ via conditional frame-aligned attention, enhancing identity representation and establishing motion context.
59
+
60
+ Architecture. As illustrated in Figure 1, the archived-clip motion-prior consists of two frozen VAE encoders, two learnable patchify layers, and a frame-aligned attention mechanism. The inputs are a reference frame $X_{\mathrm{ref}} \in \mathbb{R}^{b \times 1 \times c \times h \times w}$ and an archived clip $X_{\mathrm{arch}} \in \mathbb{R}^{b \times a \times c \times h \times w}$ , where $b$ , $c$ , $h$ , $w$ , and $a$ represent the batch size, channels, height, width, and the number of archived frames, respectively. First, the frozen VAE encoder extracts latent features from both the reference and archived frames, resulting in $f_x \in \mathbb{R}^{b \times 1 \times 4 \times \frac{h}{8} \times \frac{w}{8}}$ and $f_a \in \mathbb{R}^{b \times a \times 4 \times \frac{h}{8} \times \frac{w}{8}}$ , respectively. Next, the learnable patchify layers, consisting of 2D convolutions followed by flattening operations, transform these latent features into tokens, yielding $F_x \in \mathbb{R}^{b \times 1 \times m \times d}$ and $F_a \in \mathbb{R}^{b \times a \times m \times d}$ , where $m$ and $d$ denote the token length and embedding dimension.
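+
+ As a rough illustration of the patchify step, the sketch below tokenizes the VAE latents with a strided 2D convolution followed by flattening. The kernel size and embedding dimension are illustrative assumptions, not the paper's configuration.
+
+ ```python
+ # Hypothetical patchify layer: a strided 2D convolution over the VAE latent,
+ # flattened into m tokens of dimension d per frame.
+ import torch
+ import torch.nn as nn
+
+ class Patchify(nn.Module):
+     def __init__(self, in_ch: int = 4, d: int = 768, patch: int = 2):
+         super().__init__()
+         self.proj = nn.Conv2d(in_ch, d, kernel_size=patch, stride=patch)
+
+     def forward(self, latents):
+         # latents: (b, frames, 4, h/8, w/8) from the frozen VAE encoder
+         b, f, c, h, w = latents.shape
+         x = self.proj(latents.flatten(0, 1))         # (b*f, d, h', w')
+         tokens = x.flatten(2).transpose(1, 2)        # (b*f, m, d), m = h' * w'
+         return tokens.reshape(b, f, -1, x.shape[1])  # (b, f, m, d)
+ ```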
61
+
62
+ In the frame-aligned attention, we adopt a frame-wise computation approach to improve efficiency and adaptability for long temporal sequences. For each archived frame $i \in [1,a]$ , the Key $K_{i}$ is derived from the reference tokens $F_{x}$ , while the Value $V_{i}$ is derived from the tokens of the corresponding archived frame $F_{a}^{i}$ :
63
+
64
+ $$
65
+ K_i = F_x \mathbf{W}_K, \quad V_i = F_{a_i} \mathbf{W}_V, \tag{1}
66
+ $$
67
+
68
+ where $\mathbf{W}_K\in \mathbb{R}^{d\times d}$ and $\mathbf{W}_V\in \mathbb{R}^{d\times d}$ are learnable projection matrices for the Key and Value. The attention for each frame $i$ is then computed as:
69
+
70
+ $$
71
+ \operatorname{Attention}(Q, K_i, V_i) = \operatorname{Softmax}\left(\frac{Q K_i^{\top}}{\sqrt{d}}\right) V_i, \tag{2}
72
+ $$
73
+
74
+ where $Q \in \mathbb{R}^{n \times d}$ represents a set of learnable query tokens, with $n$ denoting the number of queries. Aggregating the outputs
75
+
76
+ across all frames yields the final output $F_{ac}\in \mathbb{R}^{b\times a\times n\times d}$ where each frame's attended tokens reflect both the static reference and dynamic temporal information.
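+
+ A small PyTorch sketch of Eqs. (1)-(2) is given below, assuming the patchified reference tokens $F_x$ and archived-clip tokens $F_a$ are already available; it illustrates the computation only and is not the authors' code.
+
+ ```python
+ # Frame-aligned attention: per archived frame i, Keys come from the reference
+ # tokens F_x, Values from that frame's tokens F_a[:, i], attended by n
+ # learnable query tokens (Eqs. 1-2).
+ import math
+ import torch
+ import torch.nn as nn
+
+ class FrameAlignedAttention(nn.Module):
+     def __init__(self, d: int, n_queries: int):
+         super().__init__()
+         self.W_K = nn.Linear(d, d, bias=False)                 # reference tokens -> Keys
+         self.W_V = nn.Linear(d, d, bias=False)                 # archived tokens -> Values
+         self.query = nn.Parameter(torch.randn(n_queries, d))   # learnable Q, (n, d)
+
+     def forward(self, F_x, F_a):
+         # F_x: (b, 1, m, d) reference tokens; F_a: (b, a, m, d) archived-clip tokens
+         b, a, m, d = F_a.shape
+         K = self.W_K(F_x)                    # (b, 1, m, d), shared across frames
+         V = self.W_V(F_a)                    # (b, a, m, d)
+         Q = self.query.expand(b, a, -1, -1)  # (b, a, n, d)
+         attn = torch.softmax(Q @ K.transpose(-1, -2) / math.sqrt(d), dim=-1)
+         return attn @ V                      # F_ac: (b, a, n, d)
+ ```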
77
+
78
+ # 3.3. Present-Clip Motion-Prior Diffusion Model
79
+
80
+ Motivation. Motion information is typically driven either by landmark signals from a driving video or directly by audio cues. The landmark-driven approach guides reference image movements but limits the natural diversity of head motions and expressions. In contrast, audio-driven methods rely solely on audio cues, often lacking sufficient guidance for realistic head movement. To address these limitations, we propose the present-clip motion-prior diffusion model, which first predicts motion states, including head, lip, and expression motions, rather than directly generating TalkingFace videos.
81
+
82
+ Architecture. We aim to predict head, lip, and expression movements, conditioned on audio and image tokens. As shown in Figure 1 (right), we begin by extracting feature tokens from the audio encoder, image encoder, head encoder, lip encoder, and express encoder.
83
+
84
+ - Audio Encoder: Audio sequence tokens are extracted from the input audio via a frozen Wav2Vec model (Baevski et al., 2020).
85
+
86
+ - Image Encoder: Image tokens are extracted from the reference frame using a frozen CLIP (Radford et al., 2021) and are replicated along the temporal dimension to align with audio features.
87
+
88
+ - Head Encoder: Head tokens are extracted from reference landmark video through a frozen Landmark Guider $^{2}$ ; notably, these tokens are optional, allowing simulation of conditions with or without reference video guidance.
89
+
90
+ - Lip and Express Encoders: Lip and expression tokens are extracted from the target video using a custom-trained encoder. Details of the lip and express encoders are provided in the supplementary material.
91
+
92
+ We then pass the audio and image tokens through a feature-wise linear modulation (FiLM) layer (Perez et al., 2018) to adaptively learn multimodal correlation tokens. These tokens, along with the timestep $t$ , and noise-added tokens for head, lip, and expression movements, are prepended to the input sequence. This composite input is fed into an $L$ -layer structure consisting of a multimodal causal transformer block (Peebles & Xie, 2023) and a temporal interaction transformer block (Hu, 2024), with added noise in facial motion tokens acting as the supervision. The training loss $L_{\mathrm{prior}}$ for the present-clip motion-prior diffusion model $\epsilon_{\theta}$ is
93
+
94
+ ![](images/141683f3dd6f9e15e88a83a35e740a2f1e22d83bf26d85abb1da87f78d073251.jpg)
95
+ Figure 2. The overview of memory-efficient temporal attention. It can dynamically update and integrate historical motion features with current ones.
96
+
97
+ defined as:
98
+
99
+ $$
100
+ L_{\mathrm{prior}} = \mathbb{E}_{t, F_p, z_t, \epsilon} \left\| \epsilon - \epsilon_{\theta}(z_t, t, F_p) \right\|^2. \tag{3}
101
+ $$
102
+
103
+ Without landmark guidance, $F_{p}$ represents the multimodal interaction tokens from audio and the reference frame, and $z_{t}$ represents the noise-added tokens for head, lip, and expression movements at timestep $t$ . With landmark guidance, $F_{p}$ additionally includes landmark tokens, and $z_{t}$ represents the noise-added lip and expression tokens. This design allows flexible conditioning, incorporating landmark guidance when available, while effectively leveraging multimodal interactions for accurate motion state predictions.
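+
+ The objective in Eq. 3 can be sketched as a standard noise-prediction loss. In the snippet below, `eps_model` stands in for $\epsilon_{\theta}$ and `alphas_cumprod` for a DDPM noise schedule; both are assumptions made for illustration.
+
+ ```python
+ # Minimal sketch of the prior loss in Eq. 3: diffuse the clean motion tokens to
+ # timestep t, predict the injected noise conditioned on F_p, and take the MSE.
+ import torch
+ import torch.nn.functional as F
+
+ def prior_loss(eps_model, z0, F_p, alphas_cumprod):
+     # z0: clean head/lip/expression motion tokens, (b, tokens, d)
+     # F_p: conditioning tokens (audio-image FiLM tokens, optionally landmarks)
+     b = z0.shape[0]
+     t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z0.device)
+     a_bar = alphas_cumprod[t].view(b, 1, 1)
+     eps = torch.randn_like(z0)
+     z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps  # forward diffusion
+     eps_pred = eps_model(z_t, t, F_p)                     # epsilon_theta(z_t, t, F_p)
+     return F.mse_loss(eps_pred, eps)
+ ```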
104
+
105
+ # 3.4. Memory-Efficient Temporal Attention
106
+
107
+ Motivation. For long-term TalkingFace generation, current methods primarily adopt either fully or semi-autoregressive strategies: the former generates one frame per iteration, while the latter produces a fixed-length clip. However, due to GPU memory limitations, relying on a restricted frame history for extrapolation often results in error accumulation, as limited prior motion information undermines consistency over extended sequences. Therefore, we propose a memory-efficient temporal attention to dynamically update and integrate historical motion features with current ones, reducing error accumulation.
108
+
109
+ Architecture. AnimateDiff (Guo et al., 2023) demonstrates that the temporal layer in self-attention ensures smooth temporal continuity and consistency of appearance details across frames. We replace traditional self-attention with fast attention (Choromanski et al., 2020) in the temporal layer to enhance temporal continuity and manage memory efficiently, allowing the accumulation of extensive historical motion information for consistent long-sequence generation. As shown in Figure 2, let $F_{ac}$ and $F_{pc}$ denote the output features of the archived-clip and present-clip motion-prior modules, respectively, and let $Z_{t}$ represent the noisy latent feature at time step $t$ . These features undergo reference attention, yielding refined representations $F_{ac}^{\mathrm{ref}}$ and $F_{pc}^{\mathrm{ref}}$ , which capture spatial-domain motion characteristics. $F_{ac}^{\mathrm{ref}}$ is then input into the motion memory update mechanism, which
110
+
111
+ aggregates motion across frames, producing the updated memory feature $M_{f}$ . The memory update mechanism is defined as follows:
112
+
113
+ (1) Initialization: At the first frame, the memory $M_{1}$ is initialized with $F_{ac}^{\mathrm{ref}}$ since no prior motion information exists:
114
+
115
+ $$
116
+ M_1 = F_{ac}^{\mathrm{ref}}. \tag{4}
117
+ $$
118
+
119
+ (2) Memory Update: For each frame $f$ , the memory $M_{f}$ is updated by combining the current feature $F_{ac}^{\mathrm{ref}}$ with the previous memory $M_{f-1}$ as:
120
+
121
+ $$
122
+ M_f = \alpha M_{f-1} + (1 - \alpha) F_{ac}^{\mathrm{ref}}, \tag{5}
123
+ $$
124
+
125
+ where $\alpha \in [0,1]$ controls the balance between past and current frames. This fixed memory update mechanism avoids storage bottlenecks of historical information. We then concatenate $F_{pc}^{\mathrm{ref}}$ with $M_f$ along the temporal dimension, creating $F_m^{\mathrm{cat}}$ , which integrates past and current motion. $F_m^{\mathrm{cat}}$ is processed through Fast Attention along the temporal axis to capture dependencies across frames, with the lower half of the resulting feature map used as the output $F_m$ .
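+
+ The bookkeeping in Eqs. (4)-(5) and the concatenation feeding the temporal attention can be sketched as follows; `fast_temporal_attention` is a placeholder for the linear-complexity attention used along the frame axis.
+
+ ```python
+ # Memory update (Eqs. 4-5) followed by concatenation with present-clip features
+ # and temporal attention; only the lower half of the result is kept as F_m.
+ import torch
+
+ def update_memory(M_prev, F_ac_ref, alpha: float = 0.1):
+     if M_prev is None:                  # Eq. 4: initialize with the first F_ac_ref
+         return F_ac_ref
+     return alpha * M_prev + (1.0 - alpha) * F_ac_ref   # Eq. 5
+
+ def temporal_mix(M_f, F_pc_ref, fast_temporal_attention):
+     # Concatenate along the temporal (frame) axis, attend, keep the lower half.
+     F_cat = torch.cat([M_f, F_pc_ref], dim=1)   # (b, frames_mem + frames_cur, tokens, d)
+     F_out = fast_temporal_attention(F_cat)
+     return F_out[:, M_f.shape[1]:]              # F_m
+ ```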
126
+
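+ A minimal PyTorch sketch of this update and fusion is given below; standard multi-head attention stands in for the Performer-style fast attention, and the module and argument names are illustrative assumptions rather than the authors' code.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MemoryTemporalAttention(nn.Module):
+     # Sketch of the memory update (Eqs. 4-5) and temporal fusion; shapes are (batch, frames, channels).
+     def __init__(self, dim=256, heads=8, alpha=0.1):
+         super().__init__()
+         self.alpha = alpha
+         self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+     def update_memory(self, memory, f_ac_ref):
+         if memory is None:                       # Eq. (4): first frame, no prior motion
+             return f_ac_ref
+         # Eq. (5): exponential moving average over archived-clip features
+         return self.alpha * memory + (1.0 - self.alpha) * f_ac_ref
+
+     def forward(self, f_pc_ref, memory):
+         # Concatenate past (memory) and present features along the temporal axis
+         f_cat = torch.cat([memory, f_pc_ref], dim=1)
+         fused, _ = self.attn(f_cat, f_cat, f_cat)
+         # Keep the lower (present-clip) half as the output F_m
+         return fused[:, memory.shape[1]:]
+ ```
+
+ With $\alpha = 0.1$ (the default used later in the paper), each new archived-clip feature receives most of the weight, so the memory tracks recent motion while retaining a decaying trace of earlier frames.
+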
127
+ # 3.5. Training and Inference
128
+
129
+ Training. Our training process is divided into three stages, each with specific learning objectives. Each stage is supervised using standard MSE loss (Rombach et al., 2022).
130
+
131
+ Stage 1. The archived-clip motion-prior is trained to enhance identity representation and establish a robust facial motion context across extended sequences. The present-clip reference attention and memory-efficient temporal attention modules remain frozen during this stage.
132
+
133
+ Stage 2. The present-clip motion-prior diffusion model is trained to predict the motion states of facial expressions, lip movements, and head movements. To simulate scenarios without a driving video, we randomly drop the entire landmark clip.
134
+
135
+ Stage 3. The full motion-priors conditional diffusion model is trained to generate stable and consistent long-term TalkingFace videos. Only the present-clip reference attention and memory-efficient temporal attention modules are trained.
136
+
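+ In practice, such a schedule amounts to toggling which submodules receive gradients at each stage. The sketch below illustrates this with hypothetical submodule names; the actual module names in the MCDM implementation may differ.
+
+ ```python
+ def configure_stage(model, stage):
+     # Hypothetical submodule name prefixes; only the listed prefixes are trained in each stage.
+     trainable_prefixes = {
+         1: ["archived_clip_prior"],
+         2: ["present_clip_prior"],
+         3: ["present_clip_reference_attn", "memory_temporal_attn"],
+     }[stage]
+     for name, param in model.named_parameters():
+         param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
+ ```
+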
137
+ Inference. The present-clip motion-prior diffusion model first predicts distinct motion tokens based on the given conditions (with or without landmark guidance); landmarks are not used by default unless specified. Subsequently, MCDM utilizes these motion tokens, alongside a single reference image and the audio input, to generate the video sequence. The initial archived clip is initialized from the reference image, and the motion memory is then progressively updated to ensure temporal consistency.
138
+
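+ The overall inference procedure can be summarized by the loop below; the method names on the `mcdm` object are hypothetical placeholders for the components described above, not the released API.
+
+ ```python
+ def generate_long_video(mcdm, reference_image, audio_clips):
+     # Hypothetical driver loop: the reference image seeds the first archived clip,
+     # and the motion memory is updated after every generated clip.
+     memory = None
+     archived_clip = reference_image
+     frames = []
+     for audio in audio_clips:
+         motion_tokens = mcdm.predict_motion(audio, reference_image)        # present-clip motion-prior
+         clip, memory = mcdm.denoise_clip(motion_tokens, reference_image,
+                                          archived_clip, memory)            # conditional diffusion
+         frames.extend(clip)
+         archived_clip = clip                                               # most recent clip becomes the archive
+     return frames
+ ```
+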
139
+ ![](images/9e0594185aff3adeb831bb135df096d74964443f72489100838eef30b9f0d1e4.jpg)
140
+ Table 1. Statistics of our TalkingFace-Wild dataset. We release a TalkingFace dataset that is well-balanced across 10 languages.
141
+
142
+ # 4. Experiments
143
+
144
+ # 4.1. Experimental Settings
145
+
146
+ Datasets. The HDTF dataset (Zhang et al., 2021) comprises 410 videos with over 10,000 unique speech sentences, varied head poses, and movement patterns. Following prior work (Chen et al., 2024; Tian et al., 2024; Xu et al., 2024), we split HDTF into training and testing sets with a 9:1 ratio. The CelebV-HQ dataset (Zhu et al., 2022) includes 35,666 clips (3–20 seconds each) across 15,653 identities, totaling roughly 65 hours. Both datasets present quality issues, such as audio-lip misalignment, facial occlusions, small facial regions, and low resolution. To mitigate these, we developed a custom data processing pipeline for high-quality TalkingFace data, detailed in the following subsection.
147
+
148
+ Additionally, most methods (Wang et al., 2024; Xu et al., 2024; Jiang et al., 2024) employ proprietary datasets for supplementary training and testing. Similarly, we sourced a variety of TalkingFace videos from YouTube using targeted keyword queries (e.g., "nationality," "interview," "dialogue") across different languages and contexts. As shown in Table 1, the resulting high-quality dataset, TalkingFace-Wild, covers 10 languages and totals over 200 hours after processing through our data pipeline. To assess the generalization capability of models, we also constructed an open-set test collection of 20 diverse portrait images and 20 audio clips.
149
+
150
+ Data Processing. First, we detect scene transitions in raw videos using PySceneDetect<sup>3</sup> and trim each clip to a maximum duration of 30 seconds. Next, we apply face detection (Guo et al., 2021) to exclude videos lacking complete faces or containing multiple faces, using the bounding boxes to extract talking-head regions. Third, an image quality assessment model (Su et al., 2020) filters out low-quality and low-resolution clips. Fourth, SyncNet (Prajwal et al., 2020) assesses audio-lip synchronization, discarding clips with misaligned audio. Finally, we manually inspect a subset to verify audio-lip synchronization and overall video quality, ensuring precise filtering. In addition, to ensure a fair comparison, we report results for models trained independently on each of the aforementioned datasets.
151
+
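+ The sketch below outlines the filtering logic of this pipeline. Every callable and threshold is a hypothetical placeholder for the corresponding tool (scene detection, the face detector, the image-quality model, and SyncNet); it is not a real API of those packages.
+
+ ```python
+ def filter_clips(clips, detect_faces, assess_quality, sync_confidence,
+                  quality_thr=0.5, sync_thr=3.0):
+     # `clips` are assumed to be scene-split and trimmed to at most 30 seconds already.
+     kept = []
+     for clip in clips:
+         faces = detect_faces(clip)
+         if len(faces) != 1:                     # drop clips without exactly one visible face
+             continue
+         if assess_quality(clip) < quality_thr:  # drop low-quality or low-resolution clips
+             continue
+         if sync_confidence(clip) < sync_thr:    # drop clips with misaligned audio and lips
+             continue
+         kept.append(clip)
+     return kept
+ ```
+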
154
+ <table><tr><td rowspan="2">Method</td><td colspan="6">HDTF</td><td colspan="6">CelebV-HQ</td></tr><tr><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>Audio2Head</td><td>76.08</td><td>1417.65</td><td>3.16</td><td>17.62</td><td>0.572</td><td>3.81</td><td>127.30</td><td>1882.64</td><td>1.96</td><td>17.36</td><td>0.391</td><td>8.42</td></tr><tr><td>V-Express</td><td>57.14</td><td>1152.29</td><td>5.05</td><td>11.68</td><td>0.706</td><td>1.83</td><td>98.07</td><td>1465.26</td><td>3.71</td><td>13.41</td><td>0.514</td><td>5.18</td></tr><tr><td>AniPortrait</td><td>54.81</td><td>1072.63</td><td>5.40</td><td>11.39</td><td>0.727</td><td>1.95</td><td>94.25</td><td>1260.74</td><td>3.98</td><td>12.88</td><td>0.536</td><td>4.91</td></tr><tr><td>SadTalker</td><td>52.77</td><td>956.24</td><td>5.73</td><td>10.65</td><td>0.736</td><td>1.87</td><td>88.22</td><td>1055.49</td><td>4.05</td><td>11.20</td><td>0.565</td><td>4.66</td></tr><tr><td>Hallo</td><td>37.29</td><td>616.04</td><td>6.33</td><td>8.64</td><td>0.774</td><td>1.67</td><td>72.46</td><td>907.60</td><td>6.48</td><td>8.61</td><td>0.620</td><td>2.93</td></tr><tr><td>EchoMimic</td><td>31.44</td><td>595.17</td><td>6.96</td><td>8.59</td><td>0.782</td><td>1.64</td><td>71.47</td><td>893.28</td><td>6.70</td><td>8.45</td><td>0.637</td><td>2.81</td></tr><tr><td>MegActor-Σ</td><td>31.37</td><td>586.10</td><td>6.87</td><td>8.55</td><td>0.778</td><td>1.62</td><td>70.82</td><td>875.21</td><td>6.77</td><td>8.32</td><td>0.634</td><td>2.74</td></tr><tr><td>MCDM (Ours)</td><td>26.45</td><td>543.28</td><td>7.49</td><td>8.04</td><td>0.824</td><td>1.51</td><td>67.29</td><td>784.53</td><td>7.25</td><td>7.84</td><td>0.662</td><td>2.31</td></tr></table>
155
+
156
+ Metrics. We utilize a comprehensive set of metrics to assess the quality of generated videos and audio-lip synchronization. Fréchet Inception Distance (FID) (Heusel et al., 2017) evaluates individual frame quality by comparing feature distributions from a pre-trained model. Fréchet Video Distance (FVD) (Unterthiner et al., 2019) quantifies the distributional distance between real and generated videos, providing an overall assessment of video fidelity. Sync-C and Sync-D (Chung & Zisserman, 2017) evaluate lip synchronization from content and dynamic perspectives, with higher Sync-C and lower Sync-D scores indicating superior alignment with audio. Structural Similarity Index (SSIM) (Wang et al., 2004) measures structural consistency between ground truth and generated videos, while E-FID (Deng et al., 2019) provides a refined image fidelity evaluation based on Inception network features.
157
+
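+ FID, FVD, Sync-C/D, and E-FID all rely on pretrained networks, but SSIM can be computed directly from paired frames. A minimal example is shown below, assuming a recent scikit-image release and uint8 RGB frames.
+
+ ```python
+ import numpy as np
+ from skimage.metrics import structural_similarity as ssim
+
+ def video_ssim(real_frames, generated_frames):
+     # Mean per-frame SSIM between paired uint8 RGB frames of shape (H, W, 3).
+     scores = [ssim(r, g, channel_axis=-1) for r, g in zip(real_frames, generated_frames)]
+     return float(np.mean(scores))
+ ```
+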
158
+ Implementations. The experiments are conducted on a computing platform equipped with 8 NVIDIA V100 GPUs. Training is performed in three stages, with each stage consisting of 30,000 iterations and a batch size of 4. Video data is processed at a resolution of $512 \times 512$. The learning rate is fixed at $1 \times 10^{-5}$ across all stages, and the AdamW optimizer is employed to stabilize training. Each training clip comprises 16 video frames. In the archived-clip motion-prior module, we set $\alpha = 16$, $m = 256$, and $n = 16$. In the present-clip motion-prior diffusion model, the number of layers $L$ is set to 8, and the weighting factor $\alpha$ in Eq. 5 is configured to 0.1 to balance the influence of prior motion information. This setup is chosen to optimize long-term identity preservation and enhance motion consistency in the generated TalkingFace videos.
159
+
160
+ # 4.2. Main Results
161
+
162
+ We compare our method with several SOTA methods, including Audio2Head (Wang et al., 2021a), V-Express (Wang et al., 2024), AniPortrait (Wei et al., 2024), SadTalker (Zhang et al., 2023), Hallo (Xu et al., 2024), EchoMimic (Chen et al., 2024), and MegActor-$\Sigma$ (Yang et al., 2024), through quantitative evaluation, qualitative evaluation, and a user study. Unless otherwise specified, none of the methods use landmarks, ensuring a fair comparison.
163
+
164
+ Table 2. Quantitative comparisons on HDTF and CelebV-HQ. MCDM achieves the top results across all metrics, with best in bold.
165
+
166
+ <table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>Audio2Head</td><td>87.21</td><td>1836.25</td><td>2.32</td><td>13.92</td><td>0.613</td><td>3.12</td></tr><tr><td>V-Express</td><td>62.18</td><td>1324.57</td><td>5.45</td><td>9.04</td><td>0.674</td><td>2.81</td></tr><tr><td>AniPortrait</td><td>56.11</td><td>954.91</td><td>6.37</td><td>8.29</td><td>0.706</td><td>2.60</td></tr><tr><td>SadTalker</td><td>52.77</td><td>847.20</td><td>6.94</td><td>7.92</td><td>0.724</td><td>2.49</td></tr><tr><td>Hallo</td><td>51.35</td><td>792.38</td><td>6.85</td><td>7.65</td><td>0.728</td><td>2.35</td></tr><tr><td>EchoMimic</td><td>49.20</td><td>751.44</td><td>7.06</td><td>7.18</td><td>0.737</td><td>2.31</td></tr><tr><td>MegActor-Σ</td><td>48.57</td><td>724.40</td><td>7.22</td><td>7.14</td><td>0.745</td><td>2.29</td></tr><tr><td>MCDM (Ours)</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>
167
+
168
+ Table 3. Quantitative comparisons on TalkingFace-Wild. MCDM achieves a significant advantage over other methods.
169
+
170
+ Quantitative Evaluation. Table 2 presents a quantitative comparison on the HDTF (Zhang et al., 2021) and CelebV-HQ (Zhu et al., 2022) datasets, illustrating the overall superior performance of diffusion-based methods compared to GAN-based methods. Our proposed MCDM achieves the best scores across all metrics, outperforming existing diffusion-based approaches. Specifically, MCDM achieves superior lip-sync accuracy, reflected in higher Sync-C and lower Sync-D scores, outperforming methods such as EchoMimic (Chen et al., 2024) and MegActor-$\Sigma$ (Yang et al., 2024), which show notable declines in synchronization quality. MCDM's outstanding SSIM and E-FID scores also highlight its ability to generate visually appealing, temporally consistent content with precise lip synchronization.
171
+
172
+ Table 3 summarizes the quantitative performance on the proposed TalkingFace-Wild dataset. Consistent with results on HDTF (Zhang et al., 2021) and CelebV-HQ (Zhu et al., 2022), MCDM surpasses all competing SOTA methods across evaluation metrics, demonstrating marked improvements in visual quality and temporal consistency. Achieving the best FID, FVD, and an E-FID of 1.97, MCDM shows strong capability in generating high-fidelity TalkingFace videos under diverse conditions, effectively maintaining temporal coherence across audio, expressions, and lip synchronization.
173
+
174
+ Qualitative Evaluation. Figure 3 provides a qualitative comparison of our method against other SOTA approaches. Compared to V-Express (Wang et al., 2024) and EchoMimic (Chen et al., 2024), our approach shows superior head and lip synchronization, benefiting from the audiovisual consistency introduced by the motion priors.
175
+
176
+ ![](images/21c37f389b56476ffbb7367ef560e17747ba89feb2e4b964cda5d5cf3be947fa.jpg)
+
+ Figure 3. Qualitative comparison on HDTF and CelebV-HQ. Our method achieves the best generation results, particularly in identity consistency and motion detail. (Panel annotations mark the reference image and eye, expression, identity, and lip synchronization errors.)
+
+ ![](images/a5e329a60ed41c6cea3ea9a81fc5bcbfbda7850cc52ad0fad19c383240c19c83.jpg)
+
+ ![](images/9b30078b7aab5e4b00084ec39d3259f90625c79e17a8361273bb6c81ceb25e01.jpg)
+
+ ![](images/33f4dc0ae860d80985bd1abd247f7db15646938bbc3bd9ef3b58df86a7db2b6e.jpg)
+
+ Figure 4. User study results of identity consistency, motion synchronization, and video quality. Higher values indicate better performance. (A: Audio2Head, B: V-Express, C: AniPortrait, D: SadTalker, E: Hallo, F: EchoMimic, G: MegActor-$\Sigma$, H: Ours.)
+
+ ![](images/e9747603bbcab656f71728012b61ddf563a3b79e4a8bf2ef237338ed345a6dc4.jpg)
+
+ ![](images/fa6c272af71b5e3da57bf175e8ace76565e61300068e93821530f373236e83f3.jpg)
218
+
219
+ Additionally, unlike Hallo (Xu et al., 2024) and MegActor-$\Sigma$ (Yang et al., 2024), our method accurately captures subtle facial actions, including blinks and expression nuances, through the archived clip, while better preserving identity consistency. Overall, our approach demonstrates the best visual results.
220
+
221
+ User Study. The quantitative and qualitative comparisons underscore the substantial advantages of our proposed MCDM in generating consistent TalkingFace videos. To further evaluate video quality, we conducted a user study focusing on identity consistency, motion synchronization, and overall video quality. We randomly selected 10 cases, shuffled the generated videos from each method, and recruited 20 participants (10 male, 10 female) to rank their top three preferences. As shown in Figure 4, our method consistently achieved the highest scores across all metrics in the user preference evaluation. This user study highlights the significant advantage of our approach in user-centric TalkingFace generation.
222
+
223
+ <table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>w/o Fa</td><td>46.25</td><td>708.93</td><td>7.37</td><td>7.05</td><td>0.749</td><td>2.25</td></tr><tr><td>w/o Fpc</td><td>45.63</td><td>684.20</td><td>7.49</td><td>6.97</td><td>0.758</td><td>2.13</td></tr><tr><td>w/o MTA</td><td>44.27</td><td>671.05</td><td>7.62</td><td>6.84</td><td>0.771</td><td>2.04</td></tr><tr><td>Ours</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>
224
+
225
+ Table 4. Ablation results on the TalkingFace-Wild dataset.
226
+
227
+ # 4.3. Ablation Results
228
+
229
+ We conduct an ablation study to assess the impact of each component in our method. Table 4 shows the results: w/o $F_{a}$ omits historical frame information, w/o $F_{pc}$ replaces the present-clip motion-prior with an audio attention module that injects audio features directly, and w/o MTA replaces the memory-efficient temporal attention with a standard temporal attention module.
230
+
231
+ Archived-Clip Motion-Prior. The results in Table 4 show that removing historical frame information (w/o $F_{a}$) significantly degrades performance across all metrics, underscoring the importance of the archived-clip motion-prior. To further assess the effect of $F_{a}$ on long-term generation, we visualized frames 30, 300, 1800, 3600, and 7200 with the corresponding SSIM scores, as shown in Figure 5.
232
+
233
+ ![](images/dec7c2714f614ffa47ba0661f25a3706fca26ac49784ebfe61b835d30a77bac5.jpg)
234
+ (a) Visualization results of different frames
235
+
236
+ ![](images/2e9e1f84ea1dbb6dec024d0f488c13822c09382fe4483a9e539c0d2be8ce541a.jpg)
237
+ (b) SSIM scores of different frames
238
+
239
+ ![](images/1db265af102eeddc16fb14b4f4ab77b7396aeba4ef12f3c2aafdd1b8efdb12bd.jpg)
240
+ Figure 5. Visualization results and SSIM scores during long-term generation. We find that w/ $F_{a}$ offers a distinct advantage in maintaining both identity and contextual consistency.
241
+ (a) Heatmap Results of Lip
242
+
243
+ ![](images/d5ce0fbeadcf0e1cac1bd7c7eb05a7d184d0501bfb61afad590d1f7c0e22c27a.jpg)
244
+ (b) Clusters Results of Expression
245
+ Figure 6. Lip heatmap and expression cluster. We find that w/ $F_{p}$ effectively tracks the lip region and conveys expressions.
246
+
247
+ Figure 5(a) indicates that without the archived-clip motion-prior (w/o $F_{a}$), identity consistency worsens as the frames progress, resulting in visible artifacts and inconsistencies in head, mouth, and expression. In Figure 5(b), the SSIM scores show that error accumulation increases with frame count, with a rapid decline for w/o $F_{a}$, while w/ $F_{a}$ remains stable at a higher value. These findings validate the effectiveness of the archived-clip motion-prior in preserving both identity and temporal coherence over extended sequences.
248
+
249
+ Present-Clip Motion-Prior. Similarly, excluding the present-clip motion-prior and injecting audio information directly via audio attention (w/o $F_{pc}$) leads to a drop in performance across all metrics. This decline highlights the effectiveness of the present-clip motion-prior in leveraging multimodal causality and temporal interactions to decouple and predict motion states, including expressions, lip movement, and head motion (see Table 4). To further validate this decoupling capability, we visualize heatmaps of the predicted lip tokens, as shown in Figure 6(a), where the present-clip motion-prior accurately localizes and tracks lip motion. For expression decoupling, t-SNE (Van der Maaten & Hinton, 2008) visualization of expression tokens reveals tighter clustering within each of the eight distinct emotion categories when using the present-clip motion-prior, indicating improved separation of emotional content from the audio input.
250
+
251
+ ![](images/fe6ca791fe5bd7219f0d2f69262d5cd59424ab5f5d3be36f376e88f5fc43230b.jpg)
252
+ Figure 7. SSIM results for different $\alpha$ values. Our method performs comparably well when the $\alpha$ value is smaller than 0.9.
253
+
254
+ <table><tr><td>Method</td><td>FID↓</td><td>FVD↓</td><td>Sync-C↑</td><td>Sync-D↓</td><td>SSIM↑</td><td>E-FID↓</td></tr><tr><td>B1</td><td>42.49</td><td>668.24</td><td>7.69</td><td>6.78</td><td>0.771</td><td>2.02</td></tr><tr><td>B2</td><td>47.12</td><td>721.17</td><td>7.30</td><td>6.84</td><td>0.732</td><td>2.29</td></tr><tr><td>Ours</td><td>42.08</td><td>656.71</td><td>7.84</td><td>6.69</td><td>0.779</td><td>1.97</td></tr></table>
255
+
256
+ Table 5. Additional results for MCDM variants.
257
+
258
259
+
260
+ Memory-Efficient Temporal Attention. Following the standard approach (Hu, 2024), we replace the proposed memory-efficient temporal attention with conventional temporal attention by directly summing $F_{ac}^{ref}$ and $F_{pc}^{ref}$. As shown in Table 4, this modification significantly degrades performance across all metrics. This drop in quality is primarily due to the absence of an update mechanism, which introduces gaps between the archived clip and the present clip, compromising video smoothness. Next, we analyzed the effect of different $\alpha$ values in Eq. 5, which control the update rate, on the model's SSIM performance, as shown in Figure 7. We observed that as $\alpha$ increases, SSIM gradually declines. When $\alpha$ is below 0.9, our approach significantly outperforms the w/o MTA configuration. However, at $\alpha = 0.9$, the performance is weaker than w/o MTA, due to the excessive accumulation of historical frame information and a reduced proportion of the present clip. Consequently, we set $\alpha = 0.1$ as the default value in this paper.
261
+
262
+ More Results. Table 5 evaluates different design variants. In B1, a Q-Former (Li et al., 2023) replaces frame-aligned attention, while in B2, the frozen VAE is replaced with a Reference UNet (Hu, 2024), omitting archived-clip information. The results show that frame-aligned attention outperforms the Q-Former by effectively capturing temporal context and integrating long-term dependencies. Additionally, using a frozen VAE with a trainable patchify layer proves to be an efficient alternative to the conventional Reference UNet.
263
+
264
+ # 5. Conclusion
265
+
266
+ We presented the Motion-priors Conditional Diffusion Model (MCDM) to address the challenges of long-term TalkingFace generation by achieving robust identity consistency and motion continuity. MCDM integrates three key innovations: an archived-clip motion-prior to enhance identity representation, a present-clip motion-prior diffusion model for accurate motion prediction, and a memory-efficient temporal attention to mitigate error accumulation over extended sequences. Additionally, we introduced the TalkingFace-Wild dataset, offering over 200 hours of multilingual video data across diverse scenarios. Experimental results demonstrate the effectiveness of MCDM, setting new benchmarks in long-term TalkingFace generation.
269
+
270
+ # Impact Statement
271
+
272
+ This paper presents the MCDM model, designed to enhance identity and temporal consistency in long-term TalkingFace generation. While MCDM contributes to the advancement of generative modeling, we recognize the potential ethical concerns, including the risks of misuse for creating deceptive content or spreading misinformation. We emphasize the importance of transparency in AI development and support the integration of detection frameworks to mitigate these risks. In alignment with ongoing efforts in responsible AI, we aim to ensure that the benefits of our work are balanced with its ethical implications, promoting safe and constructive applications in society.
273
+
274
+ # References
275
+
276
+ Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449-12460, 2020.
277
+ Chan, E. R., Lin, C. Z., Chan, M. A., Nagano, K., Pan, B., De Mello, S., Gallo, O., Guibas, L. J., Tremblay, J., Khamis, S., et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16123-16133, 2022.
278
+ Chen, Z., Cao, J., Chen, Z., Li, Y., and Ma, C. Echomimic: Lifelike audio-driven portrait animations through editable landmark conditions. arXiv preprint arXiv:2407.08136, 2024.
279
+ Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
280
+ Chung, J. S. and Zisserman, A. Out of time: automated lip sync in the wild. In Computer Vision-ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 13, pp. 251-263. Springer, 2017.
281
+ Deng, Y., Yang, J., Xu, S., Chen, D., Jia, Y., and Tong, X. Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 0-0, 2019.
284
+ Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
285
+ Guo, J., Deng, J., Lattas, A., and Zafeiriou, S. Sample and computation redistribution for efficient face detection. arXiv preprint arXiv:2105.04714, 2021.
286
+ Guo, J., Zhang, D., Liu, X., Zhong, Z., Zhang, Y., Wan, P., and Zhang, D. Liveportrait: Efficient portrait animation with stitching and retargeting control. arXiv preprint arXiv:2407.03168, 2024.
287
+ Guo, Y., Yang, C., Rao, A., Liang, Z., Wang, Y., Qiao, Y., Agrawala, M., Lin, D., and Dai, B. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023.
288
+ Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
289
+ Hong, F.-T., Zhang, L., Shen, L., and Xu, D. Depth-aware generative adversarial network for talking head video generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3397-3406, 2022a.
290
+ Hong, Y., Peng, B., Xiao, H., Liu, L., and Zhang, J. Head-nerf: A real-time nerf-based parametric head model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20374-20384, 2022b.
291
+ Hu, L. Animate anyone: Consistent and controllable image-to-video synthesis for character animation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8153-8163, 2024.
292
+ Ji, X., Zhou, H., Wang, K., Wu, W., Loy, C. C., Cao, X., and Xu, F. Audio-driven emotional video portraits. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14080-14089, 2021.
293
+ Jiang, J., Liang, C., Yang, J., Lin, G., Zhong, T., and Zheng, Y. Loopy: Taming audio-driven portrait avatar with long-term motion dependency. arXiv preprint arXiv:2409.02634, 2024.
294
+ Kim, H., Garrido, P., Tewari, A., Xu, W., Thies, J., Niessner, M., Pérez, P., Richardt, C., Zollhöfer, M., and Theobalt, C. Deep video portraits. ACM transactions on graphics (TOG), 37(4):1-14, 2018.
295
+
296
+ Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.
297
+ Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pp. 19730-19742. PMLR, 2023.
298
+ Liang, B., Pan, Y., Guo, Z., Zhou, H., Hong, Z., Han, X., Han, J., Liu, J., Ding, E., and Wang, J. Expressive talking head generation with granular audio-visual control. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3387-3396, 2022.
299
+ Ma, Y., Liu, H., Wang, H., Pan, H., He, Y., Yuan, J., Zeng, A., Cai, C., Shum, H.-Y., Liu, W., et al. Follow-your-emoji: Fine-controllable and expressive freestyle portrait animation. arXiv preprint arXiv:2406.01900, 2024.
300
+ Mirza, M. and Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
301
+ Peebles, W. and Xie, S. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195-4205, 2023.
302
+ Peng, Z., Hu, W., Shi, Y., Zhu, X., Zhang, X., Zhao, H., He, J., Liu, H., and Fan, Z. Synctalk: The devil is in the synchronization for talking head synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 666-676, 2024.
303
+ Perez, E., Strub, F., De Vries, H., Dumoulin, V., and Courville, A. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
304
+ Prajwal, K., Mukhopadhyay, R., Namboodiri, V. P., and Jawahar, C. A lip sync expert is all you need for speech to lip generation in the wild. In Proceedings of the 28th ACM international conference on multimedia, pp. 484-492, 2020.
305
+ Pumarola, A., Agudo, A., Martinez, A. M., Sanfeliu, A., and Moreno-Noguer, F. Ganimation: Anatomically-aware facial animation from a single image. In Proceedings of the European conference on computer vision (ECCV), pp. 818-833, 2018.
306
+ Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
307
+
308
+ Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022.
309
+ Stypulkowski, M., Vougioukas, K., He, S., Zikeba, M., Petridis, S., and Pantic, M. Diffused heads: Diffusion models beat gans on talking-face generation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 5091-5100, 2024.
310
+ Su, S., Yan, Q., Zhu, Y., Zhang, C., Ge, X., Sun, J., and Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3667-3676, 2020.
311
+ Sun, X., Zhang, L., Zhu, H., Zhang, P., Zhang, B., Ji, X., Zhou, K., Gao, D., Bo, L., and Cao, X. Vividtalk: One-shot audio-driven talking head generation based on 3d hybrid prior. arXiv preprint arXiv:2312.01841, 2023.
312
+ Tan, S., Ji, B., and Pan, Y. Emmn: Emotional motion memory network for audio-driven emotional talking face generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22146-22156, 2023.
313
+ Tan, S., Ji, B., and Pan, Y. Flowvqtalker: High-quality emotional talking face generation through normalizing flow and quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26317-26327, 2024.
314
+ Tian, L., Wang, Q., Zhang, B., and Bo, L. Emo: Emote portrait alive-generating expressive portrait videos with audio2video diffusion model under weak conditions. arXiv preprint arXiv:2402.17485, 2024.
315
+ Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., and Gelly, S. Fvd: A new metric for video generation. 2019.
316
+ Van der Maaten, L. and Hinton, G. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.
317
+ Vougioukas, K., Petridis, S., and Pantic, M. Realistic speech-driven facial animation with gans. International Journal of Computer Vision, 128(5):1398-1413, 2020.
318
+ Wang, C., Tian, K., Zhang, J., Guan, Y., Luo, F., Shen, F., Jiang, Z., Gu, Q., Han, X., and Yang, W. V-express: Conditional dropout for progressive training of portrait video generation. arXiv preprint arXiv:2406.02511, 2024.
319
+
320
+ Wang, S., Li, L., Ding, Y., Fan, C., and Yu, X. Audio2head: Audio-driven one-shot talking-head generation with natural head motion. arXiv preprint arXiv:2107.09293, 2021a.
321
+ Wang, T.-C., Mallya, A., and Liu, M.-Y. One-shot free-view neural talking-head synthesis for video conferencing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10039-10049, 2021b.
322
+ Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004.
323
+ Wei, H., Yang, Z., and Wang, Z. Aniportrait: Audio-driven synthesis of photorealistic portrait animation. arXiv preprint arXiv:2403.17694, 2024.
324
+ Xu, M., Li, H., Su, Q., Shang, H., Zhang, L., Liu, C., Wang, J., Van Gool, L., Yao, Y., and Zhu, S. Hallo: Hierarchical audio-driven visual synthesis for portrait image animation. arXiv preprint arXiv:2406.08801, 2024.
325
+ Yang, S., Li, H., Wu, J., Jing, M., Li, L., Ji, R., Liang, J., Fan, H., and Wang, J. Megactor-sigma: Unlocking flexible mixed-modal control in portrait animation with diffusion transformer. arXiv preprint arXiv:2408.14975, 2024.
326
+ Yang, Z., Zeng, A., Yuan, C., and Li, Y. Effective whole-body pose estimation with two-stages distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4210-4220, 2023.
327
+ Ye, Z., Jiang, Z., Ren, Y., Liu, J., He, J., and Zhao, Z. Geneface: Generalized and high-fidelity audio-driven 3d talking face synthesis. arXiv preprint arXiv:2301.13430, 2023.
328
+ Ye, Z., Zhong, T., Ren, Y., Yang, J., Li, W., Huang, J., Jiang, Z., He, J., Huang, R., Liu, J., et al. Real3d-portrait: One-shot realistic 3d talking portrait synthesis. arXiv preprint arXiv:2401.08503, 2024.
329
+ Zhang, W., Cun, X., Wang, X., Zhang, Y., Shen, X., Guo, Y., Shan, Y., and Wang, F. Sadtalker: Learning realistic 3d motion coefficients for stylized audio-driven single image talking face animation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8652-8661, 2023.
330
+ Zhang, Z., Li, L., Ding, Y., and Fan, C. Flow-guided one-shot talking face generation with a high-resolution audiovisual dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3661-3670, 2021.
331
+
332
+ Zheng, L., Zhang, Y., Guo, H. A., Pan, J., Tan, Z., Lu, J., Tang, C., An, B., and Yan, S. MEMO: Memory-guided and emotion-aware talking video generation, 2024. URL https://openreview.net/forum?id=CpgWRFqxhD.
333
+ Zhou, H., Sun, Y., Wu, W., Loy, C. C., Wang, X., and Liu, Z. Pose-controllable talking face generation by implicitly modularized audio-visual representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4176-4186, 2021.
334
+ Zhou, Y., Han, X., Shechtman, E., Echevarria, J., Kalogerakis, E., and Li, D. MakeItTalk: speaker-aware talking-head animation. ACM Transactions On Graphics (TOG), 39(6):1-15, 2020.
335
+ Zhu, H., Wu, W., Zhu, W., Jiang, L., Tang, S., Zhang, L., Liu, Z., and Loy, C. C. Celebv-hq: A large-scale video facial attributes dataset. In European conference on computer vision, pp. 650-667. Springer, 2022.
2502.09xxx/2502.09533/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9761fabeaa6811e824ee6ddbb023d15145ca35b50fb0e248ef3095fbf2d854ea
3
+ size 601954
2502.09xxx/2502.09533/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09560/6ff14f6e-b80c-4f66-b669-66b37261f2f5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:850fc2f9743b7d965b398800cd45e14984941d58218e197a1068b9b588589c21
3
+ size 10747554
2502.09xxx/2502.09560/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09560/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e886747f8b265ca9042b64776abb2beab62fd8714aa0b727fb8e562947fe51db
3
+ size 3010916
2502.09xxx/2502.09560/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_content_list.json ADDED
@@ -0,0 +1,1748 @@
 
 
 
 
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "MDCROW: AUTOMATING MOLECULAR DYNAMICS WORKFLOWS WITH LARGE LANGUAGE MODELS",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 191,
8
+ 119,
9
+ 805,
10
+ 167
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Quintina Campbell†, Sam Cox†,3, Jorge Medina†, Brittany Watterson2, Andrew D. White*1,3",
17
+ "bbox": [
18
+ 183,
19
+ 224,
20
+ 810,
21
+ 242
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Department of Chemical Engineering, University of Rochester, Rochester, New York, USA \n $^{2}$ Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA \n $^{3}$ FutureHouse Inc., San Francisco, CA",
28
+ "bbox": [
29
+ 187,
30
+ 252,
31
+ 807,
32
+ 295
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "ABSTRACT",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 447,
42
+ 349,
43
+ 547,
44
+ 363
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Molecular dynamics (MD) simulations are essential for understanding biomolecular systems but remain challenging to automate. Recent advances in large language models (LLM) have demonstrated success in automating complex scientific tasks using LLM-based agents. In this paper, we introduce MDCrow, an agentic LLM assistant capable of automating MD workflows. MDCrow uses chain-of-thought over 40 expert-designed tools for handling and processing files, setting up simulations, analyzing the simulation outputs, and retrieving relevant information from literature and databases. We assess MDCrow's performance across 25 tasks of varying required subtasks and difficulty, and we evaluate the agent's robustness to both difficulty and prompt style. gpt-4o is able to complete complex tasks with low variance, followed closely by l1ama3-405b, a compelling open-source model. While prompt style does not influence the best models' performance, it has significant effects on smaller models.",
51
+ "bbox": [
52
+ 169,
53
+ 369,
54
+ 826,
55
+ 523
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1 Introduction",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 112,
65
+ 549,
66
+ 254,
67
+ 565
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Molecular dynamics (MD) simulations is a common method to understand dynamic and complex systems in chemistry and biology. While MD is now routine, its integration into and impact on scientific workflows has increased dramatically over the past few decades [1-3]. There are two main reasons for this: First, MD provides valuable insights. Through simulations, scientists can study structural and dynamic phenomena, perturbations, and dynamic processes in their chemical systems. Second, innovations in hardware and expert-designed software packages have made MD much more accessible to both experienced and novice users [3].",
74
+ "bbox": [
75
+ 109,
76
+ 584,
77
+ 883,
78
+ 669
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "For a given protein simulation, parameter selection is nontrivial: the user must provide the input structure (such as a PDB [4] file), select a force field (e.g., CHARMM [5], AMBER [6]), and specify parameters such as temperature, integrator, simulation length, and equilibration protocols. Simulations also generally require pre- and post-processing steps, along with various analyses. For instance, a user may need to clean or trim a PDB file, add a solvent, or analyze the protein's structure. After simulation, they might examine the protein's shape throughout the simulation or assess its stability under different conditions. The choices for pre-processing, analysis, and simulation parameters are highly specific to any given use case and often require expert intuition. Thus, automating this process is difficult but beneficial.",
85
+ "bbox": [
86
+ 109,
87
+ 674,
88
+ 883,
89
+ 785
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Several efforts have been made to automate MD workflows [7-17], focusing largely on specific domains, such as RadonPy for polymer's simulations [8], or PyAutoFEP for proteins and small molecules for drug-screening [16]. Other approaches are constrained to a particular combination of simulation software and simulation (e.g. GROMACS and Free Energy Perturbation). Certainly, there has been significant community-driven improvement in automating and creating MD toolkits [14, 18-24] and user-friendly interfaces and visualizations [25-32]. While these advances",
96
+ "bbox": [
97
+ 109,
98
+ 791,
99
+ 883,
100
+ 863
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "aside_text",
106
+ "text": "arXiv:2502.09565v1 [cs.AI] 13 Feb 2025",
107
+ "bbox": [
108
+ 22,
109
+ 270,
110
+ 57,
111
+ 708
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "page_footnote",
117
+ "text": "†These authors contributed equally to this work",
118
+ "bbox": [
119
+ 133,
120
+ 883,
121
+ 416,
122
+ 897
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_footnote",
128
+ "text": "*Corresponding author: andrew.white@rochester.edu",
129
+ "bbox": [
130
+ 133,
131
+ 898,
132
+ 480,
133
+ 911
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "improve the capabilities and ease of use in many cases, the inherent variability of MD workflows still poses a great challenge for full automation.",
140
+ "bbox": [
141
+ 109,
142
+ 90,
143
+ 885,
144
+ 119
145
+ ],
146
+ "page_idx": 1
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "Large-Language Model (LLM) agents [33-36] have gained popularity for their ability to automate technical tasks through reasoning and tool usage, even surpassing domain-specialized LLMs (e.g., BioGPT [37], Med-PaLM [38]) when programmed for specialized roles [39]. These agents have demonstrated promising results in scientific tasks within a predefined toolspace, with tools like ChemCrow and Coscientist successfully automating complex workflows and novel design in chemical synthesis [40-42]. Likewise, LLM-driven automation has been explored in materials research [43-46], literature and data aggregation [47, 48], and more sophisticated tasks [45, 49-55]. Most similar to this work, ProtAgents [55] is a multi-agent modeling framework tackling protein-related design and analysis, and LLaMP [45] applies a retrieval-augmented generation (RAG)-based ReAct agent to simulate inorganic materials by interfacing with literature databases, Wikipedia, and atomistic simulation tools. Although preliminary work has applied agentic LLMs to MD via a RAG-based agent [45], no fully adaptive and autonomous system exists for biochemical MD or protein simulations. See Ramos et al.[56] for a recent review on the design, assessment, and applications of scientific agents.",
151
+ "bbox": [
152
+ 109,
153
+ 125,
154
+ 885,
155
+ 294
156
+ ],
157
+ "page_idx": 1
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "Here we present MDCrow, an LLM-agent capable of autonomously completing MD workflows. Our main contributions to the field are (1) we assess MDCrow's performance across 25 tasks with varying difficulty and compare performance of different LLM models; (2) we measure robustness how agents are prompted and task complexity based on required number of subtasks we compare with simply equipping an LLM with a python interpreter with the required packages installed, rather than using a custom built environment. Our main conclusions is that MDCrow with gpt-4o or 11ama3-405b is able to perform nearly all of our assessed tasks and is relatively insensitive to how precise the instructions are given to it. See Figure 1D for an overview of the main results.",
162
+ "bbox": [
163
+ 109,
164
+ 297,
165
+ 885,
166
+ 397
167
+ ],
168
+ "page_idx": 1
169
+ },
170
+ {
171
+ "type": "text",
172
+ "text": "2 Methods",
173
+ "text_level": 1,
174
+ "bbox": [
175
+ 112,
176
+ 419,
177
+ 220,
178
+ 434
179
+ ],
180
+ "page_idx": 1
181
+ },
182
+ {
183
+ "type": "text",
184
+ "text": "2.1 MDCrow Toolset",
185
+ "text_level": 1,
186
+ "bbox": [
187
+ 112,
188
+ 452,
189
+ 274,
190
+ 467
191
+ ],
192
+ "page_idx": 1
193
+ },
194
+ {
195
+ "type": "text",
196
+ "text": "MDCrow is an LLM agent, which consists of an environment of tools that emit observations and an LLM that selects actions (tools + input arguments). MDCrow is built with Langchain [57] and a ReAct style prompt.[35]. The tools mostly consist of analysis and simulation methods; we use OpenMM [22] and MDTraj [21] packages, but in principle our findings generalize to any such packages.",
197
+ "bbox": [
198
+ 109,
199
+ 479,
200
+ 883,
201
+ 537
202
+ ],
203
+ "page_idx": 1
204
+ },
205
+ {
206
+ "type": "text",
207
+ "text": "MDCrow's tools can be categorized in four groups: Information Retrieval, PDB & Protein, Simulation, and Analysis (see Figure 1B).",
208
+ "bbox": [
209
+ 109,
210
+ 541,
211
+ 883,
212
+ 570
213
+ ],
214
+ "page_idx": 1
215
+ },
216
+ {
217
+ "type": "text",
218
+ "text": "Information Retrieval Tools These tools enable MDCrow to build context and answer simple questions posed by the user. Most of the tools serve as wrappers for UniProt API functionalities [58], allowing access to data such as 3D structures, binding sites, and kinetic properties of proteins. Additionally, we include a LiteratureSearch tool, which uses PaperQA [48] to answer questions and retrieve information from literature. PaperQA accesses a local database of relevant PDFs, selected specifically for the test prompts, which can be found in SI section C. This real-time information helps the system provide direct answers to user questions and can also assist the agent in selecting parameters or guiding simulation processes.",
219
+ "bbox": [
220
+ 109,
221
+ 589,
222
+ 883,
223
+ 688
224
+ ],
225
+ "page_idx": 1
226
+ },
227
+ {
228
+ "type": "text",
229
+ "text": "PDB & Protein Tools MDCrow uses these tools to interact directly with PDB files, performing tasks such as cleaning structures with PDBFixer [22], retrieving PDBs for small molecules and proteins, and visualizing PDBs through Molrender [59] or NGLview [60].",
230
+ "bbox": [
231
+ 109,
232
+ 705,
233
+ 883,
234
+ 750
235
+ ],
236
+ "page_idx": 1
237
+ },
238
+ {
239
+ "type": "text",
240
+ "text": "Simulation Tools All included simulation tools use OpenMM [22] for simulation and PackMol [19] for solvent addition. These tools are built to manage dynamic simulation parameters, handle errors related to inadequate parameters or incomplete preprocessing, and address missing forcefield templates efficiently. The agent responds to simulation setup errors through informative error messages, improving overall robustness. Finally, the simulation tools outputs Python scripts that can be modified directly by MDCrow whenever the simulation requires additional steps or parameters.",
241
+ "bbox": [
242
+ 109,
243
+ 766,
244
+ 883,
245
+ 838
246
+ ],
247
+ "page_idx": 1
248
+ },
249
+ {
250
+ "type": "text",
251
+ "text": "Analysis Tools This group of tools is the largest in the toolset, designed to cover common MD workflow analysis methods, many of which are built on MDTraj [21] functionalities. Examples include computing the root mean squared distance (RMSD) with respect to a reference structure, the radius of gyration, analyzing the secondary structure, and various plotting functions.",
252
+ "bbox": [
253
+ 109,
254
+ 854,
255
+ 883,
256
+ 914
257
+ ],
258
+ "page_idx": 1
259
+ },
260
+ {
261
+ "type": "page_number",
262
+ "text": "2",
263
+ "bbox": [
264
+ 491,
265
+ 935,
266
+ 504,
267
+ 946
268
+ ],
269
+ "page_idx": 1
270
+ },
271
+ {
272
+ "type": "image",
273
+ "img_path": "images/3e854f587a1ec3ae8a4e9580d0d5f2654b28711865b8b246e4ead7aa11827261.jpg",
274
+ "image_caption": [
275
+ "A."
276
+ ],
277
+ "image_footnote": [],
278
+ "bbox": [
279
+ 158,
280
+ 109,
281
+ 844,
282
+ 258
283
+ ],
284
+ "page_idx": 2
285
+ },
286
+ {
287
+ "type": "image",
288
+ "img_path": "images/b18c97ecb41bf5964ef7932a46af0a283c3537eec9ab8fcac59bf8a4ba6c64a9.jpg",
289
+ "image_caption": [
290
+ "C.",
291
+ "D."
292
+ ],
293
+ "image_footnote": [],
294
+ "bbox": [
295
+ 158,
296
+ 261,
297
+ 421,
298
+ 604
299
+ ],
300
+ "page_idx": 2
301
+ },
302
+ {
303
+ "type": "image",
304
+ "img_path": "images/b823bdb599548693c2dce08ca5faba53e346d5c637861a3135f36ddc0a468170.jpg",
305
+ "image_caption": [],
306
+ "image_footnote": [],
307
+ "bbox": [
308
+ 455,
309
+ 265,
310
+ 843,
311
+ 401
312
+ ],
313
+ "page_idx": 2
314
+ },
315
+ {
316
+ "type": "image",
317
+ "img_path": "images/3f0218acc30787b4773c0687e7243f67a9a9dd7f64c48904f0f6a9fcaee6fecc.jpg",
318
+ "image_caption": [
319
+ "Figure 1: A. MDCrow workflow. Starting with a user prompt and initialized with a set of MD tools, MDCrow follows a chain-of-thought process until it completes all tasks in the prompt. The final output includes a response, along with all resulting analyses and files. B. The tool distribution categorized into 4 types: information retrieval, PDB and protein handling, simulation, and analysis. A few examples from each category are shown. C. Two example prompts that MDCrow is tested on. The first is the simplest prompt, containing only 1 subtask. The most complex task requires 10 subtasks. D. Average subtask completion across all 25 prompts as task complexity (the number of subtasks per prompt) increases. The top three performing base-LLMs are shown. Among them, gpt-4o and 11ama3-405b consistently maintain high stability, staying close to $100\\%$ completion even as task complexity increases."
320
+ ],
321
+ "image_footnote": [],
322
+ "bbox": [
323
+ 454,
324
+ 422,
325
+ 839,
326
+ 619
327
+ ],
328
+ "page_idx": 2
329
+ },
330
+ {
331
+ "type": "text",
332
+ "text": "2.2 Chatting with Simulations",
333
+ "text_level": 1,
334
+ "bbox": [
335
+ 112,
336
+ 767,
337
+ 338,
338
+ 782
339
+ ],
340
+ "page_idx": 2
341
+ },
342
+ {
343
+ "type": "text",
344
+ "text": "A key challenge in developing an automated MD assistant is ensuring it can manage a large number of files, analyses, and long simulations and runtimes. Although MDCrow has been primarily tested with shorter simulations, it is designed to handle larger workflows as well. Its ability to retrieve and resume previous runs allows users to start a simulation, step away during the long process, and later continue interactions and analyses without needing to stay engaged the entire time. An example of this chatting feature is shown in Figure 2.",
345
+ "bbox": [
346
+ 109,
347
+ 792,
348
+ 883,
349
+ 864
350
+ ],
351
+ "page_idx": 2
352
+ },
353
+ {
354
+ "type": "text",
355
+ "text": "MDCrow creates an LLM-generated summary of the user prompt and agent trace, which is assigned to a unique run identifier provided at the end of the run (but accessible at any time during the session). Each run's files, figures, and path registry are saved in a unique checkpoint folder linked to the run identifier.",
356
+ "bbox": [
357
+ 109,
358
+ 869,
359
+ 883,
360
+ 912
361
+ ],
362
+ "page_idx": 2
363
+ },
364
+ {
365
+ "type": "page_number",
366
+ "text": "3",
367
+ "bbox": [
368
+ 493,
369
+ 935,
370
+ 503,
371
+ 946
372
+ ],
373
+ "page_idx": 2
374
+ },
375
+ {
376
+ "type": "text",
377
+ "text": "When resuming a chat, the LLM loads the summarized context of previous steps and maintains access to the same file corpus, as long as the created files remain intact. To resume a run, the user simply provides the checkpoint directory and run identifier. MDCrow then loads the corresponding memory summaries and retrieves all associated files, enabling seamless continuation of analyses.",
378
+ "bbox": [
379
+ 109,
380
+ 90,
381
+ 887,
382
+ 148
383
+ ],
384
+ "page_idx": 3
385
+ },
386
+ {
387
+ "type": "image",
388
+ "img_path": "images/111470b44211b28f0d856c772691c99a243299c1ae55a2e1049d8fa1e5b7c1b6.jpg",
389
+ "image_caption": [
390
+ "Figure 2: Example Chat Example of chat with MDCrow. The user first asks to download PDB files for two systems. Then, once MDCrow has completed this task, the user asks for analysis of the files. Next, the user asks for a quick 10 ps simulation of both files, and MDCrow saves all files for later handling. Lastly, the user asks for plots of RMSD for each simulation over time, and MDCrow responds with each plot."
391
+ ],
392
+ "image_footnote": [],
393
+ "bbox": [
394
+ 197,
395
+ 157,
396
+ 805,
397
+ 647
398
+ ],
399
+ "page_idx": 3
400
+ },
401
+ {
402
+ "type": "text",
403
+ "text": "3 Results",
404
+ "text_level": 1,
405
+ "bbox": [
406
+ 112,
407
+ 737,
408
+ 209,
409
+ 753
410
+ ],
411
+ "page_idx": 3
412
+ },
413
+ {
414
+ "type": "text",
415
+ "text": "3.1 MDCrow Performance on Various Tasks",
416
+ "text_level": 1,
417
+ "bbox": [
418
+ 112,
419
+ 767,
420
+ 437,
421
+ 782
422
+ ],
423
+ "page_idx": 3
424
+ },
425
+ {
426
+ "type": "text",
427
+ "text": "To assess MDCrow's ability to complete tasks of varying difficulty, we designed 25 prompts with different levels of complexity and documented the number of subtasks (minimum required steps) needed to complete each task. MDCrow was not penalized for taking additional steps, but was penalized for omitting necessary ones. For example, the first prompt in Figure 1C contains a single subtask, whereas the complex task requires 10 subtasks: downloading the PDB file, performing three simulations, and performing two analyses per simulation. If the agent failed to complete an earlier step, it was penalized for every subsequent step it could not perform due to that failure.",
428
+ "bbox": [
429
+ 109,
430
+ 792,
431
+ 883,
432
+ 878
433
+ ],
434
+ "page_idx": 3
435
+ },
436
+ {
437
+ "type": "text",
438
+ "text": "The 25 prompts require between 1 and 10 subtasks, with their distribution shown in Figure 3B. Each prompt was tested across three GPT models (gpt-3.5-turbo-0125, gpt-4-turbo-2024-04-09, gpt-4o-2024-08-06) [61, 62],",
439
+ "bbox": [
440
+ 109,
441
+ 883,
442
+ 883,
443
+ 912
444
+ ],
445
+ "page_idx": 3
446
+ },
447
+ {
448
+ "type": "page_number",
449
+ "text": "4",
450
+ "bbox": [
451
+ 493,
452
+ 935,
453
+ 504,
454
+ 946
455
+ ],
456
+ "page_idx": 3
457
+ },
458
+ {
459
+ "type": "text",
460
+ "text": "two Llama models (llama-v3p1-405b-instruct, llama-v3p1-70b-instruct) [63] (accessed via the Fireworks AI API with 8-bit floating point (8FP) quantization [64]), and two Claude models (claude-3-opus-20240229, claude-3-5-sonnet-20240620) [65, 66]. A newer Claude Sonnet model, claude-3-5-sonnet-20241022 was tested in later experiments but was not found to give superior results, so it was not tested on these 25 prompts. All other parameters were held constant across tests, and each version of MDCrow executed a single run per prompt.",
461
+ "bbox": [
462
+ 109,
463
+ 90,
464
+ 883,
465
+ 164
466
+ ],
467
+ "page_idx": 4
468
+ },
469
+ {
470
+ "type": "text",
471
+ "text": "Each run was evaluated by experts recording the number of required subtasks the agent completed and using Boolean indicators to indicate accuracy, whether the agent triggered a runtime error, and whether the trajectory contained any hallucinations. Since the agent trajectories for each run are inherently variable, accuracy is defined as the result's consistency with the expected trajectory rather than comparing against a fixed reference.",
472
+ "bbox": [
473
+ 109,
474
+ 167,
475
+ 883,
476
+ 224
477
+ ],
478
+ "page_idx": 4
479
+ },
480
+ {
481
+ "type": "text",
482
+ "text": "The percentage of tasks that were deemed to have valid solutions for MDCrow with each base-LLM is shown in Figure 3A. The lowest performing model was gpt-3.5. This is not surprising, as this model had some of the highest hallucination rates (32% of prompt completions contained hallucinations), compared to the absence of documented hallucinations in the higher performing models, gpt-4o and 11ama3-405b. However, the discrepancy in accuracy rates between models cannot solely be attributed to hallucinations, as gpt-3.5 attempted fewer than half of the required subtasks, whereas the higher-performing models, gpt-4o and 11ama3-405b, attempted 80-90% of the required subtask, earning accuracy in answering for 72% and 68% of tasks, respectively (Figures 3C, D).",
483
+ "bbox": [
484
+ 109,
485
+ 229,
486
+ 883,
487
+ 328
488
+ ],
489
+ "page_idx": 4
490
+ },
491
+ {
492
+ "type": "text",
493
+ "text": "These results indicate that MDCrow can handle complex MD tasks but is limited by the capabilities of the base model. For gpt-4-turbo, gpt-3.5, and 11ama3-70b, the number of trajectories with verified results decreases significantly as task complexity increases (Figure 3C). In contrast, gpt-4o and 11ama3-405b show only a slight decline, demonstrating that MDCrow performs well even for complex tasks when paired with more robust base models.",
494
+ "bbox": [
495
+ 109,
496
+ 332,
497
+ 883,
498
+ 390
499
+ ],
500
+ "page_idx": 4
501
+ },
502
+ {
503
+ "type": "image",
504
+ "img_path": "images/1e0a7ff3e8ae44786b8b5ae45d26e450068ddbbc6411fc8f7540dd5d12538fde.jpg",
505
+ "image_caption": [],
506
+ "image_footnote": [],
507
+ "bbox": [
508
+ 127,
509
+ 428,
510
+ 545,
511
+ 574
512
+ ],
513
+ "page_idx": 4
514
+ },
515
+ {
516
+ "type": "image",
517
+ "img_path": "images/39713b47276aeb7d791ec5f18e88f58ccc50c68c260fac61db6ad1867485d280.jpg",
518
+ "image_caption": [
519
+ "Figure 3: MDCrow Performance across Large Language Models. A. Summary of MDCrow performance dependent on LLM. Percentage of accuracy is determined by whether it gave acceptable final answer or not. While statistically indistinguishable from Claude and Llama models, gpt-4o significantly outperforms the rest of GPT models on giving accurate solutions (t-test, $0.004 \\leq p$ -value $\\leq 0.046$ ). B. The distribution of number of subtasks in each task across 25 prompts. The prompts range from 1-10 steps, with each step count belonging to at least 2 prompts. C. Percentages of prompts with accurate solutions with respect to LLM used and number of subtasks per task. The correlation between accuracy and complexity is statistically significant for all LLMs (Spearman correlation, $3.9 \\times 10^{-7} \\leq p$ -value $\\leq 1.1 \\times 10^{-2}$ ) D. Percentage of the subtasks that the agent completed for each base LLM per task."
520
+ ],
521
+ "image_footnote": [],
522
+ "bbox": [
523
+ 127,
524
+ 578,
525
+ 519,
526
+ 760
527
+ ],
528
+ "page_idx": 4
529
+ },
530
+ {
531
+ "type": "image",
532
+ "img_path": "images/b26ca67743df780afbcec5dde37c1e52bf50fd0da90b85201ebfb12320677e14.jpg",
533
+ "image_caption": [],
534
+ "image_footnote": [],
535
+ "bbox": [
536
+ 558,
537
+ 426,
538
+ 870,
539
+ 575
540
+ ],
541
+ "page_idx": 4
542
+ },
543
+ {
544
+ "type": "image",
545
+ "img_path": "images/8dddc708f97f13c3d20a6d8db60553e9b7074b53c8e44202844892d2f05e3bc0.jpg",
546
+ "image_caption": [],
547
+ "image_footnote": [],
548
+ "bbox": [
549
+ 532,
550
+ 579,
551
+ 880,
552
+ 762
553
+ ],
554
+ "page_idx": 4
555
+ },
556
+ {
557
+ "type": "page_number",
558
+ "text": "5",
559
+ "bbox": [
560
+ 493,
561
+ 935,
562
+ 503,
563
+ 946
564
+ ],
565
+ "page_idx": 4
566
+ },
567
+ {
568
+ "type": "text",
569
+ "text": "3.2 MDCrow Robustness",
570
+ "text_level": 1,
571
+ "bbox": [
572
+ 112,
573
+ 90,
574
+ 303,
575
+ 104
576
+ ],
577
+ "page_idx": 5
578
+ },
579
+ {
580
+ "type": "text",
581
+ "text": "We evaluated the robustness of MDCrow on complex prompts and different prompt styles. We hypothesized that some models would excel at completing complex tasks, while others would struggle—either forgetting steps or hallucinating—as the number of required subtasks increased. To test this, we created a sequence of 10 prompts that increased in complexity. The first prompt required a single subtask, and each subsequent prompt added an additional subtask (see Figure 4A). Each prompt was tested twice: once in a natural, conversational style and once with explicitly ordered steps. Example prompts can be seen in Figure 4B.",
582
+ "bbox": [
583
+ 109,
584
+ 125,
585
+ 883,
586
+ 209
587
+ ],
588
+ "page_idx": 5
589
+ },
590
+ {
591
+ "type": "text",
592
+ "text": "To quantify robustness, we calculated the coefficient of variation (CV) for the percentage of completed subtasks across tasks. A lower CV indicates greater consistency in task completion and, therefore, higher robustness. The analysis revealed clear differences in robustness across models and prompt types. Overall, gpt-4o and 11ama3-405b demonstrated moderate to high robustness, while the Claude models showed significantly lower robustness. The performance comparison is shown in Figure 4C.",
593
+ "bbox": [
594
+ 109,
595
+ 214,
596
+ 883,
597
+ 285
598
+ ],
599
+ "page_idx": 5
600
+ },
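The CV itself is a one-line calculation over the per-task completion percentages, as sketched below. The example values are illustrative only, and whether the sample or population standard deviation was used is not stated in the text.

```python
# Coefficient of variation over per-task subtask-completion percentages.
# Example values are illustrative; ddof=1 (sample std) is an assumption.
import numpy as np

def coefficient_of_variation(completion_pct) -> float:
    arr = np.asarray(completion_pct, dtype=float)
    return float(arr.std(ddof=1) / arr.mean())

# One completion percentage per task for the 10 increasingly complex prompts
print(coefficient_of_variation([100, 100, 90, 85, 80, 75, 80, 70, 65, 60]))
```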
601
+ {
602
+ "type": "text",
603
+ "text": "We expected that the percentage of subtasks completed by each model would decrease as task complexity increased. However, with gpt-4o and llama3-405b as base models, MDCrow demonstrated a strong relationship between the number of required and completed subtasks (Figure 4D) for both prompt types, indicating consistent performance regardless of task complexity or prompt style. The three included Claude models demonstrated less impressive performance. claude-3-opus followed the linear trend very loosely, becoming more erratic as task complexity increased. As the tasks required more subtasks, the model consistently misses nuances in the instructions and make logical errors. Both claude-3.5-sonnet models gave poor performance on these tasks, often producing the same error (see SI section A).",
604
+ "bbox": [
605
+ 109,
606
+ 290,
607
+ 883,
608
+ 402
609
+ ],
610
+ "page_idx": 5
611
+ },
612
+ {
613
+ "type": "text",
614
+ "text": "3.3 MDCrow Comparison",
615
+ "text_level": 1,
616
+ "bbox": [
617
+ 112,
618
+ 438,
619
+ 310,
620
+ 454
621
+ ],
622
+ "page_idx": 5
623
+ },
624
+ {
625
+ "type": "text",
626
+ "text": "We also compared MDCrow to two baselines: a ReAct [35] agent with only a Python REPL tool and a single-query LLM. MDCrow and the baselines were tested on the same 25 prompts as previously mentioned, all using gpt-4o. We use different system prompts to accommodate each framework, guiding the LLM to utilize common packages with MDCrow, and these prompts can be found in SI section B.",
627
+ "bbox": [
628
+ 109,
629
+ 473,
630
+ 883,
631
+ 529
632
+ ],
633
+ "page_idx": 5
634
+ },
635
+ {
636
+ "type": "text",
637
+ "text": "The single-query LLM is asked to complete the prompt by writing the code for all subtasks, not unlike what standalone ChatGPT would be asked to do. We then execute the code ourselves and evaluate the outcomes accordingly. ReAct with Python REPL can write and execute codes using a chain-of-thought framework. We find that MDCrow outperforms the two baselines significantly, as shown in Figure 5A, on attempting all subtasks and achieving an accurate solution. Not surprisingly, the two baseline methods struggled with code syntax errors and incorrect handling of PDB files. There is not a significant difference between the two baselines, indicating that the ReAct framework did not significantly boost the model's robustness.",
638
+ "bbox": [
639
+ 109,
640
+ 535,
641
+ 883,
642
+ 632
643
+ ],
644
+ "page_idx": 5
645
+ },
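For concreteness, a ReAct agent whose only tool is a Python REPL can be assembled from off-the-shelf LangChain [57] components roughly as below. The exact imports and constructors vary across LangChain versions, so this is an illustrative assumption rather than the authors' implementation.

```python
# Rough sketch of the ReAct-with-Python-REPL baseline. Imports and constructors
# depend on the installed LangChain version; treat these names as assumptions.
from langchain.agents import AgentType, initialize_agent
from langchain_experimental.tools import PythonREPLTool
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-2024-08-06")
agent = initialize_agent(
    tools=[PythonREPLTool()],                     # the single tool allowed
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # ReAct-style thought/action loop
    verbose=True,
)
agent.run("Download the PDB file for protein 1LYZ.")  # prompt 2 from Table 1
```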
646
+ {
647
+ "type": "text",
648
+ "text": "In Figure 5B, we observe that the performance of all three methods generally declines as task complexity increases. However, both baseline methods drop to zero after just three steps, with performance then fluctuating erratically at higher complexities. This is not surprising, as proper file processing and simulation setup are crucial for optimal LLM performance in MD tasks. In contrast, MDCrow demonstrates greater robustness and reliability in handling complex tasks, thanks to its well-designed system for accurate file processing and simulation setup, as well as its ability to dynamically adjust to errors.",
649
+ "bbox": [
650
+ 109,
651
+ 638,
652
+ 883,
653
+ 723
654
+ ],
655
+ "page_idx": 5
656
+ },
657
+ {
658
+ "type": "text",
659
+ "text": "3.4 MDCrow Extrapolation through Chatting",
660
+ "text_level": 1,
661
+ "bbox": [
662
+ 109,
663
+ 758,
664
+ 447,
665
+ 776
666
+ ],
667
+ "page_idx": 5
668
+ },
669
+ {
670
+ "type": "text",
671
+ "text": "We further show MDCrow's ability to harness its chatting feature and extrapolate outside of its toolset to complete new tasks. This task requires MDCrow to perform an annealing simulation, which is not part of the current toolset. The agent achieves this by first setting up a simulation to find appropriate system parameters and handle possible early errors. Then, the agent modifies the script according to the user's request. Once the simulation is complete, the user later asks for simulation analyses, shown in Figures 6A, B.",
672
+ "bbox": [
673
+ 109,
674
+ 792,
675
+ 883,
676
+ 864
677
+ ],
678
+ "page_idx": 5
679
+ },
680
+ {
681
+ "type": "text",
682
+ "text": "This shows that MDCrow has the ability to generalize outside of its toolset and is capable of completing more complicated and/or user-specific simulations. By utilizing the chatting feature, users can walk MDCrow through new analyses, reducing the risk of catastrophic mistakes.",
683
+ "bbox": [
684
+ 109,
685
+ 869,
686
+ 883,
687
+ 912
688
+ ],
689
+ "page_idx": 5
690
+ },
691
+ {
692
+ "type": "page_number",
693
+ "text": "6",
694
+ "bbox": [
695
+ 491,
696
+ 936,
697
+ 504,
698
+ 946
699
+ ],
700
+ "page_idx": 5
701
+ },
702
+ {
703
+ "type": "image",
704
+ "img_path": "images/38f1a768abc19d52626aff86c4fac9a89ee426ddd4c4e7f82eea16752e8f67f6.jpg",
705
+ "image_caption": [
706
+ "A."
707
+ ],
708
+ "image_footnote": [],
709
+ "bbox": [
710
+ 127,
711
+ 103,
712
+ 339,
713
+ 260
714
+ ],
715
+ "page_idx": 6
716
+ },
717
+ {
718
+ "type": "image",
719
+ "img_path": "images/558151c04e38e8dfe4fbf77267ed122bef10f5955488af9720a1499909f98779.jpg",
720
+ "image_caption": [
721
+ "B."
722
+ ],
723
+ "image_footnote": [],
724
+ "bbox": [
725
+ 383,
726
+ 107,
727
+ 566,
728
+ 263
729
+ ],
730
+ "page_idx": 6
731
+ },
732
+ {
733
+ "type": "image",
734
+ "img_path": "images/60b4bd95ec0760d34d412d3545793ae62a5534e78223af8cd138c4531ce08b67.jpg",
735
+ "image_caption": [
736
+ "C."
737
+ ],
738
+ "image_footnote": [],
739
+ "bbox": [
740
+ 599,
741
+ 107,
742
+ 883,
743
+ 271
744
+ ],
745
+ "page_idx": 6
746
+ },
747
+ {
748
+ "type": "image",
749
+ "img_path": "images/db6e3b20c32c1e8c056d5e807e1dc104a27434314f9b4be7636aeba81b1a720f.jpg",
750
+ "image_caption": [
751
+ "D.",
752
+ "Figure 4: A. The number of subtasks in each task, categorized by type. Task 1 begins with a single pre-simulation subtask (Download a PDB file) and each subsequent task adds a single subtask, adding to a total of 10 tasks with a maximum of 10 subtasks. B. Example of \"Natural\" and \"Ordered\" prompt style on a three-step prompt. C. The robustness of MDCrow built on each model with both prompt types, measured by coefficient of variation (CV). Lower CV is interpreted as greater consistency. gpt-4o and llama3-405b are the more robust models, as the Claude models have higher CVs. D. Comparison of subtask completion across models and prompt types. In the 9-subtask prompt, gpt-4o encountered an error after only one step and gave up without trying to fix it. In general, gpt-4o and llama3-405b have relatively robust performance with increasing complexity for both prompt types. claude-3-opus struggles with more complex tasks, making more logical errors for complex tasks. The two claude-3.5-sonnet models showed fairly poor performance across this experiment."
753
+ ],
754
+ "image_footnote": [],
755
+ "bbox": [
756
+ 117,
757
+ 318,
758
+ 279,
759
+ 441
760
+ ],
761
+ "page_idx": 6
762
+ },
763
+ {
764
+ "type": "image",
765
+ "img_path": "images/fc0e27d4fbb22e8d6970f408b629c979bbbe7bc66361dd21de87889094014aa0.jpg",
766
+ "image_caption": [
767
+ "N Subtasks Required"
768
+ ],
769
+ "image_footnote": [],
770
+ "bbox": [
771
+ 282,
772
+ 318,
773
+ 428,
774
+ 439
775
+ ],
776
+ "page_idx": 6
777
+ },
778
+ {
779
+ "type": "image",
780
+ "img_path": "images/c4d61200cdf6de27cfe9dc087973adea7cf125e4c97fe3b14e6bb17a691beb78.jpg",
781
+ "image_caption": [],
782
+ "image_footnote": [],
783
+ "bbox": [
784
+ 431,
785
+ 319,
786
+ 576,
787
+ 439
788
+ ],
789
+ "page_idx": 6
790
+ },
791
+ {
792
+ "type": "image",
793
+ "img_path": "images/46fc9bb7157dac093f517960134e7f9009f0c5d2a506540eef9b4ca5c24d070d.jpg",
794
+ "image_caption": [
795
+ "Comparison of Subtasks Completion Across Models and Prompt Type"
796
+ ],
797
+ "image_footnote": [],
798
+ "bbox": [
799
+ 584,
800
+ 308,
801
+ 727,
802
+ 439
803
+ ],
804
+ "page_idx": 6
805
+ },
806
+ {
807
+ "type": "image",
808
+ "img_path": "images/c5ccef0a9c0ac31a1248b132c1023b87a91807b0f267e37cc15289f1ff99cc17.jpg",
809
+ "image_caption": [],
810
+ "image_footnote": [
811
+ "- Ordered Prompt",
812
+ "Natural Prompt"
813
+ ],
814
+ "bbox": [
815
+ 733,
816
+ 308,
817
+ 875,
818
+ 439
819
+ ],
820
+ "page_idx": 6
821
+ },
822
+ {
823
+ "type": "text",
824
+ "text": "4 Discussion",
825
+ "text_level": 1,
826
+ "bbox": [
827
+ 112,
828
+ 645,
829
+ 235,
830
+ 660
831
+ ],
832
+ "page_idx": 6
833
+ },
834
+ {
835
+ "type": "text",
836
+ "text": "Although LLMs' scientific abilities are growing [67-69], they cannot yet independently complete MD workflows, even with a ReAct framework and Python interpreter. However, with frontier LLMs, chain-of-thought, and an expert-curated toolset, MDCrow successfully handles a broad range of tasks. It performs $80\\%$ better than gpt-4o in ReAct workflows at completing subtasks, which is expected due to MD workflows' need for file handling, error management, and real-time data retrieval.",
837
+ "bbox": [
838
+ 109,
839
+ 676,
840
+ 883,
841
+ 746
842
+ ],
843
+ "page_idx": 6
844
+ },
845
+ {
846
+ "type": "text",
847
+ "text": "In some cases, particularly for complex tasks beyond its explicit toolset, MDCrow's performance may improve with human guidance. The system's chatting feature allows users to continue previous conversations, clarify misunderstandings, and guide MDCrow step-by-step through difficult tasks. This adaptability helps MDCrow recover from failures, refine its approach based on user intent, and handle more complex workflows. This suggests that, with more advanced LLM models, targeted feedback, and the addition of specialized tools, MDCrow could tackle an even broader range of tasks. We did not do a full evaluation of MDCrow's capabilities through this chatting feature in this work.",
848
+ "bbox": [
849
+ 109,
850
+ 752,
851
+ 883,
852
+ 835
853
+ ],
854
+ "page_idx": 6
855
+ },
856
+ {
857
+ "type": "text",
858
+ "text": "For all LLMs, task accuracy and subtask completion are affected by task complexity. Interestingly, while gpt-4o can handle multiple steps with low variance, 11ama3-405b is a compelling second best, as an open-source model. Other models, such as gpt-3.5 and claude-3.5-sonnet, struggle with hallucinations or inability to follow multistep instructions. Performance on these models, however, is improved with explicit prompting or model-specific optimization (especially for claude-3.5-sonnet).",
859
+ "bbox": [
860
+ 109,
861
+ 842,
862
+ 883,
863
+ 912
864
+ ],
865
+ "page_idx": 6
866
+ },
867
+ {
868
+ "type": "page_number",
869
+ "text": "7",
870
+ "bbox": [
871
+ 493,
872
+ 935,
873
+ 504,
874
+ 946
875
+ ],
876
+ "page_idx": 6
877
+ },
878
+ {
879
+ "type": "image",
880
+ "img_path": "images/624a3176aa40c5c2bedcf10a999c642da1bef602844179ca1ff1d2b3584d362a.jpg",
881
+ "image_caption": [
882
+ "Figure 5: Performance across LLM Frameworks using the same 25-prompt set: MDCrow, direct LLM with no tools (single-query), and ReAct agent with only Python REPL tool. All use gpt-4o. A. Performance among LLM frameworks measured by whether accuracy and average percentage of subtasks they complete for each of 25 task prompts. MDCrow is significantly better at giving accurate solutions than direct LLM (t-test, $p = 1 \\times 10^{-3}$ ) and ReAct (t-test, $p = 4 \\times 10^{-4}$ ). MDCrow completes significantly more subtasks on average compared to direct LLM (t-test, $p = 1 \\times 10^{-6}$ ) and ReAct (t-test, $p = 6 \\times 10^{-6}$ ). B. Percentage of tasks completed with the respect to LLM framework used and the number of subtasks required for each task. The correlation between accuracy and number of subtasks required is statistically significant, $p = 1 \\times 10^{-3}$ for direct LLM and $p = 1 \\times 10^{-4}$ MDCrow. The p value for ReAct is $p = 7 \\times 10^{-2}$ ."
883
+ ],
884
+ "image_footnote": [],
885
+ "bbox": [
886
+ 114,
887
+ 99,
888
+ 380,
889
+ 262
890
+ ],
891
+ "page_idx": 7
892
+ },
893
+ {
894
+ "type": "image",
895
+ "img_path": "images/f323748f4ffb1a248650f3639f2007839c9ecce9f833d9304229173a5a817c92.jpg",
896
+ "image_caption": [],
897
+ "image_footnote": [],
898
+ "bbox": [
899
+ 398,
900
+ 99,
901
+ 875,
902
+ 261
903
+ ],
904
+ "page_idx": 7
905
+ },
906
+ {
907
+ "type": "text",
908
+ "text": "A.",
909
+ "bbox": [
910
+ 114,
911
+ 438,
912
+ 133,
913
+ 454
914
+ ],
915
+ "page_idx": 7
916
+ },
917
+ {
918
+ "type": "text",
919
+ "text": "User: Run a simulated annealing simulation of the protein 1L2Y in vacuum. First set it up, and then modify the given script to simulate at $303\\mathrm{K}$ for 3 ns (3000ps), then go to 498K and simulate there for 2 ns, then return to $303\\mathrm{K}$ and simulate for 3 extra ns.",
920
+ "bbox": [
921
+ 186,
922
+ 449,
923
+ 550,
924
+ 491
925
+ ],
926
+ "page_idx": 7
927
+ },
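The temperature schedule in this prompt maps onto a simple loop over the integrator temperature in OpenMM. The sketch below assumes a `Simulation` object with a Langevin-type integrator and a 2 fs timestep has already been prepared (the setup step MDCrow performs first); it illustrates the schedule rather than reproducing the script MDCrow generated.

```python
# Annealing schedule from the user prompt: 303 K for 3 ns, 498 K for 2 ns,
# then 303 K for 3 ns. Assumes `simulation` is an openmm.app.Simulation with a
# Langevin-type integrator and a 2 fs timestep (both are assumptions here).
from openmm.unit import kelvin

def run_annealing(simulation, timestep_ps: float = 0.002) -> None:
    schedule = [(303, 3000), (498, 2000), (303, 3000)]   # (temperature K, duration ps)
    for temperature, duration_ps in schedule:
        simulation.integrator.setTemperature(temperature * kelvin)
        simulation.step(int(duration_ps / timestep_ps))  # 3 ns -> 1,500,000 steps
```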
928
+ {
929
+ "type": "image",
930
+ "img_path": "images/5a32943045f29a0b7251add69a9e9773207f0bb3fdf2197884961125e5f124a6.jpg",
931
+ "image_caption": [
932
+ "Figure 6: A. MDCrow simulating annealing. The user directly instructs MDCrow to simulate an annealing simulation of protein 1L2Y. Once the simulation is complete, the user utilizes the chatting feature to ask for further analyses. B. RMSD, RGy, and temperature throughout the simulation, as requested by the user in A."
933
+ ],
934
+ "image_footnote": [],
935
+ "bbox": [
936
+ 133,
937
+ 513,
938
+ 550,
939
+ 736
940
+ ],
941
+ "page_idx": 7
942
+ },
943
+ {
944
+ "type": "text",
945
+ "text": "B.",
946
+ "bbox": [
947
+ 575,
948
+ 438,
949
+ 591,
950
+ 454
951
+ ],
952
+ "page_idx": 7
953
+ },
954
+ {
955
+ "type": "image",
956
+ "img_path": "images/504622491ec595b93e404188e1519be61281bd117771428c45bfc9dc661cb7c4.jpg",
957
+ "image_caption": [],
958
+ "image_footnote": [],
959
+ "bbox": [
960
+ 593,
961
+ 445,
962
+ 879,
963
+ 739
964
+ ],
965
+ "page_idx": 7
966
+ },
967
+ {
968
+ "type": "text",
969
+ "text": "These tasks were focused on routine applications of MD with short simulation runtimes, limited to proteins, common solvents, and force fields included in the OpenMM package. We did not explore small-molecule force fields, especially related to ligand binding. Future work could explore multi-modal approaches [70, 71] for tasks like convergence analysis or plot interpretations. The current framework relies on human-created tools, but as LLM-agent systems become more autonomous [72], careful evaluation and benchmarking will be essential.",
970
+ "bbox": [
971
+ 109,
972
+ 842,
973
+ 883,
974
+ 912
975
+ ],
976
+ "page_idx": 7
977
+ },
978
+ {
979
+ "type": "page_number",
980
+ "text": "8",
981
+ "bbox": [
982
+ 493,
983
+ 935,
984
+ 503,
985
+ 946
986
+ ],
987
+ "page_idx": 7
988
+ },
989
+ {
990
+ "type": "text",
991
+ "text": "5 Conclusion",
992
+ "text_level": 1,
993
+ "bbox": [
994
+ 116,
995
+ 89,
996
+ 240,
997
+ 106
998
+ ],
999
+ "page_idx": 8
1000
+ },
1001
+ {
1002
+ "type": "text",
1003
+ "text": "Running and analyzing MD simulations is non-trivial and typically hard to automate. Here, we explored using LLM agents to accomplish this. We built MDCrow, an LLM and environment consisting of over 40 tools purpose built for MD simulation and analysis. We found MDCrow could complete $72\\%$ of the tasks with the optimal settings (gpt-4o). 11ama-405B was able to complete $68\\%$ , providing a compelling open-source model. The best models were relatively robust to how the instructions are given, although weaker models struggle with unstructured instructions. Simply using an LLM with a python interpreter and required packages installed had a $28\\%$ accuracy. The performance of MDCrow was relatively stable as well, though dependent on the model. Correct assessment of these complex scientific workflows is challenging, and had to be done by hand. Chatting with the simulations, via extended conversations, is even more compelling, but is harder to assess.",
1004
+ "bbox": [
1005
+ 114,
1006
+ 119,
1007
+ 883,
1008
+ 244
1009
+ ],
1010
+ "page_idx": 8
1011
+ },
1012
+ {
1013
+ "type": "text",
1014
+ "text": "This work demonstrates the steps to automate and assess computational scientific workflows. As LLMs continue improving in performance, and better training methods arise for complex tasks like this, we expect LLM agents to be increasingly important for accelerating science. MDCrow, for example, can now automatically assess hypotheses with $72\\%$ accuracy with simulation and can scale-out to thousands of simultaneous tasks. The code and tasks are open source and available at https://github.com/ur-whitelab/MDCrow.",
1015
+ "bbox": [
1016
+ 114,
1017
+ 251,
1018
+ 883,
1019
+ 320
1020
+ ],
1021
+ "page_idx": 8
1022
+ },
1023
+ {
1024
+ "type": "text",
1025
+ "text": "6 Acknowledgments",
1026
+ "text_level": 1,
1027
+ "bbox": [
1028
+ 116,
1029
+ 340,
1030
+ 300,
1031
+ 357
1032
+ ],
1033
+ "page_idx": 8
1034
+ },
1035
+ {
1036
+ "type": "text",
1037
+ "text": "Research reported in this work was supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number R35GM137966, National Science Foundation under grant number of 1751471, Robert L. and Mary L. Sproull Fellowship gift and U.S. Department of Energy, Grant No. DE-SC0023354. Work at FutureHouse is supported by the generosity of Eric and Wendy Schmidt. We thank the Center for Integrated Research Computing (CIRC) at University of Rochester for providing computational resources and technical support.",
1038
+ "bbox": [
1039
+ 114,
1040
+ 371,
1041
+ 883,
1042
+ 441
1043
+ ],
1044
+ "page_idx": 8
1045
+ },
1046
+ {
1047
+ "type": "page_number",
1048
+ "text": "9",
1049
+ "bbox": [
1050
+ 493,
1051
+ 936,
1052
+ 503,
1053
+ 946
1054
+ ],
1055
+ "page_idx": 8
1056
+ },
1057
+ {
1058
+ "type": "text",
1059
+ "text": "References",
1060
+ "text_level": 1,
1061
+ "bbox": [
1062
+ 114,
1063
+ 89,
1064
+ 209,
1065
+ 104
1066
+ ],
1067
+ "page_idx": 9
1068
+ },
1069
+ {
1070
+ "type": "list",
1071
+ "sub_type": "ref_text",
1072
+ "list_items": [
1073
+ "[1] Siddharth Sinha, Benjamin Tam, and San Ming Wang. Applications of molecular dynamics simulation in protein study. Membranes, 12(9):844, August 2022.",
1074
+ "[2] Martin Karplus and J Andrew McCammon. Molecular dynamics simulations of biomolecules. nature structural biology, 9(9), 2002.",
1075
+ "[3] Scott A Hollingsworth and Ron O Dror. Molecular dynamics simulation for all. Neuron, 99(6):1129-1143, 2018.",
1076
+ "[4] Sameer Velankar, Stephen K Burley, Genji Kurisu, Jeffrey C Hoch, and John L Markley. The protein data bank archive. Structural Proteomics: High-Throughput Methods, pages 3–21, 2021.",
1077
+ "[5] Bernard R Brooks, Charles L Brooks III, Alexander D Mackerell Jr, Lennart Nilsson, Robert J Petrella, Benoit Roux, Youngdo Won, Georgios Archontis, Christian Bartels, Stefan Boresch, et al. CHARMM: the biomolecular simulation program. Journal of computational chemistry, 30(10):1545-1614, 2009.",
1078
+ "[6] Jay W Ponder and David A Case. Force fields for protein simulations. Advances in protein chemistry, 66:27-85, 2003.",
1079
+ "[7] Matthew P. Baumgartner and Hongzhou Zhang. Building admiral, an automated molecular dynamics and analysis platform. ACS Medicinal Chemistry Letters, 11(11):2331-2335, November 2020.",
1080
+ "[8] Yoshihiro Hayashi, Junichiro Shiomi, Junko Morikawa, and Ryo Yoshida. RadonPy: automated physical property calculation using all-atom classical molecular dynamics simulations for polymer informatics. npj Computational Materials, 8(1):222, November 2022.",
1081
+ "[9] Harvinder Singh, Anupam Raja, Ajay Prakash, and Bikash Medhi. Gmx_qk: An automated protein protein-ligand complex simulation workflow bridged to MM PBSA, based on gromacs and zenity-dependent GUI for beginners in MD simulation study. Journal of Chemical Information and Modeling, 63(9):2603-2608, May 2023.",
1082
+ "[10] Gudrun Gygli and Juergen Pleiss. Simulation foundry: Automated and F.A.I.R. molecular modeling. Journal of Chemical Information and Modeling, 60(4):1922-1927, April 2020.",
1083
+ "[11] Abeeb Abiodun Yekeen, Olanrewaju Ayodeji Durojaye, Mukhtar Oluwaseun Idris, Hamdalat Folake Muritala, and Rotimi Olusanya Arise. CHAPERONg: A tool for automated GROMACS-based molecular dynamics simulations and trajectory analyses. Computational and Structural Biotechnology Journal, 21:4849-4858, 2023.",
1084
+ "[12] Eduardo H. B. Maia, Lucas Rolim Medaglia, Alisson Marques Da Silva, and Alex G. Taranto. Molecular architect: A user-friendly workflow for virtual screening. ACS Omega, 5(12):6628-6640, March 2020.",
1085
+ "[13] Abir Ganguly, Hsu-Chun Tsai, Mario Fernandez-Pendas, Tai-Sung Lee, Timothy J. Giese, and Darrin M. York. AMBER drug discovery boost tools: Automated workflow for production free-energy simulation setup and analysis (professa). Journal of Chemical Information and Modeling, 62(23):6069-6083, December 2022.",
1086
+ "[14] Celso R. C. Rêgo, Jörg Schaarschmidt, Tobias Schlöder, Montserrat Penaloza-Amion, Saientan Bag, Tobias Neumann, Timo Strunk, and Wolfgang Wenzel. SimStack: An intuitive workflow framework. Frontiers in Materials, 9:877597, May 2022.",
1087
+ "[15] Derek Groen, Agastya P. Bhati, James Suter, James Hetherington, Stefan J. Zasada, and Peter V. Coveney. FabSim: Facilitating computational research through automation on large-scale and distributed e-infrastructures. Computer Physics Communications, 207:375-385, October 2016.",
1088
+ "[16] Luan Carvalho Martins, Elio A. Cino, and Rafaela Salgado Ferreira. PyAutoFEP: An automated free energy perturbation workflow for GROMACS integrating enhanced sampling methods. Journal of Chemical Theory and Computation, 17(7):4262-4273, July 2021.",
1089
+ "[17] Miroslav Suruzhon, Tharindu Senapathi, Michael S. Bodnarchuk, Russell Viner, Ian D. Wall, Christopher B. Barnett, Kevin J. Naidoo, and Jonathan W. Essex. ProtoCaller: Robust automation of binding free energy calculations. Journal of Chemical Information and Modeling, 60(4):1917-1921, April 2020.",
1090
+ "[18] Dmitry Suplatov, Yana Sharapova, and Vytas Švedas. EasyAmber: A comprehensive toolbox to automate the molecular dynamics simulation of proteins. Journal of Bioinformatics and Computational Biology, 18(06):2040011, 2020.",
1091
+ "[19] Leandro Martínez, Ricardo Andrade, Ernesto G Birgin, and José Mario Martínez. PACKMOL: A package for building initial configurations for molecular dynamics simulations. Journal of computational chemistry, 30(13):2157-2164, 2009."
1092
+ ],
1093
+ "bbox": [
1094
+ 112,
1095
+ 113,
1096
+ 883,
1097
+ 912
1098
+ ],
1099
+ "page_idx": 9
1100
+ },
1101
+ {
1102
+ "type": "page_number",
1103
+ "text": "10",
1104
+ "bbox": [
1105
+ 490,
1106
+ 935,
1107
+ 509,
1108
+ 946
1109
+ ],
1110
+ "page_idx": 9
1111
+ },
1112
+ {
1113
+ "type": "list",
1114
+ "sub_type": "ref_text",
1115
+ "list_items": [
1116
+ "[20] Naveen Michaud-Agrawal, Elizabeth J Denning, Thomas B Woolf, and Oliver Beckstein. MDAnalysis: a toolkit for the analysis of molecular dynamics simulations. Journal of computational chemistry, 32(10):2319-2327, 2011.",
1117
+ "[21] Robert T. McGibbon, Kyle A. Beauchamp, Matthew P. Harrigan, Christoph Klein, Jason M. Swails, Carlos X. Hernández, Christian R. Schwantes, Lee-Ping Wang, Thomas J. Lane, and Vijay S. Pande. MDTraj: A modern open library for the analysis of molecular dynamics trajectories. Biophysical Journal, 109(8):1528 - 1532, 2015.",
1118
+ "[22] Peter Eastman, Jason Swails, John D Chodera, Robert T McGibbon, Yutong Zhao, Kyle A Beauchamp, Lee-Ping Wang, Andrew C Simmonett, Matthew P Harrigan, Chaya D Stern, et al. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS computational biology, 13(7):e1005659, 2017.",
1119
+ "[23] Mark James Abraham, Teemu Murtola, Roland Schulz, Szilard Páll, Jeremy C Smith, Berk Hess, and Erik Lindahl. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. *SoftwareX*, 1:19–25, 2015.",
1120
+ "[24] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in 't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comp. Phys. Comm., 271:108171, 2022.",
1121
+ "[25] G Goret, B Aoun, and Eric Pellegrini. MDANSE: An interactive analysis environment for molecular dynamics simulations. Journal of chemical information and modeling, 57(1):1-5, 2017.",
1122
+ "[26] João Vieira Ribeiro, Rafael C Bernardi, Till Rudack, Klaus Schulten, and Emad Tajkhorshid. QwikMD - gateway for easy simulation with VMD and NAMD. Biophysical Journal, 114(3):673a-674a, 2018.",
1123
+ "[27] Victor H Rusu, Vitor AC Horta, Bruno AC Horta, Roberto D Lins, and Riccardo Baron. MDWiZ: a platform for the automated translation of molecular dynamics simulations. Journal of Molecular Graphics and Modelling, 48:80-86, 2014.",
1124
+ "[28] Peter W Hildebrand, Alexander S Rose, and Johanna KS Tiemann. Bringing molecular dynamics simulation data into view. Trends in Biochemical Sciences, 44(11):902-913, 2019.",
1125
+ "[29] Xevi Biarnés, Fabio Pietrucci, Fabrizio Marinelli, and Alessandro Laio. METAGUI. a VMD interface for analyzing metadynamics and molecular dynamics simulations. Computer Physics Communications, 183(1):203-211, 2012.",
1126
+ "[30] William Humphrey, Andrew Dalke, and Klaus Schulten. VMD: visual molecular dynamics. Journal of molecular graphics, 14(1):33-38, 1996.",
1127
+ "[31] Diamantis Sellis, Dimitrios Vlachakis, and Metaxia Vlassi. Gromita: a fully integrated graphical user interface to gromacs 4. Bioinformatics and biology insights, 3:BBI-S3207, 2009.",
1128
+ "[32] Gerard Martínez-Rosell, Toni Giorgino, and Gianni De Fabritiis. PlayMolecule ProteinPrepare: a web application for protein preparation for molecular dynamics simulations. Journal of chemical information and modeling, 57(7):1511-1516, 2017.",
1129
+ "[33] Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.",
1130
+ "[34] Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445, 2022.",
1131
+ "[35] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.",
1132
+ "[36] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024.",
1133
+ "[37] Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in bioinformatics, 23(6):bbac409, 2022.",
1134
+ "[38] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. Nature, 620(7972):172-180, 2023."
1135
+ ],
1136
+ "bbox": [
1137
+ 112,
1138
+ 90,
1139
+ 883,
1140
+ 912
1141
+ ],
1142
+ "page_idx": 10
1143
+ },
1144
+ {
1145
+ "type": "page_number",
1146
+ "text": "11",
1147
+ "bbox": [
1148
+ 490,
1149
+ 935,
1150
+ 506,
1151
+ 946
1152
+ ],
1153
+ "page_idx": 10
1154
+ },
1155
+ {
1156
+ "type": "list",
1157
+ "sub_type": "ref_text",
1158
+ "list_items": [
1159
+ "[39] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with AI agents. Cell, 187:6125-6151, Oct 2024.",
1160
+ "[40] Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Aug-mentation large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024.",
1161
+ "[41] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570-578, 2023.",
1162
+ "[42] Juan Luis Villarreal-Haro, Remy Gardier, Erick J Canales-Rodriguez, Elda Fischi Gomez, Gabriel Girard, Jean-Philippe Thiran, and Jonathan Rafael-Patino. CACTUS: A computational framework for generating realistic white matter microstructure substrates, 2023.",
1163
+ "[43] Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D Bocarsly, Andres M Bran, Stefan Bringuier, L Catherine Brinson, Kamal Choudhary, Defne Circi, et al. 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon. Digital Discovery, 2(5):1233-1250, 2023.",
1164
+ "[44] Yuming Su, Xue Wang, Yuanxiang Ye, Yibo Xie, Yujing Xu, Yibing Jiang, and Cheng Wang. Automation and machine learning augmented by large language models in catalysis study. Chemical Science, 2024.",
1165
+ "[45] Yuan Chiang, Chia-Hong Chou, and Janosh Riebesell. LLaMP: Large language model made powerful for high-fidelity materials knowledge retrieval and distillation. arXiv preprint arXiv:2401.17244, 2024.",
1166
+ "[46] Seongmin Kim, Yousung Jung, and Joshua Schrier. Large language models for inorganic synthesis predictions. Journal of the American Chemical Society, 2024.",
1167
+ "[47] Wonseok Lee, Yeonghun Kang, Taeun Bae, and Jihan Kim. Harnessing large language model to collect and analyze metal-organic framework property dataset. arXiv preprint arXiv:2404.13053, 2024.",
1168
+ "[48] Michael D Skarlinski, Sam Cox, Jon M Laurent, James D Braza, Michaela Hinks, Michael J Hammerling, Manvitha Ponnapati, Samuel G Rodriques, and Andrew D White. Language agents achieve superhuman synthesis of scientific knowledge. arXiv preprint arXiv:2409.13740, 2024.",
1169
+ "[49] Michael H. Prince, Henry Chan, Aikaterini Vriza, Tao Zhou, Varuni K. Sastry, Matthew T. Dearing, Ross J. Harder, Rama K. Vasudevan, and Mathew J. Cherukara. Opportunities for retrieval and tool augmented large language models in scientific facilities, 2023.",
1170
+ "[50] Odhran O'Donoghue, Aleksandar Shtedritski, John Ginger, Ralph Abboud, Ali Essa Ghareeb, Justin Booth, and Samuel G Rodriques. Bioplanner: Automatic evaluation of llms on protocol planning in biology, 2023.",
1171
+ "[51] Kaixuan Huang, Yuanhao Qu, Henry Cousins, William A. Johnson, Di Yin, Mihir Shah, Denny Zhou, Russ Altman, Mengdi Wang, and Le Cong. CRISPR-GPT: An LLM agent for automated design of gene-editing experiments, 2024.",
1172
+ "[52] Yixiang Ruan, Chenyin Lu, Ning Xu, Jian Zhang, Jun Xuan, Jianzhang Pan, Qun Fang, Hanyu Gao, Xiaodong Shen, Ning Ye, and et al. Accelerated end-to-end chemical synthesis development with large language models. ChemRxiv, 2024.",
1173
+ "[53] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metalorganic frameworks using large language models. Nature Communications, 15(1):4705, June 2024.",
1174
+ "[54] Tianyidan Xie, Rui Ma, Qian Wang, Xiaogian Ye, Feixuan Liu, Ying Tai, Zhenyu Zhang, and Zili Yi. Anywhere: A multi-agent framework for reliable and diverse foreground-conditioned image inpainting, 2024.",
1175
+ "[55] A. Ghafarollahi and M. J. Buehler. ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning, 2024.",
1176
+ "[56] Mayk Caldas Ramos, Christopher J Collison, and Andrew D White. A review of large language models and autonomous agents in chemistry. Chemical Science, 2025.",
1177
+ "[57] Harrison Chase. LangChain, 10 2022.",
1178
+ "[58] The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2023. Nucleic Acids Research, 51(D1):D523–D531, 11 2022.",
1179
+ "[59] Molstar Developers. molrender. https://github.com/molstar/molrender, 2019. Accessed: 2025-02-10.",
1180
+ "[60] Hai Nguyen, David A Case, and Alexander S Rose. NGLview–interactive molecular graphics for Jupyter notebooks. Bioinformatics, 34(7):1241–1242, 2018."
1181
+ ],
1182
+ "bbox": [
1183
+ 112,
1184
+ 90,
1185
+ 883,
1186
+ 912
1187
+ ],
1188
+ "page_idx": 11
1189
+ },
1190
+ {
1191
+ "type": "page_number",
1192
+ "text": "12",
1193
+ "bbox": [
1194
+ 490,
1195
+ 935,
1196
+ 509,
1197
+ 946
1198
+ ],
1199
+ "page_idx": 11
1200
+ },
1201
+ {
1202
+ "type": "list",
1203
+ "sub_type": "ref_text",
1204
+ "list_items": [
1205
+ "[61] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.",
1206
+ "[62] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.",
1207
+ "[63] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.",
1208
+ "[64] Fireworks AI, Inc. [Accessed 24-01-2025].",
1209
+ "[65] www-cdn.anthropic.com. https://www-cdn.anthropic.com/fed9cc193a14b84131812372d8d5857f8f304c52/Model_Card_Claude_3_Addendum.pdf. [Accessed 10-01-2025].",
1210
+ "[66] Anthropic. The Claude 3 model family: Opus, Sonnet, Haiku, Mar 2024.",
1211
+ "[67] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.",
1212
+ "[68] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.",
1213
+ "[69] Jon M Laurent, Joseph D Janizek, Michael Ruzo, Michaela M Hinks, Michael J Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D White, and Samuel G Rodriques. LAB-Bench: Measuring capabilities of language models for biology research. arXiv preprint arXiv:2407.10362, 2024.",
1214
+ "[70] Chenyu Wang, Weixin Luo, Qianyu Chen, Haonan Mai, Jindi Guo, Sixun Dong, Xiaohua, Xuan, Zhengxin Li, Lin Ma, and Shenghua Gao. MLLM-Tool: A multimodal large language model for tool agent learning, 2024.",
1215
+ "[71] Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. AssistGPT: A general multi-modal assistant that can plan, execute, inspect, and learn, 2023.",
1216
+ "[72] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023."
1217
+ ],
1218
+ "bbox": [
1219
+ 112,
1220
+ 90,
1221
+ 929,
1222
+ 534
1223
+ ],
1224
+ "page_idx": 12
1225
+ },
1226
+ {
1227
+ "type": "page_number",
1228
+ "text": "13",
1229
+ "bbox": [
1230
+ 490,
1231
+ 935,
1232
+ 508,
1233
+ 946
1234
+ ],
1235
+ "page_idx": 12
1236
+ },
1237
+ {
1238
+ "type": "text",
1239
+ "text": "Supplemental Information",
1240
+ "text_level": 1,
1241
+ "bbox": [
1242
+ 112,
1243
+ 89,
1244
+ 339,
1245
+ 107
1246
+ ],
1247
+ "page_idx": 13
1248
+ },
1249
+ {
1250
+ "type": "text",
1251
+ "text": "A Claude-Specific Engineering",
1252
+ "text_level": 1,
1253
+ "bbox": [
1254
+ 112,
1255
+ 132,
1256
+ 390,
1257
+ 150
1258
+ ],
1259
+ "page_idx": 13
1260
+ },
1261
+ {
1262
+ "type": "text",
1263
+ "text": "While both of Claude's Sonnet models achieved poor performance during the robustness experiment, it can be noted that a single common error arose consistently. When running an NPT simulation, MDCrow requires that all parameters be passed to the simulation tool. However, both Sonnet models consistently neglected to provide a value for pressure, even when directly prompted to do so. The claude-3-opus made this mistake a single time. This is a relatively simple fix, providing MDCrow with a default pressure of 1 atm when no pressure is passed.",
1264
+ "bbox": [
1265
+ 109,
1266
+ 172,
1267
+ 883,
1268
+ 243
1269
+ ],
1270
+ "page_idx": 13
1271
+ },
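In OpenMM terms, the fix amounts to falling back to 1 atm when the pressure argument is missing from an NPT setup, roughly as sketched below. The function name and signature are illustrative, not MDCrow's actual tool code.

```python
# Fall back to a 1 atm default when the calling LLM omits the pressure for an
# NPT run. The function name and signature are illustrative, not MDCrow's code.
from openmm import MonteCarloBarostat
from openmm.unit import atmosphere, kelvin

def add_npt_barostat(system, temperature_k: float, pressure_atm=None):
    if pressure_atm is None:
        pressure_atm = 1.0  # default pressure discussed above
    system.addForce(MonteCarloBarostat(pressure_atm * atmosphere,
                                       temperature_k * kelvin))
    return system
```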
1272
+ {
1273
+ "type": "image",
1274
+ "img_path": "images/de4b5fa05febaec849febcb28553e928c7398973d7c1535168db5a068bb08e1d.jpg",
1275
+ "image_caption": [
1276
+ "N Subtasks Completed"
1277
+ ],
1278
+ "image_footnote": [],
1279
+ "bbox": [
1280
+ 184,
1281
+ 335,
1282
+ 411,
1283
+ 506
1284
+ ],
1285
+ "page_idx": 13
1286
+ },
1287
+ {
1288
+ "type": "image",
1289
+ "img_path": "images/99718eb29aaae0e531c85e4b25828736b4ea574d49480b87dc2d4e1e3aa1c931.jpg",
1290
+ "image_caption": [
1291
+ "Subtask Completion for Claude Models"
1292
+ ],
1293
+ "image_footnote": [],
1294
+ "bbox": [
1295
+ 418,
1296
+ 323,
1297
+ 625,
1298
+ 505
1299
+ ],
1300
+ "page_idx": 13
1301
+ },
1302
+ {
1303
+ "type": "image",
1304
+ "img_path": "images/9037b485eaab7ce07d0fb76c10d216c9cddaa949938771f443a59cd8bdb201c5.jpg",
1305
+ "image_caption": [],
1306
+ "image_footnote": [],
1307
+ "bbox": [
1308
+ 630,
1309
+ 323,
1310
+ 841,
1311
+ 506
1312
+ ],
1313
+ "page_idx": 13
1314
+ },
1315
+ {
1316
+ "type": "image",
1317
+ "img_path": "images/d3f84d1b3a893596d88bbdaa27bdc24ca4b6983db7ebfc62a189c6440f741b60.jpg",
1318
+ "image_caption": [
1319
+ "Figure 7: Performance of MDCrow with three Claude models on 10 tasks. As the number of subtasks increase, we all subtasks completed for both prompt types. The top row shows MDCrow's performance as-is, and the bottom row shows MDCrow's performance when given a direct fix for missing parameters. There is a clear change in performance after the fix for both claude-3.5-sonnet-20241022 and claude-3.5-sonnet-20240620."
1320
+ ],
1321
+ "image_footnote": [],
1322
+ "bbox": [
1323
+ 184,
1324
+ 544,
1325
+ 411,
1326
+ 713
1327
+ ],
1328
+ "page_idx": 13
1329
+ },
1330
+ {
1331
+ "type": "image",
1332
+ "img_path": "images/0b4669ac14c016fd78ba8e13ccafec341245b05fbb63b172b7fb7f5d29187f1f.jpg",
1333
+ "image_caption": [],
1334
+ "image_footnote": [],
1335
+ "bbox": [
1336
+ 416,
1337
+ 532,
1338
+ 625,
1339
+ 728
1340
+ ],
1341
+ "page_idx": 13
1342
+ },
1343
+ {
1344
+ "type": "image",
1345
+ "img_path": "images/0501fdaa67e983c79bb2e248c6d63eb6877a05f7b7be45239322c1e92c929c24.jpg",
1346
+ "image_caption": [],
1347
+ "image_footnote": [
1348
+ "- Ordered Prompt\n- Natural Prompt"
1349
+ ],
1350
+ "bbox": [
1351
+ 630,
1352
+ 532,
1353
+ 841,
1354
+ 713
1355
+ ],
1356
+ "page_idx": 13
1357
+ },
1358
+ {
1359
+ "type": "text",
1360
+ "text": "As can be seen in Figure 7, including this fix drastically improves the performance of these models, with performance comparable to the top models. However, no other models made this mistake, and no other model-specific optimization was conducted. Thus, for all experiments shown in this paper, MDCrow does not accommodate this Claude-specific missing parameter fix.",
1361
+ "bbox": [
1362
+ 109,
1363
+ 854,
1364
+ 883,
1365
+ 912
1366
+ ],
1367
+ "page_idx": 13
1368
+ },
1369
+ {
1370
+ "type": "page_number",
1371
+ "text": "14",
1372
+ "bbox": [
1373
+ 490,
1374
+ 935,
1375
+ 508,
1376
+ 946
1377
+ ],
1378
+ "page_idx": 13
1379
+ },
1380
+ {
1381
+ "type": "text",
1382
+ "text": "B Prompts",
1383
+ "text_level": 1,
1384
+ "bbox": [
1385
+ 116,
1386
+ 88,
1387
+ 222,
1388
+ 104
1389
+ ],
1390
+ "page_idx": 14
1391
+ },
1392
+ {
1393
+ "type": "text",
1394
+ "text": "MDCrow Prompt",
1395
+ "text_level": 1,
1396
+ "bbox": [
1397
+ 114,
1398
+ 111,
1399
+ 240,
1400
+ 126
1401
+ ],
1402
+ "page_idx": 14
1403
+ },
1404
+ {
1405
+ "type": "text",
1406
+ "text": "You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem to the best of your ability using the provided tools.",
1407
+ "bbox": [
1408
+ 116,
1409
+ 148,
1410
+ 841,
1411
+ 174
1412
+ ],
1413
+ "page_idx": 14
1414
+ },
1415
+ {
1416
+ "type": "text",
1417
+ "text": "You can only respond with a single complete 'Thought, Action, Action Input' format OR a single 'Final Answer' format.",
1418
+ "bbox": [
1419
+ 116,
1420
+ 186,
1421
+ 841,
1422
+ 210
1423
+ ],
1424
+ "page_idx": 14
1425
+ },
1426
+ {
1427
+ "type": "code",
1428
+ "sub_type": "code",
1429
+ "code_caption": [],
1430
+ "code_body": "Complete format: \nThought: (reflect on your progress and decide what to do next) \nAction: \n{ \"action\": (the action name, it should be the name of a tool), \"action_input\": (the input string for the action) \n}",
1431
+ "guess_lang": "txt",
1432
+ "bbox": [
1433
+ 114,
1434
+ 224,
1435
+ 612,
1436
+ 333
1437
+ ],
1438
+ "page_idx": 14
1439
+ },
1440
+ {
1441
+ "type": "text",
1442
+ "text": "OR",
1443
+ "bbox": [
1444
+ 116,
1445
+ 351,
1446
+ 133,
1447
+ 359
1448
+ ],
1449
+ "page_idx": 14
1450
+ },
1451
+ {
1452
+ "type": "text",
1453
+ "text": "Final Answer: (the final response to the original input question, once all steps are complete)",
1454
+ "bbox": [
1455
+ 116,
1456
+ 376,
1457
+ 540,
1458
+ 401
1459
+ ],
1460
+ "page_idx": 14
1461
+ },
1462
+ {
1463
+ "type": "text",
1464
+ "text": "You are required to use the tools provided, using the most specific tool available for each action. Your final answer should contain all information necessary to answer the question and its subquestions. Before you finish, reflect on your progress and make sure you have addressed the question in its entirety.",
1465
+ "bbox": [
1466
+ 116,
1467
+ 414,
1468
+ 864,
1469
+ 463
1470
+ ],
1471
+ "page_idx": 14
1472
+ },
1473
+ {
1474
+ "type": "text",
1475
+ "text": "If you are asked to continue or reference previous runs, the context will be provided to you. If context is provided, you should assume you are continuing a chat.",
1476
+ "bbox": [
1477
+ 116,
1478
+ 476,
1479
+ 854,
1480
+ 501
1481
+ ],
1482
+ "page_idx": 14
1483
+ },
1484
+ {
1485
+ "type": "text",
1486
+ "text": "Here is the input: \nPrevious Context: {context} \nQuestion: {input}",
1487
+ "bbox": [
1488
+ 114,
1489
+ 513,
1490
+ 323,
1491
+ 551
1492
+ ],
1493
+ "page_idx": 14
1494
+ },
1495
+ {
1496
+ "type": "page_number",
1497
+ "text": "15",
1498
+ "bbox": [
1499
+ 491,
1500
+ 935,
1501
+ 506,
1502
+ 946
1503
+ ],
1504
+ "page_idx": 14
1505
+ },
1506
+ {
1507
+ "type": "text",
1508
+ "text": "During the comparison study between MDCrow, GPT-only, and ReAct with Python REPL tool, we used different system prompts for each of these LLM frameworks.",
1509
+ "bbox": [
1510
+ 116,
1511
+ 85,
1512
+ 885,
1513
+ 114
1514
+ ],
1515
+ "page_idx": 15
1516
+ },
1517
+ {
1518
+ "type": "text",
1519
+ "text": "Direct-LLM Prompt",
1520
+ "text_level": 1,
1521
+ "bbox": [
1522
+ 114,
1523
+ 116,
1524
+ 261,
1525
+ 130
1526
+ ],
1527
+ "page_idx": 15
1528
+ },
1529
+ {
1530
+ "type": "text",
1531
+ "text": "You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem in its entirety to the best of your ability. If any part of the task requires you to perform an action that you are not capable of completing, please write a runnable Python script for that step and move on. For literature papers, use and process papers from the 'paper.collection' folder. For .pdb files, download them from the RSCB website using 'requests'. To preprocess PDB files, you will use PDBFixer. To get information about proteins, retrieve data from the UniProt database. For anything related to simulations, you will use OpenMM, and for anything related to analyses, you will use MDTraj. At the end, combine any scripts into one script.",
1532
+ "bbox": [
1533
+ 116,
1534
+ 152,
1535
+ 879,
1536
+ 253
1537
+ ],
1538
+ "page_idx": 15
1539
+ },
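The packages named in this prompt cover the full pre-simulation path. A short sketch of the download-and-clean step the baseline is expected to write is shown below; the RCSB download URL pattern and the pH of 7.0 are common defaults assumed here, not part of the prompt.

```python
# Download a PDB file with `requests` and preprocess it with PDBFixer, as the
# baseline prompt instructs. The URL pattern and pH 7.0 are assumed defaults.
import requests
from pdbfixer import PDBFixer
from openmm.app import PDBFile

def fetch_and_clean(pdb_id: str, out_path: str) -> None:
    raw_path = f"{pdb_id}.pdb"
    resp = requests.get(f"https://files.rcsb.org/download/{pdb_id}.pdb", timeout=30)
    resp.raise_for_status()
    with open(raw_path, "w") as f:
        f.write(resp.text)

    fixer = PDBFixer(filename=raw_path)   # fill in missing residues/atoms/hydrogens
    fixer.findMissingResidues()
    fixer.findMissingAtoms()
    fixer.addMissingAtoms()
    fixer.addMissingHydrogens(7.0)
    with open(out_path, "w") as f:
        PDBFile.writeFile(fixer.topology, fixer.positions, f)

fetch_and_clean("1LYZ", "1LYZ_clean.pdb")
```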
1540
+ {
1541
+ "type": "text",
1542
+ "text": "ReAct Agent Prompt",
1543
+ "text_level": 1,
1544
+ "bbox": [
1545
+ 116,
1546
+ 291,
1547
+ 264,
1548
+ 305
1549
+ ],
1550
+ "page_idx": 15
1551
+ },
1552
+ {
1553
+ "type": "text",
1554
+ "text": "You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem to the best of your ability. If any part of the task requires you to perform an action that you are not capable of completing, please write a runnable Python script for that step and run it. For literature papers, use and process papers from the 'paper.collection' folder. For .pdb files, download them from the RSCB website using 'requests'. TO preprocess PDB files, you will use PDBFixer. To get information about proteins, retrieve data from the UniProt database. For anything related to simulations, you will use OpenMM, and for anything related to analyzes, you will use MDTraj.",
1555
+ "bbox": [
1556
+ 116,
1557
+ 328,
1558
+ 875,
1559
+ 429
1560
+ ],
1561
+ "page_idx": 15
1562
+ },
1563
+ {
1564
+ "type": "text",
1565
+ "text": "You can only respond with a single complete 'Thought, Action, Action Input' format OR a single 'Final Answer' format.",
1566
+ "bbox": [
1567
+ 116,
1568
+ 441,
1569
+ 841,
1570
+ 465
1571
+ ],
1572
+ "page_idx": 15
1573
+ },
1574
+ {
1575
+ "type": "code",
1576
+ "sub_type": "code",
1577
+ "code_caption": [],
1578
+ "code_body": "Complete format: \nThought: (reflect on your progress and decide what to do next) \nAction: \n{ \"action\": (the action name, it should be the name of a tool), \"action_input\": (the input string for the action) \n} \nOR \nFinal Answer: (the final response to the original input question, once all steps are complete)",
1579
+ "guess_lang": "txt",
1580
+ "bbox": [
1581
+ 112,
1582
+ 479,
1583
+ 614,
1584
+ 656
1585
+ ],
1586
+ "page_idx": 15
1587
+ },
1588
+ {
1589
+ "type": "text",
1590
+ "text": "You are required to use the tools provided, using the most specific tool available for each action. Your final answer should contain all information necessary to answer the question and its subquestions. Before you finish, reflect on your progress and make sure you have addressed the question in its entirety.",
1591
+ "bbox": [
1592
+ 116,
1593
+ 669,
1594
+ 857,
1595
+ 719
1596
+ ],
1597
+ "page_idx": 15
1598
+ },
1599
+ {
1600
+ "type": "text",
1601
+ "text": "Here is the input: Question: {input}",
1602
+ "bbox": [
1603
+ 116,
1604
+ 732,
1605
+ 253,
1606
+ 757
1607
+ ],
1608
+ "page_idx": 15
1609
+ },
1610
+ {
1611
+ "type": "text",
1612
+ "text": "C Task Prompts & References Used in Experiments",
1613
+ "text_level": 1,
1614
+ "bbox": [
1615
+ 116,
1616
+ 808,
1617
+ 562,
1618
+ 824
1619
+ ],
1620
+ "page_idx": 15
1621
+ },
1622
+ {
1623
+ "type": "page_number",
1624
+ "text": "16",
1625
+ "bbox": [
1626
+ 491,
1627
+ 936,
1628
+ 508,
1629
+ 946
1630
+ ],
1631
+ "page_idx": 15
1632
+ },
1633
+ {
1634
+ "type": "table",
1635
+ "img_path": "images/82379c3f6a986e3052d9c38fc9be2911370daaf3e64bdbb088bbe1d49c040119.jpg",
1636
+ "table_caption": [
1637
+ "Table 1: Details of 25 task prompts used in experiments"
1638
+ ],
1639
+ "table_footnote": [],
1640
+ "table_body": "<table><tr><td>Prompt ID</td><td>Prompt</td><td># sub-tasks</td><td>List of required subtasks</td></tr><tr><td>1</td><td>Simulate PDB ID 1MBN at two different temperatures: 300 K and 400 K for 1 ns each. Plot the RMSD of both over time and compare the final secondary structures at the end of the simulations. Get information about this protein, such as the number of residues and chains, etc.</td><td>8</td><td>Download PDB, simulate (x2), RMSD (x2), DSSP (x2), summarize_protein</td></tr><tr><td>2</td><td>Download the PDB file for protein 1LYZ.</td><td>1</td><td>Download PDB</td></tr><tr><td>3</td><td>Download the PDB file for protein 1GZX. Then, analyze the secondary structure of the protein and provide information on how many helices, sheets, and other components are present. Get the gene names for this protein.</td><td>3</td><td>Download PDB, DSSP, Get-ProteinFunction (or literature)</td></tr><tr><td>4</td><td>What are the common parameters used to simulate fibronectin?</td><td>1</td><td>literature search</td></tr><tr><td>5</td><td>Simulate 1VII for 1 ns at a temperature of 300 K. Then, tell me if the secondary structure changed from the beginning of the simulation to the end of the simulation.</td><td>5</td><td>Download PDB, simulate, DSSP before, DSSP after, comparison</td></tr><tr><td>6</td><td>Simulate 1A3N and 7VDE (two PDB IDs matching hemoglobin) with identical parameters. Find the appropriate parameters for simulating hemoglobin from the literature. Then, plot the radius of gyration throughout both simulations.</td><td>7</td><td>Download PDB (x2), literature, simulate (x2), RGy (x2)</td></tr><tr><td>7</td><td>Simulate 1ZNI for 1 ns at a temperature of 300 K in water. Then, simulate it again in acetonitrile. Compute the RMSD, final secondary structure, and PCA for each simulation.</td><td>10</td><td>Download PDB, simulate (x2), packmol, RMSD (x2), SASA (x2), PCA (x2)</td></tr><tr><td>8</td><td>Simulate 4RMB at 100K, 200K, and 300K. Then, for each simulation, plot the radius of gyration over time and compare the secondary structure before and after the simulation.</td><td>10</td><td>Download PDB, simulate (x3), RGy (x3), DSSP (x3)</td></tr><tr><td>9</td><td>Download the PDB file for 1AEE. Then tell me how many chains and atoms are present in the protein.</td><td>2</td><td>download PDB, count atoms/chains</td></tr><tr><td>10</td><td>Simulate protein 1ZNI at 300 K for 1 ns and calculate the RMSD.</td><td>3</td><td>Download PDB, simulate, RMSD</td></tr><tr><td>11</td><td>Download the PDB files for 8PFK and 8PFQ. Then, compare the secondary structures of the two proteins, including the number of atoms, secondary structures, number of chains, etc.</td><td>4</td><td>Download PDB (x2), DSSP (x2)</td></tr><tr><td>12</td><td>Simulate fibronectin (PDB ID 1FNF) for 1 ns, using an appropriate temperature found in the literature. Compute the RMSD and the final secondary structure. By using the PDB ID to get the Uniprot ID, obtain the subunit structure and the number of beta sheets, helices, etc. Compare this information to the structure we computed.</td><td>8</td><td>Download PDB, literature, simulate, RMSD, DSSP, get uniprot, subunit structure, get beta sheets/helices</td></tr><tr><td>13</td><td>Compare the RMSF of 1UBQ under high pressure and low pressure. Perform the simulation for 1 ns, varying only the pressure. 
Plot the moments of inertia over time for both simulations.</td><td>7</td><td>Download PDB, simulate (x2), RMSF (x2), MOI (x2)</td></tr><tr><td>14</td><td>Simulate deoxygenated hemoglobin (1A3N) and oxygenated hemoglobin (6BB5). Plot the PCA of both trajectories.</td><td>6</td><td>Download PDB (x2), simulate (x2), PCA (x2)</td></tr><tr><td>15</td><td>Simulate trypsin (1TRN) for 1 ns at 300 K and plot energy over time. Compute SASA, RMSF, and radius of gyration. Get the subunit structure, sequence, active and binding sites.</td><td>9</td><td>Download PDB, simulate, output figures, SASA, RMSF, RGy, subunit structure, sequence info, all known sites</td></tr><tr><td>16</td><td>Download the PDB file for 1C3W and describe the secondary structure. Then, simulate the protein at 300 K for 1 ns. Plot the RMSD over time and the radius of gyration over time.</td><td>5</td><td>Download PDB, DSSP, simulate, RMSD, RGy</td></tr><tr><td>17</td><td>Download the PDB file for 1XQ8, and then save the visualization for it.</td><td>2</td><td>Download PDB, visualize</td></tr><tr><td>18</td><td>Download the PDB for 2YXF. Tell me about its stability as found in the literature. Then, simulate it for 1 ns and plot its RMSD over time.</td><td>4</td><td>Download PDB, literature search, simulate, RMSD</td></tr><tr><td>19</td><td>Simulate 1MBN in water and methanol solutions.</td><td>4</td><td>Download PDB, packmol to get appropriate non-water solvent, simulate (x2)</td></tr><tr><td>20</td><td>Download protein 1ATN.</td><td>1</td><td>Download PDB</td></tr><tr><td>21</td><td>Download and clean protein 1A3N.</td><td>2</td><td>Download PDB, clean</td></tr><tr><td>22</td><td>Perform a brief simulation of protein 1PQ2.</td><td>2</td><td>Download PDB, simulate</td></tr><tr><td>23</td><td>Analyze the RDF of the simulation of 1A3N solvated in water.</td><td>3</td><td>Download PDB, simulate, RDF</td></tr><tr><td>24</td><td>Simulate oxygenated hemoglobin (1A3N) and deoxygenated hemoglobin (6BB5). Then analyze the RDF of both.</td><td>6</td><td>Download PDB (x2), simulate (x2), RDF (x2)</td></tr><tr><td>25</td><td>Simulate 1L6X at pH 5.0 and 8.8, then analyze the SASA and RMSF under both pH conditions.</td><td>9</td><td>Download PDB, clean at pH 5.5 and 8.0, simulate(x2), SASA(x2), RMSF(x2)</td></tr></table>",
1641
+ "bbox": [
1642
+ 114,
1643
+ 109,
1644
+ 962,
1645
+ 881
1646
+ ],
1647
+ "page_idx": 16
1648
+ },
1649
+ {
1650
+ "type": "page_number",
1651
+ "text": "17",
1652
+ "bbox": [
1653
+ 490,
1654
+ 935,
1655
+ 508,
1656
+ 946
1657
+ ],
1658
+ "page_idx": 16
1659
+ },
1660
+ {
1661
+ "type": "table",
1662
+ "img_path": "",
1663
+ "table_caption": [],
1664
+ "table_footnote": [],
1665
+ "bbox": [
1666
+ 114,
1667
+ 87,
1668
+ 965,
1669
+ 320
1670
+ ],
1671
+ "page_idx": 17
1672
+ },
1673
+ {
1674
+ "type": "page_number",
1675
+ "text": "18",
1676
+ "bbox": [
1677
+ 490,
1678
+ 935,
1679
+ 508,
1680
+ 946
1681
+ ],
1682
+ "page_idx": 17
1683
+ },
1684
+ {
1685
+ "type": "text",
1686
+ "text": "List of References Used for Literature Search During the Experiments.",
1687
+ "text_level": 1,
1688
+ "bbox": [
1689
+ 171,
1690
+ 90,
1691
+ 668,
1692
+ 107
1693
+ ],
1694
+ "page_idx": 18
1695
+ },
1696
+ {
1697
+ "type": "list",
1698
+ "sub_type": "ref_text",
1699
+ "list_items": [
1700
+ "1. The folding space of protein $\\beta 2$ -microglobulin is modulated by a single disulfide bridge, 10.1088/1478-3975/ac08ec",
1701
+ "2. Molecular Dynamics Simulation of the Adsorption of a Fibronectin Module on a Graphite Surface, 10.1021/1a0357716",
1702
+ "3. Predicting stable binding modes from simulated dimers of the D76N mutant of $\\beta 2$ -microglobulin, 10.1016/j.csbj.2021.09.003",
1703
+ "4. Deciphering the Inhibition Mechanism of under Trial Hsp90 Inhibitors and Their Analogues: A Comparative Molecular Dynamics Simulation, 10.1021/acs.jcim.9b01134",
1704
+ "5. Molecular modeling, simulation and docking of Rv1250 protein from Mycobacterium tuberculosis, 10. 3389/fbinf.2023.1125479",
1705
+ "6. Molecular Dynamics Simulation of Rap1 Myb-type domain in Saccharomyces cerevisiae, 10.6026/97320630008881",
1706
+ "7. A Giant Extracellular Matrix Binding Protein of Staphylococcus epidermidis Binds Surface-Immobilized Fibronectin via a Novel Mechanism, 10.1128/mbio.01612-20",
1707
+ "8. High Affinity vs. Native Fibronectin in the Modulation of $\\alpha \\nu \\beta 3$ Integrin Conformational Dynamics: Insights from Computational Analyses and Implications for Molecular Design, 10.1371/journal.pcbi.1005334",
1708
+ "9. Forced unfolding of fibronectin type 3 modules: an analysis by biased molecular dynamics simulations, 10.1006/jmbi.1999.2670",
1709
+ "10. Adsorption of Fibronectin Fragment on Surfaces Using Fully Atomistic Molecular Dynamics Simulations, 10.3390/ijms19113321",
1710
+ "11. Fibronectin Unfolding Revisited: Modeling Cell Traction-Mediated Unfolding of the Tenth Type-III Repeat, 10.1371/journal.pone.0002373",
1711
+ "12. Tertiary and quaternary structural basis of oxygen affinity in human hemoglobin as revealed by multiscale simulations, 10.1038/s41598-017-11259-0",
1712
+ "13. Oxygen Delivery from Red Cells, 10.1016/s0006-3495(85)83890-x",
1713
+ "14. Molecular Dynamics Simulations of Hemoglobin A in Different States and Bound to DPG: Effector-Linked Perturbation of Tertiary Conformations and HbA Concerted Dynamics, 10.1529/biophysj.107.114942",
1714
+ "15. Theoretical Simulation of Red Cell Sickling Upon Deoxygenation Based on the Physical Chemistry of Sickle Hemoglobin Fiber Formation, 10.1021/acs.jpcb.8b07638",
1715
+ "16. Adsorption of Heparin-Binding Fragments of Fibronectin onto Hydrophobic Surfaces, 10.3390/biophysica3030027",
1716
+ "17. Mechanistic insights into the adsorption and bioactivity of fibronectin on surfaces with varying chemistries by a combination of experimental strategies and molecular simulations, 10.1016/j.bioactmat.2021.02.021",
1717
+ "18. Anti-Inflammatory, Radical Scavenging Mechanism of New 4-Aryl-[1,3]-thiazol-2-yl-2-quinoline Carbohydrates and Quinolinyl[1,3]-thiazolo[3,2-b][1,2,4]triazoles, 10.1002/s1ct.201801398",
1718
+ "19. Trypsin-Ligand binding affinities calculated using an effective interaction entropy method under polarized force field, 10.1038/s41598-017-17868-z",
1719
+ "20. Ubiquitin: Molecular modeling and simulations, 10.1016/j.jmgm.2013.09.006",
1720
+ "21. Valid molecular dynamics simulations of human hemoglobin require a surprisingly large box size, 10.7554/ eLife.35560",
1721
+ "22. Multiple Cryptic Binding Sites are Necessary for Robust Fibronectin Assembly: An In Silico Study, 10. 1038/s41598-017-18328-4",
1722
+ "23. Computer simulations of fibronectin adsorption on hydroxyapatite surfaces, 10.1039/c3ra47381c",
1723
+ "24. An Atomistic View on Human Hemoglobin Carbon Monoxide Migration Processes, 10.1016/j.bpj.2012.01.011",
1724
+ "25. Best Practices for Foundations in Molecular Simulations [v1.0], 10.33011/livecoms.1.1.5957",
1725
+ "26. Unfolding Dynamics of Ubiquitin from Constant Force MD Simulation: Entropy-Enthalpy Interplay Shapes the Free-Energy Landscape, 10.1021/acs.jpcb.8b09318",
1726
+ "27. Dissecting Structural Aspects of Protein Stability",
1727
+ "28. MACE Release 0.1.0 Documentation"
1728
+ ],
1729
+ "bbox": [
1730
+ 145,
1731
+ 109,
1732
+ 883,
1733
+ 907
1734
+ ],
1735
+ "page_idx": 18
1736
+ },
1737
+ {
1738
+ "type": "page_number",
1739
+ "text": "19",
1740
+ "bbox": [
1741
+ 490,
1742
+ 935,
1743
+ 509,
1744
+ 946
1745
+ ],
1746
+ "page_idx": 18
1747
+ }
1748
+ ]
2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09565/c149a430-b509-4bf8-8a7d-e0ef3c523d68_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f43988614e299a944cd170de42d28678908bc11b84b4f4f1e71d4e259f68426
3
+ size 21620793
2502.09xxx/2502.09565/full.md ADDED
@@ -0,0 +1,366 @@
1
+ # MDCROW: AUTOMATING MOLECULAR DYNAMICS WORKFLOWS WITH LARGE LANGUAGE MODELS
2
+
3
+ Quintina Campbell†, Sam Cox†,3, Jorge Medina†, Brittany Watterson2, Andrew D. White*1,3
4
+
5
+ $^{1}$ Department of Chemical Engineering, University of Rochester, Rochester, New York, USA
6
+ $^{2}$ Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
7
+ $^{3}$ FutureHouse Inc., San Francisco, CA
8
+
9
+ # ABSTRACT
10
+
11
+ Molecular dynamics (MD) simulations are essential for understanding biomolecular systems but remain challenging to automate. Recent advances in large language models (LLMs) have demonstrated success in automating complex scientific tasks using LLM-based agents. In this paper, we introduce MDCrow, an agentic LLM assistant capable of automating MD workflows. MDCrow uses chain-of-thought reasoning over 40 expert-designed tools for handling and processing files, setting up simulations, analyzing the simulation outputs, and retrieving relevant information from literature and databases. We assess MDCrow's performance across 25 tasks with varying numbers of required subtasks and difficulty, and we evaluate the agent's robustness to both difficulty and prompt style. gpt-4o is able to complete complex tasks with low variance, followed closely by llama3-405b, a compelling open-source model. While prompt style does not influence the best models' performance, it has significant effects on smaller models.
12
+
13
+ # 1 Introduction
14
+
15
+ Molecular dynamics (MD) simulations are a common method to understand dynamic and complex systems in chemistry and biology. While MD is now routine, its integration into and impact on scientific workflows have increased dramatically over the past few decades [1-3]. There are two main reasons for this: First, MD provides valuable insights. Through simulations, scientists can study structural and dynamic phenomena, perturbations, and dynamic processes in their chemical systems. Second, innovations in hardware and expert-designed software packages have made MD much more accessible to both experienced and novice users [3].
16
+
17
+ For a given protein simulation, parameter selection is nontrivial: the user must provide the input structure (such as a PDB [4] file), select a force field (e.g., CHARMM [5], AMBER [6]), and specify parameters such as temperature, integrator, simulation length, and equilibration protocols. Simulations also generally require pre- and post-processing steps, along with various analyses. For instance, a user may need to clean or trim a PDB file, add a solvent, or analyze the protein's structure. After simulation, they might examine the protein's shape throughout the simulation or assess its stability under different conditions. The choices for pre-processing, analysis, and simulation parameters are highly specific to any given use case and often require expert intuition. Thus, automating this process is difficult but beneficial.
18
+
19
+ Several efforts have been made to automate MD workflows [7-17], focusing largely on specific domains, such as RadonPy for polymer simulations [8] or PyAutoFEP for proteins and small molecules in drug screening [16]. Other approaches are constrained to a particular combination of simulation software and simulation type (e.g., GROMACS and free energy perturbation). Certainly, there has been significant community-driven improvement in automating and creating MD toolkits [14, 18-24] and user-friendly interfaces and visualizations [25-32]. While these advances
20
+
21
+ improve the capabilities and ease of use in many cases, the inherent variability of MD workflows still poses a great challenge for full automation.
22
+
23
+ Large-Language Model (LLM) agents [33-36] have gained popularity for their ability to automate technical tasks through reasoning and tool usage, even surpassing domain-specialized LLMs (e.g., BioGPT [37], Med-PaLM [38]) when programmed for specialized roles [39]. These agents have demonstrated promising results in scientific tasks within a predefined toolspace, with tools like ChemCrow and Coscientist successfully automating complex workflows and novel design in chemical synthesis [40-42]. Likewise, LLM-driven automation has been explored in materials research [43-46], literature and data aggregation [47, 48], and more sophisticated tasks [45, 49-55]. Most similar to this work, ProtAgents [55] is a multi-agent modeling framework tackling protein-related design and analysis, and LLaMP [45] applies a retrieval-augmented generation (RAG)-based ReAct agent to simulate inorganic materials by interfacing with literature databases, Wikipedia, and atomistic simulation tools. Although preliminary work has applied agentic LLMs to MD via a RAG-based agent [45], no fully adaptive and autonomous system exists for biochemical MD or protein simulations. See Ramos et al.[56] for a recent review on the design, assessment, and applications of scientific agents.
24
+
25
+ Here we present MDCrow, an LLM agent capable of autonomously completing MD workflows. Our main contributions to the field are: (1) we assess MDCrow's performance across 25 tasks of varying difficulty and compare the performance of different LLM models; (2) we measure robustness to how agents are prompted and to task complexity, based on the required number of subtasks; and (3) we compare against simply equipping an LLM with a Python interpreter that has the required packages installed, rather than using a custom-built environment. Our main conclusion is that MDCrow with gpt-4o or llama3-405b is able to perform nearly all of our assessed tasks and is relatively insensitive to how precisely the instructions are given to it. See Figure 1D for an overview of the main results.
26
+
27
+ # 2 Methods
28
+
29
+ # 2.1 MDCrow Toolset
30
+
31
+ MDCrow is an LLM agent, which consists of an environment of tools that emit observations and an LLM that selects actions (tools + input arguments). MDCrow is built with Langchain [57] and a ReAct-style prompt [35]. The tools mostly consist of analysis and simulation methods; we use the OpenMM [22] and MDTraj [21] packages, but in principle our findings generalize to any such packages.
32
+
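+ To make this loop concrete, the sketch below shows a minimal, framework-agnostic ReAct-style loop in Python; the `llm` callable, the `tools` dictionary, and the expected Action JSON format (mirroring the prompt in SI section B) are illustrative assumptions rather than MDCrow's actual LangChain implementation.
+
+ ```python
+ import json
+
+ def react_loop(llm, tools: dict, question: str, max_steps: int = 10) -> str:
+     """Toy ReAct-style loop: the LLM picks a tool and input, the tool emits an observation."""
+     transcript = f"Question: {question}\n"
+     for _ in range(max_steps):
+         reply = llm(transcript)  # assumed to emit Thought/Action or a Final Answer
+         if "Final Answer:" in reply:
+             return reply.split("Final Answer:", 1)[1].strip()
+         # Extract the JSON action, e.g. {"action": "DownloadPDB", "action_input": "1LYZ"}
+         action = json.loads(reply[reply.index("{"): reply.rindex("}") + 1])
+         observation = tools[action["action"]](action["action_input"])
+         transcript += f"{reply}\nObservation: {observation}\n"
+     return "Stopped after reaching the step limit."
+ ```
+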
33
+ MDCrow's tools can be categorized in four groups: Information Retrieval, PDB & Protein, Simulation, and Analysis (see Figure 1B).
34
+
35
+ Information Retrieval Tools These tools enable MDCrow to build context and answer simple questions posed by the user. Most of the tools serve as wrappers for UniProt API functionalities [58], allowing access to data such as 3D structures, binding sites, and kinetic properties of proteins. Additionally, we include a LiteratureSearch tool, which uses PaperQA [48] to answer questions and retrieve information from literature. PaperQA accesses a local database of relevant PDFs, selected specifically for the test prompts, which can be found in SI section C. This real-time information helps the system provide direct answers to user questions and can also assist the agent in selecting parameters or guiding simulation processes.
36
+
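+ As a rough illustration of the kind of call these wrappers make, the public UniProt REST endpoint can be queried directly with `requests`; the accession P68871 (human hemoglobin subunit beta) and the JSON field paths below are assumptions based on UniProt's public schema, not MDCrow's exact tool code.
+
+ ```python
+ import requests
+
+ accession = "P68871"  # example UniProt accession, used only for illustration
+ resp = requests.get(f"https://rest.uniprot.org/uniprotkb/{accession}.json", timeout=30)
+ resp.raise_for_status()
+ entry = resp.json()
+
+ protein_name = entry["proteinDescription"]["recommendedName"]["fullName"]["value"]
+ sequence_length = entry["sequence"]["length"]
+ print(protein_name, sequence_length)
+ ```
+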
37
+ PDB & Protein Tools MDCrow uses these tools to interact directly with PDB files, performing tasks such as cleaning structures with PDBFixer [22], retrieving PDBs for small molecules and proteins, and visualizing PDBs through Molrender [59] or NGLview [60].
38
+
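+ A typical fetch-and-clean step with these libraries looks roughly like the following sketch (the PDB ID, file names, and pH are placeholders, not MDCrow defaults):
+
+ ```python
+ import requests
+ from pdbfixer import PDBFixer
+ from openmm.app import PDBFile
+
+ pdb_id = "1LYZ"
+ raw = requests.get(f"https://files.rcsb.org/download/{pdb_id}.pdb", timeout=30)
+ with open(f"{pdb_id}_raw.pdb", "w") as fh:
+     fh.write(raw.text)
+
+ # Repair the structure: fill in missing residues/atoms, strip heterogens, add hydrogens.
+ fixer = PDBFixer(filename=f"{pdb_id}_raw.pdb")
+ fixer.findMissingResidues()
+ fixer.findNonstandardResidues()
+ fixer.replaceNonstandardResidues()
+ fixer.removeHeterogens(keepWater=False)
+ fixer.findMissingAtoms()
+ fixer.addMissingAtoms()
+ fixer.addMissingHydrogens(7.0)
+ with open(f"{pdb_id}_clean.pdb", "w") as fh:
+     PDBFile.writeFile(fixer.topology, fixer.positions, fh)
+ ```
+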
39
+ Simulation Tools All included simulation tools use OpenMM [22] for simulation and PackMol [19] for solvent addition. These tools are built to manage dynamic simulation parameters, handle errors related to inadequate parameters or incomplete preprocessing, and address missing forcefield templates efficiently. The agent responds to simulation setup errors through informative error messages, improving overall robustness. Finally, the simulation tools output Python scripts that can be modified directly by MDCrow whenever the simulation requires additional steps or parameters.
40
+
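+ For reference, a bare-bones OpenMM setup of the kind these tools generate is sketched below; the force field choice, solvent padding, and run length are placeholder parameters rather than MDCrow defaults.
+
+ ```python
+ from openmm import LangevinMiddleIntegrator, unit
+ from openmm.app import (PDBFile, ForceField, Modeller, Simulation,
+                         DCDReporter, StateDataReporter, PME, HBonds)
+
+ pdb = PDBFile("1LYZ_clean.pdb")
+ forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
+
+ # Solvate the protein in a water box with 1 nm of padding.
+ modeller = Modeller(pdb.topology, pdb.positions)
+ modeller.addSolvent(forcefield, padding=1.0 * unit.nanometer)
+ with open("solvated.pdb", "w") as fh:
+     PDBFile.writeFile(modeller.topology, modeller.positions, fh)
+
+ system = forcefield.createSystem(modeller.topology, nonbondedMethod=PME,
+                                  nonbondedCutoff=1.0 * unit.nanometer, constraints=HBonds)
+ integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1.0 / unit.picosecond,
+                                       0.002 * unit.picoseconds)
+ sim = Simulation(modeller.topology, system, integrator)
+ sim.context.setPositions(modeller.positions)
+ sim.minimizeEnergy()
+ sim.reporters.append(DCDReporter("traj.dcd", 1000))
+ sim.reporters.append(StateDataReporter("log.csv", 1000, step=True,
+                                        potentialEnergy=True, temperature=True))
+ sim.step(500_000)  # 1 ns at a 2 fs timestep
+ ```
+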
41
+ Analysis Tools This group of tools is the largest in the toolset, designed to cover common MD workflow analysis methods, many of which are built on MDTraj [21] functionalities. Examples include computing the root-mean-square deviation (RMSD) with respect to a reference structure, the radius of gyration, analyzing the secondary structure, and various plotting functions.
42
+
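+ Several of these analyses reduce to one-line MDTraj calls; the sketch below assumes the trajectory and topology files written by the simulation sketch above.
+
+ ```python
+ import mdtraj as md
+
+ traj = md.load("traj.dcd", top="solvated.pdb")  # topology file must match the DCD
+ protein = traj.atom_slice(traj.topology.select("protein"))
+
+ rmsd = md.rmsd(protein, protein, frame=0)  # nm, relative to the first frame
+ rg = md.compute_rg(protein)                # radius of gyration per frame, nm
+ dssp = md.compute_dssp(protein)            # per-residue secondary-structure codes (H/E/C)
+ print(rmsd[-1], rg.mean(), dssp[0][:10])
+ ```
+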
43
+ ![](images/3e854f587a1ec3ae8a4e9580d0d5f2654b28711865b8b246e4ead7aa11827261.jpg)
44
+ A.
45
+
46
+ ![](images/b18c97ecb41bf5964ef7932a46af0a283c3537eec9ab8fcac59bf8a4ba6c64a9.jpg)
47
+ C.
48
+ D.
49
+
50
+ ![](images/b823bdb599548693c2dce08ca5faba53e346d5c637861a3135f36ddc0a468170.jpg)
51
+
52
+ ![](images/3f0218acc30787b4773c0687e7243f67a9a9dd7f64c48904f0f6a9fcaee6fecc.jpg)
53
+ Figure 1: A. MDCrow workflow. Starting with a user prompt and initialized with a set of MD tools, MDCrow follows a chain-of-thought process until it completes all tasks in the prompt. The final output includes a response, along with all resulting analyses and files. B. The tool distribution categorized into 4 types: information retrieval, PDB and protein handling, simulation, and analysis. A few examples from each category are shown. C. Two example prompts that MDCrow is tested on. The first is the simplest prompt, containing only 1 subtask. The most complex task requires 10 subtasks. D. Average subtask completion across all 25 prompts as task complexity (the number of subtasks per prompt) increases. The top three performing base-LLMs are shown. Among them, gpt-4o and llama3-405b consistently maintain high stability, staying close to $100\%$ completion even as task complexity increases.
54
+
55
+ # 2.2 Chatting with Simulations
56
+
57
+ A key challenge in developing an automated MD assistant is ensuring it can manage a large number of files, analyses, and long simulations and runtimes. Although MDCrow has been primarily tested with shorter simulations, it is designed to handle larger workflows as well. Its ability to retrieve and resume previous runs allows users to start a simulation, step away during the long process, and later continue interactions and analyses without needing to stay engaged the entire time. An example of this chatting feature is shown in Figure 2.
58
+
59
+ MDCrow creates an LLM-generated summary of the user prompt and agent trace, which is assigned to a unique run identifier provided at the end of the run (but accessible at any time during the session). Each run's files, figures, and path registry are saved in a unique checkpoint folder linked to the run identifier.
60
+
61
+ When resuming a chat, the LLM loads the summarized context of previous steps and maintains access to the same file corpus, as long as the created files remain intact. To resume a run, the user simply provides the checkpoint directory and run identifier. MDCrow then loads the corresponding memory summaries and retrieves all associated files, enabling seamless continuation of analyses.
62
+
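+ Conceptually, each checkpoint is a small record keyed by the run identifier that stores the LLM-generated summary together with a registry of output files; the sketch below is purely illustrative, and the function names and JSON layout are not MDCrow's actual implementation.
+
+ ```python
+ import json
+ import os
+ import uuid
+
+ def save_checkpoint(ckpt_dir: str, summary: str, file_registry: dict) -> str:
+     """Persist an LLM-generated run summary plus a registry of output files."""
+     run_id = uuid.uuid4().hex[:8]
+     run_dir = os.path.join(ckpt_dir, run_id)
+     os.makedirs(run_dir, exist_ok=True)
+     with open(os.path.join(run_dir, "memory.json"), "w") as fh:
+         json.dump({"summary": summary, "files": file_registry}, fh, indent=2)
+     return run_id
+
+ def load_checkpoint(ckpt_dir: str, run_id: str) -> dict:
+     """Reload the summary and file registry so a previous chat can be resumed."""
+     with open(os.path.join(ckpt_dir, run_id, "memory.json")) as fh:
+         return json.load(fh)
+ ```
+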
63
+ ![](images/111470b44211b28f0d856c772691c99a243299c1ae55a2e1049d8fa1e5b7c1b6.jpg)
64
+ Figure 2: Example Chat Example of chat with MDCrow. The user first asks to download PDB files for two systems. Then, once MDCrow has completed this task, the user asks for analysis of the files. Next, the user asks for a quick 10 ps simulation of both files, and MDCrow saves all files for later handling. Lastly, the user asks for plots of RMSD for each simulation over time, and MDCrow responds with each plot.
65
+
66
+ # 3 Results
67
+
68
+ # 3.1 MDCrow Performance on Various Tasks
69
+
70
+ To assess MDCrow's ability to complete tasks of varying difficulty, we designed 25 prompts with different levels of complexity and documented the number of subtasks (minimum required steps) needed to complete each task. MDCrow was not penalized for taking additional steps, but was penalized for omitting necessary ones. For example, the first prompt in Figure 1C contains a single subtask, whereas the complex task requires 10 subtasks: downloading the PDB file, performing three simulations, and performing two analyses per simulation. If the agent failed to complete an earlier step, it was penalized for every subsequent step it could not perform due to that failure.
71
+
72
+ The 25 prompts require between 1 and 10 subtasks, with their distribution shown in Figure 3B. Each prompt was tested across three GPT models (gpt-3.5-turbo-0125, gpt-4-turbo-2024-04-09, gpt-4o-2024-08-06) [61, 62],
73
+
74
+ two Llama models (llama-v3p1-405b-instruct, llama-v3p1-70b-instruct) [63] (accessed via the Fireworks AI API with 8-bit floating point (8FP) quantization [64]), and two Claude models (claude-3-opus-20240229, claude-3-5-sonnet-20240620) [65, 66]. A newer Claude Sonnet model, claude-3-5-sonnet-20241022 was tested in later experiments but was not found to give superior results, so it was not tested on these 25 prompts. All other parameters were held constant across tests, and each version of MDCrow executed a single run per prompt.
75
+
76
+ Each run was evaluated by experts recording the number of required subtasks the agent completed and using Boolean indicators to indicate accuracy, whether the agent triggered a runtime error, and whether the trajectory contained any hallucinations. Since the agent trajectories for each run are inherently variable, accuracy is defined as the result's consistency with the expected trajectory rather than comparing against a fixed reference.
77
+
78
+ The percentage of tasks that were deemed to have valid solutions for MDCrow with each base-LLM is shown in Figure 3A. The lowest performing model was gpt-3.5. This is not surprising, as this model had some of the highest hallucination rates (32% of prompt completions contained hallucinations), compared to the absence of documented hallucinations in the higher performing models, gpt-4o and llama3-405b. However, the discrepancy in accuracy rates between models cannot solely be attributed to hallucinations, as gpt-3.5 attempted fewer than half of the required subtasks, whereas the higher-performing models, gpt-4o and llama3-405b, attempted 80-90% of the required subtasks, producing accurate answers for 72% and 68% of tasks, respectively (Figures 3C, D).
79
+
80
+ These results indicate that MDCrow can handle complex MD tasks but is limited by the capabilities of the base model. For gpt-4-turbo, gpt-3.5, and llama3-70b, the number of trajectories with verified results decreases significantly as task complexity increases (Figure 3C). In contrast, gpt-4o and llama3-405b show only a slight decline, demonstrating that MDCrow performs well even for complex tasks when paired with more robust base models.
81
+
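+ The significance statements reported with Figure 3 come from standard t-tests and Spearman correlations; with SciPy such tests take only a few lines, as in the sketch below (the arrays are made-up illustrative scores, not the paper's data).
+
+ ```python
+ from scipy import stats
+
+ # Per-prompt accuracy (1 = acceptable answer) for two models, plus each prompt's subtask count.
+ acc_model_a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
+ acc_model_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
+ n_subtasks  = [8, 1, 3, 1, 5, 7, 10, 10, 2, 3]
+
+ t_stat, p_ttest = stats.ttest_ind(acc_model_a, acc_model_b)
+ rho, p_spearman = stats.spearmanr(n_subtasks, acc_model_a)
+ print(p_ttest, rho, p_spearman)
+ ```
+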
82
+ ![](images/1e0a7ff3e8ae44786b8b5ae45d26e450068ddbbc6411fc8f7540dd5d12538fde.jpg)
83
+
84
+ ![](images/39713b47276aeb7d791ec5f18e88f58ccc50c68c260fac61db6ad1867485d280.jpg)
85
+ Figure 3: MDCrow Performance across Large Language Models. A. Summary of MDCrow performance depending on the base LLM. Accuracy is determined by whether the agent gave an acceptable final answer or not. While statistically indistinguishable from the Claude and Llama models, gpt-4o significantly outperforms the other GPT models on giving accurate solutions (t-test, $0.004 \leq p \leq 0.046$). B. The distribution of the number of subtasks per task across the 25 prompts. The prompts range from 1-10 steps, with each step count belonging to at least 2 prompts. C. Percentages of prompts with accurate solutions with respect to the LLM used and the number of subtasks per task. The correlation between accuracy and complexity is statistically significant for all LLMs (Spearman correlation, $3.9 \times 10^{-7} \leq p \leq 1.1 \times 10^{-2}$). D. Percentage of the subtasks that the agent completed for each base LLM per task.
86
+
87
+ ![](images/b26ca67743df780afbcec5dde37c1e52bf50fd0da90b85201ebfb12320677e14.jpg)
88
+
89
+ ![](images/8dddc708f97f13c3d20a6d8db60553e9b7074b53c8e44202844892d2f05e3bc0.jpg)
90
+
91
+ # 3.2 MDCrow Robustness
92
+
93
+ We evaluated the robustness of MDCrow on complex prompts and different prompt styles. We hypothesized that some models would excel at completing complex tasks, while others would struggle—either forgetting steps or hallucinating—as the number of required subtasks increased. To test this, we created a sequence of 10 prompts that increased in complexity. The first prompt required a single subtask, and each subsequent prompt added an additional subtask (see Figure 4A). Each prompt was tested twice: once in a natural, conversational style and once with explicitly ordered steps. Example prompts can be seen in Figure 4B.
94
+
95
+ To quantify robustness, we calculated the coefficient of variation (CV) for the percentage of completed subtasks across tasks. A lower CV indicates greater consistency in task completion and, therefore, higher robustness. The analysis revealed clear differences in robustness across models and prompt types. Overall, gpt-4o and llama3-405b demonstrated moderate to high robustness, while the Claude models showed significantly lower robustness. The performance comparison is shown in Figure 4C.
96
+
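+ Concretely, the CV is the standard deviation of per-task completion divided by its mean; for example, with made-up completion fractions:
+
+ ```python
+ import numpy as np
+
+ # Fraction of required subtasks completed on each of the 10 tasks (illustrative numbers only).
+ completed = np.array([1.0, 1.0, 0.9, 1.0, 0.8, 1.0, 0.7, 1.0, 0.9, 0.8])
+
+ cv = completed.std() / completed.mean()  # lower CV = more consistent completion
+ print(f"coefficient of variation: {cv:.3f}")
+ ```
+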
97
+ We expected that the percentage of subtasks completed by each model would decrease as task complexity increased. However, with gpt-4o and llama3-405b as base models, MDCrow demonstrated a strong relationship between the number of required and completed subtasks (Figure 4D) for both prompt types, indicating consistent performance regardless of task complexity or prompt style. The three included Claude models demonstrated less impressive performance. claude-3-opus followed the linear trend very loosely, becoming more erratic as task complexity increased. As the tasks required more subtasks, the model consistently missed nuances in the instructions and made logical errors. Both claude-3.5-sonnet models gave poor performance on these tasks, often producing the same error (see SI section A).
98
+
99
+ # 3.3 MDCrow Comparison
100
+
101
+ We also compared MDCrow to two baselines: a ReAct [35] agent with only a Python REPL tool and a single-query LLM. MDCrow and the baselines were tested on the same 25 prompts as previously mentioned, all using gpt-4o. We use different system prompts to accommodate each framework, guiding the LLM to utilize the same common packages as MDCrow, and these prompts can be found in SI section B.
102
+
103
+ The single-query LLM is asked to complete the prompt by writing the code for all subtasks, not unlike what standalone ChatGPT would be asked to do. We then execute the code ourselves and evaluate the outcomes accordingly. ReAct with Python REPL can write and execute code using a chain-of-thought framework. We find that MDCrow outperforms the two baselines significantly, as shown in Figure 5A, in both attempting all subtasks and achieving an accurate solution. Not surprisingly, the two baseline methods struggled with code syntax errors and incorrect handling of PDB files. There is not a significant difference between the two baselines, indicating that the ReAct framework did not significantly boost the model's robustness.
104
+
105
+ In Figure 5B, we observe that the performance of all three methods generally declines as task complexity increases. However, both baseline methods drop to zero after just three steps, with performance then fluctuating erratically at higher complexities. This is not surprising, as proper file processing and simulation setup are crucial for optimal LLM performance in MD tasks. In contrast, MDCrow demonstrates greater robustness and reliability in handling complex tasks, thanks to its well-designed system for accurate file processing and simulation setup, as well as its ability to dynamically adjust to errors.
106
+
107
+ # 3.4 MDCrow Extrapolation through Chatting
108
+
109
+ We further show MDCrow's ability to harness its chatting feature and extrapolate outside of its toolset to complete new tasks. This task requires MDCrow to perform an annealing simulation, which is not part of the current toolset. The agent achieves this by first setting up a simulation to find appropriate system parameters and handle possible early errors. Then, the agent modifies the script according to the user's request. Once the simulation is complete, the user later asks for simulation analyses, shown in Figures 6A, B.
110
+
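+ The script modification amounts to wrapping the production run in a temperature schedule; a sketch of that edit, assuming `sim` is an existing `openmm.app.Simulation` with a 2 fs Langevin integrator as in the earlier example, is:
+
+ ```python
+ from openmm import unit
+
+ # Schedule from the request: 303 K for 3 ns, 498 K for 2 ns, then back to 303 K for 3 ns.
+ schedule = [(303, 1_500_000), (498, 1_000_000), (303, 1_500_000)]  # (kelvin, steps at 2 fs)
+ for temperature, n_steps in schedule:
+     sim.integrator.setTemperature(temperature * unit.kelvin)
+     sim.step(n_steps)
+ ```
+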
111
+ This shows that MDCrow has the ability to generalize outside of its toolset and is capable of completing more complicated and/or user-specific simulations. By utilizing the chatting feature, users can walk MDCrow through new analyses, reducing the risk of catastrophic mistakes.
112
+
113
+ ![](images/38f1a768abc19d52626aff86c4fac9a89ee426ddd4c4e7f82eea16752e8f67f6.jpg)
114
+ A.
115
+
116
+ ![](images/558151c04e38e8dfe4fbf77267ed122bef10f5955488af9720a1499909f98779.jpg)
117
+ B.
118
+
119
+ ![](images/60b4bd95ec0760d34d412d3545793ae62a5534e78223af8cd138c4531ce08b67.jpg)
120
+ C.
121
+
122
+ ![](images/db6e3b20c32c1e8c056d5e807e1dc104a27434314f9b4be7636aeba81b1a720f.jpg)
123
+ D.
124
+ Figure 4: A. The number of subtasks in each task, categorized by type. Task 1 begins with a single pre-simulation subtask (Download a PDB file) and each subsequent task adds a single subtask, adding to a total of 10 tasks with a maximum of 10 subtasks. B. Example of "Natural" and "Ordered" prompt style on a three-step prompt. C. The robustness of MDCrow built on each model with both prompt types, measured by coefficient of variation (CV). Lower CV is interpreted as greater consistency. gpt-4o and llama3-405b are the more robust models, as the Claude models have higher CVs. D. Comparison of subtask completion across models and prompt types. In the 9-subtask prompt, gpt-4o encountered an error after only one step and gave up without trying to fix it. In general, gpt-4o and llama3-405b have relatively robust performance with increasing complexity for both prompt types. claude-3-opus struggles with more complex tasks, making more logical errors for complex tasks. The two claude-3.5-sonnet models showed fairly poor performance across this experiment.
125
+
126
+ ![](images/fc0e27d4fbb22e8d6970f408b629c979bbbe7bc66361dd21de87889094014aa0.jpg)
127
+ N Subtasks Required
128
+
129
+ ![](images/c4d61200cdf6de27cfe9dc087973adea7cf125e4c97fe3b14e6bb17a691beb78.jpg)
130
+
131
+ ![](images/46fc9bb7157dac093f517960134e7f9009f0c5d2a506540eef9b4ca5c24d070d.jpg)
132
+ Comparison of Subtasks Completion Across Models and Prompt Type
133
+
134
+ ![](images/c5ccef0a9c0ac31a1248b132c1023b87a91807b0f267e37cc15289f1ff99cc17.jpg)
135
+ - Ordered Prompt
136
+ Natural Prompt
137
+
138
+ # 4 Discussion
139
+
140
+ Although LLMs' scientific abilities are growing [67-69], they cannot yet independently complete MD workflows, even with a ReAct framework and Python interpreter. However, with frontier LLMs, chain-of-thought, and an expert-curated toolset, MDCrow successfully handles a broad range of tasks. It performs $80\%$ better than gpt-4o in ReAct workflows at completing subtasks, which is expected due to MD workflows' need for file handling, error management, and real-time data retrieval.
141
+
142
+ In some cases, particularly for complex tasks beyond its explicit toolset, MDCrow's performance may improve with human guidance. The system's chatting feature allows users to continue previous conversations, clarify misunderstandings, and guide MDCrow step-by-step through difficult tasks. This adaptability helps MDCrow recover from failures, refine its approach based on user intent, and handle more complex workflows. This suggests that, with more advanced LLM models, targeted feedback, and the addition of specialized tools, MDCrow could tackle an even broader range of tasks. We did not do a full evaluation of MDCrow's capabilities through this chatting feature in this work.
143
+
144
+ For all LLMs, task accuracy and subtask completion are affected by task complexity. Interestingly, while gpt-4o can handle multiple steps with low variance, llama3-405b is a compelling second best as an open-source model. Other models, such as gpt-3.5 and claude-3.5-sonnet, struggle with hallucinations or an inability to follow multistep instructions. Performance of these models, however, is improved with explicit prompting or model-specific optimization (especially for claude-3.5-sonnet).
145
+
146
+ ![](images/624a3176aa40c5c2bedcf10a999c642da1bef602844179ca1ff1d2b3584d362a.jpg)
147
+ Figure 5: Performance across LLM Frameworks using the same 25-prompt set: MDCrow, direct LLM with no tools (single-query), and ReAct agent with only a Python REPL tool. All use gpt-4o. A. Performance among LLM frameworks, measured by answer accuracy and the average percentage of subtasks completed for each of the 25 task prompts. MDCrow is significantly better at giving accurate solutions than the direct LLM (t-test, $p = 1 \times 10^{-3}$) and ReAct (t-test, $p = 4 \times 10^{-4}$). MDCrow completes significantly more subtasks on average compared to the direct LLM (t-test, $p = 1 \times 10^{-6}$) and ReAct (t-test, $p = 6 \times 10^{-6}$). B. Percentage of tasks completed with respect to the LLM framework used and the number of subtasks required for each task. The correlation between accuracy and the number of subtasks required is statistically significant, with $p = 1 \times 10^{-3}$ for the direct LLM and $p = 1 \times 10^{-4}$ for MDCrow. The p-value for ReAct is $p = 7 \times 10^{-2}$.
148
+
149
+ ![](images/f323748f4ffb1a248650f3639f2007839c9ecce9f833d9304229173a5a817c92.jpg)
150
+
151
+ A.
152
+
153
+ User: Run a simulated annealing simulation of the protein 1L2Y in vacuum. First set it up, and then modify the given script to simulate at $303\mathrm{K}$ for 3 ns (3000ps), then go to 498K and simulate there for 2 ns, then return to $303\mathrm{K}$ and simulate for 3 extra ns.
154
+
155
+ ![](images/5a32943045f29a0b7251add69a9e9773207f0bb3fdf2197884961125e5f124a6.jpg)
156
+ Figure 6: A. MDCrow simulating annealing. The user directly instructs MDCrow to run an annealing simulation of protein 1L2Y. Once the simulation is complete, the user utilizes the chatting feature to ask for further analyses. B. RMSD, RGy, and temperature throughout the simulation, as requested by the user in A.
157
+
158
+ B.
159
+
160
+ ![](images/504622491ec595b93e404188e1519be61281bd117771428c45bfc9dc661cb7c4.jpg)
161
+
162
+ These tasks were focused on routine applications of MD with short simulation runtimes, limited to proteins, common solvents, and force fields included in the OpenMM package. We did not explore small-molecule force fields, especially related to ligand binding. Future work could explore multi-modal approaches [70, 71] for tasks like convergence analysis or plot interpretations. The current framework relies on human-created tools, but as LLM-agent systems become more autonomous [72], careful evaluation and benchmarking will be essential.
163
+
164
+ # 5 Conclusion
165
+
166
+ Running and analyzing MD simulations is non-trivial and typically hard to automate. Here, we explored using LLM agents to accomplish this. We built MDCrow, an LLM agent and environment consisting of over 40 tools purpose-built for MD simulation and analysis. We found MDCrow could complete $72\%$ of the tasks with the optimal settings (gpt-4o). llama3-405b was able to complete $68\%$, providing a compelling open-source alternative. The best models were relatively robust to how the instructions were given, although weaker models struggled with unstructured instructions. Simply using an LLM with a Python interpreter and the required packages installed had a $28\%$ accuracy. The performance of MDCrow was relatively stable as well, though dependent on the model. Correct assessment of these complex scientific workflows is challenging, and had to be done by hand. Chatting with the simulations, via extended conversations, is even more compelling, but is harder to assess.
167
+
168
+ This work demonstrates the steps to automate and assess computational scientific workflows. As LLMs continue improving in performance, and better training methods arise for complex tasks like this, we expect LLM agents to be increasingly important for accelerating science. MDCrow, for example, can now automatically assess hypotheses via simulation with $72\%$ accuracy and can scale out to thousands of simultaneous tasks. The code and tasks are open source and available at https://github.com/ur-whitelab/MDCrow.
169
+
170
+ # 6 Acknowledgments
171
+
172
+ Research reported in this work was supported by the National Institute of General Medical Sciences of the National Institutes of Health under award number R35GM137966, the National Science Foundation under grant number 1751471, a Robert L. and Mary L. Sproull Fellowship gift, and the U.S. Department of Energy, Grant No. DE-SC0023354. Work at FutureHouse is supported by the generosity of Eric and Wendy Schmidt. We thank the Center for Integrated Research Computing (CIRC) at the University of Rochester for providing computational resources and technical support.
173
+
174
+ # References
175
+
176
+ [1] Siddharth Sinha, Benjamin Tam, and San Ming Wang. Applications of molecular dynamics simulation in protein study. Membranes, 12(9):844, August 2022.
177
+ [2] Martin Karplus and J Andrew McCammon. Molecular dynamics simulations of biomolecules. nature structural biology, 9(9), 2002.
178
+ [3] Scott A Hollingsworth and Ron O Dror. Molecular dynamics simulation for all. Neuron, 99(6):1129-1143, 2018.
179
+ [4] Sameer Velankar, Stephen K Burley, Genji Kurisu, Jeffrey C Hoch, and John L Markley. The protein data bank archive. Structural Proteomics: High-Throughput Methods, pages 3–21, 2021.
180
+ [5] Bernard R Brooks, Charles L Brooks III, Alexander D Mackerell Jr, Lennart Nilsson, Robert J Petrella, Benoit Roux, Youngdo Won, Georgios Archontis, Christian Bartels, Stefan Boresch, et al. CHARMM: the biomolecular simulation program. Journal of computational chemistry, 30(10):1545-1614, 2009.
181
+ [6] Jay W Ponder and David A Case. Force fields for protein simulations. Advances in protein chemistry, 66:27-85, 2003.
182
+ [7] Matthew P. Baumgartner and Hongzhou Zhang. Building admiral, an automated molecular dynamics and analysis platform. ACS Medicinal Chemistry Letters, 11(11):2331-2335, November 2020.
183
+ [8] Yoshihiro Hayashi, Junichiro Shiomi, Junko Morikawa, and Ryo Yoshida. RadonPy: automated physical property calculation using all-atom classical molecular dynamics simulations for polymer informatics. npj Computational Materials, 8(1):222, November 2022.
184
+ [9] Harvinder Singh, Anupam Raja, Ajay Prakash, and Bikash Medhi. Gmx_qk: An automated protein protein-ligand complex simulation workflow bridged to MM PBSA, based on gromacs and zenity-dependent GUI for beginners in MD simulation study. Journal of Chemical Information and Modeling, 63(9):2603-2608, May 2023.
185
+ [10] Gudrun Gygli and Juergen Pleiss. Simulation foundry: Automated and F.A.I.R. molecular modeling. Journal of Chemical Information and Modeling, 60(4):1922-1927, April 2020.
186
+ [11] Abeeb Abiodun Yekeen, Olanrewaju Ayodeji Durojaye, Mukhtar Oluwaseun Idris, Hamdalat Folake Muritala, and Rotimi Olusanya Arise. CHAPERONg: A tool for automated GROMACS-based molecular dynamics simulations and trajectory analyses. Computational and Structural Biotechnology Journal, 21:4849-4858, 2023.
187
+ [12] Eduardo H. B. Maia, Lucas Rolim Medaglia, Alisson Marques Da Silva, and Alex G. Taranto. Molecular architect: A user-friendly workflow for virtual screening. ACS Omega, 5(12):6628-6640, March 2020.
188
+ [13] Abir Ganguly, Hsu-Chun Tsai, Mario Fernandez-Pendas, Tai-Sung Lee, Timothy J. Giese, and Darrin M. York. AMBER drug discovery boost tools: Automated workflow for production free-energy simulation setup and analysis (professa). Journal of Chemical Information and Modeling, 62(23):6069-6083, December 2022.
189
+ [14] Celso R. C. Rêgo, Jörg Schaarschmidt, Tobias Schlöder, Montserrat Penaloza-Amion, Saientan Bag, Tobias Neumann, Timo Strunk, and Wolfgang Wenzel. SimStack: An intuitive workflow framework. Frontiers in Materials, 9:877597, May 2022.
190
+ [15] Derek Groen, Agastya P. Bhati, James Suter, James Hetherington, Stefan J. Zasada, and Peter V. Coveney. FabSim: Facilitating computational research through automation on large-scale and distributed e-infrastructures. Computer Physics Communications, 207:375-385, October 2016.
191
+ [16] Luan Carvalho Martins, Elio A. Cino, and Rafaela Salgado Ferreira. PyAutoFEP: An automated free energy perturbation workflow for GROMACS integrating enhanced sampling methods. Journal of Chemical Theory and Computation, 17(7):4262-4273, July 2021.
192
+ [17] Miroslav Suruzhon, Tharindu Senapathi, Michael S. Bodnarchuk, Russell Viner, Ian D. Wall, Christopher B. Barnett, Kevin J. Naidoo, and Jonathan W. Essex. ProtoCaller: Robust automation of binding free energy calculations. Journal of Chemical Information and Modeling, 60(4):1917-1921, April 2020.
193
+ [18] Dmitry Suplatov, Yana Sharapova, and Vytas Švedas. EasyAmber: A comprehensive toolbox to automate the molecular dynamics simulation of proteins. Journal of Bioinformatics and Computational Biology, 18(06):2040011, 2020.
194
+ [19] Leandro Martínez, Ricardo Andrade, Ernesto G Birgin, and José Mario Martínez. PACKMOL: A package for building initial configurations for molecular dynamics simulations. Journal of computational chemistry, 30(13):2157-2164, 2009.
195
+
196
+ [20] Naveen Michaud-Agrawal, Elizabeth J Denning, Thomas B Woolf, and Oliver Beckstein. MDAnalysis: a toolkit for the analysis of molecular dynamics simulations. Journal of computational chemistry, 32(10):2319-2327, 2011.
197
+ [21] Robert T. McGibbon, Kyle A. Beauchamp, Matthew P. Harrigan, Christoph Klein, Jason M. Swails, Carlos X. Hernández, Christian R. Schwantes, Lee-Ping Wang, Thomas J. Lane, and Vijay S. Pande. MDTraj: A modern open library for the analysis of molecular dynamics trajectories. Biophysical Journal, 109(8):1528 - 1532, 2015.
198
+ [22] Peter Eastman, Jason Swails, John D Chodera, Robert T McGibbon, Yutong Zhao, Kyle A Beauchamp, Lee-Ping Wang, Andrew C Simmonett, Matthew P Harrigan, Chaya D Stern, et al. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS computational biology, 13(7):e1005659, 2017.
199
+ [23] Mark James Abraham, Teemu Murtola, Roland Schulz, Szilard Páll, Jeremy C Smith, Berk Hess, and Erik Lindahl. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. SoftwareX, 1:19–25, 2015.
200
+ [24] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in 't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton. LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comp. Phys. Comm., 271:108171, 2022.
201
+ [25] G Goret, B Aoun, and Eric Pellegrini. MDANSE: An interactive analysis environment for molecular dynamics simulations. Journal of chemical information and modeling, 57(1):1-5, 2017.
202
+ [26] João Vieira Ribeiro, Rafael C Bernardi, Till Rudack, Klaus Schulten, and Emad Tajkhorshid. QwikMD - gateway for easy simulation with VMD and NAMD. Biophysical Journal, 114(3):673a-674a, 2018.
203
+ [27] Victor H Rusu, Vitor AC Horta, Bruno AC Horta, Roberto D Lins, and Riccardo Baron. MDWiZ: a platform for the automated translation of molecular dynamics simulations. Journal of Molecular Graphics and Modelling, 48:80-86, 2014.
204
+ [28] Peter W Hildebrand, Alexander S Rose, and Johanna KS Tiemann. Bringing molecular dynamics simulation data into view. Trends in Biochemical Sciences, 44(11):902-913, 2019.
205
+ [29] Xevi Biarnés, Fabio Pietrucci, Fabrizio Marinelli, and Alessandro Laio. METAGUI. a VMD interface for analyzing metadynamics and molecular dynamics simulations. Computer Physics Communications, 183(1):203-211, 2012.
206
+ [30] William Humphrey, Andrew Dalke, and Klaus Schulten. VMD: visual molecular dynamics. Journal of molecular graphics, 14(1):33-38, 1996.
207
+ [31] Diamantis Sellis, Dimitrios Vlachakis, and Metaxia Vlassi. Gromita: a fully integrated graphical user interface to gromacs 4. Bioinformatics and biology insights, 3:BBI-S3207, 2009.
208
+ [32] Gerard Martínez-Rosell, Toni Giorgino, and Gianni De Fabritiis. PlayMolecule ProteinPrepare: a web application for protein preparation for molecular dynamics simulations. Journal of chemical information and modeling, 57(7):1511-1516, 2017.
209
+ [33] Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
210
+ [34] Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445, 2022.
211
+ [35] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
212
+ [36] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024.
213
+ [37] Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in bioinformatics, 23(6):bbac409, 2022.
214
+ [38] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. Nature, 620(7972):172-180, 2023.
215
+
216
+ [39] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with AI agents. Cell, 187:6125-6151, Oct 2024.
217
+ [40] Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Augmenting large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024.
218
+ [41] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570-578, 2023.
219
+ [42] Juan Luis Villarreal-Haro, Remy Gardier, Erick J Canales-Rodriguez, Elda Fischi Gomez, Gabriel Girard, Jean-Philippe Thiran, and Jonathan Rafael-Patino. CACTUS: A computational framework for generating realistic white matter microstructure substrates, 2023.
220
+ [43] Kevin Maik Jablonka, Qianxiang Ai, Alexander Al-Feghali, Shruti Badhwar, Joshua D Bocarsly, Andres M Bran, Stefan Bringuier, L Catherine Brinson, Kamal Choudhary, Defne Circi, et al. 14 examples of how LLMs can transform materials science and chemistry: a reflection on a large language model hackathon. Digital Discovery, 2(5):1233-1250, 2023.
221
+ [44] Yuming Su, Xue Wang, Yuanxiang Ye, Yibo Xie, Yujing Xu, Yibing Jiang, and Cheng Wang. Automation and machine learning augmented by large language models in catalysis study. Chemical Science, 2024.
222
+ [45] Yuan Chiang, Chia-Hong Chou, and Janosh Riebesell. LLaMP: Large language model made powerful for high-fidelity materials knowledge retrieval and distillation. arXiv preprint arXiv:2401.17244, 2024.
223
+ [46] Seongmin Kim, Yousung Jung, and Joshua Schrier. Large language models for inorganic synthesis predictions. Journal of the American Chemical Society, 2024.
224
+ [47] Wonseok Lee, Yeonghun Kang, Taeun Bae, and Jihan Kim. Harnessing large language model to collect and analyze metal-organic framework property dataset. arXiv preprint arXiv:2404.13053, 2024.
225
+ [48] Michael D Skarlinski, Sam Cox, Jon M Laurent, James D Braza, Michaela Hinks, Michael J Hammerling, Manvitha Ponnapati, Samuel G Rodriques, and Andrew D White. Language agents achieve superhuman synthesis of scientific knowledge. arXiv preprint arXiv:2409.13740, 2024.
226
+ [49] Michael H. Prince, Henry Chan, Aikaterini Vriza, Tao Zhou, Varuni K. Sastry, Matthew T. Dearing, Ross J. Harder, Rama K. Vasudevan, and Mathew J. Cherukara. Opportunities for retrieval and tool augmented large language models in scientific facilities, 2023.
227
+ [50] Odhran O'Donoghue, Aleksandar Shtedritski, John Ginger, Ralph Abboud, Ali Essa Ghareeb, Justin Booth, and Samuel G Rodriques. Bioplanner: Automatic evaluation of llms on protocol planning in biology, 2023.
228
+ [51] Kaixuan Huang, Yuanhao Qu, Henry Cousins, William A. Johnson, Di Yin, Mihir Shah, Denny Zhou, Russ Altman, Mengdi Wang, and Le Cong. CRISPR-GPT: An LLM agent for automated design of gene-editing experiments, 2024.
229
+ [52] Yixiang Ruan, Chenyin Lu, Ning Xu, Jian Zhang, Jun Xuan, Jianzhang Pan, Qun Fang, Hanyu Gao, Xiaodong Shen, Ning Ye, and et al. Accelerated end-to-end chemical synthesis development with large language models. ChemRxiv, 2024.
230
+ [53] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metalorganic frameworks using large language models. Nature Communications, 15(1):4705, June 2024.
231
+ [54] Tianyidan Xie, Rui Ma, Qian Wang, Xiaogian Ye, Feixuan Liu, Ying Tai, Zhenyu Zhang, and Zili Yi. Anywhere: A multi-agent framework for reliable and diverse foreground-conditioned image inpainting, 2024.
232
+ [55] A. Ghafarollahi and M. J. Buehler. ProtAgents: Protein discovery via large language model multi-agent collaborations combining physics and machine learning, 2024.
233
+ [56] Mayk Caldas Ramos, Christopher J Collison, and Andrew D White. A review of large language models and autonomous agents in chemistry. Chemical Science, 2025.
234
+ [57] Harrison Chase. LangChain, 10 2022.
235
+ [58] The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2023. Nucleic Acids Research, 51(D1):D523–D531, 11 2022.
236
+ [59] Molstar Developers. molrender. https://github.com/molstar/molrender, 2019. Accessed: 2025-02-10.
237
+ [60] Hai Nguyen, David A Case, and Alexander S Rose. NGLview–interactive molecular graphics for Jupyter notebooks. Bioinformatics, 34(7):1241–1242, 2018.
238
+
239
+ [61] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
240
+ [62] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
241
+ [63] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
242
+ [64] Fireworks AI, Inc. [Accessed 24-01-2025].
243
+ [65] www-cdn.anthropic.com. https://www-cdn.anthropic.com/fed9cc193a14b84131812372d8d5857f8f304c52/Model_Card_Claude_3_Addendum.pdf. [Accessed 10-01-2025].
244
+ [66] Anthropic. The Claude 3 model family: Opus, Sonnet, Haiku, Mar 2024.
245
+ [67] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
246
+ [68] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
247
+ [69] Jon M Laurent, Joseph D Janizek, Michael Ruzo, Michaela M Hinks, Michael J Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D White, and Samuel G Rodriques. LAB-Bench: Measuring capabilities of language models for biology research. arXiv preprint arXiv:2407.10362, 2024.
248
+ [70] Chenyu Wang, Weixin Luo, Qianyu Chen, Haonan Mai, Jindi Guo, Sixun Dong, Xiaohua, Xuan, Zhengxin Li, Lin Ma, and Shenghua Gao. MLLM-Tool: A multimodal large language model for tool agent learning, 2024.
249
+ [71] Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. AssistGPT: A general multi-modal assistant that can plan, execute, inspect, and learn, 2023.
250
+ [72] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
251
+
252
+ # Supplemental Information
253
+
254
+ # A Claude-Specific Engineering
255
+
256
+ While both of Claude's Sonnet models achieved poor performance during the robustness experiment, it can be noted that a single common error arose consistently. When running an NPT simulation, MDCrow requires that all parameters be passed to the simulation tool. However, both Sonnet models consistently neglected to provide a value for pressure, even when directly prompted to do so. The claude-3-opus made this mistake a single time. This is a relatively simple fix, providing MDCrow with a default pressure of 1 atm when no pressure is passed.
257
+
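+ The fix amounts to a defaulted parameter when attaching the NPT barostat; a hedged sketch of that behavior (illustrative only, not MDCrow's exact code) is:
+
+ ```python
+ from openmm import MonteCarloBarostat, unit
+
+ def add_npt_barostat(system, params: dict):
+     """Attach a barostat for NPT, falling back to 1 atm when the agent omits the pressure."""
+     pressure = params.get("pressure", 1.01325) * unit.bar  # 1.01325 bar = 1 atm
+     temperature = params.get("temperature", 300.0) * unit.kelvin
+     system.addForce(MonteCarloBarostat(pressure, temperature))
+     return system
+ ```
+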
258
+ ![](images/de4b5fa05febaec849febcb28553e928c7398973d7c1535168db5a068bb08e1d.jpg)
259
+ N Subtasks Completed
260
+
261
+ ![](images/99718eb29aaae0e531c85e4b25828736b4ea574d49480b87dc2d4e1e3aa1c931.jpg)
262
+ Subtask Completion for Claude Models
263
+
264
+ ![](images/9037b485eaab7ce07d0fb76c10d216c9cddaa949938771f443a59cd8bdb201c5.jpg)
265
+
266
+ ![](images/d3f84d1b3a893596d88bbdaa27bdc24ca4b6983db7ebfc62a189c6440f741b60.jpg)
267
+ Figure 7: Performance of MDCrow with three Claude models on 10 tasks. As the number of required subtasks increases, we show the number of subtasks completed for both prompt types. The top row shows MDCrow's performance as-is, and the bottom row shows MDCrow's performance when given a direct fix for missing parameters. There is a clear change in performance after the fix for both claude-3.5-sonnet-20241022 and claude-3.5-sonnet-20240620.
268
+
269
+ ![](images/0b4669ac14c016fd78ba8e13ccafec341245b05fbb63b172b7fb7f5d29187f1f.jpg)
270
+
271
+ ![](images/0501fdaa67e983c79bb2e248c6d63eb6877a05f7b7be45239322c1e92c929c24.jpg)
272
+ - Ordered Prompt
273
+ - Natural Prompt
274
+
275
+ As can be seen in Figure 7, including this fix drastically improves the performance of these models, with performance comparable to the top models. However, no other models made this mistake, and no other model-specific optimization was conducted. Thus, for all experiments shown in this paper, MDCrow does not accommodate this Claude-specific missing parameter fix.
276
+
277
+ # B Prompts
278
+
279
+ # MDCrow Prompt
280
+
281
+ You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem to the best of your ability using the provided tools.
282
+
283
+ You can only respond with a single complete 'Thought, Action, Action Input' format OR a single 'Final Answer' format.
284
+
285
+ ```txt
286
+ Complete format:
287
+ Thought: (reflect on your progress and decide what to do next)
288
+ Action:
289
+ { "action": (the action name, it should be the name of a tool), "action_input": (the input string for the action)
290
+ }
291
+ ```
292
+
293
+ OR
294
+
295
+ Final Answer: (the final response to the original input question, once all steps are complete)
296
+
297
+ You are required to use the tools provided, using the most specific tool available for each action. Your final answer should contain all information necessary to answer the question and its subquestions. Before you finish, reflect on your progress and make sure you have addressed the question in its entirety.
298
+
299
+ If you are asked to continue or reference previous runs, the context will be provided to you. If context is provided, you should assume you are continuing a chat.
300
+
301
+ Here is the input:
302
+ Previous Context: {context}
303
+ Question: {input}
304
+
305
+ During the comparison study between MDCrow, GPT-only, and ReAct with the Python REPL tool, we used a different system prompt for each of these LLM frameworks.
306
+
307
+ # Direct-LLM Prompt
308
+
309
+ You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem in its entirety to the best of your ability. If any part of the task requires you to perform an action that you are not capable of completing, please write a runnable Python script for that step and move on. For literature papers, use and process papers from the 'paper.collection' folder. For .pdb files, download them from the RCSB website using 'requests'. To preprocess PDB files, you will use PDBFixer. To get information about proteins, retrieve data from the UniProt database. For anything related to simulations, you will use OpenMM, and for anything related to analyses, you will use MDTraj. At the end, combine any scripts into one script.
310
+
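+ For reference, a minimal sketch of the kind of standalone script this prompt asks the baseline to produce is given below; the PDB ID, force field, and simulation settings are illustrative, and the run is kept in vacuo for brevity (a full workflow would solvate the system first).
+
+ ```python
+ import requests
+ from pdbfixer import PDBFixer
+ from openmm import LangevinMiddleIntegrator, unit
+ from openmm.app import PDBFile, ForceField, NoCutoff, HBonds, Simulation
+
+ # Download an example structure from RCSB (the PDB ID is illustrative).
+ pdb_id = "1MBN"
+ with open(f"{pdb_id}.pdb", "w") as f:
+     f.write(requests.get(f"https://files.rcsb.org/download/{pdb_id}.pdb").text)
+
+ # Preprocess with PDBFixer: strip heteroatoms, add missing atoms and hydrogens.
+ fixer = PDBFixer(filename=f"{pdb_id}.pdb")
+ fixer.findMissingResidues()
+ fixer.removeHeterogens(False)  # remove ligands/water so a generic force field applies
+ fixer.findMissingAtoms()
+ fixer.addMissingAtoms()
+ fixer.addMissingHydrogens(7.0)
+ PDBFile.writeFile(fixer.topology, fixer.positions, open(f"{pdb_id}_fixed.pdb", "w"))
+
+ # Short OpenMM simulation in vacuo (placeholder settings, not MDCrow defaults).
+ pdb = PDBFile(f"{pdb_id}_fixed.pdb")
+ forcefield = ForceField("amber14-all.xml")
+ system = forcefield.createSystem(pdb.topology, nonbondedMethod=NoCutoff, constraints=HBonds)
+ integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond, 0.002 * unit.picoseconds)
+ simulation = Simulation(pdb.topology, system, integrator)
+ simulation.context.setPositions(pdb.positions)
+ simulation.minimizeEnergy()
+ simulation.step(1000)
+ ```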
311
+ # ReAct Agent Prompt
312
+
313
+ You are an expert molecular dynamics scientist, and your task is to respond to the question or solve the problem to the best of your ability. If any part of the task requires you to perform an action that you are not capable of completing, please write a runnable Python script for that step and run it. For literature papers, use and process papers from the 'paper.collection' folder. For .pdb files, download them from the RCSB website using 'requests'. To preprocess PDB files, you will use PDBFixer. To get information about proteins, retrieve data from the UniProt database. For anything related to simulations, you will use OpenMM, and for anything related to analyses, you will use MDTraj.
314
+
315
+ You can only respond with a single complete 'Thought, Action, Action Input' format OR a single 'Final Answer' format.
316
+
317
+ ```txt
318
+ Complete format:
319
+ Thought: (reflect on your progress and decide what to do next)
320
+ Action:
321
+ { "action": (the action name, it should be the name of a tool), "action_input": (the input string for the action)
322
+ }
323
+ OR
324
+ Final Answer: (the final response to the original input question, once all steps are complete)
325
+ ```
326
+
327
+ You are required to use the tools provided, using the most specific tool available for each action. Your final answer should contain all information necessary to answer the question and its subquestions. Before you finish, reflect on your progress and make sure you have addressed the question in its entirety.
328
+
329
+ Here is the input: Question: {input}
330
+
331
+ # C Task Prompts & References Used in Experiments
332
+
333
+ Table 1: Details of 25 task prompts used in experiments
334
+
335
+ <table><tr><td>Prompt ID</td><td>Prompt</td><td># sub-tasks</td><td>List of required subtasks</td></tr><tr><td>1</td><td>Simulate PDB ID 1MBN at two different temperatures: 300 K and 400 K for 1 ns each. Plot the RMSD of both over time and compare the final secondary structures at the end of the simulations. Get information about this protein, such as the number of residues and chains, etc.</td><td>8</td><td>Download PDB, simulate (x2), RMSD (x2), DSSP (x2), summarize_protein</td></tr><tr><td>2</td><td>Download the PDB file for protein 1LYZ.</td><td>1</td><td>Download PDB</td></tr><tr><td>3</td><td>Download the PDB file for protein 1GZX. Then, analyze the secondary structure of the protein and provide information on how many helices, sheets, and other components are present. Get the gene names for this protein.</td><td>3</td><td>Download PDB, DSSP, Get-ProteinFunction (or literature)</td></tr><tr><td>4</td><td>What are the common parameters used to simulate fibronectin?</td><td>1</td><td>literature search</td></tr><tr><td>5</td><td>Simulate 1VII for 1 ns at a temperature of 300 K. Then, tell me if the secondary structure changed from the beginning of the simulation to the end of the simulation.</td><td>5</td><td>Download PDB, simulate, DSSP before, DSSP after, comparison</td></tr><tr><td>6</td><td>Simulate 1A3N and 7VDE (two PDB IDs matching hemoglobin) with identical parameters. Find the appropriate parameters for simulating hemoglobin from the literature. Then, plot the radius of gyration throughout both simulations.</td><td>7</td><td>Download PDB (x2), literature, simulate (x2), RGy (x2)</td></tr><tr><td>7</td><td>Simulate 1ZNI for 1 ns at a temperature of 300 K in water. Then, simulate it again in acetonitrile. Compute the RMSD, final secondary structure, and PCA for each simulation.</td><td>10</td><td>Download PDB, simulate (x2), packmol, RMSD (x2), SASA (x2), PCA (x2)</td></tr><tr><td>8</td><td>Simulate 4RMB at 100K, 200K, and 300K. Then, for each simulation, plot the radius of gyration over time and compare the secondary structure before and after the simulation.</td><td>10</td><td>Download PDB, simulate (x3), RGy (x3), DSSP (x3)</td></tr><tr><td>9</td><td>Download the PDB file for 1AEE. Then tell me how many chains and atoms are present in the protein.</td><td>2</td><td>download PDB, count atoms/chains</td></tr><tr><td>10</td><td>Simulate protein 1ZNI at 300 K for 1 ns and calculate the RMSD.</td><td>3</td><td>Download PDB, simulate, RMSD</td></tr><tr><td>11</td><td>Download the PDB files for 8PFK and 8PFQ. Then, compare the secondary structures of the two proteins, including the number of atoms, secondary structures, number of chains, etc.</td><td>4</td><td>Download PDB (x2), DSSP (x2)</td></tr><tr><td>12</td><td>Simulate fibronectin (PDB ID 1FNF) for 1 ns, using an appropriate temperature found in the literature. Compute the RMSD and the final secondary structure. By using the PDB ID to get the Uniprot ID, obtain the subunit structure and the number of beta sheets, helices, etc. Compare this information to the structure we computed.</td><td>8</td><td>Download PDB, literature, simulate, RMSD, DSSP, get uniprot, subunit structure, get beta sheets/helices</td></tr><tr><td>13</td><td>Compare the RMSF of 1UBQ under high pressure and low pressure. Perform the simulation for 1 ns, varying only the pressure. 
Plot the moments of inertia over time for both simulations.</td><td>7</td><td>Download PDB, simulate (x2), RMSF (x2), MOI (x2)</td></tr><tr><td>14</td><td>Simulate deoxygenated hemoglobin (1A3N) and oxygenated hemoglobin (6BB5). Plot the PCA of both trajectories.</td><td>6</td><td>Download PDB (x2), simulate (x2), PCA (x2)</td></tr><tr><td>15</td><td>Simulate trypsin (1TRN) for 1 ns at 300 K and plot eneRGy over time. Compute SASA, RMSF, and radius of gyration. Get the subunit structure, sequence, active and binding sites.</td><td>9</td><td>Download PDB, simulate, output figures, SASA, RMSF, RGy, subunit structure, sequence info, all known sites</td></tr><tr><td>16</td><td>Download the PDB file for 1C3W and describe the secondary structure. Then, simulate the protein at 300 K for 1 ns. Plot the RMSD over time and the radius of gyration over time.</td><td>5</td><td>Download PDB, DSSP, simulate, RMSD, RGy</td></tr><tr><td>17</td><td>Download the PDB file for 1XQ8, and then save the visualization for it.</td><td>2</td><td>Download PDB, visualize</td></tr><tr><td>18</td><td>Download the PDB for 2YXF. Tell me about its stability as found in the literature. Then, simulate it for 1 ns and plot its RMSD over time.</td><td>4</td><td>Download PDB, literature search, simulate, RMSD</td></tr><tr><td>19</td><td>Simulate 1MBN in water and methanol solutions.</td><td>4</td><td>Download PDB, packmol to get appropriate non-water solvent, simulate (x2)</td></tr><tr><td>20</td><td>Download protein 1ATN.</td><td>1</td><td>Download PDB</td></tr><tr><td>21</td><td>Download and clean protein 1A3N.</td><td>2</td><td>Download PDB, clean</td></tr><tr><td>22</td><td>Perform a brief simulation of protein 1PQ2.</td><td>2</td><td>Download PDB, simulate</td></tr><tr><td>23</td><td>Analyze the RDF of the simulation of 1A3N solvated in water.</td><td>3</td><td>Download PDB, simulate, RDF</td></tr><tr><td>24</td><td>Simulate oxygenated hemoglobin (1A3N) and deoxygenated hemoglobin (6BB5). Then analyze the RDF of both.</td><td>6</td><td>Download PDB (x2), simu-late (x2), RDF (x2)</td></tr><tr><td>25</td><td>Simulate 1L6X at pH 5.0 and 8.8, then analyze the SASA and RMSF under both pH conditions.</td><td>9</td><td>Download PDB, clean at pH 5.5 and 8.0, simulate(x2), SASA(x2), RMSF(x2)</td></tr></table>
336
+
337
+ # List of References Used for Literature Search During the Experiments.
338
+
339
+ 1. The folding space of protein $\beta 2$ -microglobulin is modulated by a single disulfide bridge, 10.1088/1478-3975/ac08ec
340
+ 2. Molecular Dynamics Simulation of the Adsorption of a Fibronectin Module on a Graphite Surface, 10.1021/la0357716
341
+ 3. Predicting stable binding modes from simulated dimers of the D76N mutant of $\beta 2$ -microglobulin, 10.1016/j.csbj.2021.09.003
342
+ 4. Deciphering the Inhibition Mechanism of under Trial Hsp90 Inhibitors and Their Analogues: A Comparative Molecular Dynamics Simulation, 10.1021/acs.jcim.9b01134
343
+ 5. Molecular modeling, simulation and docking of Rv1250 protein from Mycobacterium tuberculosis, 10.3389/fbinf.2023.1125479
344
+ 6. Molecular Dynamics Simulation of Rap1 Myb-type domain in Saccharomyces cerevisiae, 10.6026/97320630008881
345
+ 7. A Giant Extracellular Matrix Binding Protein of Staphylococcus epidermidis Binds Surface-Immobilized Fibronectin via a Novel Mechanism, 10.1128/mbio.01612-20
346
+ 8. High Affinity vs. Native Fibronectin in the Modulation of $\alpha \nu \beta 3$ Integrin Conformational Dynamics: Insights from Computational Analyses and Implications for Molecular Design, 10.1371/journal.pcbi.1005334
347
+ 9. Forced unfolding of fibronectin type 3 modules: an analysis by biased molecular dynamics simulations, 10.1006/jmbi.1999.2670
348
+ 10. Adsorption of Fibronectin Fragment on Surfaces Using Fully Atomistic Molecular Dynamics Simulations, 10.3390/ijms19113321
349
+ 11. Fibronectin Unfolding Revisited: Modeling Cell Traction-Mediated Unfolding of the Tenth Type-III Repeat, 10.1371/journal.pone.0002373
350
+ 12. Tertiary and quaternary structural basis of oxygen affinity in human hemoglobin as revealed by multiscale simulations, 10.1038/s41598-017-11259-0
351
+ 13. Oxygen Delivery from Red Cells, 10.1016/s0006-3495(85)83890-x
352
+ 14. Molecular Dynamics Simulations of Hemoglobin A in Different States and Bound to DPG: Effector-Linked Perturbation of Tertiary Conformations and HbA Concerted Dynamics, 10.1529/biophysj.107.114942
353
+ 15. Theoretical Simulation of Red Cell Sickling Upon Deoxygenation Based on the Physical Chemistry of Sickle Hemoglobin Fiber Formation, 10.1021/acs.jpcb.8b07638
354
+ 16. Adsorption of Heparin-Binding Fragments of Fibronectin onto Hydrophobic Surfaces, 10.3390/biophysica3030027
355
+ 17. Mechanistic insights into the adsorption and bioactivity of fibronectin on surfaces with varying chemistries by a combination of experimental strategies and molecular simulations, 10.1016/j.bioactmat.2021.02.021
356
+ 18. Anti-Inflammatory, Radical Scavenging Mechanism of New 4-Aryl-[1,3]-thiazol-2-yl-2-quinoline Carbohydrates and Quinolinyl[1,3]-thiazolo[3,2-b][1,2,4]triazoles, 10.1002/slct.201801398
357
+ 19. Trypsin-Ligand binding affinities calculated using an effective interaction entropy method under polarized force field, 10.1038/s41598-017-17868-z
358
+ 20. Ubiquitin: Molecular modeling and simulations, 10.1016/j.jmgm.2013.09.006
359
+ 21. Valid molecular dynamics simulations of human hemoglobin require a surprisingly large box size, 10.7554/eLife.35560
360
+ 22. Multiple Cryptic Binding Sites are Necessary for Robust Fibronectin Assembly: An In Silico Study, 10.1038/s41598-017-18328-4
361
+ 23. Computer simulations of fibronectin adsorption on hydroxyapatite surfaces, 10.1039/c3ra47381c
362
+ 24. An Atomistic View on Human Hemoglobin Carbon Monoxide Migration Processes, 10.1016/j.bpj.2012.01.011
363
+ 25. Best Practices for Foundations in Molecular Simulations [v1.0], 10.33011/livecoms.1.1.5957
364
+ 26. Unfolding Dynamics of Ubiquitin from Constant Force MD Simulation: Entropy-Enthalpy Interplay Shapes the Free-Energy Landscape, 10.1021/acs.jpcb.8b09318
365
+ 27. Dissecting Structural Aspects of Protein Stability
366
+ 28. MACE Release 0.1.0 Documentation
2502.09xxx/2502.09565/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:adac993bb2eb7feb7f79f6e439e5fd6e39fc7bbad25da95452ed6300c704bea5
3
+ size 1264184
2502.09xxx/2502.09565/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09597/92dff6d3-a389-4702-b8af-cefb6f91c3c1_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43b85bf2319d811c15ed9719ab3c3a9d1a95b45b9b3e45fa59a22d8e87ca91ba
3
+ size 1543326
2502.09xxx/2502.09597/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09597/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89217f705676b6ce25adfebc1b7e6c163c3569d0afdddfe7b5d1ab2e65e388d3
3
+ size 4482560
2502.09xxx/2502.09597/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09601/9307b62a-8d13-459a-aa8c-2794e5ec2dbf_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:722b95ff411deee34b2694fb82ed5068179a489b80307a1a61309aa482f118e6
3
+ size 794160
2502.09xxx/2502.09601/full.md ADDED
@@ -0,0 +1,395 @@
1
+ # CoT-Valve: Length-Compressible Chain-of-Thought Tuning
2
+
3
+ Xinyin Ma*, Guangnian Wan*, Runpeng Yu, Gongfan Fang, Xinchao Wang†
4
+
5
+ National University of Singapore
6
+
7
+ {maxinyin, guangnian}@u.nus.edu, xinchao@nus.edu.sg
8
+
9
+ # Abstract
10
+
11
+ Chain-of-Thought significantly enhances a model's reasoning capability, but it also comes with a considerable increase in inference costs due to long chains. With the observation that the reasoning path can be easily compressed for easy tasks but struggles to be shortened on hard tasks, we explore the feasibility of elastically controlling the length of reasoning paths with only one model, thereby reducing the inference overhead of reasoning models dynamically based on task difficulty. We introduce a new tuning and inference strategy named CoT-Valve, designed to allow models to generate reasoning chains of varying lengths. To achieve this, we propose to identify a direction in the parameter space that, when manipulated, can effectively control the length of generated CoT. Moreover, we show that this property is valuable for compressing the reasoning chain. We construct datasets with chains from long to short for the same questions and explore two enhanced strategies for CoT-Valve: (1) a precise length-compressible CoT tuning method, and (2) a progressive chain length compression approach. Our experiments show that CoT-Valve successfully enables controllability and compressibility of the chain and shows better performance than prompt-based control. We applied this method to QwQ-32B-Preview, reducing reasoning chains on GSM8K from 741 to 225 tokens with a minor performance drop (95.07% to 94.92%) and on AIME from 6827 to 4629 tokens, with only one additional incorrect answer.
12
+
13
+ # 1 Introduction
14
+
15
+ Chain-of-Thought (CoT) reasoning (Wei et al., 2022) has emerged as a powerful technique for enhancing the reasoning capabilities of large language models (Jaech et al., 2024; Dubey et al., 2024; Abdin et al., 2024), particularly in complex
16
+
17
+ tasks such as mathematics and coding (Sprague et al., 2024) that require multi-step inference. By simulating the process of human-like thought progression, CoT enables models to break down complex problems into sub-questions, improving accuracy and interpretability (Joshi et al., 2023). Those reasoning abilities have also been tested in different domains, such as image generation (Ma et al., 2025) and visual understanding (Shao et al., 2024).
18
+
19
+ Training reasoning models often involves generating extensive reasoning paths through methods such as sampling (Wang et al., 2023), tree search (Yao et al., 2023; Guan et al., 2025a; Zhang et al., 2024) or reinforcement learning (DeepSeek-AI, 2025) to ultimately reach the correct answer. However, these long chains often incorporate redundant intermediate steps that can be unnecessary or too complex (Lightman et al., 2024), and the redundancy in the reasoning paths for training leads to inefficiencies in token usage and increased inference costs. However, crafting an optimal reasoning chain that omits extraneous details is challenging due to the limited availability of intermediate rewards to guide the process and human annotations (Zhang et al., 2025). Removing some or all of the intermediate steps and then training or distilling the model (Liu et al., 2024b; Yu et al., 2024) will degrade the performance. Alternative approaches employ information-theoretic measures (Ton et al., 2024) or identify an "overthinking" solution in QwQ (Team, 2024b) to evaluate the contribution of each sentence to the final answer.
20
+
21
+ We observe that current reasoning models, such as QwQ (Team, 2024b) and DeepSeek-R1 (DeepSeek-AI, 2025), allocate an excessive number of tokens to simple tasks, while potentially providing insufficient tokens for complex tasks. Thus, a long reasoning path is still essential, while maintaining the ability to compress reasoning paths for simpler questions is equally important. To solve this, our goal is to fine-tune a model
22
+
23
+ ![](images/c155babc057fd154c943d10827d80c47b8a71f43b8edc2424e462e61104d7be5.jpg)
24
+ Figure 1: The reasoning model, after the length-compressible CoT tuning, can generate reasoning paths from long to short, leveraging LoRA as a 'Valve'. We show one example from our constructed dataset MixChain.
25
+
26
+ capable of generating both long and short reasoning paths, rather than being restricted to a compressed form. We offer a new way to control the length of CoT, which we refer to as Length-Compressible Chain-of-Thought Tuning.
27
+
28
+ A central component of the proposed method is to identify an update direction in the parameter space, which, by manipulating it, acts as increasing or decreasing the length of CoT. Taking a large step in this direction leads the model to generate a short sequence, while a small step still produces a long and complex reasoning trajectory. We choose to incorporate this update direction by LoRA (Hu et al., 2022), enabling it to function as an additional branch that facilitates easy modulation of intensity while imposing minimal extra parameters on the model. We explore methods to identify this direction and demonstrate that it offers superior controllability compared to prompt-based approaches, which enables the generation of short CoT that prompt-based methods are unable to achieve. Besides, we observe that the direction can be extrapolated, allowing the reasoning chains to be extended beyond or shortened to lengths unseen in the training set. Leveraging this compressibility, we construct a dataset that pairs long and short reasoning chains for each question. This dataset is then utilized in two ways: (1) to refine the direction for more precise tuning, and (2) to progressively compress the reasoning path.
29
+
30
+ We evaluate our method across different types of models, ranging from pre-trained LLMs with little reasoning ability, LLaMA-3.1-8B and LLaMA-3.2-1B-Instruct (Dubey et al., 2024), to post-trained reasoning models, QwQ-32B-Preview (Team, 2024b), and distilled reasoning models, DeepSeek-R1 (DeepSeek-AI, 2025). Our results demonstrate that, with a single round of training, our approach enables a model to generate reasoning paths of varying lengths, and we can achieve better
31
+
32
+ results than previous chain compression baselines. Besides, our study highlights several interesting findings: (1) Short reasoning paths can sometimes outperform longer ones, underscoring the significance of CoT-Valve in enhancing model efficiency. (2) Not every reasoning chain, despite all leading to the correct final answer, is conducive to model optimization. Excessively long or short chains complicate the distillation of CoT, posing challenges to the model training.
33
+
34
+ In summary, our contributions are: (1) CoT-Valve: Enables elastic control of length for CoT within the parameter space, allowing a single model to generate CoT from short to long. (2) MixChain Dataset: A dataset with reasoning paths of varying lengths for each question. (3) Improved Tuning & Progressive Compression: Refines the direction-tuning process based on MixChain and introduces progressive compression for inference efficiency. (4) Performance & Controllability: Achieves controllable reasoning generation and state-of-the-art results for compressed CoT.
35
+
36
+ # 2 Related Work
37
+
38
+ Chain-of-Thought. Chain-of-thought (Wei et al., 2022) reasoning has shown promising progress in recent years, highlighted by the success of the OpenAI o1 (Jaech et al., 2024) and DeepSeek-R1 models (DeepSeek-AI, 2025). This introduces a test-time scaling law, complementing the traditional scaling law for training (Hoffmann et al., 2022). Several approaches have been proposed to give language models better problem-solving abilities, including equipping the model with its own self-reasoning abilities (Team, 2024b) or using Best-of-N (Nakano et al., 2021), beam search, and Monte Carlo Tree Search (Kocsis and Szepesvari, 2006; Guan et al., 2025b) to search for and refine solutions without further fine-tuning the large language models. The outcome reward model and process reward models are
39
+
40
+ also introduced to evaluate the score for the entire solution, especially the final answer (Cobbe et al., 2021a), and the quality of the reasoning path (Wang et al., 2024; Luo et al., 2025b).
41
+
42
+ Chain Compression in reasoning models. Due to the high computational cost associated with inference in reasoning models, particularly for long-chain reasoning, chain compression has become a critical area of research. (Yu et al., 2024) attempts to distill the chain-of-thought into System 1 but fails to observe improvements when intermediate steps are omitted. (Deng et al., 2024b) proposes internalizing reasoning steps within the hidden states of models, while several implicit-based approaches (Deng et al., 2024a; Hao et al., 2024; Cheng and Durme, 2024) aim to compress token-wise generation by transitioning from language space to hidden space. Other studies focus on skipping intermediate reasoning steps (Liu et al., 2024b) or using summarization techniques to generate shorter reasoning chains (Kang et al., 2024). Additionally, (Chen et al., 2024) addresses the overthinking issue in QwQ (Team, 2024b) and employs SimPO (Meng et al., 2024) for optimization. Kimi K1.5 (Team et al., 2025) proposes merging long-CoT models with short-CoT models in a training-free manner. O1-Pruner (Luo et al., 2025a) adopts reinforcement learning to shorten responses.
43
+
44
+ # 3 Method
45
+
46
+ In this section, we provide an in-depth discussion of our method. Section 3.1 introduces a simple yet effective approach that enables a single tuning process to generate models with CoT with different lengths. This stage also serves as an initial step for subsequent refinements. Next, in Section 3.2, we explore multiple scenarios in which we can apply CoT-Valve to construct the dataset MixChain. In Section 3.3, we propose several advanced methods that take advantage of long-to-short datasets to improve precision and control over the generated reasoning paths in compressible fine-tuning.
47
+
48
+ # 3.1 Length-Compressible CoT Tuning
49
+
50
+ Our primary objective is to achieve a new way to control the length of reasoning paths after training a reasoning model. Existing approaches, such as prompt-based control, explicitly define sequence length in the prompt (Han et al., 2024) or utilize summary tokens (Ding et al., 2024) for guidance. However, these methods offer only limited control
51
+
52
+ over the length of CoT generated. For instance, requesting a sequence of less than 20 tokens may result in the model generating over 350 tokens (see Table 12 in the Appendix), and these methods struggle to produce answers with very short lengths. To address these limitations, we introduce CoT-Valve, which trains one model whose reasoning-path length can then be adjusted.
53
+
54
+ Consider a reasoning model defined by the parameter $\theta$ . For a given question $q$ in the dataset $\mathcal{D}$ , the probability of generating an answer $a$ and its reasoning thoughts $\{t_i\}_{i=1}^n$ given the question $q$ can be described by:
55
+
56
+ $$
57
+ p \left(a \mid t _ {1}, \dots , t _ {n}, q; \theta\right) \prod_ {i = 1} ^ {n} p \left(t _ {i} \mid t _ {< i}, q; \theta\right) \tag {1}
58
+ $$
59
+
60
+ where $\{t_i\}_{i = 1}^n$ might include errors or unnecessary details. With short synthesized or human-annotated explanations $\{t_i\}_{i = 1}^m$ with $m < n$ , the training objective is to adjust the parameter in such a way that the chain is shortened while still yielding the correct answer:
61
+
62
+ $$
63
+ \begin{array}{l} \max _ {\Delta \theta} \mathbb {E} _ {(q, a) \sim \mathcal {D}} p (a \mid t _ {1}, \dots , t _ {m}, q; \theta + \Delta \theta) \\ \prod_ {i = 1} ^ {m} p \left(t _ {i} \mid t _ {< i}, q; \theta + \Delta \theta\right) \tag {2} \\ \end{array}
64
+ $$
65
+
66
+ and $\Delta \theta$ denotes the change in the parameter space that steers the model towards generating a more concise chain.
67
+
68
+ Since the model, with and without $\Delta \theta$ , outputs the same final answer, $\Delta \theta$ can be interpreted as a task vector (Ilharco et al., 2022). The task here is to control the length of the CoT, provided that the only difference in the training set lies in intermediate reasoning steps $\{t_i\}_{i=1}^n$ . Those reasoning paths are different in length but ultimately lead to the same final answer. Thus, we can control the task vector to achieve the goal of adjusting the length of CoT. $\Delta \theta$ is designed within a parameter-efficient space, functioning as an external branch for inference that incurs minimal overhead. Controlling this external branch enables the manipulation of the length of the reasoning path.
69
+
70
+ Task Arithmetic: Interpolation and Extrapolation of $\Delta \theta$ . To manipulate this update within the parameter space, we can control the magnitude of a $\Delta \theta$ as an arithmetic operation. We use two primary operations on $\Delta \theta$ here: interpolation and extrapolation. Let $\alpha$ denote the magnitude of $\Delta \theta$ for LoRA.
71
+
72
+ ![](images/29c167d047a008db7a0a4be8d5aa8c3e7a64a7e14ae3943e57bc62e9c82b8605.jpg)
73
+ Figure 2: Illustration of CoT-Valve. In Stage 1, we first determine $\Delta \theta$ from distilling or post-training. Then, the trained $\Delta \theta$ is utilized to construct the MixChain dataset. Using this dataset, we can then apply two enhanced training methods to achieve more precise control over reasoning paths, or to shorten the reasoning paths as needed.
74
+
75
+ ![](images/629450c551317e169bd11d120a93bf568f1771d4153c0729c0f6ccb807600695.jpg)
76
+
77
+ ![](images/f3afbb8f005cf44ab81e92aa8e8d2325d561f7486f6924da81a4656c190bd4f3.jpg)
78
+
79
+ When $\alpha$ falls within the range of (0,1), the model smoothly transitions between longer and shorter reasoning paths, similar to weight interpolation between two models (Frankle et al., 2020; Team et al., 2025). When $\alpha > 1$ , extrapolation is introduced, further shortening the reasoning path beyond what was observed during training. This enables an exploration of the minimal reasoning length required to arrive at a given answer. Thus, by adjusting $\alpha$ at inference, we can modulate the model's behavior, with each value of $\alpha$ corresponding to different CoT lengths.
80
+
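+ A minimal sketch of applying such an $\alpha$-scaled update to a single weight matrix is shown below, assuming a standard LoRA parameterization; the shapes and values are illustrative placeholders, not the configurations used in our experiments.
+
+ ```python
+ import torch
+
+ def apply_cot_valve(W0: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
+                     alpha: float, scaling: float = 1.0) -> torch.Tensor:
+     """Effective weight W0 + alpha * scaling * (B @ A).
+
+     alpha in (0, 1): partial update, yielding longer chains than the fully tuned model.
+     alpha = 1:       the update as trained (shorter chains).
+     alpha > 1:       extrapolation, pushing chains shorter than seen during training.
+     """
+     return W0 + alpha * scaling * (B @ A)
+
+ # Toy example: a 16x16 weight with a rank-4 LoRA branch, swept from long to short CoT.
+ W0 = torch.randn(16, 16)
+ A = torch.randn(4, 16) * 0.01   # LoRA "down" projection
+ B = torch.randn(16, 4) * 0.01   # LoRA "up" projection
+ weights = [apply_cot_valve(W0, A, B, alpha) for alpha in (0.2, 0.6, 1.0, 1.4)]
+ ```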
81
+ Application. Unlike prompt-based approaches that can only regulate the overall length of the reasoning process using prompt words, $\Delta \theta$ provides finer-grained control. $\Delta \theta$ resides in an external parameter space, which allows greater flexibility in adjusting the reasoning trajectory. Specifically, it facilitates the selective retention of long-chain reasoning in certain thoughts while applying stronger compression to simpler reasoning segments. As a result, reductions in chain length can be localized to specific portions of the inference process rather than being uniformly applied across the entire reasoning path. We leave the design of this segment selection to future work.
82
+
83
+ # 3.2 Construct the MixChain Dataset
84
+
85
+ A crucial element of the above process is the construction of the training dataset, especially the reasoning chains $\{t_i\}_{i = 1}^n$. To obtain reasoning chains of different lengths, previous approaches rely on multiple rounds of sampling, selecting reasoning paths under different random seeds, or using handcrafted rules to remove parts of the answer (Chen et al., 2024).
86
+
87
+
88
+
89
+ We introduce MixChain, a dataset inherently generated by our method that contains reasoning paths of varying lengths. This dataset is structured such that each question is associated with multiple reasoning paths, with lengths progressively decreasing from long to short. By simply adjusting the parameter $\alpha$ , our approach avoids the need for repeated sampling and achieves this diverse set of reasoning paths. In contrast to multi-sampling techniques, MixChain enables a more reliable and consistent generation of shorter reasoning paths while simultaneously capturing a spectrum of reasoning lengths. To construct MixChain, we consider two possible scenarios:
90
+
91
+ - If a well-annotated dataset with human-labeled solutions is available, such as GSM8K (Cobbe et al., 2021b) or PRM800k (Lightman et al., 2024), it can be leveraged to fine-tune the model for generating shorter reasoning chains as a cold start ( $\theta_{1} \rightarrow \tilde{\theta}_{1}$ and $\theta_{2} \rightarrow \tilde{\theta}_{2}$ in Figure 2).
92
+ - In the absence of a dataset containing explicit reasoning paths, or when only final answers are available without full explanations, training solely on final answers is unlikely to enable the model to generate reasoning steps. To address this limitation, we propose an alternative method for constructing MixChain. Specifically, we leverage an existing base LLM (e.g., LLaMA-3.1-8B or Qwen-32B-Instruct) as $\theta_{1}$ and use its corresponding reasoning model (e.g., DeepSeek-R1-Distill-Llama-8B or QwQ-Preview) to derive $\Delta \theta$ .
93
+
94
+ The parameter update between these models serves as a form of linear interpolation, enabling the transition from $\theta_{1}$ to $\theta_{2}$ . This transition is then used to construct the dataset, as illustrated in Figure 2, where the parameter shift is represented by $\theta_{1} \rightarrow \theta_{2}$ .
95
+
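+ A small sketch of this construction is given below: the weight difference between the reasoning model ($\theta_2$) and the base LLM ($\theta_1$) serves as the direction, and solutions are sampled at several interpolation factors. The `load_model`, `sample_solution`, and `is_correct` hooks are placeholders for the actual generation and answer-checking pipeline.
+
+ ```python
+ def interpolate_state_dicts(theta1: dict, theta2: dict, alpha: float) -> dict:
+     """Blend two checkpoints: theta1 + alpha * (theta2 - theta1).
+
+     alpha = 0 recovers the base LLM (theta1), alpha = 1 the reasoning model (theta2);
+     intermediate values yield progressively longer reasoning chains.
+     """
+     return {k: theta1[k] + alpha * (theta2[k] - theta1[k]) for k in theta1}
+
+ def build_mixchain_z(theta1, theta2, questions, load_model, sample_solution, is_correct,
+                      alphas=(0.4, 0.6, 0.8, 1.0)):
+     """Collect correct solutions of increasing length for every training question."""
+     mixchain = {q: [] for q, _ in questions}
+     for alpha in alphas:                                  # shorter chains first, longer last
+         model = load_model(interpolate_state_dicts(theta1, theta2, alpha))
+         for q, gold in questions:
+             solution = sample_solution(model, q)
+             if is_correct(solution, gold):                # drop incorrect final answers
+                 mixchain[q].append(solution)
+     return mixchain
+ ```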
96
+ # 3.3 Improved Tuning for CoT-Valve
97
+
98
+ In this section, we present two enhanced variants of CoT-Valve: one aimed at achieving improved controllability and the other focused on optimizing the compression ratio of the reasoning paths.
99
+
100
+ A More Precise CoT-Valve Paradigm: CoT-Valve++. In the previously proposed CoT-Valve framework, the training process only constrained $\Delta \theta$ to satisfy the final objective with $\alpha = 1$ . However, during inference, we expect all positions along this direction to exhibit reasoning trajectories of varying lengths. This leads to the inconsistency between training and inference. With MixChain, we can explicitly incorporate this requirement during training by introducing an additional constraint, ensuring that the model can adapt to reasoning chains of different lengths across all positions in this direction. For each training sample, in addition to the question, answer, and solution, we have introduced a normalized term $\beta$ , which represents the factor for the length of the reasoning path. Under this dataset, our training objective is modified to find a parameter update $\Delta \theta'$ such that it satisfies:
101
+
102
+ $$
103
+ \begin{array}{l} \max _ {\Delta \theta^ {\prime}} \mathbb {E} _ {(q, a) \sim \mathcal {D} ^ {\prime}} p \left(a \mid t _ {< m}, q; \theta + \beta \Delta \theta^ {\prime}\right) \\ \prod_ {i = 1} ^ {m} p \left(t _ {i} \mid t _ {< i}, q; \theta + \beta \Delta \theta^ {\prime}\right) \tag {3} \\ \end{array}
104
+ $$
105
+
106
+ where $\mathcal{D}'$ is the MixChain dataset. Each sample consists of the question $q$, the answer $a$, the solution $\{t_i\}_{i=1}^m$, and $\beta$, where $\beta$ is calculated as:
107
+
108
+ $$
109
+ \beta = 1 - \frac{m - m_{\min}}{m_{\max} - m_{\min}} \tag{4}
110
+ $$
111
+
112
+ Here, $m_{\min}$ and $m_{\max}$ are the lengths of the shortest and longest solutions for this question. Based on these synthetic samples, we introduce additional constraints that enable us to better identify the parameter update $\Delta \theta^{\prime}$, facilitating more precise compressibility and controllability.
113
+
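+ As a concrete illustration, $\beta$ can be computed per sample from the solution lengths available for its question, and the sample is then trained with the update scaled by that factor ($\theta + \beta \Delta \theta'$). A minimal sketch, with token counts supplied directly as integers:
+
+ ```python
+ def beta_for_solution(solution_lengths, m):
+     """Eq. (4): beta = 1 - (m - m_min) / (m_max - m_min)."""
+     m_min, m_max = min(solution_lengths), max(solution_lengths)
+     if m_max == m_min:              # only one length available; treat it as the shortest
+         return 1.0
+     return 1.0 - (m - m_min) / (m_max - m_min)
+
+ # Example: a question whose MixChain solutions are 80, 150, and 300 tokens long.
+ lengths = [80, 150, 300]
+ print([round(beta_for_solution(lengths, m), 2) for m in lengths])  # [1.0, 0.68, 0.0]
+ ```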
114
+ # Progressive Chain Compression: CoT-Valve+P.
115
+
116
+ The structure of MixChain, which features progressively shorter reasoning paths for each question,
117
+
118
+ facilitates a progressive chain-length compression strategy. This approach is similar to iterative pruning in model compression (Molchanov et al., 2016). In this process, the model is trained with a shorter reasoning path from the dataset at each iteration, rather than training directly with the shortest reasoning CoT. This gradual compression method allows the model to progressively reduce the length of its reasoning paths.
119
+
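+ A schematic of this schedule is sketched below, assuming the MixChain solutions for each question are indexed from the longest (solution 4) down to the shortest (solution 0); `train_one_epoch` is a placeholder for the actual fine-tuning step.
+
+ ```python
+ def progressive_compression(model, mixchain_by_index, train_one_epoch, order=(4, 3, 2, 1, 0)):
+     """Fine-tune on progressively shorter solutions, one epoch per stage."""
+     for idx in order:                       # longest chains first, shortest chains last
+         model = train_one_epoch(model, mixchain_by_index[idx])
+     return model
+ ```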
120
+ # 4 Experiments
121
+
122
+ # 4.1 Experimental Setup
123
+
124
+ Models. We evaluate our method under several models: QwQ-32B-Preview (Team, 2024b), DeepSeek-R1-Distill-Llama-8B (DeepSeek-AI, 2025), LLaMA-3.1-8B (Dubey et al., 2024), LLaMA-3.2-1B (Dubey et al., 2024) and Qwen-32B-Instruct (Team, 2024a) with LIMO (Ye et al., 2025). We tested different scenarios for CoT-Valve:
125
+
126
+ - (Long to Short CoT) For QwQ-32B-Preview (QwQ for abbreviation) and DeepSeek-R1-Distill-Llama-8B (R1-Distill), we used our method to control and compress the length of the reasoning chain.
127
+ - (Short to Long CoT) For LLaMA-3.1-8B and LLaMA-3.2-1B-Instruct, we applied our method to distill reasoning abilities from QwQ-32B-Preview and incorporated CoT-Valve in the distillation process.
128
+ - (Short-Long-Short CoT) We tested another setting to first post-train a short-CoT LLM, Qwen-2.5-32B-Instruct (Team, 2024a), to generate Long CoT and then compress it to Short CoT. CoT-Valve can be applied in both two stages.
129
+
130
+ Metrics. We report both accuracy and the number of tokens in the answer for each experiment. Given the trade-off between reasoning path length, model size, and performance, we use a new metric, Accuracy per Computation Unit (ACU), to better capture this balance and evaluate model efficiency. It is defined as:
131
+
132
+ $$
133
+ \mathrm{ACU} = \frac{\text{Accuracy}}{\#\text{Params} \times \#\text{Tokens}} \tag{5}
134
+ $$
135
+
136
+ Since the ACU value typically falls within the range of $10^{-5}$ to $10^{-2}$ , we report it in units of $10^{2}$ for improved readability.
137
+
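+ For example, counting parameters in billions and accuracy in percent (which reproduces the reported values), QwQ-32B-Preview's GSM8K entry in Table 1 can be checked as follows:
+
+ ```python
+ accuracy, params_b, tokens = 95.07, 32, 741.1   # QwQ-32B-Preview on GSM8K (Table 1)
+ acu = accuracy / (params_b * tokens) * 100      # reported in units of 1e2
+ print(round(acu, 2))                            # ~0.40
+ ```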
138
+ ![](images/cd0cce3272ad71e02d372ef5a69bfb6ac1d0ba7723d2be614bb30cabe7460eee.jpg)
139
+ (a) GSM8K, QwQ-32B-Preview
140
+
141
+ ![](images/c4dc130606748ef9364abe4999540e782961ca2b41846e02a043c3f0b0020028.jpg)
142
+ (b) GSM8K, Llama-3.2-1B-Instruct
143
+
144
+ ![](images/fa13148a633d2f72d5cc0fabdd5a3dc340bbcd2d31c36f7bc05eebbb452132c9.jpg)
145
+ (c) AIME, Qwen2.5-32B-I w/ LIMO
146
+ Figure 3: Token length and accuracy for different methods, datasets and reasoning models. Points connected by curves in (a) and (b) represent results from one model.
147
+
148
+ Training and Evaluation. For training the model, we use LoRA (Hu et al., 2022) in most of our experiments, except for the LIMO experiment on Qwen-2.5-32B-Instruct, where we use full-parameter fine-tuning. We also show results using DoRA (Liu et al., 2024a) in the Appendix. The hyper-parameters for each experiment are given in Appendix A. We evaluate performance on two math datasets: an easy one, GSM8K (Cobbe et al., 2021b), and a hard one, AIME24.
149
+
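+ A minimal sketch of such a LoRA setup with the Hugging Face peft library is shown below; the rank, scaling, and target modules are illustrative placeholders rather than the hyper-parameters listed in Appendix A.
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ base = AutoModelForCausalLM.from_pretrained("Qwen/QwQ-32B-Preview")
+ lora_config = LoraConfig(
+     r=16,                     # illustrative rank
+     lora_alpha=32,            # illustrative scaling
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_config)   # the update lives only in the LoRA branch
+ ```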
150
+ # 4.2 Datasets
151
+
152
+ We find in our experiments that the quality of the solution is important for performance, even when all human-annotated or synthesized solutions reach the correct final answer. We use questions from the GSM8K train set, the math split of PRM800K, or LIMO, and employ three types of datasets built from those questions:
153
+
154
+ - Ground-truth Dataset: The dataset provides a human-annotated or model-synthesized solution. We use this as the cold start.
155
+ - MixChain from cold-start (MixChain-C): After training the model on the ground-truth dataset, we obtain a first model that can generate solutions from short to long, and we use it to generate the dataset.
156
+ - MixChain from zero-shot (MixChain-Z): We employ CoT-Valve between a reasoning model $(\theta_{2})$ and a base LLM $(\theta_{1})$ to generate the solutions.
157
+
158
+ For each dataset, we filter out all the solutions with incorrect answers. We show the statistics of the dataset in Table 9 in the Appendix.
159
+
160
+ <table><tr><td>Method</td><td>Accuracy</td><td>#Token</td><td>ACU ↑</td></tr><tr><td>Llama-3.3-70B-Instruct</td><td>92.6</td><td>235.4</td><td>0.56</td></tr><tr><td>Llama-3.1-405B-Instruct</td><td>95.6</td><td>186.7</td><td>0.13</td></tr><tr><td>Qwen2.5-32B-Instruct</td><td>93.1</td><td>269.3</td><td>1.09</td></tr><tr><td>Qwen2.5-Math-72B-Instruct</td><td>95.8</td><td>312.1</td><td>0.43</td></tr><tr><td>QwQ-32B-Preview</td><td>95.1</td><td>741.1</td><td>0.40</td></tr><tr><td>Prompt (Han et al., 2024)</td><td>93.6</td><td>355.5</td><td>0.82</td></tr><tr><td>Prompt (Ding et al., 2024)</td><td>95.5</td><td>617.7</td><td>0.48</td></tr></table>
161
+
162
+ In-domain Train Set: GSM8K
163
+
164
+ <table><tr><td>CoT-Valve - Ground-Truth</td><td>94.0</td><td>352.8</td><td>0.83</td></tr><tr><td>CoT-Valve++ - MixChain-C</td><td>94.4</td><td>276.3</td><td>1.07</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>96.1</td><td>317.1</td><td>0.95</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>94.9</td><td>225.5</td><td>1.32</td></tr></table>
165
+
166
+ Out-of-Domain Train Set: PRM12K
167
+
168
+ <table><tr><td>Overthink(Chen et al., 2024) - SFT</td><td>94.8</td><td>749.5</td><td>0.40</td></tr><tr><td>Overthink(Chen et al., 2024) - SimPO</td><td>94.8</td><td>326.2</td><td>0.91</td></tr><tr><td>O1-Pruner(Luo et al., 2025a) - SFT</td><td>95.7</td><td>717</td><td>0.42</td></tr><tr><td>O1-Pruner(Luo et al., 2025a)</td><td>96.5</td><td>534</td><td>0.56</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>95.4</td><td>288.5</td><td>1.03</td></tr></table>
169
+
170
+ Table 1: Results of QwQ-32B-Preview on GSM8K. Values of ACU are scaled by $10^{2}$ for readability. We list the dataset we use after the method name.
171
+
172
+ # 4.3 From Long-CoT to Short-CoT.
173
+
174
+ Controllable Results. We illustrate the result in Figure 3a. First, using ground-truth samples as a cold start, we develop a model capable of generating reasoning paths of various lengths, as demonstrated in 'CoT-Valve' in Figure 3a. CoT-Valve already matches the performance of prompt-based control but can generate shorter reasoning chains. We then extrapolate $\Delta \theta$ to produce even shorter reasoning paths. Then, building on MixChain-C from this first model, we conduct further training by CoT-Valve++. CoT-Valve++ substantially surpasses the baseline and shows greater generalization capabilities in cases of extrapolation.
175
+
176
+ Compression Results. We evaluated our method against previous chain compression approaches, with the results detailed in Table 1, Table 2, and
177
+
178
+ <table><tr><td>Method</td><td>AIME24</td><td>#Token</td><td>ACU↑</td></tr><tr><td>Qwen2.5-32B-Instruct</td><td>4/30</td><td>1794.2</td><td>0.023</td></tr><tr><td>Qwen2.5-Math-72B-Instruct</td><td>7/30</td><td>1204.5</td><td>0.061</td></tr><tr><td>Gemini-Flash-Thinking (Team et al., 2023)</td><td>15/30</td><td>10810.5</td><td>-</td></tr><tr><td colspan="4">QwQ-32B-Preview.Train set: GSM8K</td></tr><tr><td>QwQ-32B-Preview</td><td>14/30</td><td>6827.3</td><td>0.021</td></tr><tr><td>Prompt (Han et al., 2024)</td><td>13/30</td><td>6102.5</td><td>0.022</td></tr><tr><td>Prompt (Ding et al., 2024)</td><td>13/30</td><td>5562.3</td><td>0.024</td></tr><tr><td>Overthink (Chen et al., 2024)</td><td>13/30</td><td>5154.5</td><td>0.026</td></tr><tr><td>CoT-Valve - GSM8K</td><td>14/30</td><td>5975.0</td><td>0.024</td></tr><tr><td>CoT-Valve++ - MixChain-C</td><td>13/30</td><td>5360.5</td><td>0.025</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>13/30</td><td>4629.6</td><td>0.029</td></tr><tr><td colspan="4">Qwen-32B-Instruct. Train set: LIMO</td></tr><tr><td>Qwen-32B-LIMO</td><td>15/30</td><td>10498.2</td><td>0.015</td></tr><tr><td>CoT-Valve</td><td>11/30</td><td>6365.2</td><td>0.018</td></tr><tr><td>SFT - MixChain - Solution 1</td><td>13/30</td><td>5368.0</td><td>0.025</td></tr><tr><td>CoT-Valve - MixChain - Solution 1</td><td>15/30</td><td>8174.8</td><td>0.019</td></tr></table>
179
+
180
+ Table 2: Results of QwQ-32B-Preview and Qwen-32B-Instruct w/ LIMO on AIME 24.
181
+
182
+ <table><tr><td rowspan="2">Model</td><td colspan="2">GSM8k</td><td colspan="2">AIME24</td></tr><tr><td>Acc</td><td>#Token</td><td>Acc</td><td>#Token</td></tr><tr><td>Llama-3.1-8B (0-shot)</td><td>15.7</td><td>915.0</td><td>0/30</td><td>1517.6</td></tr><tr><td>R1-Distill-Llama-8B</td><td>87.1</td><td>1636.6</td><td>14/30</td><td>12359.9</td></tr><tr><td>CoT-Valve</td><td>87.3</td><td>1315.2</td><td>6/30</td><td>7410.5</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>84.0</td><td>755.2</td><td>11/30</td><td>9039.0</td></tr></table>
183
+
184
+ Table 3. For GSM8K, we adhered to the baseline setup to train with PRM12K. Utilizing progressive compression, our method surpassed the baseline by producing shorter reasoning paths and improved performance.
185
+
186
+ We also report experimental results on AIME, where the model was trained using MixChain-Z derived from GSM8K. To minimize the impact of randomness on performance, we employed greedy decoding in our AIME experiments. Compared to the baseline (Chen et al., 2024), our method reduced the token count from 5155 to 4630 while maintaining the same accuracy, despite being trained on an easier dataset.
187
+
188
+ # 4.4 From Short-CoT to Long-CoT & Short-Long-Short CoT
189
+
190
+ Our method can also be applied if a short-CoT model is distilled or post-trained to be a Long-CoT model. The results are shown in Figure 3b, Table 4 and Table 5. We found that CoT-Valve can also effectively control the length of the chains in this setting. Notably, we observed that shorter chains could achieve higher accuracy on GSM8K. Moreover, if the model is trained using the MixChain-Z dataset, the results are significantly better, whether using CoT-Valve (55.5 to 58.9) or just simply SFT
191
+
192
+ Table 3: Result of DeepSeek-R1-Distill-Llama-8B.
193
+
194
+ <table><tr><td>Method</td><td>Accuracy</td><td>#Tokens</td><td>ACU↑</td></tr><tr><td>LLaMA-3.2-1B-Instruct(8-shot)</td><td>45.9</td><td>104.3</td><td>44.008</td></tr><tr><td>LLaMA-3.2-1B-Instruct(0-shot)</td><td>45.9</td><td>199.8</td><td>22.973</td></tr><tr><td>SFT-Full Finetune - GSM8k</td><td>46.1</td><td>139.4</td><td>33.070</td></tr><tr><td>SFT - GSM8k</td><td>43.8</td><td>137.7</td><td>31.808</td></tr><tr><td>Prompt</td><td>46.7</td><td>209.9</td><td>22.249</td></tr><tr><td>SFT - QwQ Distill</td><td>52.7</td><td>759.3</td><td>6.941</td></tr><tr><td>CoT-Valve - QwQ Distill</td><td>55.5</td><td>267.0</td><td>20.786</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>55.8</td><td>291.0</td><td>19.175</td></tr><tr><td>SFT - MixChain-Z - Solution 1</td><td>57.0</td><td>288.4</td><td>19.764</td></tr><tr><td>CoT-Valve - MixChain-Z - Solution 1</td><td>58.9</td><td>275.4</td><td>21.387</td></tr></table>
195
+
196
+ Table 4: Results on LLaMA-3.2-1B-Instruct. We report the result of Flexible Match here. QwQ Distill means we use QwQ to synthesize the solution and distill it.
197
+
198
+ <table><tr><td>Method</td><td>Accuracy</td><td>#Tokens</td><td>ACU↑</td></tr><tr><td>LLaMA-3.1-8B (8-shot)</td><td>56.9</td><td>282.1</td><td>2.521</td></tr><tr><td>LLaMA-3.1-8B (0-shot)</td><td>15.7</td><td>915.0</td><td>0.214</td></tr><tr><td>SFT-LoRA - GSM8k</td><td>59.0</td><td>191.9</td><td>3.843</td></tr><tr><td>SFT-LoRA - QwQ Distill</td><td>76.3</td><td>644.8</td><td>1.479</td></tr><tr><td>CoT-Valve - QwQ Distill</td><td>77.5</td><td>569.8</td><td>1.700</td></tr><tr><td>CoT-Valve+P - MixChain-Z</td><td>77.1</td><td>371.2</td><td>2.596</td></tr><tr><td>CoT-Valve + MixChain-Z - Solution 1</td><td>75.7</td><td>264.1</td><td>3.583</td></tr></table>
199
+
200
+ Table 5: Result on LLaMA-3.1-8B. We report the result of Strict Match here.
201
+
202
+ (52.7 to 57.0). Additionally, after training a long-chain model, we can employ the MixChain dataset to reduce the length of its reasoning chains further. As illustrated in Figure 3c, the results suggest that initially training the chains to be long and subsequently compressing them to be shorter (Results with Long-to-Short) can yield better performance than directly using CoT-Valve in the short-to-long stage (Results with Short-to-Long). This demonstrates significant potential for compressing the reasoning chains. We can also surpass the result of Gemini-Flash-Thinking, with the same accuracy but fewer tokens (10810.5 vs. 8174.8).
203
+
204
+ Training dynamics do not have the same effect as CoT-Valve. We also explore whether intermediate training steps can achieve similar effects. As depicted in Figure 3c, during the early training phases, the length of the CoT increases but does not correspond with the same rapid improvement in performance. As training progresses, the token length begins to decrease while performance improves. CoT-Valve exhibits a distinct pattern, smoothly bridging the gap between CoT length and performance.
205
+
206
+ # 4.5 Observations
207
+
208
+ Based on the results from LLaMA-3.1-8B, LLaMA-3.2-1B, QwQ, DeepSeek-R1-Distill-Llama-8B,
209
+
210
+ <table><tr><td>Solution</td><td>Solution Length</td><td>Accuracy</td><td>#Token</td></tr><tr><td>Ground-Truth (Solution 0)</td><td>116.0</td><td>43.8</td><td>139.4</td></tr><tr><td>Solution 1</td><td>279.6</td><td>57.0</td><td>288.4</td></tr><tr><td>Solution 2</td><td>310.7</td><td>55.1</td><td>330.0</td></tr><tr><td>Solution 3</td><td>386.7</td><td>56.5</td><td>414.6</td></tr><tr><td>Solution 4</td><td>497.2</td><td>52.5</td><td>558.3</td></tr></table>
211
+
212
+ and Qwen2.5-32B-Instruct with LIMO, we summarize the following observations:
213
+
214
+ - Longer reasoning chains are not always the best on simple datasets. Across nearly all models, we find that those directly trained on long CoT data typically do not show the best performance. These models often underperform compared to those generated through CoT-Valve, which results in shorter but more accurate reasoning chains. This trend is particularly pronounced in smaller models. For instance, in the LLaMA-3.2-1B model, training on QwQ-synthesized data yields an accuracy of 52.69 with 759.3 tokens, whereas using CoT-Valve we can achieve an accuracy of 55.50 with only 267.0 tokens. However, we do not observe this phenomenon on more complex datasets, indicating that while long reasoning may be redundant for simple datasets, test-time scaling is still required to handle complex datasets effectively.
215
+
216
+ - Some reasoning chains are difficult for the model to learn, especially for small LLMs. We fine-tuned LLaMA-3.2-1B-Instruct using only one solution from MixChain, where all solutions lead to the same final answer but involve different intermediate reasoning steps. The results, presented in Table 6, indicate that neither the shortest nor the longest chains are optimal for learning. Instead, the model most effectively learns from moderately short chains, achieving the highest accuracy while maintaining a relatively low token count. This phenomenon is particularly evident in smaller models, but it is not observed in larger models. We believe this could be beneficial for the distillation of CoT in small LLMs.
217
+
218
+ # 4.6 Analysis
219
+
220
+ Ablation on Progressive Compression. Table 7 demonstrates the effect of progressive compression.
221
+
222
+ Table 6: Train LLaMA-3.2-1B-Instruct with solutions in MixChain-Z of different lengths on GSM8K.
223
+
224
+ <table><tr><td>Solution Used</td><td>#Epoch</td><td>#Samples</td><td>Accuracy</td><td>#Tokens</td><td>ACU↑</td></tr><tr><td>-</td><td>-</td><td>-</td><td>95.07</td><td>741.1</td><td>0.40</td></tr><tr><td>4</td><td>1</td><td>6.8k</td><td>95.68</td><td>597.3</td><td>0.50</td></tr><tr><td>4+3</td><td>1</td><td>13.7k</td><td>94.84</td><td>458.4</td><td>0.65</td></tr><tr><td>4+3+2</td><td>1</td><td>20.5k</td><td>94.84</td><td>339.9</td><td>0.87</td></tr><tr><td>4+3+2+1</td><td>1</td><td>27.4k</td><td>96.13</td><td>317.1</td><td>0.95</td></tr><tr><td>4+3+2+1+0</td><td>1</td><td>34.2k</td><td>94.92</td><td>225.5</td><td>1.32</td></tr><tr><td>0</td><td>5</td><td>37.4k</td><td>92.19</td><td>250.5</td><td>1.15</td></tr></table>
225
+
226
+ Table 7: Ablation of Progressive Compression on QwQ. Here, solution 0 is the human-annotated solution from the original dataset.
227
+
228
+ <table><tr><td rowspan="2">Method</td><td colspan="2">QwQ-32B-Preview</td><td colspan="2">Llama-3.2-1B-I</td></tr><tr><td>Acc</td><td>#Token</td><td>Acc</td><td>#Token</td></tr><tr><td>Prompt (Shortest)</td><td>93.6</td><td>355.5</td><td>52.5</td><td>621.0</td></tr><tr><td>Ours (Best)</td><td>94.4</td><td>276.3</td><td>55.5</td><td>267.0</td></tr><tr><td>Ours (Shortest)</td><td>87.5</td><td>133.8</td><td>50.4</td><td>247.0</td></tr></table>
229
+
230
+ Table 8: CoT-Valve can achieve shorter chains than prompts with better performance.
231
+
232
+ We compare two settings: training directly with the ground-truth solution for five epochs and applying progressive compression for five epochs in total, with the final epoch using the ground-truth data. Our results show that progressive compression significantly improves the performance of short CoT (from 92.19 to 94.92). For each turn, progressive compression gradually reduces the token number while maintaining accuracy.
233
+
234
+ CoT-Valve achieves shorter chains compared to prompt control. We also present in Table 8 the shortest chains achieved by our method and compare them with those obtained using prompt control. Our method outperforms prompt-control methods at shorter chain lengths. Additionally, we explored the limits of chain length for both methods and found that our approach can generate substantially shorter chains than can be achieved through prompt control.
235
+
236
+ # 5 Conclusion
237
+
238
+ In this paper, we propose a method that enables a model to generate reasoning chains of varying lengths rather than relying on prompt control. Based on this approach, we construct a dataset containing both long and short reasoning chains to further enhance controllability and compression efficiency. Experimental results demonstrate the effectiveness of our method in dynamic reasoning-chain control and the compression of CoT. Future research can further explore finer-grained control strategies to improve reasoning efficiency and model controllability.
239
+
240
+ # References
241
+
242
+ Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J Hewett, Mojan Javaheripi, Piero Kauffmann, et al. 2024. Phi-4 technical report. arXiv preprint arXiv:2412.08905.
243
+ Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qizhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. 2024. Do not think that much for $2 + 3 = ?$ on the overthinking of o1-like llms. arXiv preprint arXiv:2412.21187.
244
+ Jeffrey Cheng and Benjamin Van Durme. 2024. Compressed chain of thought: Efficient reasoning through dense representations. Preprint, arXiv:2412.13171.
245
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021a. Training verifiers to solve math word problems. ArXiv, abs/2110.14168.
246
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021b. Training verifiers to solve math word problems. Preprint, arXiv:2110.14168.
247
+ DeepSeek-AI. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, arXiv:2501.12948.
248
+ Yuntian Deng, Yejin Choi, and Stuart Shieber. 2024a. From explicit cot to implicit cot: Learning to internalize cot step by step. Preprint, arXiv:2405.14838.
249
+ Yuntian Deng, Kiran Prasad, Roland Fernandez, Paul Smolensky, Vishrav Chaudhary, and Stuart Shieber. 2024b. Implicit chain of thought reasoning via knowledge distillation.
250
+ Mengru Ding, Hanmeng Liu, Zhizhang Fu, Jian Song, Wenbo Xie, and Yue Zhang. 2024. Break the chain: Large language models can be shortcut reasoners. arXiv preprint arXiv:2406.06580.
251
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
252
+ Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pages 3259-3269. PMLR.
253
+ Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. 2025a. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. Preprint, arXiv:2501.04519.
254
+
255
+ Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. 2025b. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint arXiv:2501.04519.
256
+ Tingxu Han, Chunrong Fang, Shiyu Zhao, Shiqing Ma, Zhenyu Chen, and Zhenting Wang. 2024. Token-budget-aware llm reasoning. arXiv preprint arXiv:2412.18547.
257
+ Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training large language models to reason in a continuous latent space. Preprint, arXiv:2412.06769.
258
+ Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556.
259
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
260
+ Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2022. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089.
261
+ Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. 2024. Openai o1 system card. arXiv preprint arXiv:2412.16720.
262
+ Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, and Xiang Ren. 2023. Are machine rationales (not) useful to humans? measuring and improving human utility of free-text rationales. arXiv preprint arXiv:2305.07095.
263
+ Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou. 2024. C3ot: Generating shorter chain-of-thought without compromising effectiveness. Preprint, arXiv:2412.11664.
264
+ Levente Kocsis and Csaba Szepesvari. 2006. Bandit based monte-carlo planning. In European Conference on Machine Learning.
265
+ Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let's verify step by step. In *The Twelfth International Conference on Learning Representations*.
266
+
267
+ Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. 2024a. Dora: Weight-decomposed low-rank adaptation. In ICML.
268
+ Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Cheng Jiayang, Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2024b. Can language models learn to skip steps? In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
269
+ Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, and Dacheng Tao. 2025a. O1-pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning. arXiv preprint arXiv:2501.12570.
270
+ Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2025b. Improve mathematical reasoning in language models with automated process supervision.
271
+ Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, and Saining Xie. 2025. Inference-time scaling for diffusion models beyond scaling denoising steps. Preprint, arXiv:2501.09732.
272
+ Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. In Advances in Neural Information Processing Systems (NeurIPS).
273
+ Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. 2016. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440.
274
+ Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Ouyang Long, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2021. Webgpt: Browser-assisted question-answering with human feedback. ArXiv, abs/2112.09332.
275
+ Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, and Hongsheng Li. 2024. Visual cot: Unleashing chain-of-thought reasoning in multi-modal language models. Preprint, arXiv:2403.16999.
276
+ Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. 2024. To cot or not to cot? chain-of-thought helps mainly on math and symbolic reasoning. arXiv preprint arXiv:2409.12183.
277
+ Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.
280
+ Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. 2025. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599.
281
+ Qwen Team. 2024a. Qwen2.5: A party of foundation models.
282
+ Qwen Team. 2024b. Qwq: Reflect deeply on the boundaries of the unknown.
283
+ Jean-Francois Ton, Muhammad Faaiz Taufiq, and Yang Liu. 2024. Understanding chain-of-thought in llms through information theory. Preprint, arXiv:2411.11984.
284
+ Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. 2024. Math-shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426-9439, Bangkok, Thailand. Association for Computational Linguistics.
285
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
286
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
287
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik R Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. In Thirty-seventh Conference on Neural Information Processing Systems.
288
+ Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. 2025. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387.
289
+ Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. 2024. Distilling system 2 into system 1. ArXiv, abs/2407.06023.
290
+ Di Zhang, Xiaoshui Huang, Dongzhan Zhou, Yuqiang Li, and Wanli Ouyang. 2024. Accessing gpt-4 level mathematical olympiad solutions via monte carlo tree self-refine with llama-3 8b. Preprint, arXiv:2406.07394.
291
+ Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. 2025. The lessons of developing process reward models in mathematical reasoning. Preprint, arXiv:2501.07301.
292
+
293
+ # A Implementation Details
294
+
295
+ # A.1 Evaluation Metric.
296
+
297
+ For experiments on LLaMA, we use lm-eval-harness to evaluate the model performance. For LLaMA-3.1-8B, we report the strict matching metric due to observed repetition in the model's responses, which causes the flexible match to extract incorrect numerical values. For LLaMA-3.2-1B-Instruct, we report results using the flexible match metric. For QwQ-32B-Preview, DeepSeek-R1-Distill-Llama-8B, and Qwen2.5-32B-LIMO, we first extract the result enclosed within \boxed{}. If no such boxed answer is found, we default to using the last number in the response as the final answer.
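+
+ The following is a minimal sketch of such an extraction routine (an illustrative approximation, not the exact evaluation script; the function name and the fallback regex are our own):
+
+ ```python
+ import re
+
+ def extract_final_answer(response: str):
+     # Prefer the content of the last \boxed{...} in the response.
+     boxed = re.findall(r"\\boxed\{([^{}]*)\}", response)
+     if boxed:
+         return boxed[-1].strip()
+     # Otherwise fall back to the last number appearing in the response.
+     numbers = re.findall(r"-?\d[\d,]*(?:\.\d+)?", response)
+     return numbers[-1].replace(",", "") if numbers else None
+
+ print(extract_final_answer("**Final Answer** $\\boxed{14}$"))  # -> 14
+ print(extract_final_answer("so 16 - 2 = 14 notebooks."))       # -> 14
+ ```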
298
+
299
+ # A.2 Training Setting.
300
+
301
+ LLaMA-3.1-8B. The model is trained using eight A5000 24GB GPUs. We set the batch size to 64 and the peak learning rate to 4e-5, following a cosine decay schedule. A weight decay of 0.01 is applied. For the progressive chain compression experiment, we train the model for two epochs with each type of solution. For all other experiments, we train for a maximum of eight epochs. For LoRA, the rank is set to 32, and the lora_alpha for training is set to 64. During inference, the maximum number of tokens is set to 2048.
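+
+ As a concrete illustration, the LoRA part of this setup could be configured as follows with Hugging Face transformers/peft (a sketch assuming these libraries; the output path is a placeholder, the data-loading code is omitted, and the hyperparameters are the ones listed above):
+
+ ```python
+ import torch
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM, TrainingArguments
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
+ )
+ # LoRA hyperparameters from the text: rank 32, lora_alpha 64.
+ model = get_peft_model(model, LoraConfig(r=32, lora_alpha=64, task_type="CAUSAL_LM"))
+
+ # Peak LR 4e-5 with cosine decay, weight decay 0.01, up to eight epochs;
+ # the effective batch size of 64 is reached via accumulation across 8 GPUs.
+ args = TrainingArguments(
+     output_dir="cot-valve-llama3.1-8b",   # placeholder path
+     per_device_train_batch_size=1,
+     gradient_accumulation_steps=8,
+     learning_rate=4e-5,
+     lr_scheduler_type="cosine",
+     weight_decay=0.01,
+     num_train_epochs=8,
+     bf16=True,
+ )
+ ```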
302
+
303
+ LLaMA-3.2-1B-Instruct. The model is trained using eight A5000 24GB GPUs. We set the batch size to 8 for the CoT-Valve experiment and 64 for all other experiments. The peak learning rate is 4e-5, following a cosine decay schedule, except for the SFT - GSM8K experiment, where the peak learning rate is 1e-5. A weight decay of 0.01 is applied. For the CoT-Valve and SFT-Full Finetune - GSM8K experiments, we train for a maximum of four and six epochs, respectively. For the progressive chain compression experiment, we train the model for two epochs with each type of solution. For all other experiments, training is conducted for up to eight epochs. For LoRA, the rank is set to 32, and the lora_alpha for training is set to 64. During inference, the maximum number of tokens is set to 2048.
304
+
305
+ QwQ-32B-Preview. The model is trained on two H100-80G GPUs. We set the batch size to 64 and train for a maximum of five epochs. The learning rate is 1e-5, with a weight decay of 0.01 applied during training. For LoRA, the rank is set to 2, and the lora_alpha for training is set to 8. During inference, we set the maximum number of tokens to 4192 for GSM8K and to 8192 for AIME.
308
+
309
+ DeepSeek-R1-Distill-Llama-8B. Our experiment on DeepSeek-R1-Distill-Llama-8B² is conducted using the MixChain-zero-shot-GSM8K dataset. The batch size is set to 128, and training is performed for a maximum of five epochs. To ensure that the inference process successfully generates the final answer, we set the maximum token limit to 30K.
310
+
311
+ Qwen2.5-32B-LIMO. We fine-tuned Qwen-32B-Instruct using LIMO, training on four H100 GPUs for 10 epochs with a batch size of 4 and a maximum sequence length of 16K. The learning rate was set to 5e-6. We define Qwen-32B-Instruct as $\theta_0$ and the trained model as $\theta_{1}$ , treating the update direction between them as $\Delta \theta$ . By adjusting $\alpha$ , we generated the MixChain-C-LIMO dataset, which includes two solutions: solution 1 ( $\alpha = 0.8$ ) and solution 0 ( $\alpha = 0.6$ ).
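+
+ The interpolation itself reduces to scaling the parameter difference between the two checkpoints. A minimal sketch (assuming both checkpoints share the same architecture and floating-point parameters; the helper name is ours):
+
+ ```python
+ def interpolate(theta0_state, theta1_state, alpha):
+     # theta(alpha) = theta0 + alpha * (theta1 - theta0), i.e. scaling the
+     # update direction delta-theta between the base and the tuned model.
+     return {
+         name: w0 + alpha * (theta1_state[name] - w0)
+         for name, w0 in theta0_state.items()
+     }
+
+ # e.g. alpha = 0.8 and alpha = 0.6 were used to produce solution 1 and solution 0:
+ # model.load_state_dict(interpolate(base.state_dict(), tuned.state_dict(), 0.8))
+ ```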
312
+
313
+ Based on this, we further trained $\theta_{2}$ for 5 epochs with a batch size of 32, a learning rate of 5e-6, and a weight decay of 0.01, obtaining the results of MixChain-Solution 0 in Table 2. This model can be further refined through CoT-Valve (Results: CoT-Valve + MixChain - Solution 0). Unlike previous experiments, we applied full fine-tuning instead of LoRA. The maximum generated sequence length in this experiment was 15K.
314
+
315
+ # A.3 Dataset Explanation
316
+
317
+ As detailed in Section 4.2, we constructed two types of datasets: MixChain-C and MixChain-Z. The statistics for the datasets are shown in Table 9. For these datasets, we select $\alpha$ values from $\{0.6, 0.8\}$ for LIMO and $\{0.2, 0.4, 0.6, 0.8\}$ for the other datasets, ensuring that all incorrect responses are excluded.
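+
+ A sketch of how such a dataset can be assembled (function and argument names are illustrative, not from the released code): generate one solution per $\alpha$ for every question and keep only solutions whose final answer matches the ground truth.
+
+ ```python
+ def build_mixchain(questions, generators_by_alpha, answer_of, reference_answer):
+     # `generators_by_alpha` maps an interpolation coefficient alpha to a
+     # generate(question) callable for the corresponding interpolated model;
+     # `answer_of` extracts the final answer from a generated solution.
+     dataset = []
+     for q in questions:
+         solutions = {}
+         for alpha, generate in generators_by_alpha.items():
+             solution = generate(q["question"])
+             if answer_of(solution) == reference_answer(q):  # drop incorrect responses
+                 solutions[alpha] = solution
+         if solutions:
+             dataset.append({"question": q["question"], "solutions": solutions})
+     return dataset
+ ```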
318
+
319
+ For MixChain-Z, while the training transition from $\theta_{1}$ to $\theta_{2}$ remains a black box, we can still identify numerous model pairs such as Qwen-32B-Instruct $\rightarrow$ QwQ-32B-Preview, and LLaMA-3.1-8B $\rightarrow$ R1-Distill-Llama-8B, as documented in the technical report. We find that the performance of the base model significantly influences the quality of the dataset.
320
+
321
+ <table><tr><td>Dataset</td><td>Solution Index</td><td>#Samples</td><td>#Avg Token</td></tr><tr><td colspan="4">GSM8K</td></tr><tr><td>Ground-Truth</td><td>1</td><td>7473</td><td>121.8</td></tr><tr><td>MixChain-C</td><td>1</td><td>22419</td><td>294.8</td></tr><tr><td></td><td>0 (Ground-Truth)</td><td></td><td>116.0</td></tr><tr><td></td><td>1</td><td></td><td>279.6</td></tr><tr><td>MixChain-Z</td><td>2</td><td>6863</td><td>310.7</td></tr><tr><td></td><td>3</td><td></td><td>386.7</td></tr><tr><td></td><td>4</td><td></td><td>497.2</td></tr></table>
322
+
323
+ <table><tr><td colspan="4">PRM12K</td></tr><tr><td>Ground-Truth</td><td>1</td><td>12000</td><td>223.1</td></tr><tr><td></td><td>0 (Ground-Truth)</td><td></td><td>172.3</td></tr><tr><td></td><td>1</td><td></td><td>583.2</td></tr><tr><td>MixChain-Z</td><td>2</td><td>8841</td><td>613.7</td></tr><tr><td></td><td>3</td><td></td><td>739.3</td></tr><tr><td></td><td>4</td><td></td><td>1003.2</td></tr></table>
324
+
325
+ <table><tr><td colspan="4">LIMO</td></tr><tr><td>Ground-Truth</td><td>1</td><td>817</td><td>6984.1</td></tr><tr><td rowspan="2">MixChain-C</td><td>1</td><td>474</td><td>2994.7</td></tr><tr><td>2</td><td>564</td><td>4890.6</td></tr></table>
326
+
327
+ Table 9: Dataset statistics. Here we use the tokenizer from QwQ-32B-Preview to count the number of tokens.
328
+
329
+ <table><tr><td>α</td><td>0</td><td>0.125</td><td>0.25</td><td>0.5</td><td>0.75</td><td>1.0</td></tr><tr><td># Tokens</td><td>199.8</td><td>219.4</td><td>233.4</td><td>257.7</td><td>466.3</td><td>772.7</td></tr><tr><td>Accuracy</td><td>45.9</td><td>47.5</td><td>50.2</td><td>57.1</td><td>55.0</td><td>54.5</td></tr></table>
+
+ Table 10: Results of LLaMA-3.2-1B-Instruct trained with DoRA using different $\alpha$ values for interpolation.
330
+
331
+ # B More Analysis
332
+
333
+ Experiments on DoRA. In addition to LoRA, we also train LLaMA-3.2-1B using DoRA (Liu et al., 2024a) and control the magnitude of $\Delta \theta$ by adjusting the $\alpha$ for DoRA. The model is trained on QwQ synthesized data for a maximum of five epochs. We set the batch size to 8 and the peak learning rate to 4e-5, following a cosine decay schedule. A weight decay of 0.01 is applied. For DoRA, the rank is set to 32, and the lora_alpha for training is set to 64.
334
+
335
+ As shown in Table 10, the chain length increases with the $\alpha$ value, demonstrating the effectiveness of interpolating $\Delta \theta$ for DoRA. Furthermore, similar to our observations with LoRA, the best result is not obtained by directly training the model on long CoT data. Specifically, training on QwQ synthesized data $(\alpha = 1.0)$ achieves an accuracy of 54.5 with 772.7 tokens, whereas the best model obtained via CoT-Valve $(\alpha = 0.5)$ achieves an accuracy of 55.72 with only 257.7 tokens.
336
+
337
+ Attention has less effect on the length of the reasoning path than MLP. We experimented with applying LoRA fine-tuning to only specific model components in QwQ. Table 11 shows that fine-tuning only the query, key, or value projection had less effect than the other linear projections on shortening the reasoning chain. This suggests that attention computation may have less impact on controlling chain length, whereas the MLP layers and the final projection in attention play a more significant role. We find this to be an intriguing phenomenon, and further investigation into its underlying causes could provide deeper insights into optimizing reasoning chain compression.
+
+ <table><tr><td>Modules</td><td>GSM8K</td><td>#Tokens</td><td>#Params</td><td>ACU ↑</td></tr><tr><td>-</td><td>95.1</td><td>741.1</td><td>-</td><td>0.40</td></tr><tr><td>K+V</td><td>95.0</td><td>687.7</td><td>0.005%</td><td>0.43</td></tr><tr><td>Q</td><td>95.2</td><td>621.4</td><td>0.004%</td><td>0.48</td></tr><tr><td>O</td><td>95.2</td><td>484.2</td><td>0.004%</td><td>0.61</td></tr><tr><td>Attention</td><td>94.2</td><td>284.2</td><td>0.013%</td><td>1.04</td></tr><tr><td>MLP</td><td>93.5</td><td>221.8</td><td>0.038%</td><td>1.32</td></tr><tr><td>All Linear</td><td>92.4</td><td>227.6</td><td>0.051%</td><td>1.27</td></tr></table>
+
+ Table 11: LoRA on Different Modules.
346
+
347
+ Prompt Control. We evaluate the length of CoT when a length constraint is specified in the prompt. Table 12 presents the number of tokens generated under various prompts for two models. The prompts used for these two models are:
348
+
349
+ # Prompt Template for QwQ-32B-Preview
350
+
351
+ System: You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step.
352
+
353
+ User: Generate the solution in less than <token_count> tokens. <Question>
354
+
355
+ # Prompt Template for LLaMA-3.2-1B-Instruct
356
+
357
+ System:
358
+
359
+ Cutting Knowledge Date: December 2023
360
+
361
+ Today Date: 01 Jan 2025
362
+
363
+ User:
364
+
365
+ Given the following problem, reason and give a final answer to the problem using less than <token_count> tokens.
366
+
367
+ <Question>
368
+
369
+ From the results, we observe that while these prompts provide some control over the length, there remains a significant discrepancy between the generated token count and the intended target.
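+
+ A small sketch of how this discrepancy can be measured (assuming the Hugging Face tokenizer and chat template for LLaMA-3.2-1B-Instruct; the user message follows the template above):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
+
+ def budgeted_prompt(question, token_count):
+     user = (f"Given the following problem, reason and give a final answer to the "
+             f"problem using less than {token_count} tokens.\n{question}")
+     return tok.apply_chat_template([{"role": "user", "content": user}],
+                                    tokenize=False, add_generation_prompt=True)
+
+ def generated_tokens(response):
+     # Compare this count against the budget requested in the prompt (Table 12).
+     return len(tok.encode(response, add_special_tokens=False))
+ ```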
370
+
371
+ Question: Amanda had 10 notebooks. This week, she ordered 6 more and then lost 2. How many notebooks does Amanda have now?
372
+
373
+ # Ground Truth Solution - 49 Tokens:
374
+
375
+ This week, Amanda added 6 notebooks - 2 notebooks = «6-2=4»4 notebooks. Now, Amanda has 10 notebooks + 4 notebooks = «10+4=14»14 notebooks.
376
+
377
+ Extrapolation - 300 Tokens: So Amanda started with 10 notebooks. Then, she ordered 6 more. So, I think I need to add those 6 to the original 10. That should be 10 plus 6, which is 16. Okay, now she lost 2 notebooks. So, I need to subtract those 2 from the 16. That should be 16 minus 2, which equals 14. So, Amanda has 14 notebooks now. Wait a minute, let me double-check that. She had 10, ordered 6 more, so that's $10 + 6 = 16$ . Then lost 2, so $16 - 2 = 14$ . Yeah, that seems right. Alternatively, maybe I can think of it differently. Maybe consider the net gain of notebooks. She ordered 6 and lost 2, so net gain is $6 - 2 = 4$ . Then, add that net gain to the original 10: $10 + 4 = 14$ . Same answer. Or, maybe another way: starting with 10, losing 2 would bring her to 8, and then ordering 6 more would bring her to 14. Again, the same result. I think no matter how I look at it, the answer is 14 notebooks. **Final Answer** $\boxed{14}$
378
+
379
+ QwQ-32B-Preview - 257 tokens: So Amanda started with 10 notebooks. Then, she ordered 6 more. So, I need to add those 6 to the original 10. That should be 10 plus 6, which is 16. Okay, now she lost 2 notebooks. So, I need to subtract those 2 from the 16 she had after ordering more. That means 16 minus 2, which equals 14. So, Amanda has 14 notebooks now. Wait a minute, let me double-check that. She had 10, ordered 6 more, so that's $10 + 6 = 16$ . Then lost 2, so $16 - 2 = 14$ . Yeah, that seems right. Alternatively, maybe I can think of it differently. Maybe combine the ordered and lost notebooks first. She ordered 6 and lost 2, so overall, she gained $6 - 2 = 4$ notebooks. Then, add that to the original 10: $10 + 4 = 14$ . Same answer. I think that's correct. Amanda has 14 notebooks now. **Final Answer** $\boxed{14}$
380
+
381
+ Interpolation - 207 Tokens: Let's break this down step by step. Amanda started with 10 notebooks. Then, she ordered 6 more. So, I need to add those 6 to the original 10. That would be 10 plus 6, which equals 16 notebooks. Okay, so now she has 16 notebooks. But then, she lost 2. So, I need to subtract those 2 from the 16. That would be 16 minus 2, which equals 14 notebooks. Therefore, Amanda now has 14 notebooks. Wait a minute, let me double-check that. She started with 10, ordered 6 more, so that's $10 + 6 = 16$ . Then lost 2, so $16 - 2 = 14$ . Yeah, that seems right. I think that's the answer: Amanda has 14 notebooks now.
382
+
383
+ Interpolation - 133 Tokens: Let's solve this step-by-step: 1. **Initial number of notebooks**: Amanda started with 10 notebooks. 2. **Notebooks ordered**: This week, she ordered 6 more notebooks. - So, we add these 6 notebooks to her initial 10 notebooks. - $10 + 6 = 16$ notebooks. 3. **Notebooks lost**: Then, Amanda lost 2 notebooks. - We need to subtract these 2 lost notebooks from the total she had after ordering more. - $16 - 2 = 14$ notebooks. Therefore, Amanda now has 14 notebooks.
384
+
385
+ Extrapolation - 103 Tokens: To find out how many notebooks Amanda has now, we can break the problem into steps: 1. Amanda initially had 10 notebooks. 2. This week, she ordered 6 more notebooks, so we add 6 to the initial amount: $10 + 6 = 16$ notebooks. 3. Amanda then lost 2 notebooks, so we subtract 2 from the total: $16 - 2 = 14$ notebooks. So, Amanda has 14 notebooks now.
386
+
387
+ Figure 4: An example of the generated solution for the same question.
388
+
389
+ # C Examples
390
+
391
+ Here we show in Fig. 4 an example of the generated CoT from short to long, and we also include two extrapolation cases to demonstrate the generalization ability of our method. Our method notably generates a longer reasoning process compared to the original QwQ model, incorporating an extra reflection phase. During the chain shortening process, it reduces multiple rounds of reasoning and streamlines the language, ultimately enabling us to produce an answer with only 103 tokens through extrapolation.
392
+
393
+ <table><tr><td colspan="2">QwQ-32B-Preview</td><td colspan="2">Llama-3.2-1B Instruct</td></tr><tr><td>Token in Prompt</td><td>#Token Generated</td><td>Token in Prompt</td><td>#Token Generated</td></tr><tr><td>20</td><td>355</td><td>50</td><td>118</td></tr><tr><td>50</td><td>422</td><td>100</td><td>132</td></tr><tr><td>100</td><td>511</td><td>200</td><td>141</td></tr><tr><td>200</td><td>569</td><td>300</td><td>160</td></tr><tr><td>300</td><td>623</td><td>400</td><td>183</td></tr><tr><td>400</td><td>666</td><td>500</td><td>186</td></tr></table>
394
+
395
+ Table 12: Significant discrepancies exist between the conditions specified in the prompt and the number of generated tokens on GSM8K.
2502.09xxx/2502.09601/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86ae58a9b530c47e8464fd9d483a1095b2ff7b72b6dbb566fc495fadaeb846e2
3
+ size 600134
2502.09xxx/2502.09601/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09604/082985ca-12fb-4886-9e5a-31479296b1ff_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db65c5eaf346a1d3efc0246387c62d28f4e0879b66a730bacd4268b7c17d2b26
3
+ size 1363896
2502.09xxx/2502.09604/full.md ADDED
@@ -0,0 +1,484 @@
1
+ # SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models
2
+
3
+ Yung-Sung Chuang<sup>1</sup> Benjamin Cohen-Wang<sup>1</sup> Shannon Zejiang Shen<sup>1</sup> Zhaofeng Wu<sup>1</sup> Hu Xu<sup>2</sup> Xi Victoria Lin<sup>2</sup> James Glass<sup>1</sup> Shang-Wen Li<sup>2</sup> Wen-tau Yih<sup>2</sup>
4
+
5
+ # Abstract
6
+
7
+ We introduce SelfCite, a novel self-supervised approach that aligns LLMs to generate high-quality, fine-grained, sentence-level citations for the statements in their generated responses. Instead of only relying on costly and labor-intensive annotations, SelfCite leverages a reward signal provided by the LLM itself through context ablation: If a citation is necessary, removing the cited text from the context should prevent the same response; if sufficient, retaining the cited text alone should preserve the same response. This reward can guide the inference-time best-of-N sampling strategy to improve citation quality significantly, as well as be used in preference optimization to directly fine-tune the models for generating better citations. The effectiveness of SelfCite is demonstrated by increasing citation F1 up to 5.3 points on the LongBench-Cite benchmark across five long-form question answering tasks. The source code is available at https://github.com/facebookresearch/SelfCite.
8
+
9
+ # 1. Introduction
10
+
11
+ Assistants built using large language models (LLMs) have become ubiquitous in helping users gather information and acquire knowledge (OpenAI, 2022; 2023). For instance, when asked about recent news, an assistant can read through dozens of relevant articles—potentially more than a user could comb through themselves—and use these articles as context to provide a clear, specific answer to the user's query. While this ability can greatly accelerate information gathering, LLMs often produce hallucinations—content that sounds plausible but is actually fabricated (Ji et al., 2023).
12
+
13
+ <sup>1</sup>Massachusetts Institute of Technology, Cambridge, MA 02139, USA <sup>2</sup>Meta FAIR, USA. Correspondence to: Yung-Sung Chuang <yungsung@mit.edu>.
14
+
15
+ Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
16
+
17
+ Even when provided with accurate context, models may misinterpret the data or include details that are not supported by the context (Shi et al., 2024; Chuang et al., 2024).
18
+
19
+ Although completely eliminating hallucinations remains difficult, existing approaches have sought to enhance the reliability of LLMs by providing context attributions—commonly referred to as citations—which are fine-grained references to relevant evidences from the context, alongside generated responses for user verification (Menick et al., 2022; Slobodkin et al., 2024; Zhang et al., 2024). While they have shown promise in generating citations, an outstanding challenge is their reliance on annotated data either from human (Menick et al., 2022; Slobodkin et al., 2024) or costly proprietary APIs (Zhang et al., 2024) to train models to generate citations. Collecting annotations can be time-consuming or costly, especially with long-context documents.
20
+
21
+ To address this challenge, we introduce SelfCite, a novel alignment approach designed to autonomously enhance the quality of citations generated by LLMs without the need for any annotations in the alignment process. Drawing inspiration from model interpretability techniques (Lei et al., 2016; Cohen-Wang et al., 2024), SelfCite leverages the inherent capabilities of LLMs to provide feedback through context ablation—a process to evaluate the necessity and sufficiency of a citation. If removing the cited text prevents the LLM from assigning high probability to the same response, we can infer that it is necessary for the LLM. Conversely, if the response remains highly probable despite removing all context other than the cited text, this indicates that the citation is sufficient for the LLM to make the claim. This self-evaluation mechanism enables SelfCite to calculate a reward signal without relying on the annotation processes.
22
+
23
+ Building on this intuition, we design a reward that can be cheaply computed by the LLM itself, composed of a probability drop and a probability hold under context ablation. By integrating this reward function into a best-of-N sampling strategy, SelfCite achieves substantial improvements in citation quality. Furthermore, we employ this reward for preference optimization using SimPO (Meng et al., 2024), which not only maintains these improvements but also eliminates the need for the computationally expensive best-of-N sampling. We outperform the previous state of the art on the LongBench-Cite benchmark (Zhang et al., 2024) by up to 5.3 points in F1 score, showing a promising direction for bootstrapping citation quality from LLMs via self-rewarding.
26
+
27
+ # 2. Method
28
+
29
+ In this section, we describe the SelfCite framework. We begin by introducing the task of generating responses with context attributions (2.1), referred to as citations for brevity. We then design a reward for providing feedback on citation quality without human annotations (2.2) as illustrated in Fig. 1. Finally, we discuss two approaches for utilizing this reward to improve citation quality: best-of-N sampling (2.3) and preference optimization (2.4).
30
+
31
+ # 2.1. Problem Formulation
32
+
33
+ We first formalize the task of generating responses with context attributions and the metrics to self-evaluate context attributions within the SelfCite framework, inspired by previous papers (Zhang et al., 2024; Cohen-Wang et al., 2024) but adapted to our proposed self-supervised reward.
34
+
35
+ Setup. Consider employing an autoregressive language model (LM) to generate a response to a specific query given a context of relevant information. Specifically, given an LM $p_{\mathrm{LM}}$ , let $p_{\mathrm{LM}}(t_i \mid t_1, \ldots, t_{i-1})$ denote its output distribution over the next token $t_i$ based on a sequence of preceding tokens $t_1, \ldots, t_{i-1}$ . Next, let $C$ represent the context of relevant information. This context is partitioned into $|C|$ sentences: $c_1, c_2, \ldots, c_{|C|}$ . Each sentence $c_j$ is prepended with a unique identifier (e.g., sentence index $j$ ) as a way for the model to reference the sentence when generating citations. The context $C$ is followed by a query $Q$ , a question or instruction for the model. A response $R$ is then sampled from the model $p_{\mathrm{LM}}$ .
36
+
37
+ Generating Responses with Context Attributions. In SelfCite, following prior work on generating responses with context attributions (Zhang et al., 2024), each statement $r_i$ in the response $R$ is followed by a citation sequence $e_i$ consisting of the identifiers of sentences from the context $C$. Thus, the entire response sequence $R$ is $\{r_1, e_1, r_2, e_2, \ldots, r_S, e_S\}$, where $S$ is the total number of generated statements. The citation $e_i$ is intended to reference sentences that support the generation of $r_i$. Formally, for each response statement $r_i$, the model outputs a citation sequence $e_i = \{e_i^1, e_i^2, \ldots, e_i^m\}$, where each $e_i^j \in \{1, 2, \ldots, |C|\}$ corresponds to a specific sentence number in the context $C$, and $m$ sentences are cited. Note that this citation sequence may be empty. The entire response $R$, consisting of statements $r_i$ followed by citations $e_i$, is sampled from the LM $p_{\mathrm{LM}}$ as follows:
40
+
41
+ $$
42
+ \begin{array}{l} r _ {i} \sim p _ {\mathrm {L M}} \left(\cdot \mid c _ {1}, \dots , c _ {| C |}, Q, r _ {1}, e _ {1}, \dots , r _ {i - 1}, e _ {i - 1}\right), \\ e _ {i} \sim p _ {\mathrm {L M}} \left(\cdot \mid c _ {1}, \dots , c _ {| C |}, Q, r _ {1}, e _ {1}, \dots , r _ {i - 1}, e _ {i - 1}, r _ {i}\right). \\ \end{array}
43
+ $$
44
+
45
+ The objective of optimizing the LM is to ensure that the citation sequence $e_i$ accurately reflects the evidence from the context that supports the generation of $r_i$. In the SFT setting (Zhang et al., 2024), the probability of "ground truth" annotated responses and citations $\{\hat{r}_1, \hat{e}_1, \dots, \hat{r}_S, \hat{e}_S\}$ is maximized given the input $C$ and $Q$, but it is not trivial to perform further alignment with feedback after the SFT data is used up. To achieve this, we introduce SelfCite, which can evaluate the quality of these citations based on context ablation and use the result as a reward for further preference optimization.
46
+
47
+ # 2.2. Self-Supervised Reward via Context Ablation
48
+
49
+ We measure the quality of a citation sequence $e_i$ by the changes in the LM's probability of generating $r_i$ when the cited sentences are either removed from or isolated within the context. To simplify the notation, let all the cited context sentences be $E_i = \{c_{e_i^1}, c_{e_i^2}, \ldots, c_{e_i^m}\}$. We define two key metrics: necessity score and sufficiency score, and finally combine them into the final reward, as shown in Fig. 1.
50
+
51
+ Necessity Score: Probability Drop. This metric quantifies the decrease in the probability of generating $r_i$ when the cited sentences in $E_i$ are all removed from the context (denoted by the set-minus operator $\setminus$). Formally, it is defined as:
52
+
53
+ $$
+ \text{Prob-Drop}(e_i) = \log p_{\mathrm{LM}}(r_i \mid C) - \log p_{\mathrm{LM}}(r_i \mid C \setminus E_i).
+ $$
56
+
57
+ To keep the equation concise, we omit $Q$ and $\{r_1, e_1, \dots, r_{i-1}, e_{i-1}\}$ from the equation, but they remain in the context history when computing the probabilities. A larger probability drop indicates that the removal of $E_{i}$ significantly diminishes the likelihood of generating $r_i$, thereby validating the necessity of the cited evidence.
58
+
59
+ Sufficiency Score: Probability Hold. Conversely, this metric measures whether the probability of generating $r_i$ remains high when only the cited sentences are kept in the context, effectively testing the sufficiency of the citation to support the response statement. Formally:
60
+
61
+ $$
+ \text{Prob-Hold}(e_i) = \log p_{\mathrm{LM}}(r_i \mid E_i) - \log p_{\mathrm{LM}}(r_i \mid C).
+ $$
64
+
65
+ A more positive value of probability hold indicates that the cited sentences alone are sufficient to support the generation of $r_i$ , while removing all the other irrelevant context. Please note that the values of probability drop or hold can be either positive or negative. For example, if the citation is not relevant to $r_i$ or even distracting, it is possible for $p(r_i \mid E_i)$ to be lower than $p(r_i \mid C)$ .
66
+
67
+ ![](images/d604f867e851087665c88ea446c6b66c3bc8baa812d400470b1c6ae1166b3d7f.jpg)
68
+ Figure 1. The SelfCite framework calculates rewards based on two metrics: necessity score (probability drop) and sufficiency score (probability hold). First, the full context is used to generate a response. Then, the framework evaluates the probability of generating the same response after (1) removing the cited sentences from the context and (2) using only the cited sentences in the context. The probability drop and hold are computed from these probability differences, and their sum is used as the final reward.
69
+
70
+ Final Reward. To comprehensively evaluate the necessity and sufficiency of the generated citations, we add the two metrics together, where the opposing terms cancel out:
71
+
72
+ $$
+ \text{Reward}(e_i) = \text{Prob-Drop}(e_i) + \text{Prob-Hold}(e_i) = \log p_{\mathrm{LM}}(r_i \mid E_i) - \log p_{\mathrm{LM}}(r_i \mid C \setminus E_i). \tag{1}
+ $$
75
+
76
+ The combined reward measures if the citations are both necessary and sufficient for generating the response $r_i$ .
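+
+ A minimal sketch of this reward with a Hugging Face causal LM (helper names are ours; for brevity the query and earlier statements are folded into the context string, although, as noted above, they remain in the context history in the actual method):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def response_logprob(model, tok, context, response):
+     # Sum of log p(response tokens | context) under the LM.
+     ctx = tok(context, return_tensors="pt").input_ids
+     resp = tok(response, add_special_tokens=False, return_tensors="pt").input_ids
+     ids = torch.cat([ctx, resp], dim=1)
+     logprobs = torch.log_softmax(model(ids).logits[:, :-1], dim=-1)
+     total = 0.0
+     for pos, token in zip(range(ctx.shape[1] - 1, ids.shape[1] - 1), resp[0]):
+         total += logprobs[0, pos, token].item()  # logit at pos predicts token pos+1
+     return total
+
+ def selfcite_reward(model, tok, cited_only, context_without_cited, response):
+     # Eq. (1): log p(r_i | E_i) - log p(r_i | C \ E_i); the two log p(r_i | C)
+     # terms of Prob-Drop and Prob-Hold cancel out.
+     return (response_logprob(model, tok, cited_only, response)
+             - response_logprob(model, tok, context_without_cited, response))
+ ```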
77
+
78
+ # 2.3. Best-of-N Sampling
79
+
80
+ To leverage the self-supervised reward computed via context ablation, we employ a best-of-$N$ sampling strategy, which is a common way to test the effectiveness of a reward design (Gao et al., 2023a; Lightman et al., 2024) as a performance oracle without any confounders from training. For convenience, we first generate the full response $R = \{r_1, e_1, \dots, r_S, e_S\}$, which includes a set of statements $(r_i)$ paired with citations $(e_i)$, and then locate the position of each $e_i$, i.e., where the citation tags `<cite>...</cite>` are generated. Within the citation tags of $e_i$, we resample $N$ candidate citation sequences $(e_i^{(1)}, \ldots, e_i^{(N)})$ by making the model continue the generation from $\{C, Q, r_1, e_1, \ldots, r_i\}$, and then select the best citation $e_i^*$ that maximizes the combined reward metric, Eq. (1). The corresponding procedure is shown in Algorithm 1. After obtaining all the selected citations $\{e_1^*, \ldots, e_S^*\}$, we replace the original citation sequence $e_i$ with the optimal citation $e_i^*$ for each response statement $r_i$, while keeping the response statements $\{r_1, \dots, r_S\}$ unchanged. This process is repeated for each statement in the response $R$ to obtain the final, citation-improved output $R^{*} = \{r_{1}, e_{1}^{*}, \dots, r_{S}, e_{S}^{*}\}$. To prevent the model from citing too many sentences, we exclude a candidate $e_i^{(k)}$ if the cited text $E_i^{(k)}$ is longer than $L_{\max} = 384$ tokens in total, unless it consists of a single long sentence.
81
+
82
+ # Algorithm 1 SelfCite Best-of-N Sampling for Citations
83
+
84
+ Require: LM $p_{\mathrm{LM}}$, context $C$, query $Q$, response $R$, number of candidates $N$, length limit $L_{\max}$; $T(\cdot)$ counts the number of tokens in a text, $\#(\cdot)$ counts the number of sentences in a citation.
+
+ - for $r_i \in R$ do
+   - $\mathrm{Reward}(k) = -\infty$ for $k = 1, \dots, N$
+   - for $k = 1, \dots, N$ do
+     - $e_i^{(k)} \sim p_{\mathrm{LM}}(\cdot \mid r_i, C, Q)$
+     - if $T(E_i^{(k)}) \le L_{\max}$ or $\#(e_i^{(k)}) = 1$ then $\mathrm{Reward}(k) = \log p_{\mathrm{LM}}(r_i \mid E_i^{(k)}) - \log p_{\mathrm{LM}}(r_i \mid C \setminus E_i^{(k)})$
+   - $k^* = \arg\max_k \mathrm{Reward}(k)$, $\; e_i^* = e_i^{(k^*)}$
+ - return $R^* = \{r_1, e_1^*, \dots, r_S, e_S^*\}$
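+
+ In code, the loop amounts to the following sketch (reusing the `selfcite_reward` helper sketched in Section 2.2; `sample_citation` stands in for resampling the citation string from the LM and is not an actual API):
+
+ ```python
+ def best_of_n(model, tok, sentences, response_stmt, sample_citation,
+               n=10, max_cited_tokens=384):
+     best, best_reward = None, float("-inf")
+     for _ in range(n):
+         cited = sample_citation()                  # e.g. [3, 4, 7] sentence indices
+         cited_text = " ".join(sentences[i] for i in cited)
+         rest_text = " ".join(s for i, s in enumerate(sentences) if i not in cited)
+         too_long = len(tok.encode(cited_text, add_special_tokens=False)) > max_cited_tokens
+         if too_long and len(cited) > 1:            # length limit from Algorithm 1
+             continue
+         reward = selfcite_reward(model, tok, cited_text, rest_text, response_stmt)
+         if reward > best_reward:
+             best, best_reward = cited, reward
+     return best
+ ```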
115
+
116
+ # 2.4. Preference Optimization
117
+
118
+ Best-of-N sampling is a straightforward way to obtain better citations, but it comes at the additional inference cost of generating candidates and reranking. Thus, we try to internalize the ability to generate better citations back into the LM itself.
119
+
120
+ Given documents and queries, we can prompt the LM to generate the responses along with the citations $R = \{r_1, e_1, \dots, r_S, e_S\}$. By further applying best-of-N sampling, we can obtain new responses of the same statements but with better citations $R^* = \{r_1, e_1^*, \dots, r_S, e_S^*\}$. Such preference data can be used in direct preference optimization (DPO) (Rafailov et al., 2024) to align the model based on the preference between the original outputs and improved outputs. Instead of using DPO, we choose its variant SimPO (Meng et al., 2024) here, as SimPO does not require a reference model and allows $2\times$ memory saving for 25.6K long-context fine-tuning. Through this self-supervised process, which does not require ground-truth answers or human annotations, the model learns to generate more accurate and contextually grounded citations on its own.
123
+
124
+ # 3. Experiments
125
+
126
+ We evaluate the effectiveness of SelfCite by applying the best-of-N sampling and preference optimization methods to existing models that generate responses with citations.
127
+
128
+ # 3.1. Model Details
129
+
130
+ We use LongCite-8B, the Llama-3.1-8B model (Dubey et al., 2024) fine-tuned on LongCite-45K SFT data (Zhang et al., 2024), as the starting point for both best-of-N sampling and preference optimization. We adopt the same text segmentation strategy from Zhang et al. (2024): each document is split into individual sentences using NLTK (Bird, 2006) and Chinese punctuation, and each sentence is prepended with a unique identifier in the `<C{i}>` format. These identifiers serve as the citation indices, enabling the model to cite relevant context right after its statements in the format `<statement>{content ...}<cite>[i_1-i_2][i_3-i_4]...</cite></statement>`. This format allows the model to cite a single sentence (e.g., $i_1 = i_2$) or a span (e.g., $i_1 < i_2$) efficiently within a few tokens. The responses are generated via top-p sampling (Holtzman et al., 2020) with $p = 0.7$ and temperature $= 0.95$. We set $p = 0.9$ and temperature $= 1.2$ when doing best-of-N sampling for the citation strings to increase the diversity. We set $N = 10$ in all the experiments considering the limited diversity in citations.
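+
+ For illustration, preparing the numbered context and reading back the cited spans looks roughly as follows (a sketch based on the format described above; the exact delimiters of the released code may differ):
+
+ ```python
+ import re
+
+ def number_sentences(sentences):
+     # Prepend the <C{i}> identifiers used as citation indices.
+     return "".join(f"<C{i}>{s}" for i, s in enumerate(sentences, start=1))
+
+ def cited_indices(statement_block):
+     # Parse '[i1-i2][i3-i4]...' spans inside <cite>...</cite> into sentence indices.
+     cited = set()
+     for lo, hi in re.findall(r"\[(\d+)-(\d+)\]", statement_block):
+         cited.update(range(int(lo), int(hi) + 1))
+     return sorted(cited)
+
+ print(cited_indices("<statement>...<cite>[3-5][7-7]</cite></statement>"))  # [3, 4, 5, 7]
+ ```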
131
+
132
+ # 3.2. Preference Optimization
133
+
134
+ LongCite-45K. Best-of-N sampling (Section 2.3) requires no training, so no training data is used. For preference optimization with SimPO (Section 2.4), we use 2K document–question pairs from LongCite-45K (Zhang et al., 2024) as the training set but we do not use its ground-truth responses with high-quality citations for SFT. Instead, we generate model responses from the documents and queries, then apply best-of-N to refine citations. We label the original responses as rejected and replace their citations with BoN-refined ones to create the chosen responses, forming preference pairs to build the dataset for SimPO.
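+
+ Concretely, each training example becomes a preference pair roughly as sketched below (field names are ours, and the length-balancing step described next is omitted for brevity):
+
+ ```python
+ def to_preference_pairs(examples):
+     pairs = []
+     for ex in examples:
+         pairs.append({
+             "prompt": ex["prompt"],               # numbered context + query
+             "chosen": ex["bon_response"],         # same statements, BoN-refined citations
+             "rejected": ex["original_response"],  # same statements, original citations
+         })
+     return pairs
+ ```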
135
+
136
+ Data Construction and Length Balancing. Since best-of-N responses tend to have slightly longer citations, directly fine-tuning on them can lead the model to adopt a shortcut: generating longer citations instead of improving citation quality. To prevent this, we introduce length balancing: if an original response has a shorter citation length than the best-of-N response, we insert random citations from nearby sentences. This encourages the model to focus on where to cite rather than simply citing more. Details are provided in Appendix C, with an ablation study in Section 4.2.
139
+
140
+ # 3.3. Evaluation
141
+
142
+ Benchmark. We evaluate our approach on LongBench-Cite (Zhang et al., 2024), a comprehensive benchmark specifically designed for long-context QA with citations (LQAC). Given a long context $C$ and a query $Q$, the model must produce a multi-statement answer in which each statement cites relevant supporting sentences in $C$. Unlike chunk-level citation schemes (Gao et al., 2023b), which cite short paragraphs, LongBench-Cite adopts sentence-level citations to ensure semantic integrity and finer-grained evidence tracking. LongBench-Cite assesses two main aspects:
143
+
144
+ - Citation Quality: Whether each statement is fully supported by relevant and only relevant sentences. GPT-4o measures citation recall (extent to which a statement is fully or partially supported by the cited text) and citation precision (whether each cited text truly supports the statement). These are combined into a citation F1 score. Additionally, we track average citation length (tokens per citation) to promote fine-grained citations over unnecessarily long passages.
145
+ - Correctness: How accurately and comprehensively the response answers the query disregarding the citations. This is scored by GPT-4o in a zero-/few-shot fashion based on the query and reference answers.
146
+
147
+ The benchmark contains five datasets, including single-doc QA MultiFieldQA-en/zh (Bai et al., 2023), multi-doc QA HotpotQA (Yang et al., 2018) and DuReader (He et al., 2018), one summarization dataset GovReport (Huang et al., 2021), and LongBench-Chat (Bai et al., 2024) which covers diverse real-world queries with long contexts such as document QA, summarization, and coding.
148
+
149
+ Baselines. SelfCite is compared with these baselines.
150
+
151
+ - Prompting: Zhang et al. (2024) propose the baseline of prompting LLMs with a one-shot example. This can be applied to proprietary models including GPT-4o (OpenAI, 2023), Claude-3-sonnet (Anthropic, 2024), and GLM-4 (GLM et al., 2024), as well as open-source models including GLM-4-9B-chat (GLM et al., 2024), Llama-3.1-{8,70}B-Instruct (Dubey et al., 2024), and Mistral-Large-Instruct (Mistral, 2024).
152
+ - Contributive context attribution: Contributive context attribution seeks to directly identify the parts of the context that cause the model to generate a particular statement.
153
+
154
155
+ Table 1. Citation recall (R), citation precision (P), citation F1 (F1), and citation length evaluated on LongBench-Cite benchmark. The best of our results are bolded. The best of previous state of the art are underlined. $\dagger$ indicates the results taken from Zhang et al. (2024).
156
+
157
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Longbench-Chat</td><td colspan="3">MultifieldQA</td><td colspan="3">HotpotQA</td><td colspan="3">Dureader</td><td colspan="3">GovReport</td><td rowspan="2">Avg. F1</td><td rowspan="2">Citation Length</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td></tr><tr><td colspan="18">Proprietary models</td></tr><tr><td>GPT-4o†</td><td>46.7</td><td>53.5</td><td>46.7</td><td>79.0</td><td>87.9</td><td>80.6</td><td>55.7</td><td>62.3</td><td>53.4</td><td>65.6</td><td>74.2</td><td>67.4</td><td>73.4</td><td>90.4</td><td>79.8</td><td>65.6</td><td>220</td></tr><tr><td>Claude-3-sonnet†</td><td>52.0</td><td>67.8</td><td>55.1</td><td>64.7</td><td>85.8</td><td>71.3</td><td>46.4</td><td>65.8</td><td>49.9</td><td>67.7</td><td>89.2</td><td>75.5</td><td>77.4</td><td>93.9</td><td>84.1</td><td>67.2</td><td>132</td></tr><tr><td>GLM-4†</td><td>47.6</td><td>53.9</td><td>47.1</td><td>72.3</td><td>80.1</td><td>73.6</td><td>47.0</td><td>50.1</td><td>44.4</td><td>73.4</td><td>82.3</td><td>75.0</td><td>82.8</td><td>93.4</td><td>87.1</td><td>65.4</td><td>169</td></tr><tr><td colspan="18">Open-source models</td></tr><tr><td>GLM-4-9B-chat†</td><td>25.9</td><td>20.5</td><td>16.7</td><td>51.1</td><td>60.6</td><td>52.0</td><td>22.9</td><td>28.8</td><td>20.1</td><td>45.4</td><td>48.3</td><td>40.9</td><td>5.7</td><td>8.2</td><td>6.3</td><td>27.2</td><td>96</td></tr><tr><td>Llama-3.1-8B-Instruct†</td><td>14.1</td><td>19.5</td><td>12.4</td><td>29.8</td><td>44.3</td><td>31.6</td><td>20.2</td><td>30.9</td><td>20.9</td><td>22.0</td><td>25.1</td><td>17.0</td><td>16.2</td><td>25.3</td><td>16.8</td><td>19.7</td><td>100</td></tr><tr><td>Llama-3.1-70B-Instruct†</td><td>25.8</td><td>32.0</td><td>23.2</td><td>53.2</td><td>65.2</td><td>53.9</td><td>29.6</td><td>37.3</td><td>28.6</td><td>38.2</td><td>46.0</td><td>35.4</td><td>53.4</td><td>77.5</td><td>60.7</td><td>40.4</td><td>174</td></tr><tr><td>Mistral-Large-Instruct†</td><td>19.8</td><td>23.9</td><td>19.0</td><td>71.8</td><td>80.7</td><td>73.8</td><td>34.5</td><td>40.9</td><td>32.1</td><td>58.3</td><td>67.0</td><td>60.1</td><td>67.9</td><td>79.6</td><td>72.5</td><td>51.5</td><td>132</td></tr><tr><td colspan="18">Contributive context attribution (with Llama-3.1-8B-Instruct)</td></tr><tr><td>ContextCite (32 calls)</td><td>56.7</td><td>76.8</td><td>58.0</td><td>76.1</td><td>87.2</td><td>78.9</td><td>40.5</td><td>54.7</td><td>43.9</td><td>58.0</td><td>82.4</td><td>65.0</td><td>67.1</td><td>88.8</td><td>75.6</td><td>64.3</td><td>92.7</td></tr><tr><td>ContextCite (256 calls)</td><td>63.5</td><td>83.1</td><td>64.7</td><td>78.8</td><td>89.8</td><td>81.8</td><td>46.5</td><td>60.8</td><td>49.2</td><td>61.7</td><td>89.1</td><td>70.1</td><td>69.1</td><td>93.5</td><td>78.8</td><td>68.9</td><td>100.8</td></tr><tr><td colspan="18">Fine-tuned models</td></tr><tr><td>LongCite-9B†</td><td>57.6</td><td>78.1</td><td>63.6</td><td>67.3</td><td>91.0</td><td>74.8</td><td>61.8</td><td>78.8</td><td>64.8</td><td>67.6</td><td>89.2</td><td>74.4</td><td>63.4</td><td>76.5</td><td>68.2</td><td>69.2</td><td>91</td></tr><tr><td>LongCite-8B†</td><td>62.0</td><td>79.7</td><td>67.4</td><td>74.7</td><td>93.0</td><td>80.8</td><td>59.2</td><td>72.1</td><td>60.3</td><td>68.3</td><td>85.6</td><td>73.1</td><td>74.0</td><td>86.6</td><td>78.5</td><td>72.0</td><td>85</td></tr><tr><td>+ SimPO w/ NLI 
Rewards</td><td>64.4</td><td>87.1</td><td>69.8</td><td>70.1</td><td>92.4</td><td>77.4</td><td>58.8</td><td>78.1</td><td>63.2</td><td>69.4</td><td>91.1</td><td>77.2</td><td>83.7</td><td>93</td><td>87.5</td><td>75.0</td><td>105.9</td></tr><tr><td colspan="18">Ours: SelfCite</td></tr><tr><td>LongCite-8B (Our repro.)</td><td>67.0</td><td>78.1</td><td>66.6</td><td>74.8</td><td>90.7</td><td>79.9</td><td>60.8</td><td>77.9</td><td>64.1</td><td>67.1</td><td>87.2</td><td>73.7</td><td>81.6</td><td>89.3</td><td>84.5</td><td>73.8</td><td>83.5</td></tr><tr><td>+ BoN</td><td>68.4</td><td>81.3</td><td>71.2</td><td>76.1</td><td>92.8</td><td>81.2</td><td>67.2</td><td>81.0</td><td>68.8</td><td>70.6</td><td>90.9</td><td>76.9</td><td>87.6</td><td>92.4</td><td>89.3</td><td>77.5</td><td>93.4</td></tr><tr><td>+ SimPO</td><td>68.1</td><td>79.5</td><td>69.1</td><td>75.5</td><td>92.6</td><td>81.0</td><td>69.4</td><td>82.3</td><td>71.5</td><td>72.7</td><td>91.6</td><td>78.9</td><td>86.4</td><td>92.9</td><td>89.1</td><td>77.9</td><td>105.7</td></tr><tr><td>+ SimPO then BoN</td><td>73.3</td><td>79.4</td><td>72.8</td><td>76.7</td><td>93.2</td><td>82.2</td><td>69.4</td><td>83.0</td><td>71.1</td><td>74.2</td><td>92.2</td><td>80.3</td><td>86.7</td><td>92.7</td><td>89.2</td><td>79.1</td><td>94.7</td></tr><tr><td colspan="18">Llama-3.1-8B-Instruct (fully self-supervised setting)</td></tr><tr><td>+ SFT on ContextCite</td><td>52.3</td><td>70.6</td><td>56.5</td><td>79.1</td><td>90.5</td><td>82.0</td><td>54.5</td><td>72.3</td><td>56.3</td><td>54.9</td><td>79.0</td><td>61.6</td><td>63.7</td><td>84.9</td><td>72.3</td><td>65.7</td><td>83.0</td></tr><tr><td>+ BoN</td><td>54.8</td><td>67.6</td><td>58.1</td><td>80.4</td><td>90.5</td><td>83.0</td><td>58.3</td><td>70.0</td><td>57.5</td><td>57.6</td><td>79.0</td><td>63.1</td><td>67.2</td><td>84.8</td><td>74.6</td><td>67.3</td><td>80.4</td></tr><tr><td>+ SimPO</td><td>63.3</td><td>74.3</td><td>64.6</td><td>80.2</td><td>88.9</td><td>82.4</td><td>59.7</td><td>76.9</td><td>61.0</td><td>59.0</td><td>80.9</td><td>65.4</td><td>68.5</td><td>86.6</td><td>76.1</td><td>69.9</td><td>90.2</td></tr><tr><td>+ SimPO then BoN</td><td>66.0</td><td>82.4</td><td>71.1</td><td>81.5</td><td>90.7</td><td>83.2</td><td>61.3</td><td>70.0</td><td>59.9</td><td>62.1</td><td>81.4</td><td>67.4</td><td>68.8</td><td>86.2</td><td>76.1</td><td>71.5</td><td>87.4</td></tr></table>
158
+
159
+ We consider ContextCite (Cohen-Wang et al., 2024), a contributive context attribution method that performs several random context ablations to model the effect of ablating different parts of the context on a generated statement. We use NLTK to split Llama-3.1-8B-Instruct's responses into statements, and then apply ContextCite with 32 and 256 random context ablations to obtain the citations, with the details described in Appendix B.
160
+
161
+ - Fine-tuned models: LongCite-8B and 9B released by Zhang et al. (2024), trained on LongCite-45K, finetuned from Llama-3.1-8B (Dubey et al., 2024) and GLM-4-9B (GLM et al., 2024), respectively. Additionally, we consider a baseline of finetuning LongCite-8B using SimPO with the NLI rewards which resembles Huang et al. (2024a), with the details in Appendix E.
162
+
163
+ # 3.4. Main Results
164
+
165
+ Citation Quality. Table 1 presents our main results. Our best-of-N sampling (BoN) consistently improves both citation recall and citation precision across tasks, increasing the overall F1 score from 73.8 to 77.5. Using SimPO to internalize BoN's gains—eliminating the need for costly BoN sampling—achieves a similar improvement, with an F1 of 77.9. Applying BoN again to the SimPO fine-tuned model further boosts F1 by 5.3 points to 79.1, the highest across the datasets, suggesting room for further gains. Our results surpass LongCite-8B/9B at similar citation lengths and outperform proprietary model prompting while producing shorter citations.
168
+
169
+ To better contextualize the gains of our proposed reward, we additionally implement a variant of SimPO using the NLI-based citation precision/recall rewards from Huang et al. (2024a), using the same training pipeline and initialization as our SimPO and modifying only the reward function (see details in Appendix E). As shown in the "SimPO w/ NLI Rewards" row, this baseline improves LongCite-8B on 3 out of 5 datasets, but is still consistently outperformed by SelfCite. This result highlights that while NLI-based rewards are helpful, our SelfCite reward provides a more accurate signal for optimizing citation quality.
170
+
171
+ Besides the fine-tuned baselines, we additionally compare our method to ContextCite for reference, a method very different from SelfCite: it does not directly generate citations, but instead estimates the importance scores of the context sentences after the response is generated (in Appendix B we show how to convert continuous importance scores into citations). Both SelfCite and ContextCite rely on the idea of context ablation, but our approach is significantly better. A key reason is that ContextCite estimates sentence importance from scratch using linear regression.
172
+
173
+ Table 2. Answer correctness when responding with or without citations. $\dagger$ indicates results taken from Zhang et al. (2024). The header contains abbreviations for the same five datasets in Table 1.
174
+
175
+ <table><tr><td>Model</td><td>Long.</td><td>Multi.</td><td>Hot.</td><td>Dur.</td><td>Gov.</td><td>Avg</td></tr><tr><td colspan="7">Answering without citations</td></tr><tr><td>\( LongSFT-8B^† \)</td><td>68.6</td><td>83.6</td><td>69.0</td><td>62.3</td><td>54.4</td><td>67.6</td></tr><tr><td>\( LongSFT-9B^† \)</td><td>64.6</td><td>83.3</td><td>67.5</td><td>66.3</td><td>46.4</td><td>65.6</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>66.0</td><td>83.7</td><td>65.8</td><td>62.8</td><td>66.1</td><td>68.9</td></tr><tr><td colspan="7">Answering with citations</td></tr><tr><td>LongCite-8B (Our repro.)</td><td>67.6</td><td>86.7</td><td>69.3</td><td>64.0</td><td>60.4</td><td>69.6</td></tr><tr><td>+ SimPO</td><td>67.4</td><td>86.7</td><td>67.5</td><td>66.0</td><td>61.3</td><td>69.8</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>58.4</td><td>75.3</td><td>67.3</td><td>59.3</td><td>56.4</td><td>63.3</td></tr><tr><td>+ SFT on ContextCite</td><td>58.8</td><td>83.4</td><td>65.8</td><td>57.8</td><td>57.5</td><td>64.6</td></tr><tr><td>+ SimPO</td><td>56.8</td><td>80.9</td><td>65.3</td><td>59.5</td><td>60.9</td><td>64.7</td></tr></table>
176
+
177
+ In contrast, we rerank existing LLM-generated citation candidates, leading to more efficient and accurate estimation of citation quality.
178
+
179
+ Finally, we evaluate the recently released Claude Citations API; as shown in Appendix D, SelfCite achieves strong results very close to this commercial-level API, validating the effectiveness of SelfCite.
180
+
181
+ Fully Self-Supervised Setting. In our main experiment, we start from the Llama-3.1-8B model fine-tuned on the LongCite-45K SFT data, which effectively kick-starts its ability to generate structured citations for best-of-N sampling. The subsequent SimPO alignment stage is entirely self-supervised. We are also curious whether it is possible to start from a fully self-supervised SFT model and then apply our self-supervised alignment after that. To this end, we automatically generate 11K citation SFT examples using ContextCite (see Appendix B for details) to replace the LongCite-45K annotations in the training data, with the results shown at the bottom of Table 1. We can see that SFT on ContextCite achieves decent initial results (65.7 F1) but still falls far short of LongCite-8B (73.8 F1). BoN helps improve F1 to 67.3. After SimPO training, it achieves 69.9 F1, and additionally applying BoN boosts its F1 by 5.8 points to 71.5, significantly closing the gap to LongCite-8B. This shows that our alignment method not only improves supervised models, but also enhances models trained purely from self-supervision.
182
+
183
+ Answer Correctness. For best-of-N sampling, only the citation parts are modified, so the responses it generates to answer the questions are the same as those of the original LongCite-8B model, maintaining the same correctness. For the SimPO fine-tuned models, we test their answer correctness by the evaluation in Zhang et al. (2024), which contains two settings: answering with/without citations. If answering with citations, the model will be prompted to generate answers with structured citations, making the task more complex, and the citation parts will be removed when evaluating the answer correctness. The results in Table 2 show that the SimPO fine-tuning does not change the correctness of the LongCite-8B model much. The correctness
184
+
185
+ is similar to that of LongSFT-8B/9B (Zhang et al., 2024), which are ablation baselines fine-tuned on LongCite-45k QA pairs but without the citation parts. The same observation holds when starting from Llama-3.1-8B-Instruct: neither SFT on ContextCite data nor the further SimPO step changes the answer correctness significantly. With answer correctness unchanged, the additional citations improve the verifiability of the answers, enabling a user to easily double-check them, even in cases where the answers are wrong.
186
+
187
+ Chunk-level Citation Evaluation. Additionally, we evaluate our methods on the traditional chunk-level citation benchmark ALCE (Gao et al., 2023b). However, due to the mismatch in data distributions and the different task settings between training (sentence-level) and evaluation (chunk-level), we treat this as a zero-shot evaluation; the results are reported in Appendix F due to limited space.
188
+
189
+ # 4. Analysis
190
+
191
+ # 4.1. Ablation Study on Rewards
192
+
193
+ To better understand our final reward design, we explore various reward strategies in the BoN sampling process. Here, all BoN candidates are pre-generated and fixed, so the reward is the only factor affecting the results. Table 3 presents our ablation results on HotpotQA, while citation lengths are computed across all LongBench-Cite datasets for direct comparison with Table 1. We evaluate four alternative reward designs. BoN by LM log prob re-ranks candidates simply by the probability of the citation string, $[i_1 - i_2][i_3 - i_4] \ldots$, which is similar to beam search but less costly. We observe that this strategy slightly boosts recall while reducing precision, resulting in a minor reduction in F1. BoN by max citation length always selects the candidate with the longest citations, i.e., citing the greatest number of sentences. Although it improves recall, it significantly reduces precision from 77.9 to 73.6 and inflates the citation length from 83.5 to 139.8. By contrast, both BoN by Prob-Drop and BoN by Prob-Hold improve recall without sacrificing precision. Finally, combining Prob-Drop and Prob-Hold into our final SelfCite reward achieves the best outcome, increasing both recall and precision and yielding a 4.7-point improvement in F1.
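+
+ To make this comparison concrete, the following is a minimal sketch (not the released implementation) of how BoN candidates can be re-ranked with Prob-Drop- and Prob-Hold-style rewards. The helper `log_prob_response` and the simple sum used to combine the two terms are assumptions for illustration; the exact reward follows the definition in Section 2.
+
+ ```python
+ # Hypothetical sketch of reward-based BoN re-ranking; `log_prob_response`
+ # is an assumed helper returning the LM log-probability of `response`
+ # given a (possibly ablated) list of context sentences and the query.
+ from typing import Callable, List, Set
+
+ def ablation_reward(cited: Set[int], context: List[str], query: str, response: str,
+                     log_prob_response: Callable[[List[str], str, str], float]) -> float:
+     full = log_prob_response(context, query, response)
+     # Prob-Drop (necessity): probability drop when the cited sentences are removed.
+     without_cited = [s for i, s in enumerate(context) if i not in cited]
+     prob_drop = full - log_prob_response(without_cited, query, response)
+     # Prob-Hold (sufficiency): probability retained when only the cited sentences are kept.
+     only_cited = [s for i, s in enumerate(context) if i in cited]
+     prob_hold = log_prob_response(only_cited, query, response) - full
+     # Combine both signals (a simple sum here, for illustration only).
+     return prob_drop + prob_hold
+
+ def rerank(candidates: List[Set[int]], context, query, response, log_prob_response):
+     # Best-of-N re-ranking: keep the candidate citation set with the highest reward.
+     return max(candidates, key=lambda c: ablation_reward(c, context, query, response,
+                                                          log_prob_response))
+ ```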
194
+
195
+ We also explored different token-length limits for citations, shown at the bottom of Table 3, as discussed in Section 2.3. By default, we exclude candidates citing more than 384 tokens, unless the citation contains only a single sentence. Lowering the cap to 256 tokens slightly hurts F1, while raising it to 512 tokens has negligible impact. Completely removing the length limit inflates citation length to 121.9 tokens and yields worse precision (79.3) but slightly improved recall (67.9). We also note that the 256-token limit still outperforms the LongCite-8B baseline (66.4 vs. 64.1) while having almost
196
+
197
+ Table 3. Ablation study on HotpotQA citation recall, precision, and F1 (R, P, F1) and citation length for BoN decoding methods.
198
+
199
+ <table><tr><td rowspan="2">Decoding Methods</td><td colspan="3">HotpotQA</td><td rowspan="2">Citation Length</td></tr><tr><td>R</td><td>P</td><td>F1</td></tr><tr><td>LongCite-8B (Our repro.)</td><td>60.8</td><td>77.9</td><td>64.1</td><td>83.5</td></tr><tr><td>+ BoN by LM log prob</td><td>62.7</td><td>75.5</td><td>63.4</td><td>74.6</td></tr><tr><td>+ BoN by max citation length</td><td>66.5</td><td>73.6</td><td>65.1</td><td>139.8</td></tr><tr><td>+ BoN by Prob-Drop</td><td>65.6</td><td>78.1</td><td>66.6</td><td>92.9</td></tr><tr><td>+ BoN by Prob-Hold</td><td>66.2</td><td>78.1</td><td>67.0</td><td>93.4</td></tr><tr><td>+ BoN by SelfCite</td><td>67.2</td><td>81.0</td><td>68.8</td><td>93.4</td></tr><tr><td>w/ lower length limit (256)</td><td>65.8</td><td>78.8</td><td>66.4</td><td>84.5</td></tr><tr><td>w/ higher length limit (512)</td><td>67.0</td><td>82.2</td><td>68.5</td><td>99.2</td></tr><tr><td>w/o length limit (∞)</td><td>67.9</td><td>79.3</td><td>68.1</td><td>121.9</td></tr></table>
200
+
201
+ Table 4. Ablation study on HotpotQA citation recall, precision, and F1 (R, P, F1) and citation length for fine-tuned models.
202
+
203
+ <table><tr><td rowspan="2">Fine-tuning Methods</td><td colspan="3">HotpotQA</td><td rowspan="2">Citation Length</td></tr><tr><td>R</td><td>P</td><td>F1</td></tr><tr><td>LongCite-8B (Our repro.)</td><td>60.8</td><td>77.9</td><td>64.1</td><td>83.5</td></tr><tr><td>+ SimPO</td><td>69.4</td><td>82.3</td><td>71.5</td><td>105.7</td></tr><tr><td>+ SimPO + BoN</td><td>72.0</td><td>82.7</td><td>72.9</td><td>126.9</td></tr><tr><td colspan="5">+ SimPO w/ or w/o length balancing</td></tr><tr><td>w/ length balancing</td><td>69.4</td><td>82.3</td><td>71.5</td><td>105.7</td></tr><tr><td>w/o length balancing</td><td>64.4</td><td>62.9</td><td>60.5</td><td>152.9</td></tr><tr><td colspan="5">+ SimPO w/ varying data sizes</td></tr><tr><td>1K examples</td><td>62.5</td><td>78.9</td><td>65.7</td><td>90.1</td></tr><tr><td>2K examples</td><td>69.4</td><td>82.3</td><td>71.5</td><td>105.7</td></tr><tr><td>4K examples</td><td>68.5</td><td>80.4</td><td>70.3</td><td>134.1</td></tr><tr><td>8K examples</td><td>64.6</td><td>79.5</td><td>65.9</td><td>158.1</td></tr><tr><td>+ SFT on BoN responses</td><td>68.8</td><td>77.3</td><td>68.4</td><td>98.7</td></tr><tr><td colspan="5">+ SimPO by denoising perturbed citations</td></tr><tr><td>On original responses</td><td>40.5</td><td>50.5</td><td>41.6</td><td>88.8</td></tr><tr><td>On BoN responses</td><td>42.6</td><td>50.7</td><td>42.3</td><td>79.7</td></tr></table>
204
+
205
+ equally long citations (84.5 vs. 83.5), showing that the improvement from SelfCite does not simply come from citing longer spans. Overall, the 384-token limit strikes a good balance between short citation length and strong performance.
206
+
207
+ # 4.2. Citation Length Balance
208
+
209
+ As noted in Section 3.2, BoN selects slightly longer citations, so a model trained directly on BoN-preferred data can adopt the shortcut of generating longer citations without improving their quality. To counter this, we apply length balancing, injecting random citations into examples where a length bias exists to equalize the number of cited sentences. Table 4 (w/ vs. w/o length balancing) highlights its critical role. Without length balancing, the model overextends citations (average length 152.9), leading to lower precision (62.9) and F1 (60.5). In contrast, enabling length balancing maintains high precision (82.3) and recall (69.4), achieving a better F1 of 71.5 while keeping citation length reasonable (105.7). These results confirm that length balancing prevents shortcut learning, ensuring the model truly learns to cite accurately.
210
+
211
+ # 4.3. Training Size of SimPO
212
+
213
+ A prior study (Zhou et al., 2023) found that 1K examples are sufficient to align user preferences effectively. Table 4 presents SimPO results with 1K to 8K examples. 1K examples already bring a moderate improvement, raising F1 from 64.1 to 65.7, with gains in both precision and recall. Using 2K examples further boosts F1 to 71.5, while 4K examples lead to saturated improvement. However, at 8K examples, performance declines and citation length rises to 158.1. We attribute this to SimPO's off-policy nature, especially because it lacks a reference model to constrain the output distribution to stay close to the collected data. As training steps grow, the model may drift from the collected data, potentially overfitting to the biases in the preference data. Thus, further fine-tuning may degrade citation quality. To address this, we show initial results from iterative SimPO in Section 4.6.
214
+
215
+ # 4.4. SimPO vs. SFT on Best-of-N responses
216
+
217
+ We also examine the effect of applying standard supervised fine-tuning (SFT) on the responses selected by best-of-N sampling, a simpler alternative to preference optimization. As shown in the "SFT on BoN responses" row of Table 4, SFT also improves the F1 score from 64.1 to 68.4, but it still falls behind the 71.5 achieved by SimPO. This result confirms that training the model via SimPO with preference data is necessary: it enables the model to distinguish between bad and good citations and thus improves citation quality.
218
+
219
+ # 4.5. Off-policy Denoising Perturbed Citations
220
+
221
+ We explored a purely off-policy alternative approach. Specifically, given a model-generated response, we randomly shift its citation spans to create perturbed variants. SimPO training pairs were then constructed by preferring the original citation over the perturbed one, encouraging the model to "de-noise" citations by restoring their original spans. However, as shown at the bottom of Table 4, this approach degrades performance when applied to both original and best-of-N responses. We attribute this to a mismatch between the training data and the model's natural error distribution: since random shifts do not reflect typical citation errors, they fail to provide useful guidance for improvement.
222
+
223
+ # 4.6. Iterative Preference Optimization
224
+
225
+ It has been observed that an on-policy alignment process can help avoid reward exploitation (Bai et al., 2022) and maintain consistency between the generated data and the model's evolving output distribution. We thus experiment with iteratively performing SimPO, similar in spirit to recent studies (Pang et al., 2024; Yasunaga et al., 2024).
226
+
227
+ ![](images/9a758dc22e5bcb731f2f024a20bdb19174bb582fcf6838766e90d93b1dbc5e3c.jpg)
228
+ Figure 2. Iteratively applying SimPO for three iterations.
229
+
230
+ ![](images/0dff4654e246b69a5529a8e46e597282d66bd20d8fee58f873462ef5858e7a9b.jpg)
231
+
232
+ Specifically, after fine-tuning with SimPO, we generate a new dataset via BoN, which is also 2K in size and does not overlap with previous iterations. We then continue training the model and repeat the process for three rounds. As shown in Figure 2, while the largest improvement occurs in the first round, gains continue over all three iterations, which further validates the reliability of our reward signal. Iterative SimPO is still not perfect, since it remains an off-policy method. Given that our reward can be computed cheaply, we believe that on-policy methods like PPO (Schulman et al., 2017) could further enhance performance. We leave the exploration of such approaches for future work.
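+
+ The loop below is a minimal sketch of this iterative procedure under stated assumptions: `sample_prompts`, `build_bon_preference_pairs`, and `train_simpo` are hypothetical callables standing in for prompt sampling, reward-based best-of-N pair construction, and one round of SimPO training, respectively; none of these names come from the released code.
+
+ ```python
+ # Hypothetical sketch of iterative SimPO; all three callables are assumptions.
+ def iterative_simpo(model, prompt_pool, sample_prompts, build_bon_preference_pairs,
+                     train_simpo, rounds=3, subset_size=2000):
+     used = set()
+     for _ in range(rounds):
+         # Draw a fresh, non-overlapping 2K subset of (document, question) pairs.
+         subset = sample_prompts(prompt_pool, subset_size, exclude=used)
+         used.update(subset)
+         # Generate best-of-N responses with the current model and form
+         # (chosen, rejected) pairs from the reward ranking.
+         pairs = build_bon_preference_pairs(model, subset)
+         # Continue SimPO training on the newly collected preference pairs.
+         model = train_simpo(model, pairs)
+     return model
+ ```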
233
+
234
+ # 4.7. Latency of Best-of-N
235
+
236
+ Table 5 reports the average per-example latency on LongBench-Cite. As expected, Best-of-N (BoN) introduces additional latency due to the need to generate and rerank multiple citation candidates. In our setup, we use $N = 10$ candidates, but the sampling time is not $10 \times$ longer than direct decoding. This is because we only re-sample short citation spans (typically 5-10 tokens), not the full responses, resulting in relatively lightweight sampling overhead.
237
+
238
+ However, the increased latency from BoN is not a major concern, because our SelfCite SimPO model achieves comparable performance to BoN in a single pass, with almost no additional latency. For scenarios requiring maximum efficiency, we recommend using the SimPO model directly.
239
+
240
+ # 4.8. Qualitative Study
241
+
242
+ Finally, we examine an example that requires citing multiple context sentences to support a complex response. As shown in Table 6, the response integrates information from sentences 302, 303, and 306. Direct sampling (2) omits sentence 302 while incorrectly including 305. In contrast, the best-of-N candidate (1) correctly includes 302 and excludes 305, achieving a slightly higher reward (0.578 vs. 0.547), demonstrating the effectiveness of our reward design. We also present candidates (3) and (4), which cite more irrelevant sentences and miss key citations, leading to even lower rewards. Additional qualitative examples are provided in Appendix H.
243
+
244
+ Table 5. Average latency per example on LongBench-Cite (8 × A100 GPUs, batch size 1, model parallel).
245
+
246
+ <table><tr><td>Method</td><td>Avg Latency (s)</td></tr><tr><td>LongCite-8B</td><td>24.3</td></tr><tr><td>SelfCite BoN Sampling</td><td>149.0</td></tr><tr><td>SelfCite BoN Reranking</td><td>34.0</td></tr><tr><td>SelfCite SimPO model</td><td>26.2</td></tr></table>
247
+
248
+ # 5. Related Work
249
+
250
+ Citations for Language Models. Recent work has explored various approaches to teaching language models to generate citations, including fine-tuning with direct human feedback or annotations (Nakano et al., 2021; Menick et al., 2022; Slobodkin et al., 2024), rewards from external NLI models (Huang et al., 2024a;b), and prompting-based methods (Gao et al., 2022; 2023b) to explicitly incorporate relevant retrieved documents. Given the high cost of human annotation, Zhang et al. (2024) introduced CoF ("Coarse to Fine"), an automated multi-stage pipeline that simulates human annotation. This approach leverages proprietary LLMs for query generation, chunk-level retrieval, and sentence-level citation extraction, achieving high citation quality through supervised fine-tuning. However, it depends on two proprietary APIs, GLM-4 as the LLM and Zhipu Embedding-v2 for retrieval<sup>2</sup>, with carefully designed prompting, effectively distilling the capabilities of these larger proprietary models into much smaller 8B/9B models. In contrast, our SelfCite aims to completely eliminate the reliance on citation annotations, whether from humans or proprietary APIs. Instead, our method enables a small 8B model to assess citation quality itself using a self-supervised reward signal from context ablation, effectively self-improving without external supervision. We additionally provide Table 9 in Appendix G to contrast the key differences between SelfCite and prior work.
251
+
252
+ Contributive Context Attribution. Besides being self-supervised, SelfCite also adopts the view that citations should reference the sources from the context that a model actually uses when generating a statement, known as contributive attribution (Worledge et al., 2023), rather than any sources that merely support the claim. Our reward signal naturally aligns with this attribution framework, as context ablation identifies the sources that cause the model to produce a statement. Existing contributive attribution methods for LLMs typically require extensive context ablations or other computationally expensive techniques, such as gradient-based analysis during inference (Cohen-Wang et al., 2024; Qi et al., 2024; Phukan et al., 2024). In contrast,
253
+
254
+ Table 6. An example of citation differences between the baseline and BoN. Related information is highlighted in the context/response.
255
+
256
+ <table><tr><td>Sent. ID</td><td colspan="3">Context Sentences (only showing a paragraph due to limited space)</td></tr><tr><td>302 (✓)</td><td colspan="3">In general, consumer advocates believe that any comprehensive federal privacy policy should complement, and not supplant, sector-specific privacy legislation or state-level legislation.</td></tr><tr><td>303 (✓)</td><td colspan="3">Finding a global consensus on how to balance open data flows and privacy protection may be key to maintaining trust in the digital environment and advancing international trade.</td></tr><tr><td>304 (X)</td><td colspan="3">One study found that over 120 countries have laws related to personal data protection.</td></tr><tr><td>305 (X)</td><td colspan="3">Divergent national privacy approaches raise the costs of doing business and make it harder for governments to collaborate and share data, whether for scientific research, defense, or law enforcement.</td></tr><tr><td>306 (✓)</td><td colspan="3">A system for global interoperability in a least trade-restrictive and nondiscriminatory way between different national systems could help minimize costs and allow entities in different jurisdictions with varying online privacy regimes to share data via cross-border data flows.</td></tr><tr><td>Query</td><td colspan="3">Please write a one-page summary of the above government report.</td></tr><tr><td>Response (only single statement due to space)</td><td colspan="3">[...] The report concludes by noting that finding a global consensus on how to balance open data flows and privacy protection may be key to maintaining trust in the digital environment and advancing international trade. The report suggests that Congress may consider comprehensive privacy legislation and examine the potential challenges and implications of building a system of interoperability between different national privacy regimes. [...]</td></tr><tr><td>BoN Candidates</td><td>Citation Strings (green: correct; red: wrong)</td><td>Missing Citations</td><td>SelfCite Reward</td></tr><tr><td>(1) Best candidate</td><td>[302-303] [306-306]</td><td>-</td><td>0.578</td></tr><tr><td>(2) Direct sampling</td><td>[303-303] [305-306]</td><td>(302)</td><td>0.547</td></tr><tr><td>(3) Other candidate</td><td>[303-304] [308-308] [310-311]</td><td>(302, 306)</td><td>0.461</td></tr><tr><td>(4) Other candidate</td><td>[303-303] [309-309] [311-311]</td><td>(302, 306)</td><td>0.375</td></tr></table>
257
+
258
+ SelfCite simply generates the citation tags and refines citation candidates via preference optimization with reward signals from context ablations, effectively teaching the model to perform contributive context attribution itself.
259
+
260
+ We also note that there is a distinction between corroborative citation—highlighting sources that support a claim, as used in benchmarks like LongBench-Cite—and contributive attribution, as emphasized in ContextCite. While SelfCite applies a contributive alignment method (via ablations) in the context of a corroborative evaluation framework, we find the two objectives to be at least partially aligned: citations that genuinely influence the generation are often also semantically supportive. Although this alignment is not guaranteed, our empirical results show that enforcing contributive attribution leads to clear improvements on corroborative benchmarks, suggesting that current corroborative methods (e.g., LongCite) still have significant headroom for improvement—even under a slightly mismatched objective.
261
+
262
+ Self-Supervised Alignment and Reward Modeling. Another relevant area is self- or weakly-supervised approaches for aligning LLMs without human supervision (Kim et al., 2023; Yuan et al., 2024), reducing the need for explicit human feedback (Ouyang et al., 2022) or for curating high-quality data for supervised fine-tuning (Zhou et al., 2023). SelfCite shares the same spirit by computing simple probability differences under context ablation as rewards, eliminating the need for an additional annotation process.
263
+
264
+ # 6. Conclusion and Limitations
265
+
266
+ We present SelfCite, a self-supervised framework for aligning large language models (LLMs) to generate more accurate and fine-grained citations. By leveraging LLMs' own output probabilities, SelfCite computes necessity and sufficiency rewards through context ablation, enabling preference optimization without relying on external annotations from human or proprietary APIs. Applying such rewards in best-of-N (BoN) sampling and SimPO fine-tuning can significantly improve the citation correctness on the LongBench-Cite benchmark, offering a promising self-improving direction towards verifiable and trustworthy LLMs.
267
+
268
+ SelfCite also has limitations: 1) While achieving strong results with SimPO, integrating other preference optimization or reinforcement learning (RL) algorithms, e.g., PPO (Schulman et al., 2017), remains underexplored. However, prior work (Mudgal et al., 2024) shows that BoN closely approximates the performance upper bound of RL, and we follow established practice (Gao et al., 2023a; Lightman et al., 2024) to validate our rewards mainly through BoN, further verifying them with SimPO fine-tuning. 2) SelfCite assumes access to model output probabilities, which may not be available for closed-source models. 3) While our framework improves the quality of citations already generated by LLMs, discovering unsupervised methods to kick-start LLMs' ability to generate structured citations from scratch remains an important direction for future research.
269
+
270
+ # Impact Statement
271
+
272
+ This paper introduces SelfCite, a self-supervised framework for improving citation accuracy in large language models (LLMs). Our method enhances the verifiability and trustworthiness of LLM-generated content by aligning citations with relevant supporting evidence in a scalable manner, without relying on costly human annotations. By improving citation quality, SelfCite contributes to the broader goal of reducing misinformation and hallucinations in AI-generated responses. Ensuring that LLMs provide accurate and properly attributed information is particularly crucial in high-stakes domains such as healthcare, law, and journalism, where incorrect or unverified information can have significant real-world consequences. Overall, SelfCite aligns with the broader ethical goal of making machine learning systems more transparent and accountable, reducing the risk of unchecked misinformation while maintaining the efficiency and scalability required for real-world applications.
273
+
274
+ # Acknowledgements
275
+
276
+ We thank Jiajie Zhang and Yushi Bai for their assistance in providing implementation details of LongCite. Special thanks to Pin-Lun (Byron) Hsu for his invaluable support and guidance with Liger-Kernel. We are also grateful to Tianyu Gao for his timely help in setting up the ALCE benchmark during the rebuttal period. We also appreciate Andrei Barbu, Linlu Qiu, Weijia Shi for their valuable discussions. Yung-Sung was sponsored by the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
277
+
278
+ # References
279
+
280
+ Anthropic. Anthropic: Introducing claude 3.5 sonnet, 2024. URL https://www.anthropic.com/news/claude-3-5-sonnet.
281
+ Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., Das-Sarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
282
+ Bai, Y., Lv, X., Zhang, J., Lyu, H., Tang, J., Huang, Z., Du, Z., Liu, X., Zeng, A., Hou, L., et al. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508, 2023.
285
+ Bai, Y., Lv, X., Zhang, J., He, Y., Qi, J., Hou, L., Tang, J., Dong, Y., and Li, J. Longalign: A recipe for long context alignment of large language models. arXiv preprint arXiv:2401.18058, 2024.
286
+ Bird, S. Nltk: the natural language toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pp. 69-72, 2006.
287
+ Chuang, Y.-S., Qiu, L., Hsieh, C.-Y., Krishna, R., Kim, Y., and Glass, J. Lookback lens: Detecting and mitigating contextual hallucinations in large language models using only attention maps. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 1419-1436, 2024.
288
+ Cohen-Wang, B., Shah, H., Georgiev, K., and Madry, A. Contextcite: Attributing model generation to context. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
289
+ Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
290
+ Gao, L., Dai, Z., Pasupat, P., Chen, A., Chaganty, A. T., Fan, Y., Zhao, V. Y., Lao, N., Lee, H., Juan, D.-C., et al. Rarr: Researching and revising what language models say, using language models. arXiv preprint arXiv:2210.08726, 2022.
291
+ Gao, L., Schulman, J., and Hilton, J. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835-10866. PMLR, 2023a.
292
+ Gao, T., Yen, H., Yu, J., and Chen, D. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 6465-6488, 2023b.
293
+ GLM, T., Zeng, A., Xu, B., Wang, B., Zhang, C., Yin, D., Rojas, D., Feng, G., Zhao, H., Lai, H., Yu, H., Wang, H., Sun, J., Zhang, J., Cheng, J., Gui, J., Tang, J., Zhang, J., Li, J., Zhao, L., Wu, L., Zhong, L., Liu, M., Huang, M., Zhang, P., Zheng, Q., Lu, R., Duan, S., Zhang, S., Cao, S., Yang, S., Tam, W. L., Zhao, W., Liu, X., Xia, X., Zhang, X., Gu, X., Lv, X., Liu, X., Yang, X., Song, X., Zhang, X., An, Y., Xu, Y., Niu, Y., Yang, Y., Li, Y., Bai, Y., Dong, Y., Qi, Z., Wang, Z., Yang, Z., Du, Z., Hou, Z., and Wang, Z. Chatglm: A family of large language models from glm-130b to glm-4 all tools, 2024.
294
+
295
+ He, W., Liu, K., Liu, J., Lyu, Y., Zhao, S., Xiao, X., Liu, Y., Wang, Y., Wu, H., She, Q., et al. Dureader: a chinese machine reading comprehension dataset from real-world applications. In Proceedings of the Workshop on Machine Reading for Question Answering, pp. 37-46, 2018.
296
+ Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygGQyrFvH.
297
+ Hsu, P.-L., Dai, Y., Kothapalli, V., Song, Q., Tang, S., Zhu, S., Shimizu, S., Sahni, S., Ning, H., and Chen, Y. Liger kernel: Efficient triton kernels for llm training. arXiv preprint arXiv:2410.10989, 2024. URL https://arxiv.org/abs/2410.10989.
298
+ Huang, C., Wu, Z., Hu, Y., and Wang, W. Training language models to generate text with citations via fine-grained rewards. arXiv preprint arXiv:2402.04315, 2024a.
299
+ Huang, L., Cao, S., Parulian, N., Ji, H., and Wang, L. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1419-1436, 2021.
300
+ Huang, L., Feng, X., Ma, W., Zhao, L., Fan, Y., Zhong, W., Xu, D., Yang, Q., Liu, H., and Qin, B. Advancing large language model attribution through self-improving. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 3822-3836, 2024b.
301
+ Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., and Fung, P. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38, 2023.
302
+ Kim, S., Bae, S., Shin, J., Kang, S., Kwak, D., Yoo, K., and Seo, M. Aligning large language models through synthetic feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 13677-13700, 2023.
303
+ Lei, T., Barzilay, R., and Jaakkola, T. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 107-117, 2016.
304
+ Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=v8L0pN6EOi.
305
+
306
+ Meng, Y., Xia, M., and Chen, D. SimPO: Simple preference optimization with a reference-free reward. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=3Tzcot1LKb.
307
+ Menick, J., Trebacz, M., Mikulik, V., Aslanides, J., Song, F., Chadwick, M., Glaese, M., Young, S., Campbell-Gillingham, L., Irving, G., et al. Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147, 2022.
308
+ Mistral. Mistral large, 2024. URL https://mistral.ai/news/mistral-large/.
309
+ Mudgal, S., Lee, J., Ganapathy, H., Li, Y., Wang, T., Huang, Y., Chen, Z., Cheng, H.-T., Collins, M., Strohman, T., et al. Controlled decoding from language models. In International Conference on Machine Learning, pp. 36486-36503. PMLR, 2024.
310
+ Nakano, R., Hilton, J., Balaji, S., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
311
+ OpenAI. Introducing chatgpt, November 2022. URL https://openai.com/blog/chatgpt.
312
+ OpenAI. Gpt-4 technical report, 2023. URL https://cdn.openai.com/papers/gpt-4.pdf.
313
+ Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
314
+ Pang, R. Y., Yuan, W., Cho, K., He, H., Sukhbaatar, S., and Weston, J. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024.
315
+ Phukan, A., Somasundaram, S., Saxena, A., Goswami, K., and Srinivasan, B. V. Peering into the mind of language models: An approach for attribution in contextual question answering. arXiv preprint arXiv:2405.17980, 2024.
316
+ Qi, J., Sarti, G., Fernández, R., and Bisazza, A. Model internals-based answer attribution for trustworthy retrieval-augmented generation. arXiv preprint arXiv:2406.13663, 2024.
317
+ Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.
318
+
319
+ Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
320
+ Shi, W., Han, X., Lewis, M., Tsvetkov, Y., Zettlemoyer, L., and Yih, W.-t. Trusting your evidence: Hallucinate less with context-aware decoding. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pp. 783-791, 2024.
321
+ Slobodkin, A., Hirsch, E., Cattan, A., Schuster, T., and Dagan, I. Attribute first, then generate: Locally-attributable grounded text generation. arXiv preprint arXiv:2403.17104, 2024.
322
+ Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. M. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
323
+ Worledge, T., Shen, J. H., Meister, N., Winston, C., and Guestrin, C. Unifying corroborative and contributive attributions in large language models. arXiv preprint arXiv:2311.12233, 2023.
324
+ Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W. W., Salakhutdinov, R., and Manning, C. D. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018.
325
+ Yasunaga, M., Shamis, L., Zhou, C., Cohen, A., Weston, J., Zettlemoyer, L., and Ghazvininejad, M. Alma: Alignment with minimal annotation. arXiv preprint arXiv:2412.04305, 2024.
326
+ Yuan, W., Pang, R. Y., Cho, K., Li, X., Sukhbaatar, S., Xu, J., and Weston, J. E. Self-rewarding language models. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=ONphYCmgua.
327
+ Zhang, J., Bai, Y., Lv, X., Gu, W., Liu, D., Zou, M., Cao, S., Hou, L., Dong, Y., Feng, L., et al. Longcite: Enabling llms to generate fine-grained citations in long-context qa. arXiv preprint arXiv:2409.02897, 2024.
328
+ Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L., Zhang, S., Ghosh, G., Lewis, M., Zettlemoyer, L., and Levy, O. LIMA: Less is more for alignment. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 55006-55021, 2023.
331
+
332
+ # A. Implementation Details
333
+
334
+ For SimPO fine-tuning, we randomly sample 2K document-question pairs from the LongCite-45k data, generate best-of-N responses with our Algorithm 1 to obtain the preference data, and train for one epoch. We sample another 100 examples as a development set to pick the best learning rate from $\{1\mathrm{e}{-7}, 3\mathrm{e}{-7}, 5\mathrm{e}{-7}, 7\mathrm{e}{-7}\}$. We keep the other hyperparameters the same as in the original SimPO (Meng et al., 2024). We follow the same prompt format used in Zhang et al. $(2024)^{3}$ to keep the comparison fair. For the iterative SimPO experiment, in each iteration we sample a new, non-overlapping subset of 2K examples to ensure no data repetition across iterations. For self-supervised SFT, we generate 11K citation examples in an unsupervised manner from ContextCite outputs, as described in Appendix B, and train with a larger learning rate of 7e-6.
335
+
336
+ We use the SimPO source code built on Huggingface Transformers (Wolf et al., 2020) for the fine-tuning experiments, as well as Liger-Kernel (Hsu et al., 2024) to enable memory-efficient training on the long-context examples in LongCite-45K without tensor parallelization. We run all fine-tuning experiments on $8 \times \mathrm{A}100$ GPUs with 80 GB of memory on a single node. The batch size is set to 1 per GPU due to the long-context examples. We set the maximum context length to 25600 to prevent OOM. For data examples longer than 25600 tokens, we perform truncation, starting with the sentences farthest from the sentences cited by the ground-truth annotation, so as to minimize the impact of truncation.
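+
+ The snippet below is a minimal sketch of this truncation rule under our reading of it; the function name and the per-sentence token counts are illustrative, not the actual training code.
+
+ ```python
+ # Hypothetical sketch: drop sentences farthest from the cited ones until the
+ # example fits in the maximum context length, so cited evidence is preserved.
+ from typing import List, Set
+
+ def truncate_far_from_citations(sentences: List[str], cited: Set[int],
+                                 sentence_tokens: List[int],
+                                 max_tokens: int = 25600) -> List[str]:
+     def distance(i: int) -> int:
+         # Distance (in sentence positions) to the nearest cited sentence.
+         return min(abs(i - j) for j in cited) if cited else 0
+
+     keep = set(range(len(sentences)))
+     total = sum(sentence_tokens)
+     for i in sorted(keep, key=distance, reverse=True):  # farthest sentences first
+         if total <= max_tokens:
+             break
+         keep.discard(i)
+         total -= sentence_tokens[i]
+     return [sentences[i] for i in sorted(keep)]
+ ```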
337
+
338
+ When evaluating the citation length, as well as calculating the token length limit of 384 for excluding long BoN candidates, we follow Zhang et al. (2024) to use GLM4-9B's tokenizer to count tokens.
339
+
340
+ In the ablation study of off-policy denoising in Section 4.5, the citation examples for denoising are collected by randomly shifting existing citation spans by 3-10 positions in sentence indices.
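+
+ A minimal sketch of this perturbation is shown below; the span representation and the clamping behavior are assumptions made for illustration.
+
+ ```python
+ # Hypothetical sketch: shift each cited span [start, end] by a random offset of
+ # 3-10 sentence positions (random sign), clamped to the valid index range.
+ import random
+ from typing import List, Tuple
+
+ def perturb_citations(spans: List[Tuple[int, int]], num_sentences: int) -> List[Tuple[int, int]]:
+     perturbed = []
+     for start, end in spans:
+         offset = random.randint(3, 10) * random.choice([-1, 1])
+         new_start = min(max(start + offset, 0), num_sentences - 1)
+         new_end = min(max(end + offset, 0), num_sentences - 1)
+         perturbed.append((new_start, new_end))
+     return perturbed
+ ```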
341
+
342
+ # B. Obtaining Citations from ContextCite
343
+
344
+ In this section, we first describe how the ContextCite method (Cohen-Wang et al., 2024) estimates continuous attribution scores for each sentence in the context. We then explain a simple heuristic for extracting citations (i.e., selecting a subset of context sources) from these scores.
345
+
346
+ # B.1. ContextCite
347
+
348
+ Given a language model $p_{\mathrm{LM}}$ , a context $C$ , a query $Q$ and a generated response $R$ , ContextCite aims to quantify how each source in the context $C = \{c_1, c_2, \ldots, c_{|C|}\}$ contributes to the generated response $R$ (in our case, the sources are sentences). To do so, ContextCite performs several random context ablations. We begin by introducing some notation to describe these ablations. Let $v \in \{0, 1\}^{|C|}$ be an ablation vector whose $i$ -th entry toggles whether source $c_i$ is included ( $v_i = 1$ ) or excluded ( $v_i = 0$ ). We write ABLATE( $C, v$ ) to denote a modified version of the original context $C$ in which sources for which $v_i = 0$ are omitted. ContextCite seeks to understand how the probability of generating the original generated response,
349
+
350
+ $$
351
+ f(v) := p_{\mathrm{LM}}(R \mid \mathrm{ABLATE}(C, v), Q),
352
+ $$
353
+
354
+ changes as a function of the ablation vector $v$ .
355
+
356
+ Attribution via Surrogate Modeling. Directly measuring $f(v)$ for all $2^{|C|}$ ablation vectors is infeasible for large $|C|$ . Hence, ContextCite seeks to identify a surrogate model $\hat{f}(v)$ that is easy to understand and approximates $f(v)$ well. To simplify this surrogate modeling task, ContextCite applies a logit transform to $f$ , which maps values in $(0,1)$ to $(-\infty, \infty)$ :
357
+
358
+ $$
359
+ g(v) := \sigma^{-1}(f(v)) = \log\left(\frac{f(v)}{1 - f(v)}\right).
360
+ $$
361
+
362
+ ContextCite then approximates $g(v)$ using a sparse linear function,
363
+
364
+ $$
365
+ \hat{g}(v) = \hat{w}^{\top} v + \hat{b}.
366
+ $$
367
+
368
+ Notice that the resulting weights $\hat{w} \in \mathbb{R}^{|C|}$ encode the importance of each source $c_i$ to the probability of generating the original response; they can be interpreted directly as attribution scores (higher scores suggest greater importance).
369
+
370
+ Finding a Surrogate Model via LASSO. To learn the parameters $\hat{w}$ and $\hat{b}$ of the surrogate model, ContextCite randomly samples a small number of ablation vectors and measures the corresponding probabilities of generating the original response. It then uses this "training dataset" to fit a sparse linear model with LASSO. Concretely, it learns a surrogate model with the following three steps:
371
+
372
+ 1. Sample $n$ ablation vectors $\{v_i\}_{i=1}^n$ uniformly at random from $\{0,1\}^{|C|}$ .
373
+ 2. For each sample $v_{i}$ , compute $g(v_{i}) = \sigma^{-1}(f(v_{i}))$ by running the LM with only the sources specified by $v_{i}$ and measuring the (sigmoid) probability of $R$ .
374
+ 3. Solve a Lasso regression problem to find $\hat{w}$ and $\hat{b}$ :
375
+
376
+ $$
377
+ \hat{w}, \hat{b} = \arg\min_{w, b} \frac{1}{n} \sum_{i=1}^{n} \left(g(v_i) - w^{\top} v_i - b\right)^2 + \lambda \|w\|_1,
378
+ $$
379
+
380
+ where $\lambda$ controls sparsity (larger $\lambda$ drives more coefficients to zero).
381
+
382
+ In Cohen-Wang et al. (2024), typical choices of $n$ range from 32 to 256, balancing cost (requires $n$ LM forward passes) and accuracy. If there are multiple statements $\{r_1, r_2, \dots, r_{|R|}\}$ in $R$ , the same method can also be applied by focusing only on a subset of tokens in $R$ .
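+
+ The following is a minimal sketch of this fitting procedure, assuming a helper `response_logprob(ablated_context, query, response)` that returns $\log f(v)$; scikit-learn's Lasso plays the role of the sparse linear surrogate (its objective uses a $1/(2n)$ scaling of the squared error, which differs from the formula above only by a constant rescaling of $\lambda$).
+
+ ```python
+ # Hypothetical sketch of the ContextCite surrogate fit; `response_logprob` is assumed.
+ import numpy as np
+ from sklearn.linear_model import Lasso
+
+ def contextcite_scores(context, query, response, response_logprob, n=64, lam=0.01, seed=0):
+     rng = np.random.default_rng(seed)
+     V = rng.integers(0, 2, size=(n, len(context)))  # random ablation vectors v
+     g = np.empty(n)
+     for i, v in enumerate(V):
+         ablated = [s for s, keep in zip(context, v) if keep]
+         p = np.exp(response_logprob(ablated, query, response))  # f(v)
+         p = min(max(p, 1e-12), 1 - 1e-12)                       # numerical clamp
+         g[i] = np.log(p / (1.0 - p))                            # logit transform g(v)
+     surrogate = Lasso(alpha=lam).fit(V, g)
+     return surrogate.coef_                                      # attribution scores w_hat
+ ```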
383
+
384
+ # B.2. Heuristic Citation Extraction
385
+
386
+ In our setting, we would like a discrete list of cited sentences for each generated statement, rather than a score for every sentence. We will now describe how to convert the attribution scores $\hat{w}$ into a discrete subset $C' \subseteq C$ of citations. Let $t$ be a threshold, $p$ be a cumulative probability mass cutoff, and $k$ be a maximum citation limit.
387
+
388
+ Thresholding and Merging.
389
+
390
+ 1. Filtering: Include only those sources $c_{i}$ whose attribution score $\hat{w}_{i} \geq t$ .
391
+ 2. Merging Adjacent Sources: If multiple consecutive sources in the original text each exceed $t$ , merge them into a single "span" $S_{j}$ . We assign this merged span the maximum score among its constituents:
392
+
393
+ $$
394
+ \hat{w}(S_j) = \max_{c_i \in S_j} \hat{w}_i.
395
+ $$
396
+
397
+ Here, adjacency is defined by the original ordering in $C$ . For instance, if $c_{2}$ and $c_{3}$ both pass the threshold and appear consecutively, we merge them into a single span $S_{j}$ .
398
+
399
+ Softmax Normalization. Let $\{S_j\}$ be the set of spans (or single sources) that survived the threshold. We normalize their scores into a probability distribution:
400
+
401
+ $$
402
+ \hat{w}'(S_j) = \frac{\exp\big(\hat{w}(S_j)\big)}{\sum_{i} \exp\big(\hat{w}(S_i)\big)},
403
+ $$
404
+
405
+ so that $\sum_{j}\hat{w}'(S_j) = 1$.
406
+
407
+ Top- $p$ Selection. To avoid including too many low-value sources, we adopt a greedy approach:
408
+
409
+ $$
410
+ \text{Add spans in order of descending } \hat{w}'(S_j), \text{ stopping once } \sum_{S_j \in C'} \hat{w}'(S_j) \geq p.
411
+ $$
412
+
413
+ Top- $k$ Filtering. Finally, if $|C'| > k$ , we take only the $k$ highest-scoring spans.
414
+
415
+ We set $t = 1.5$, $p = 0.7$, $k = 4$ in the experiment. When generating supervised fine-tuning (SFT) data, we discard any example for which more than $30\%$ of its statements have no citations that survive the threshold $t$. This ensures the dataset emphasizes cases where the LM's response can be tied to explicit context sources. We take the LongCite-45K document and question pairs, generate the responses with Llama-3.1-8B-Instruct itself, obtain citations with ContextCite (256 calls), and transform them into the statement/citation format of LongCite-45K. Finally, we collect $\sim 11\mathrm{K}$ examples used for SFT.
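+
+ A minimal sketch of this thresholding, merging, top-$p$, and top-$k$ heuristic is given below, using the values above; the function name and span representation are illustrative rather than the exact implementation.
+
+ ```python
+ # Hypothetical sketch of the citation-extraction heuristic described in B.2.
+ import numpy as np
+
+ def extract_citations(scores, t=1.5, p=0.7, k=4):
+     # 1. Filtering: keep sources whose attribution score is at least t.
+     kept = [i for i, w in enumerate(scores) if w >= t]
+     # 2. Merge adjacent surviving sources into spans, scored by their maximum.
+     spans = []
+     for i in kept:
+         if spans and i == spans[-1][1] + 1:
+             start, _, prev_w = spans[-1]
+             spans[-1] = (start, i, max(prev_w, scores[i]))
+         else:
+             spans.append((i, i, scores[i]))
+     if not spans:
+         return []
+     # 3. Softmax-normalize the surviving span scores.
+     w = np.array([s[2] for s in spans])
+     probs = np.exp(w) / np.exp(w).sum()
+     # 4. Top-p: add spans by descending probability until the mass reaches p.
+     selected, mass = [], 0.0
+     for j in np.argsort(-probs):
+         selected.append(spans[j])
+         mass += probs[j]
+         if mass >= p:
+             break
+     # 5. Top-k: keep at most the k highest-scoring spans, returned in index order.
+     selected = sorted(selected, key=lambda s: s[2], reverse=True)[:k]
+     return [(start, end) for start, end, _ in sorted(selected)]
+ ```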
416
+
417
+ Table 7. Citation recall (R), citation precision (P), citation F1 (F1), and citation length evaluated on LongBench-Cite benchmark. The best results are bolded. $\dagger$ indicates the results taken from Zhang et al. (2024).
418
+
419
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Longbench-Chat</td><td colspan="3">MultifieldQA</td><td colspan="3">HotpotQA</td><td colspan="3">Dureader</td><td colspan="3">GovReport</td><td rowspan="2">Avg. F1</td><td rowspan="2">Citation Length</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td></tr><tr><td colspan="18">Proprietary models</td></tr><tr><td>GPT-4o†</td><td>46.7</td><td>53.5</td><td>46.7</td><td>79.0</td><td>87.9</td><td>80.6</td><td>55.7</td><td>62.3</td><td>53.4</td><td>65.6</td><td>74.2</td><td>67.4</td><td>73.4</td><td>90.4</td><td>79.8</td><td>65.6</td><td>220</td></tr><tr><td>Claude-3-sonnet†</td><td>52.0</td><td>67.8</td><td>55.1</td><td>64.7</td><td>85.8</td><td>71.3</td><td>46.4</td><td>65.8</td><td>49.9</td><td>67.7</td><td>89.2</td><td>75.5</td><td>77.4</td><td>93.9</td><td>84.1</td><td>67.2</td><td>132</td></tr><tr><td>GLM-4†</td><td>47.6</td><td>53.9</td><td>47.1</td><td>72.3</td><td>80.1</td><td>73.6</td><td>47.0</td><td>50.1</td><td>44.4</td><td>73.4</td><td>82.3</td><td>75.0</td><td>82.8</td><td>93.4</td><td>87.1</td><td>65.4</td><td>169</td></tr><tr><td colspan="18">Ours: SelfCite</td></tr><tr><td>LongCite-8B (Our repro.)</td><td>67.0</td><td>78.1</td><td>66.6</td><td>74.8</td><td>90.7</td><td>79.9</td><td>60.8</td><td>77.9</td><td>64.1</td><td>67.1</td><td>87.2</td><td>73.7</td><td>81.6</td><td>89.3</td><td>84.5</td><td>73.8</td><td>83.5</td></tr><tr><td>+ BoN</td><td>68.4</td><td>81.3</td><td>71.2</td><td>76.1</td><td>92.8</td><td>81.2</td><td>67.2</td><td>81.0</td><td>68.8</td><td>70.6</td><td>90.9</td><td>76.9</td><td>87.6</td><td>92.4</td><td>89.3</td><td>77.5</td><td>93.4</td></tr><tr><td>+ SimPO</td><td>68.1</td><td>79.5</td><td>69.1</td><td>75.5</td><td>92.6</td><td>81.0</td><td>69.4</td><td>82.3</td><td>71.5</td><td>72.7</td><td>91.6</td><td>78.9</td><td>86.4</td><td>92.9</td><td>89.1</td><td>77.9</td><td>105.7</td></tr><tr><td>+ SimPO then BoN</td><td>73.3</td><td>79.4</td><td>72.8</td><td>76.7</td><td>93.2</td><td>82.2</td><td>69.4</td><td>83.0</td><td>71.1</td><td>74.2</td><td>92.2</td><td>80.3</td><td>86.7</td><td>92.7</td><td>89.2</td><td>79.1</td><td>94.7</td></tr><tr><td colspan="18">Topline</td></tr><tr><td>Claude Citations</td><td>61.2</td><td>81.7</td><td>67.8</td><td>76.8</td><td>98.4</td><td>84.9</td><td>61.9</td><td>94.1</td><td>72.9</td><td>88.5</td><td>99.7</td><td>93.2</td><td>79.4</td><td>99.2</td><td>87.7</td><td>81.3</td><td>88.8</td></tr></table>
420
+
421
+ # C. Length Balancing
422
+
423
+ To prevent the model from simply generating longer citations rather than focusing on citation correctness, we apply a length balancing procedure to align the total citation length between our two training responses: a chosen prediction and a rejected prediction. First, we find the citation string (e.g., [435-437]) enclosed in <cite>...</cite> tags for each statement. We then measure each string's total citation "coverage", i.e., the total number of cited sentences in these intervals.
424
+
425
+ If a rejected prediction has lower total coverage than the corresponding chosen prediction, we insert additional citations at nearby sentence indices to match the chosen coverage. Conversely, if the rejected coverage is larger, we randomly remove some of its intervals. We ensure that newly inserted citations do not overlap existing intervals and keep them within a small window of 5-10 sentences from the original citations to maintain realism. In the end, the rejected and chosen predictions have matched coverage. This approach discourages the model from trivially learning to cite more sentences, instead prompting it to learn where and how to cite evidence more accurately. Our ablation in Section 4.2 shows that this length balancing technique significantly improves final citation quality.
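+
+ The following is a minimal sketch of this coverage-matching step under simplifying assumptions (single-sentence insertions, intervals as index pairs); the actual implementation handles interval bookkeeping in more detail.
+
+ ```python
+ # Hypothetical sketch: pad or trim the rejected prediction's citations so its
+ # coverage (number of cited sentences) matches the chosen prediction's.
+ import random
+ from typing import List, Tuple
+
+ def coverage(spans: List[Tuple[int, int]]) -> int:
+     return sum(end - start + 1 for start, end in spans)
+
+ def balance_lengths(chosen, rejected, num_sentences, window=(5, 10), max_tries=1000):
+     rejected = list(rejected)
+     target = coverage(chosen)
+     cited = {i for s, e in rejected for i in range(s, e + 1)}
+     tries = 0
+     # Too short: insert single-sentence citations near existing ones, avoiding overlaps.
+     while coverage(rejected) < target and rejected and tries < max_tries:
+         tries += 1
+         anchor, _ = random.choice(rejected)
+         offset = random.randint(*window) * random.choice([-1, 1])
+         idx = min(max(anchor + offset, 0), num_sentences - 1)
+         if idx not in cited:
+             rejected.append((idx, idx))
+             cited.add(idx)
+     # Too long: randomly drop whole intervals.
+     while coverage(rejected) > target and len(rejected) > 1:
+         rejected.remove(random.choice(rejected))
+     return rejected
+ ```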
426
+
427
+ # D. Comparison with Claude Citations API
428
+
429
+ On January 23rd, 2025, Anthropic announced an API specialized for providing citations along with responses: *Claude Citations*<sup>6</sup>. We also evaluate this API on the LongBench-Cite benchmark. Since the implementation details and resource requirements (e.g., training data) of Claude Citations are not publicly available, and it relies on a significantly larger and more powerful LLM, Claude-3.5-Sonnet, which potentially has over 100 billion parameters, we consider it a topline for the benchmark rather than a baseline.
430
+
431
+ When evaluating it on Chinese examples from LongBench-Cite, we found that the API does not split Chinese text properly. As a result, it cites large passages when processing Chinese examples, leading to an average citation length of approximately 800 tokens per citation.
432
+
433
+ To address this issue, we pre-segment the text ourselves using exactly the same method as our approach, following LongCite (Zhang et al., 2024), which uses NLTK and Chinese punctuation for segmentation. We then run the Claude Citations API, as it supports both non-segmented and pre-segmented document inputs. The evaluation was conducted using the latest version, claude-3-5-sonnet-20241022.
434
+
435
+ As shown in Table 7, Claude Citations achieves an overall F1 score of 81.3, higher than all other models we tested. However, its performance is not consistent across all datasets. For example, it is worse than SelfCite on LongBench-Chat and GovReport. The main improvement of Claude comes from the DuReader dataset, while the results on the other datasets are comparable to those of SelfCite. Given that SelfCite uses a much smaller 8B model than Claude-3.5-Sonnet, its results are very impressive, demonstrating its potential to serve as a strong alternative to proprietary solutions.
436
+
437
+ # E. Baseline: SimPO with NLI Rewards
438
+
439
+ To provide a stronger fine-tuned baseline, we implement a SimPO variant that adopts NLI-based citation rewards, following the design proposed by Huang et al. (2024a). For fair comparison, we keep our full SelfCite SimPO training pipeline—initializing from LongCite-8B and training on the LongCite-45k dataset—and modify only the reward function as a controlled experiment. This NLI-based reward combines two components:
440
+
441
+ - Citation Recall Reward: This measures whether the full set of cited sentences entails the model-generated statement. It is equivalent to the Citation Recall Reward proposed by Huang et al. (2024a).
442
+ - Citation Precision Reward: This estimates whether each cited sentence is necessary by ablating one sentence at a time and testing whether the remaining span still entails the statement. If entailment fails after removing a sentence, it indicates that the sentence contributes uniquely to the justification. To reduce latency, we ablate all sentences when the citation contains 5 or fewer; otherwise, we randomly sample 5 for ablation. With N ablations, each ablation contributes a reward of $\frac{1}{N}$, so all ablations sum to 1.0. It resembles the Citation Precision Reward proposed by Huang et al. (2024a).
443
+
444
+ We make both rewards positive and capped at 1.0, effectively constructing preference pairs for SimPO. We do not consider the Correctness Recall Reward from Huang et al. (2024a), because the LongCite-45k training set does not contain ground-truth answers. All entailment scores are computed using the public NLI model google/t5_xxl_true_nli_mixture<sup>7</sup>.
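+
+ A minimal sketch of how these two rewards can be computed is shown below; `entails(premise, hypothesis)` is an assumed boolean wrapper around the NLI model, not part of the released pipeline.
+
+ ```python
+ # Hypothetical sketch of the NLI-based recall and precision rewards.
+ import random
+ from typing import Callable, List, Tuple
+
+ def nli_citation_rewards(statement: str, cited: List[str],
+                          entails: Callable[[str, str], bool],
+                          max_ablations: int = 5) -> Tuple[float, float]:
+     # Citation Recall Reward: do all cited sentences together entail the statement?
+     recall_reward = 1.0 if cited and entails(" ".join(cited), statement) else 0.0
+     # Citation Precision Reward: ablate one cited sentence at a time; if entailment
+     # breaks without it, that sentence contributes uniquely to the justification.
+     indices = list(range(len(cited)))
+     if len(indices) > max_ablations:
+         indices = random.sample(indices, max_ablations)
+     precision_reward = 0.0
+     for i in indices:
+         rest = " ".join(s for j, s in enumerate(cited) if j != i)
+         if not entails(rest, statement):
+             precision_reward += 1.0 / len(indices)
+     return min(recall_reward, 1.0), min(precision_reward, 1.0)
+ ```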
445
+
446
+ # F. Zero-shot Evaluation on Chunk-level Citation Benchmark ALCE
447
+
448
+ We additionally include a zero-shot evaluation on the chunk-level citation benchmark ALCE (Gao et al., 2023b) and report the results in Table 8. We find that our baseline model, LongCite-8B, although evaluated in a zero-shot setting (it is trained on sentence-level citations but tested on chunk-level citations), already outperforms the prompting-based approach from Gao et al. (2023b) by a substantial margin in both citation recall and precision. Incorporating NLI-based rewards from Huang et al. (2024a) into our SimPO training yields further improvements. Most notably, our method, SimPO with SelfCite rewards, achieves the best performance among models trained on the same LongCite-45k dataset.
449
+
450
+ The last row of the table presents the best result reported by Huang et al. (2024a), who fine-tuned their model using supervised data. However, this setting is not directly comparable to ours for several reasons:
451
+
452
+ 1. They optimize directly for the ALCE evaluation metric by using the same NLI evaluator model (google/t5_xxl_true_nli_mixture) to provide both training rewards and evaluation scores.
453
+ 2. Their model is trained on the in-distribution QA training sets in ALCE, with exactly the same chunk-level format as the benchmark. In contrast, our SelfCite model is trained on out-of-distribution sentence-level citations from LongCite-45k.
454
+ 3. Their method involves distillation from ChatGPT in the first stage, whereas ours does not rely on external supervision.
455
+
456
+ Despite this domain and format mismatch, SelfCite demonstrates strong generalization and consistently outperforms both LongCite-8B and the NLI-based SimPO baseline. This highlights the robustness and effectiveness of our approach even in cross-domain, cross-format transfer settings.
457
+
458
+ # G. Comparison with Prior Studies
459
+
460
+ We further provide a comparison in Table 9 to contrast the key differences between SelfCite and prior studies on producing citations from LLMs. Among all methods, SelfCite is the only approach that supports sentence-level citation generation in a single pass, leverages preference optimization, and scales to 128K-token contexts, all without requiring additional supervision. In contrast, prior work such as ALCE (Gao et al., 2023b) and Huang et al. (2024a) uses chunk-level citations for shorter contexts ($\leq$ 8K) and requires prompt-based or supervised NLI signals. ContextCite (Cohen-Wang et al., 2024), while sentence-level, relies on a computationally expensive process (at least 32 inference calls) for random context ablation and trains a linear model to estimate importance scores. This comparison underscores the practical advantages and technical contributions of SelfCite.
461
+
462
+ Table 8. Evaluation on the chunk-level citation benchmark ALCE (Gao et al., 2023b). Our model (SimPO w/ SelfCite) is trained on sentence-level, out-of-distribution LongCite-45k data but still generalizes well to the chunk-level ALCE benchmark.
463
+
464
+ <table><tr><td rowspan="2">Model</td><td colspan="3">ASQA</td><td colspan="3">ELI5</td></tr><tr><td>EM Rec.</td><td>Cite Rec.</td><td>Cite Prec.</td><td>Correct</td><td>Cite Rec.</td><td>Cite Prec.</td></tr><tr><td colspan="7">Gao et al. (2023b) (Prompting)</td></tr><tr><td>Llama-2-13B-chat</td><td>34.66</td><td>37.48</td><td>39.62</td><td>12.77</td><td>17.13</td><td>17.05</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>42.68</td><td>50.64</td><td>53.08</td><td>13.63</td><td>34.66</td><td>32.08</td></tr><tr><td colspan="7">Finetuned on LongCite-45k (Out-of-Distribution)</td></tr><tr><td>LongCite-8B</td><td>42.11</td><td>62.27</td><td>57.00</td><td>15.37</td><td>30.54</td><td>29.15</td></tr><tr><td>+ SimPO w/ NLI Rewards</td><td>41.20</td><td>65.65</td><td>60.20</td><td>15.30</td><td>33.06</td><td>31.05</td></tr><tr><td>+ SimPO w/ SelfCite</td><td>42.57</td><td>71.68</td><td>62.05</td><td>15.17</td><td>37.09</td><td>35.62</td></tr><tr><td colspan="7">Finetuned on ALCE train set (In-Distribution Supervision)</td></tr><tr><td>Huang et al. (2024a)</td><td>40.05</td><td>77.83</td><td>76.33</td><td>11.54</td><td>60.86</td><td>60.23</td></tr></table>
465
+
466
+ Table 9. Key differences among prior methods on producing citations from LLMs. CC stands for ContextCite.
467
+
468
+ <table><tr><td>Method</td><td>Sentence-level citations?</td><td>One pass generation?</td><td>Preference optimization?</td><td>Handle 128K long-context?</td><td>External supervision?</td></tr><tr><td>ALCE (Gao et al., 2023b)</td><td>X (chunk-level)</td><td>✓</td><td>X (prompting)</td><td>X (8K)</td><td>2-shot prompting</td></tr><tr><td>Huang et al. (2024a)</td><td>X (chunk-level)</td><td>✓</td><td>✓</td><td>X (8K)</td><td>NLI + ground truth</td></tr><tr><td>CC (Cohen-Wang et al., 2024)</td><td>✓</td><td>X (at least 32 calls)</td><td>X (not generative)</td><td>✓</td><td>N/A</td></tr><tr><td>LongCite (Zhang et al., 2024)</td><td>✓</td><td>✓</td><td>X (SFT only)</td><td>✓</td><td>SFT data</td></tr><tr><td>SelfCite (Ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>N/A</td></tr></table>
469
+
470
+ # H. More Qualitative Examples
471
+
472
+ We show more qualitative examples in Tables 10, 11, and 12, covering cases where SelfCite is better as well as cases where the LongCite-8B direct sampling baseline is better. In Table 10, SelfCite BoN avoids the irrelevant sentences cited by the baseline (42, 47-50), while further including a correct citation (23) that the baseline misses. In Table 11, both SelfCite BoN and the baseline cite too many irrelevant sentences (391-393), but SelfCite BoN's citation is slightly better. In Table 12, SelfCite BoN wrongly includes 30 and misses 70, whereas the baseline is slightly better and only wrongly includes 71.
473
+
474
+ Table 10. An example of citation differences between the baseline and BoN. Related information is highlighted in the context/response.
475
+
476
+ <table><tr><td>Sent. ID</td><td>Context Sentences (only showing cited sentences due to space)</td></tr><tr><td>23 (√)</td><td>We explored using a simple and cost-effective procedure to instruction finetune our continually pretrained long models without any human-annotated data.</td></tr><tr><td>42 (X)</td><td>Collecting human demonstration and preference labels for LLM alignment is a cumbersome and expensive process (Ouyang et al., 2022; Touvron et al., 2023).</td></tr><tr><td>45 (√)</td><td>In this work, we found that a simple and cheap approach which leverages a pre-built large and diverse short-prompt dataset works surprisingly well on long-context benchmarks.</td></tr><tr><td>46 (√)</td><td>Specifically, we take the RLHF dataset used in LLAMA 2 CHAT and augment it with synthetic self-instruct (Wang et al., 2022) long data generated by LLAMA 2 CHAT itself, in the hope that the model can learn a diverse set of skills through the large amount of RLHF data and transfer that knowledge to long-context scenarios via self-instruct data.</td></tr><tr><td>47-50 (X)</td><td>The data generation process focuses on QA-format tasks: starting from a long document in our pretraining corpus, we select a random chunk and prompt LLAMA 2 CHAT to write question-answer pairs based on information in the text chunk. [...]</td></tr><tr><td>Query</td><td>What aspects of the LLAMA Long model proposed above have changed relative to the LLAMA-based model? What improvements have been made?</td></tr><tr><td>Response (single statement due to space)</td><td>[...] 3. Instruction Tuning: The paper proposes a simple and cost-effective procedure to instruction finetune the continually pretrained long models without any human-annotated data. This involves using a pre-built large and diverse short-prompt dataset and augmenting it with synthetic self-instruct long data generated by LLAMA CHAT itself. [...]</td></tr><tr><td colspan="2">Citation Strings (green: correct; red: wrong)</td></tr><tr><td>Baseline</td><td>[42-42] [45-50]</td></tr><tr><td>SelfCite BoN</td><td>[23-23] [45-45] [46-46]</td></tr></table>
477
+
478
+ Table 11. An example of differences in the citations from the baseline vs. BoN. Related information is highlighted in the context/response.
479
+
480
+ <table><tr><td>Sent. ID</td><td>Context Sentences (only showing cited sentences due to space)</td></tr><tr><td>299 (X)</td><td>Ribosomes link amino acids together in the order specified by the codons of messenger RNA (mRNA) molecules to form polypeptide chains.</td></tr><tr><td>300 (✓)</td><td>Ribosomes consist of two major components: the small and large ribosomal subunits.</td></tr><tr><td>301 (✓)</td><td>Each subunit consists of one or more ribosomal RNA (rRNA) molecules and many ribosomal proteins (RPs or t-proteins).</td></tr><tr><td>302 (✓)</td><td>The ribosomes and associated molecules are also known as the translational apparatus.</td></tr><tr><td>383 (X)</td><td>Ribosomes are minute particles consisting of RNA and associated proteins that function to synthesize proteins.</td></tr><tr><td>384 (X)</td><td>Proteins are needed for many cellular functions such as repairing damage or directing chemical processes.</td></tr><tr><td>385 (X)</td><td>Ribosomes can be found floating within the cytoplasm or attached to the endoplasmic reticulum.</td></tr><tr><td>390 (✓)</td><td>Each subunit is made of one or more rRNAs and many r-proteins.</td></tr><tr><td>391 (X)</td><td>The small subunit (30S in bacteria and archaea, 40S in eukaryotes) has the decoding function, whereas the large subunit (50S in bacteria and archaea, 60S in eukaryotes) catalyzes the formation of peptide bonds, referred to as the peptidyl-transferase activity.</td></tr><tr><td>392 (X)</td><td>The bacterial (and archaeal) small subunit contains the 16S rRNA and 21 r-proteins (Escherichia coli), whereas the eukaryotic small subunit contains the 18S rRNA and 32 r-proteins (Saccharomyces cerevisiae; although the numbers vary between species).</td></tr><tr><td>393 (X)</td><td>The bacterial large subunit contains the 5S and 23S rRNAs and 34 r-proteins (E. coli), with the eukaryotic large subunit containing the 5S, 5.8S, and 25S / 28S rRNAs and 46 r-proteins (S. cerevisiae; again, the exact numbers vary between species).</td></tr><tr><td>Query</td><td>The large subunit and small subunit that use two types of RNA are major components that make up what?</td></tr><tr><td>Response (single statement due to space)</td><td>[...] The ribosome consists of two major components: the small and large ribosomal subunits. Each subunit consists of one or more ribosomal RNA (rRNA) molecules and many ribosomal proteins (RPs or r-proteins). The ribosomes and associated molecules are also known as the translational apparatus. [...]</td></tr><tr><td colspan="2">Citation Strings (green: correct; red: wrong)</td></tr><tr><td>Baseline</td><td>[299-302] [383-385] [390-393]</td></tr><tr><td>SelfCite BoN</td><td>[300-302] [390-393]</td></tr></table>
481
+
482
+ Table 12. An example of differences in the citations from the baseline vs. BoN. Related information is highlighted in the context/response.
483
+
484
+ <table><tr><td>Sent. ID</td><td>Context Sentences (only showing cited sentences due to space)</td></tr><tr><td>28 (✓)</td><td>The Aegis BMD system exists in several variants.</td></tr><tr><td>29 (✓)</td><td>Listed in order of increasing capability, these include (but are not necessarily limited to) 3.6.X variant, the 4.0.3 variant, the 4.1 variant (also known as the Aegis Baseline [BL] 5.4 variant), the 5.0 CU (Capability Upgrade) variant (also known as the BL 9.1 variant), the 5.1 variant (also known as the BL 9.2 variant), and the 6.X variant (also known as the BL 10 variant).</td></tr><tr><td>30 (X)</td><td>Figure 1 summarizes the capabilities of some of these variants (using their designations as of 2016) and correlates them with the phases of the European Phased Adaptive Approach (or EPAA; see discussion below) for European BMD operations.</td></tr><tr><td>61 (✓)</td><td>The BMD interceptor missiles used by Aegis ships are the Standard Missile-3 (SM-3), the SM-2 Block IV, and the SM-6.</td></tr><tr><td>62 (✓)</td><td>The SM-3 is designed to intercept ballistic missiles above the atmosphere (i.e., exo-atmospheric intercept), in the midcourse phase of an enemy ballistic missile's flight.</td></tr><tr><td>70 (✓)</td><td>The SM-2 Block IV is designed to intercept ballistic missiles inside the atmosphere (i.e., endo-atmospheric intercept), during the terminal phase of an enemy ballistic missile's flight.</td></tr><tr><td>71 (X)</td><td>It is equipped with a blast fragmentation warhead.</td></tr><tr><td>Query</td><td>Please write a one-page summary of the above government report.</td></tr><tr><td>Response (single statement due to space)</td><td>[...] The Aegis BMD system exists in several variants with increasing capability, including the 3.6.X, 4.0.3, 4.1, 5.0 CU, 5.1, and 6.X variants. The system uses the Standard Missile-3 (SM-3), SM-2 Block IV, and SM-6 interceptor missiles. The SM-3 is designed to intercept ballistic missiles above the atmosphere in the midcourse phase, while the SM-2 Block IV and SM-6 are designed to intercept ballistic missiles inside the atmosphere in the terminal phase. [...]</td></tr><tr><td colspan="2">Citation Strings (green: correct; red: wrong)</td></tr><tr><td>Baseline</td><td>[28-29] [61-62] [70-71]</td></tr><tr><td>SelfCite BoN</td><td>[28-30] [61-62] (missing: 70)</td></tr></table>
2502.09xxx/2502.09604/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5bc5cb08f9fb908dc9bc08603cbd4762ae2b12d28ebdb7d712e72b426bb3a3d5
3
+ size 1617606
2502.09xxx/2502.09604/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09615/555c522b-66f3-4636-b28c-3841415ddf3f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8540ad8624b29c90f2926dd24d348baab5313c3cffb33f8b82c88c2c5ff4a236
3
+ size 27345099
2502.09xxx/2502.09615/full.md ADDED
@@ -0,0 +1,449 @@
 
 
 
 
1
+ # RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets
2
+
3
+ ISABELLA LIU, UC San Diego, USA
4
+
5
+ ZHAN XU, Adobe Research, USA
6
+
7
+ WANG YIFAN, Adobe Research, USA
8
+
9
+ HAO TAN, Adobe Research, USA
10
+
11
+ ZEXIANG XU, Hillbot Inc., USA
12
+
13
+ XIAOLONG WANG, UC San Diego, USA
14
+
15
+ HAO SU, UC San Diego, USA and Hillbot Inc., USA
16
+
17
+ ZIFAN SHI, Adobe Research, USA
18
+
19
+ ![](images/3076a6c60076e922596fefb67e309ec865fdb1e733c96270c09b8c1c1965e721.jpg)
20
+ Fig. 1. RigAnything is an autoregressive transformer-based approach for automatic rigging. From an arbitrarily posed shape (shown on the left), it can generate a skeleton and skinning weights that adapt seamlessly to the input's global structure (shown on the right), enabling articulation into new poses.
21
+
22
+ We present RigAnything, a novel autoregressive transformer-based model, which makes 3D assets rig-ready by probabilistically generating joints and skeleton topologies and assigning skinning weights in a template-free manner. Unlike most existing auto-rigging methods, which rely on predefined skeleton templates and are limited to specific categories like humanoid, RigAnything approaches the rigging problem in an autoregressive manner, iteratively predicting the next joint based on the global input shape and the previous prediction. While autoregressive models are typically used to generate sequential data, RigAnything extends its application to effectively learn and represent skeletons, which are inherently tree structures. To achieve this, we organize the joints in a breadth-first search (BFS) order, enabling the skeleton to be defined as a sequence of 3D locations and the parent index. Furthermore, our model improves the accuracy of position prediction by leveraging diffusion modeling, ensuring precise and consistent placement of
23
+
24
+ Authors' Contact Information: Isabella Liu, lal005@ucsd.edu, UC San Diego, USA; Zhan Xu, zhaxu@adobe.com, Adobe Research, USA; Wang Yifan, yifwang@adobe.com, Adobe Research, USA; Hao Tan, hatan@adobe.com, Adobe Research, USA; Zexiang Xu, zexiangxu@gmail.com, Hillbot Inc., USA; Xiaolong Wang, xiw012@ucsd.edu, UC San Diego, USA; Hao Su, haosu@ucsd.edu, UC San Diego, USA and Hillbot Inc., USA; Zifan Shi, vivianszf9@gmail.com, Adobe Research, USA.
25
+
26
+ Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
27
+
28
+ © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
29
+
30
+ ACM 1557-7368/2025/8-ART
31
+
32
+ https://doi.org/10.1145/3731149
33
+
34
+ joints within the hierarchy. This formulation allows the autoregressive model to efficiently capture both spatial and hierarchical relationships within the skeleton. Trained end-to-end on both RigNet and Objaverse datasets, RigAnything demonstrates state-of-the-art performance across diverse object types, including humanoids, quadrupeds, marine creatures, insects, and many more, surpassing prior methods in quality, robustness, generalizability, and efficiency. It achieves significantly faster performance than existing auto-rigging methods, completing rigging in under a few seconds per shape. Please check our website for more details: https://www.liuisabella.com/RigAnything.
35
+
36
+ CCS Concepts: • Computing methodologies → Animation; Neural networks.
37
+
38
+ Additional Key Words and Phrases: Animation Skeleton, Automatic Rigging, Skinning, Autoregressive Modeling, Transformer-Based Models
39
+
40
+ # ACM Reference Format:
41
+
42
+ Isabella Liu, Zhan Xu, Wang Yifan, Hao Tan, Zexiang Xu, Xiaolong Wang, Hao Su, and Zifan Shi. 2025. RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets. ACM Trans. Graph. 44, 4 (August 2025), 12 pages. https://doi.org/10.1145/3731149
43
+
44
+ # 1 Introduction
45
+
46
+ Recent advancements in large-scale 3D asset generation [Hong et al. 2023; Jun and Nichol 2023; Li et al. 2023; Liu et al. 2024b, 2023; Nichol et al. 2022; Shi et al. 2023; Xu et al. 2023] have enabled the creation of highly detailed static shapes. However, since motion is an essential aspect of how humans perceive and interact with the world, there is a growing demand for modeling dynamics to create lifelike and interactive assets [Liu et al. 2024a]. While some approaches leverage
47
+
48
+ ![](images/eb9dba886c3fbd61c75d000ac157b3a0a48f158f50ac7a3e6e66c25a75f9be37.jpg)
49
+ Input
50
+
51
+ ![](images/85e15e4c3f25b2dcd6401c6094b6fa94795ffc30e780a87ed4821bddf3677676.jpg)
52
+ Output
53
+
54
+ ![](images/cea6d53a52a0af05b904850a69bf1347a85fd78107867d16909b95feb9d8b47e.jpg)
55
+ Input
56
+ Fig. 2. Skeleton generation given real images, showing our method generalizes well to real data. More real results in Fig. 10.
57
+
58
+ ![](images/d90e9090daf4b14304dfeeb094453d8f064e684b9df550a5fd616e73707b76f2.jpg)
59
+ Output
60
+
61
+ ![](images/169fe87c0ba565a8d5bd87456533b09d42565d3c24ff2b990a8872865aa5f422.jpg)
62
+ Input
63
+
64
+ ![](images/cc69c59c2d49b421253e5fed4052f2a9bfa6ab78cae26d6abef238a687c8e8ab.jpg)
65
+ Output
66
+
67
+ <table><tr><td>Methods</td><td>Humanoid</td><td>Non-Humanoid</td><td>Template-Free</td><td>Arbitrary Pose</td><td>Rigging Time (A100)</td></tr><tr><td>TARig [Ma and Zhang 2023]</td><td>✓</td><td>✗</td><td>✗</td><td>✗</td><td>~40s</td></tr><tr><td>Pinocchio [Baran and Popović 2007]</td><td>✓</td><td>✓</td><td>✗</td><td>✗</td><td>~40s</td></tr><tr><td>RigNet [Xu et al. 2020]</td><td>✓</td><td>✓</td><td>✓</td><td>✗</td><td>~120s</td></tr><tr><td>RigAnything (Ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>~2s</td></tr></table>
68
+
69
+ Table 1. Feature comparison with other Auto-Rigging tools.
70
+
71
+ text-based [Bahmani et al. 2024; Singer et al. 2023; Zhao et al. 2023] or video-guided [Ren et al. 2023; Yin et al. 2023] control to animate objects, these methods often fall short in providing the precision and flexibility required by artists to fully realize their creative visions. Rigging, in contrast, offers a robust and artist-friendly framework for animation, enabling fine-grained control over degrees of freedom and range of motion. Our work addresses this need by presenting a systematic approach to automating rigging, advancing the state of the art in articulable asset generation.
72
+
73
+ Auto-rigging has long been a challenging research problem in computer graphics [Baran and Popovic 2007; Chu et al. 2024; Guo et al. 2024; Li et al. 2021; Xu et al. 2020]. Table 1 provides a concise summary of state-of-the-art methods in this domain. Most existing approaches depend on predefined skeleton templates [Baran and Popovic 2007; Chu et al. 2024; Guo et al. 2024; Li et al. 2021], which limit their applicability to specific categories, such as humanoid characters (Tab. 1). To overcome template reliance, RigNet [Xu et al. 2020] employs non-differentiable operators, including clustering for joint position acquisition and a minimum spanning tree for topology construction. However, this approach requires approximately two minutes to rig a single object and is further constrained to operate only on objects in rest poses.
74
+
75
+ In this work, we propose a transformer-based autoregressive model, termed RigAnything, to make any 3D asset "rig-ready". The autoregressive model probabilistically "grows" the skeleton from the root joint in a sequential manner; skinning weights for any surface sample are then inferred by holistically considering all the joints.
76
+
77
+ Specifically, we represent the tree-structured skeleton as a sequence by ordering the joints in a breadth-first search (BFS) order, where each joint is defined by a 3D position and a parent index. This autoregressive formulation is particularly suited for skeleton prediction, as it addresses the inherent ambiguity in joint configurations by representing them as a probabilistic distribution. Additionally, by sequentially generating joints and connections without relying on a predefined template, the model supports arbitrary skeleton structures and varying numbers of joints, enabling broad generalization
78
+
79
+ across diverse object categories. Furthermore, while transformer-based autoregressive models are traditionally designed to handle discrete values [Brown et al. 2020; Radford et al. 2019; Waswani et al. 2017], inspired by recent work utilizing autoregressive models for image generation [Li et al. 2024], we adopt a diffusion sampling process to predict the continuously valued joint positions, resulting in superior accuracy. Given the predicted skeleton, we infer the skinning weights by a pair-wise computation. We employ transformer blocks throughout the model to comprehensively capture the global shape structure, as well as the interdependence among all joints and their associated surface points.
80
+
81
+ We train our model end-to-end on both the RigNet dataset [Xu et al. 2020] and a curated subset of high-quality animatable assets from the Objaverse dataset [Deitke et al. 2023]. We rigorously filter the Objaverse dataset and select 9686 high-quality rigged shapes, which enrich the dataset for research in this direction. The input shapes are further augmented with random pose variations to enhance robustness. Our training data encompasses a wide range of object types, including bipedal, quadrupedal, avian, marine, insectoid, and manipulable rigid objects, as well as a diverse set of initial poses. This extensive scale and diversity of training data surpasses all prior work, playing a critical role in achieving broad generalizability across shape categories and configurations.
82
+
83
+ Extensive experiments demonstrate that RigAnything achieves state-of-the-art performance in the auto-rigging task, as demonstrated in Fig. 1, Fig. 2 and Fig. 7, surpassing prior methods in quality, robustness, generalizability, and efficiency. By automating rigging for diverse 3D assets, our method advances the vision of fully interactive 3D environments and scalable 3D content creation, empowering artists and developers with a powerful, efficient tool.
84
+
85
+ # 2 Related Work
86
+
87
+ # 2.1 Automatic Rigging
88
+
89
+ Rigging is a fundamental technique for animation in computer graphics. Traditional automatic rigging methods, such as Pinocchio [Baran and Popovic 2007], rely on predefined skeletons and optimize their variations to fit a range of characters, with skinning weights determined by analyzing vertex-bone deformation relationships. However, the optimization process is computationally expensive and diminishes the generalizability. Recent advances in deep learning have improved the quality and adaptability of rigging. TARig [Ma and Zhang 2023] utilizes a template with adaptive joints and a boneflow field to generate skeletons and skinning weights for humanoid characters. Li et al. [2021] leverage a predefined
90
+
91
+ ![](images/07e71cf8afd036f408d5d1c5ed1d674eac078a5fb82ccbd4da37b3e1ba226fb5.jpg)
92
+ Fig. 3. A single step in our method: The input shape and the previously predicted skeleton sequence are tokenized using two separate tokenizers. These tokens are processed through a chain of autoregressive transformer blocks with a hybrid attention mask. Shape tokens perform self-attention to capture global geometric information, while skeleton tokens attend to all shape tokens and use causal attention within themselves to maintain the autoregressive generation process. After the transformer blocks, a skinning module decodes shape tokens into skinning weights, a joint diffusion module samples the next joint position, and a connectivity module predicts the next joint's connection to its preceding joints.
93
+
94
+ skeleton template for characters to learn rigging and propose neural blend shapes to enhance deformation quality. However, these methods are confined to humanoid characters in standard poses and rely heavily on predefined templates, limiting their robustness and generalization to diverse objects, poses, and skeleton topologies.
95
+
96
+ Unlike these methods, RigNet [Xu et al. 2020] uses a combination of regression and adaptive clustering to handle varying numbers of joints and employs a deep neural network for connectivity prediction, allowing various topologies without templates or assumptions about shape classes and structures. However, it lacks robustness and efficiency because its pipeline, which relies on clustering and Minimum Spanning Tree operations, is not end-to-end trainable. Make-it-Animatable [Guo et al. 2024] and HumanRig [Chu et al. 2024] are works developed concurrently with ours. They also focus solely on humanoid characters and rely on template skeletons, restricting their adaptability to more diverse data categories. In contrast, our method eliminates the need for templates and avoids assumptions about skeleton topology, achieving greater generalizability and robustness for diverse object types in a feed-forward manner.
97
+
98
+ # 2.2 Autoregressive Models for 3D
99
+
100
+ Autoregressive models are a powerful class of probabilistic models widely applied across domains such as natural language processing [Achiam et al. 2023; Brown et al. 2020; Radford et al. 2019] and computer vision [Chen et al. 2020; Esser et al. 2021; Li et al. 2024; Parmar et al. 2018]. In 3D tasks, autoregressive models have also demonstrated remarkable potential in areas like shape generation [Cheng et al. 2022; Ibing et al. 2023; Mittal et al. 2022; Qian et al. 2024; Yan et al. 2022] and motion generation [Han et al. 2024; Rempe et al. 2021; Zhang et al. 2023]. In 3D shape generation, methods mostly focus on designing effective representations for autoregressive modeling. ShapeFormer [Yan et al. 2022] introduces a sparse representation that quantizes non-empty voxel grids in a predefined order. AutoSDF [Mittal et al. 2022] takes a different approach by
101
+
102
+ modeling the entire space and using randomized sampling orders to enable non-sequential modeling. Octree Transformer [Ibing et al. 2023] introduces octree-based hierarchical shape representations with adaptive compression, significantly reducing sequence lengths. Cheng et al. [2022] decompose point clouds into semantically aligned sequences. Argus3D [Qian et al. 2024] utilizes discrete representation learning on a latent vector and scales up the model to improve the quality and versatility of 3D generation. Similarly, autoregressive models have advanced 3D motion generation. MotionVAEs [Ling et al. 2020] uses autoregressive conditional variational autoencoders to learn a latent action space for human movement generation and control. T2M-GPT [Zhang et al. 2023] uses motion VQ-VAE and textual descriptions for human motion generation. HuMoR [Rempe et al. 2021] proposes hierarchical latent variables for realistic motion synthesis. AMD [Han et al. 2024] presents an autoregressive model that iteratively generates complex 3D human motions from long textual descriptions. In this paper, we pioneer the application of autoregressive models to the task of automatic rigging, marking a significant advancement in this domain.
103
+
104
+ # 3 Method
105
+
106
+ Our goal is to transform a given 3D shape into an animatable asset by generating a plausible skeleton and predicting the corresponding skinning weights. These enable the 3D asset to be articulated under joint transformations using Linear Blend Skinning (LBS). In this section, we first describe two types of skeleton ambiguities in Sec. 3.1. We then present our novel autoregressive model for skeleton generation in Sec. 3.2, followed by our approach to skinning weight prediction in Sec. 3.3. Finally, we provide a detailed description of the model architecture and the overall training objective in Sec. 3.4 and Sec. 3.5.
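+
+ Since the articulated output is driven by Linear Blend Skinning, a minimal NumPy sketch of LBS is included below for reference; the function name and the use of per-joint 4×4 homogeneous transforms are illustrative assumptions rather than the paper's implementation.
+
+ ```python
+ import numpy as np
+
+ def linear_blend_skinning(points, skin_weights, joint_transforms):
+     """Deform surface points with Linear Blend Skinning (LBS).
+
+     points:           (L, 3) rest-pose surface points
+     skin_weights:     (L, K) per-point weights, rows sum to 1
+     joint_transforms: (K, 4, 4) homogeneous transforms mapping each joint's
+                       rest-pose frame to its posed frame
+     returns:          (L, 3) deformed points
+     """
+     L = points.shape[0]
+     homo = np.concatenate([points, np.ones((L, 1))], axis=1)        # (L, 4)
+     per_joint = np.einsum("kij,lj->lki", joint_transforms, homo)    # (L, K, 4) per-joint transformed points
+     blended = np.einsum("lk,lki->li", skin_weights, per_joint)      # (L, 4) weighted blend over joints
+     return blended[:, :3]
+ ```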
107
+
108
+ # 3.1 Skeleton Ambiguity
109
+
110
+ We identify two types of skeleton ambiguities that commonly occur in existing pipelines. (1) Sibling ambiguity: The ordering of nodes at the same depth in the skeleton tree is undefined, as illustrated in Fig. 4. For instance, if the skeleton is traversed in breadth-first search (BFS) order and the preceding skeleton tokens are 1, 2, and 3, the next joint could be either 4 or 5, each being equally valid.
111
+
112
+ ![](images/2bdb9c847b4580cd0802f34f4bb42b8e262b7f5de1ccded0eec1871339a760d6.jpg)
113
+ Fig. 4. Illustration of sibling ambiguity during BFS ordering in skeletons.
114
+
115
+ (2) Topology ambiguity: An object may have multiple valid skeleton topologies, as shown in Fig. 5. This requires the method to model a distribution over multiple plausible configurations. Our method naturally addresses this ambiguity by modeling the distribution of the next joint based on preceding predictions, which offers a distinct advantage over deterministic approaches in capturing the inherent uncertainty in joint positions.
116
+
117
+ ![](images/11678a9c0a42f044ac846a812fde23dd4d575bf5fdf097ac31e7edfa77ffbf28.jpg)
118
+ Fig. 5. Examples of different valid skeleton topologies for the same shape.
119
+
120
+ # 3.2 Autoregressive Skeleton Prediction
121
+
122
+ Autoregressive Modeling. The key component of our method is an autoregressive model for skeleton prediction, which addresses the ambiguity of skeleton structures and eliminates the need for predefined templates. To convert the tree-structured skeleton into a sequence that can be effectively processed by the autoregressive model, we adopt BFS to serialize the skeleton into a list:
123
+
124
+ $$
125
+ \mathcal {J} = \left[ \left(j _ {1}, p _ {1}\right), \left(j _ {2}, p _ {2}\right), \dots , \left(j _ {K}, p _ {K}\right) \right], \tag {1}
126
+ $$
127
+
128
+ where $j_{k} \in \mathbb{R}^{3}$ and $p_{k} \in \{1, \dots, K\}$ denote the 3D position and the parent index of the $k$ -th joint respectively. As we adopt the BFS order, $p_{k} < k$ and the first element $(j_{1}, p_{1})$ always represents the root joint. The order of joints at the same BFS depth level is non-deterministic. To resolve this ambiguity, we randomly sample the order during training and use generative modeling to cover the uncertainty.
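+
+ As a concrete illustration of Eq. (1), the sketch below serializes a skeleton tree into a BFS-ordered list of (position, parent index) pairs and randomizes sibling order within each level, mirroring the training-time strategy described above; the dictionary-based skeleton encoding and the convention that the root's parent index points to itself (index 0) are assumptions for illustration.
+
+ ```python
+ import random
+
+ def serialize_skeleton(positions, children, root):
+     """Serialize a skeleton tree into a BFS-ordered list of (position, parent_index).
+
+     positions: dict joint_id -> (x, y, z)
+     children:  dict joint_id -> list of child joint_ids
+     root:      joint_id of the root
+     """
+     sequence = []                      # list of (xyz, parent index into `sequence`)
+     index_of = {}                      # joint_id -> position in the output sequence
+     queue = [(root, None)]             # (joint_id, parent joint_id)
+     while queue:
+         next_queue = []
+         for joint_id, parent_id in queue:
+             parent_idx = index_of[parent_id] if parent_id is not None else 0
+             index_of[joint_id] = len(sequence)
+             sequence.append((positions[joint_id], parent_idx))
+             next_queue.extend((c, joint_id) for c in children.get(joint_id, []))
+         random.shuffle(next_queue)     # randomize sibling order within the next BFS level
+         queue = next_queue
+     return sequence
+ ```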
129
+
130
+ Given an input shape $S$ represented by $L$ sampled points, we factorize the joint probability of the skeleton by the chain rule:
131
+
132
+ $$
133
+ P (\mathcal {J} \mid \mathcal {S}) = \prod_ {k = 1} ^ {K} P (j _ {k}, p _ {k} \mid \mathcal {J} _ {1: k - 1}, \mathcal {S}).
134
+ $$
135
+
136
+ where $\mathcal{J}_{1:k}$ is shorthand for the sublist of $\mathcal{J}$ up to the $k$-th element.
137
+
138
+ The autoregressive model is tasked to iteratively predict the conditional distribution of each joint position $j_{k}$ and parent index $p_{k}$ , formulated as
139
+
140
+ $$
141
+ P \left(j _ {k}, p _ {k} \mid \mathcal {J} _ {1: k - 1}, S\right) = P \left(j _ {k} \mid \mathcal {J} _ {1: k - 1}, S\right) P \left(p _ {k} \mid j _ {k}, \mathcal {J} _ {1: k - 1}, S\right).
142
+ $$
143
+
144
+ Instead of directly modeling in the original joint space, we map all previously predicted joints and their corresponding parents into a higher-dimensional token space to enhance the model's expressive capacity. This token space effectively represents the evolving state of the skeleton, capturing its structural and hierarchical information as new joints and connections are incrementally added. Similarly, a sequence of shape tokens is extracted to encapsulate the global structure of the input shape, providing consistent contextual information throughout the modeling process. Denoting the skeleton tokens as $T_{1:k-1} \in \mathbb{R}^{(k-1) \times d}$ and the shape tokens as $H \in \mathbb{R}^{L \times d}$, where $d$ is the dimension of each token, the prediction targets are reformulated as:
145
+
146
+ $$
147
+ P\left(j_{k} \mid T_{1:k-1}, H\right) \quad \text{and} \quad P\left(p_{k} \mid j_{k}, T_{1:k-1}, H\right). \tag{2}
148
+ $$
149
+
150
+ The extraction of the skeleton tokens $T$ and shape tokens $H$ is detailed in Sec. 3.4.
151
+
152
+ Joint Prediction with Diffusion Model. To predict the next joint position, which is continuously valued, we address the limitation that most autoregressive models are traditionally designed for discrete outputs, making them less effective for continuous-valued tasks. Inspired by recent autoregressive image generation models [Li et al. 2024], we adopt a diffusion sampling process [Dhariwal and Nichol 2021; Ho et al. 2020; Nichol and Dhariwal 2021] to handle the continuous nature of joint positions. Diffusion models are particularly suited for this task because they iteratively refine samples, effectively resolving the structural ambiguities inherent in skeleton tree representations. For readability, we drop the current joint index $k$ in the following part.
153
+
154
+ Forward Diffusion Process: The forward process gradually adds Gaussian noise to the ground-truth joint $j^0$ over $M$ time steps, producing increasingly noisy versions $j^{m}$ . This is formulated as:
155
+
156
+ $$
157
+ j ^ {m} = \sqrt {\bar {\alpha} _ {m}} j ^ {0} + \sqrt {1 - \bar {\alpha} _ {m}} \epsilon ,
158
+ $$
159
+
160
+ where $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ is Gaussian noise, and $\bar{\alpha}_m = \prod_{s=1}^m \alpha_s$ defines a noise schedule.
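+
+ A small NumPy sketch of this forward noising step is given below; the linear beta schedule is a placeholder assumption (the trained model uses a cosine schedule, see Sec. 4.1).
+
+ ```python
+ import numpy as np
+
+ def make_alpha_bar(num_steps=1000, beta_start=1e-4, beta_end=0.02):
+     """Cumulative products alpha_bar_m for a simple linear beta schedule (assumed)."""
+     betas = np.linspace(beta_start, beta_end, num_steps)
+     return np.cumprod(1.0 - betas)
+
+ def forward_noise(j0, m, alpha_bar, rng=None):
+     """Sample j^m = sqrt(alpha_bar_m) * j^0 + sqrt(1 - alpha_bar_m) * eps."""
+     rng = rng or np.random.default_rng()
+     eps = rng.normal(size=j0.shape)
+     jm = np.sqrt(alpha_bar[m]) * j0 + np.sqrt(1.0 - alpha_bar[m]) * eps
+     return jm, eps   # eps is the regression target for the noise estimator
+
+ # Example: noise a ground-truth joint position at time step m = 500.
+ alpha_bar = make_alpha_bar()
+ j0 = np.array([0.1, 0.45, -0.2])
+ jm, eps = forward_noise(j0, m=500, alpha_bar=alpha_bar)
+ ```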
161
+
162
+ Training Objective: We train a noise estimator $\epsilon_{\theta}$ , conditioned on the diffusion time step $m$ and the context $Z \in \mathbb{R}^{(L + k - 1) \times d}$ , where
163
+
164
+ $$
165
+ Z = \text{TransformerBlocks}\left(T_{1:k-1}, H\right), \tag{3}
166
+ $$
167
+
168
+ capturing both the evolving skeleton state and the input shape. The model takes the noisy joint $j^{m}$ as input and predicts the added noise $\epsilon$ . The training objective is defined as:
169
+
170
+ $$
171
+ \mathcal{L}_{\text{joint}}\left(Z, j^{0}\right) = \mathbb{E}_{\epsilon, m}\left[\left\|\epsilon - \epsilon_{\theta}\left(j^{m} \mid m, Z\right)\right\|^{2}\right]. \tag{4}
172
+ $$
173
+
174
+ ![](images/61f938c4c2b58ed0a31325cf2b4fe387ebf6bfc5bcccd7345f47aa0d1336f1ca.jpg)
175
+ Fig. 6. (Left) Hybrid attention mask: Shape tokens use full self-attention, while skeleton tokens attend to shape tokens and apply causal masking among themselves. (Right) The skeleton sequence is autoregressively generated during inference.
176
+
177
+ Reverse Diffusion Process: At inference time, the reverse process iteratively removes noise, sampling the next joint position $j^0 \sim p_\theta(j^0 \mid Z)$ . Starting from a Gaussian sample $j^M \sim \mathcal{N}(0, \mathbf{I})$ , the reverse process is defined as:
178
+
179
+ $$
180
+ j ^ {m - 1} = \frac {1}{\sqrt {\alpha_ {m}}} \left(j ^ {m} - \frac {1 - \alpha_ {m}}{\sqrt {1 - \bar {\alpha} _ {m}}} \epsilon_ {\theta} \left(j ^ {m} \mid m, Z\right)\right) + \sigma_ {m} \delta , \tag {5}
181
+ $$
182
+
183
+ where $\delta \sim \mathcal{N}(0,\mathbf{I})$ is Gaussian noise, and $\sigma_{m}$ denotes the noise level at step $m$ . The final output $j^0$ represents the predicted joint position.
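+
+ The reverse process of Eq. (5) can be sketched as the loop below; `eps_model` stands in for the learned noise estimator $\epsilon_{\theta}$ conditioned on $Z$, and the specific choice of $\sigma_m$ is an assumption.
+
+ ```python
+ import numpy as np
+
+ def sample_joint(eps_model, Z, alpha_bar, rng=None):
+     """Run the reverse diffusion process of Eq. (5) to sample one joint position.
+
+     eps_model: callable (j_m, m, Z) -> predicted noise of shape (3,)
+     Z:         conditioning context from the transformer
+     alpha_bar: (M,) cumulative products of alpha_m
+     """
+     rng = rng or np.random.default_rng()
+     M = len(alpha_bar)
+     alphas = np.empty(M)
+     alphas[0] = alpha_bar[0]
+     alphas[1:] = alpha_bar[1:] / alpha_bar[:-1]          # recover per-step alpha_m
+     j = rng.normal(size=3)                               # j^M ~ N(0, I)
+     for m in range(M - 1, -1, -1):
+         eps = eps_model(j, m, Z)
+         j = (j - (1.0 - alphas[m]) / np.sqrt(1.0 - alpha_bar[m]) * eps) / np.sqrt(alphas[m])
+         if m > 0:                                        # no noise is added at the final step
+             sigma = np.sqrt(1.0 - alphas[m])             # one common choice of noise level (assumed)
+             j = j + sigma * rng.normal(size=3)
+     return j                                             # j^0, the predicted joint position
+ ```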
184
+
185
+ Connectivity Prediction. After we sample the next joint position $j_{k} \in \mathbb{R}^{3}$ from the diffusion module described earlier, we aim to predict how this newly sampled joint $j_{k}$ connects to its ancestor joints. We first update the context $Z_{k}$ with the sampled joint $j_{k}$ through a fusing module F:
186
+
187
+ $$
188
+ Z _ {k} ^ {\prime} = \mathrm {F} \left(Z _ {k}, j _ {k}, \gamma (k)\right), \tag {6}
189
+ $$
190
+
191
+ where $\gamma (k)\in \mathbb{R}^d$ is a positional embedding signaling the current joint index.
192
+
193
+ Next, a connectivity module $\mathrm{C}$ takes $Z_k'$ and each individual predicted skeleton token $T_i$ ($i < k$) (detailed in Sec. 3.4) to produce the parent joint probability,
194
+
195
+ $$
196
+ \mathbf{q}_{k} = \operatorname{Softmax}\left(\left[\mathrm{C}\left(Z_{k}^{\prime}, T_{i}\right)\right]_{i=1}^{k-1}\right). \tag{7}
197
+ $$
198
+
199
+ The connectivity is supervised with the binary cross-entropy loss,
200
+
201
+ $$
202
+ \mathcal{L}_{\text{connect}} = -\sum_{i=1}^{k-1}\left[\hat{y}_{k,i}\log(q_{k,i}) + (1 - \hat{y}_{k,i})\log(1 - q_{k,i})\right], \tag{8}
203
+ $$
204
+
205
+ where $q_{k,i}$ is the $i$ -th element in $\mathbf{q}_k$ and $\hat{y}_{k,i} \in \{0,1\}$ is the ground-truth label indicating whether joint $j_k$ is connected to joint $j_i$ .
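+
+ A toy NumPy sketch of Eqs. (7)–(8) follows: each preceding joint is scored as a parent candidate, the scores are normalized with a softmax, and the result is supervised with cross-entropy; the dot-product scorer standing in for the connectivity module C is purely illustrative.
+
+ ```python
+ import numpy as np
+
+ def softmax(x):
+     x = x - x.max()
+     e = np.exp(x)
+     return e / e.sum()
+
+ def parent_probabilities(z_k, skeleton_tokens, score_fn):
+     """q_k over the k-1 preceding joints (Eq. 7); score_fn plays the role of C."""
+     scores = np.array([score_fn(z_k, t_i) for t_i in skeleton_tokens])
+     return softmax(scores)
+
+ def connectivity_loss(q_k, y_k, eps=1e-12):
+     """Binary cross-entropy over parent candidates (Eq. 8); y_k is one-hot."""
+     return -np.sum(y_k * np.log(q_k + eps) + (1 - y_k) * np.log(1 - q_k + eps))
+
+ # Example with random features and an assumed dot-product scorer.
+ rng = np.random.default_rng(0)
+ z_k = rng.normal(size=8)
+ tokens = rng.normal(size=(4, 8))                 # four preceding joints
+ q = parent_probabilities(z_k, tokens, lambda z, t: z @ t)
+ y = np.array([0.0, 1.0, 0.0, 0.0])               # ground-truth parent is the second joint
+ loss = connectivity_loss(q, y)
+ ```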
206
+
207
+ During training, the ground-truth next joint position $j_{k}$ is fed into the network for connectivity prediction, while during the inference time, $j_{k}$ is sampled from the joint diffusion module and subsequently passed to the connectivity network.
208
+
209
+ # 3.3 Skinning Prediction
210
+
211
+ Skinning weights are described by a matrix $W \in \mathbb{R}^{L \times K}$ , where each element $w_{lk}$ indicates the influence of the $k$ -th joint on the $l$ -th
212
+
213
+ surface point in $S$ . The weight vector $\mathbf{w}_l \in \mathbb{R}^K$ for each surface point must satisfy the following constraints: $\sum_{k=1}^{K} w_{lk} = 1$ and $w_{lk} \geq 0$ for all $k$ .
214
+
215
+ To compute the skinning weight $\mathbf{w}_l$ for each surface point $s_l\in S$ , a skinning prediction module G takes as input the shape token $H_{s_l}\in \mathbb{R}^d$ for point $s_l$ , along with the skeleton token $T_{k}$ for each joint $j_{k}$ ( $k\leq K$ ). The module outputs a predicted influence score for each joint $j_{k}$ on $s_l$ . The final skinning weight $\mathbf{w}_l$ is computed using the softmax function:
216
+
217
+ $$
218
+ \mathbf{w}_{l} = \operatorname{Softmax}\left(\left[\mathrm{G}\left(H_{s_{l}}, T_{k}\right)\right]_{k=1}^{K}\right). \tag{9}
219
+ $$
220
+
221
+ We train this module by minimizing a weighted cross-entropy loss, where the ground-truth skinning weight $\hat{\mathbf{w}}_l$ serves as the weighting factor, which can be written as:
222
+
223
+ $$
224
+ \mathcal{L}_{\text{skinning}} = \frac{1}{L}\sum_{l=1}^{L}\left(-\sum_{k=1}^{K}\hat{w}_{l,k}\log\left(w_{l,k}\right)\right). \tag{10}
225
+ $$
226
+
227
+ This formulation encourages the model to produce higher probabilities for joints with larger ground-truth skinning weights, thereby aligning the learned distribution with the correct influences for each point.
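+
+ Eqs. (9)–(10) can be sketched as below; the dot-product scorer standing in for the skinning module G is an assumption for illustration.
+
+ ```python
+ import numpy as np
+
+ def skinning_weights(point_tokens, joint_tokens):
+     """Softmax over per-joint influence scores for every surface point (Eq. 9)."""
+     scores = point_tokens @ joint_tokens.T            # (L, K) influence scores
+     scores -= scores.max(axis=1, keepdims=True)       # numerical stability
+     e = np.exp(scores)
+     return e / e.sum(axis=1, keepdims=True)
+
+ def skinning_loss(pred_w, gt_w, eps=1e-12):
+     """Weighted cross-entropy of Eq. (10), averaged over the L surface points."""
+     return -np.mean(np.sum(gt_w * np.log(pred_w + eps), axis=1))
+ ```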
228
+
229
+ # 3.4 Autoregressive Transformer Architecture
230
+
231
+ Our autoregressive modeling is anchored on a transformer-based architecture, which outputs the shape tokens $H \in \mathbb{R}^{L \times d}$ and skeleton tokens $T_{1:k} \in \mathbb{R}^{k \times d}$ ($0 < k \leq K$) that serve as conditional inputs for the autoregressive skeleton prediction (Sec. 3.2) and skinning prediction (Sec. 3.3). The extraction of these tokens involves two steps: first, in the "tokenization" step, initial shape and skeleton tokens are lifted from the raw input; this step produces higher-dimensional vectors with sufficient capacity to capture richer information in the subsequent processing steps of the transformer. The transformer then processes the tokens through a series of attention blocks with carefully crafted attention masking to obtain the final shape and skeleton tokens, which are used as inputs to the skeleton and skinning prediction modules in Sec. 3.2 and Sec. 3.3.
232
+
233
+ Tokenization. For the shape data, we sample a set of $L$ surface points $S \in \mathbb{R}^{L \times 3}$ and concatenate them with their corresponding normals $N \in \mathbb{R}^{L \times 3}$ , forming a sequence of $L$ tokens each with 6 dimensions. These tokens are then passed through MLP layers to a $d$ -dimensional space. Formally, the shape tokens $H \in \mathbb{R}^{L \times d}$ can be written as
234
+
235
+ $$
236
+ H = \operatorname{MLP}(\operatorname{Concat}(S, N)). \tag{11}
237
+ $$
238
+
239
+ For the skeleton data, we first apply MLPs to project each joint position $j_{k}$ and its corresponding parent joint position $j_{p_k}$ into a $d$ -dimensional space. These features are then concatenated with positional embeddings, which encode the index of each joint within the sequence. Finally, the concatenated features are processed through MLP layers to obtain the per-joint skeleton tokens. These steps can be expressed formally as
240
+
241
+ $$
242
+ T_{k} = \operatorname{MLP}\left(\operatorname{Concat}\left(\operatorname{MLP}\left(j_{k}\right), \gamma(k), \operatorname{MLP}\left(j_{p_{k}}\right), \gamma\left(p_{k}\right)\right)\right). \tag{12}
243
+ $$
244
+
245
+ The skeleton token $T_{1:K} \in \mathbb{R}^{K \times d}$ is a sequence of individual per-joint tokens in BFS-order.
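+
+ A compact sketch of the tokenization in Eqs. (11)–(12) is shown below, using tiny randomly initialized MLPs and a sinusoidal positional embedding; all dimensions and the embedding form are placeholders rather than the trained model's.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ D = 32                                              # token dimension (placeholder)
+
+ def mlp(in_dim, out_dim):
+     """A tiny two-layer MLP with random weights, standing in for the learned tokenizers."""
+     W1 = rng.normal(size=(in_dim, 64)) * 0.1
+     W2 = rng.normal(size=(64, out_dim)) * 0.1
+     return lambda x: np.maximum(x @ W1, 0.0) @ W2
+
+ def pos_embed(k, dim=D):
+     """Sinusoidal embedding of a joint index (assumed form of gamma)."""
+     i = np.arange(dim // 2)
+     return np.concatenate([np.sin(k / 10000 ** (2 * i / dim)),
+                            np.cos(k / 10000 ** (2 * i / dim))])
+
+ shape_tokenizer = mlp(6, D)                         # Eq. (11): Concat(points, normals) -> H
+ joint_proj = mlp(3, D)
+ skeleton_tokenizer = mlp(4 * D, D)                  # Eq. (12): joint, parent, and their indices
+
+ def shape_tokens(points, normals):
+     return shape_tokenizer(np.concatenate([points, normals], axis=1))   # (L, D)
+
+ def skeleton_token(j_k, j_parent, k, p_k):
+     feat = np.concatenate([joint_proj(j_k), pos_embed(k), joint_proj(j_parent), pos_embed(p_k)])
+     return skeleton_tokenizer(feat)                                      # (D,)
+ ```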
246
+
247
+ ![](images/6aa05b5267689ce574dc14799c0222f1cfaad806f14e24d4208641c587b9a926.jpg)
248
+ Fig. 7. Comparison of reconstructed skeletons between our method, RigNet, and ground truth. Our method generates more accurate and satisfying skeletons across diverse shape categories. While RigNet tends to produce excessive joints and struggles with uncommon shapes like characters with tails or wings, our approach generates a reasonable number of joints and aligns the skeletons closely with the underlying shapes. Note that RigNet supports only rest poses, so all evaluations are conducted on rest-posed objects for fairness.
249
+
250
+ Processing Tokens with Transformer. The extracted shape tokens $H$ and predicted skeleton tokens $T_{1:k-1}$ are concatenated and then treated as $L + (k-1)$ individual tokens. These are then passed through a chain of transformer blocks, in which multi-head self-attention mechanisms ensure that the skeleton tokens and the shape tokens are aware of each other's features, enabling the model to capture rich global information and interdependencies between the shape context and the evolving skeleton structure. We propose a hybrid attention mechanism that applies different attention patterns to shape and skeleton tokens. As shown in the left part of Fig. 6, shape tokens attend to each other via full self-attention to capture global geometric context. For skeleton tokens, we first allow them to attend to all shape tokens to incorporate shape information, and then apply causal attention [Radford et al. 2019; Waswani et al. 2017] among the skeleton tokens so that each token only attends to its preceding tokens in the sequence. This ensures the autoregressive property required for skeleton sequential generation.
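+
+ The hybrid attention pattern of Fig. 6 can be expressed as a boolean mask over the concatenated token sequence; a sketch follows, where True means the query row may attend to the key column (the mask-matrix convention is an assumption).
+
+ ```python
+ import numpy as np
+
+ def hybrid_attention_mask(num_shape_tokens, num_skeleton_tokens):
+     """Shape tokens use full self-attention; skeleton tokens attend to all shape tokens
+     and causally to earlier skeleton tokens."""
+     L, K = num_shape_tokens, num_skeleton_tokens
+     n = L + K
+     mask = np.zeros((n, n), dtype=bool)
+     mask[:L, :L] = True                                   # shape -> shape: full attention
+     mask[L:, :L] = True                                   # skeleton -> shape: full attention
+     mask[L:, L:] = np.tril(np.ones((K, K), dtype=bool))   # skeleton -> skeleton: causal
+     return mask
+
+ # Example: 4 shape tokens and 3 skeleton tokens.
+ print(hybrid_attention_mask(4, 3).astype(int))
+ ```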
251
+
252
+ The output of the last transformer block, $Z_{k}$, serves as the condition in the diffusion model for sampling joint $j_{k}$, as introduced in Sec. 3.2.
253
+
254
+ # 3.5 Final Training Objective
255
+
256
+ We train our entire model end-to-end, ensuring that joint positions, connectivity, and skinning weights are learned in a mutually reinforcing manner. Specifically, we combine the losses from the joint diffusion, connectivity, and skinning prediction modules into a single objective. The integrated objective allows the network to learn
257
+
258
+ coherent skeleton structures and accurate skinning assignments simultaneously.
259
+
260
+ $$
261
+ \mathcal{L} = \mathcal{L}_{\text{joint}} + \mathcal{L}_{\text{connect}} + \mathcal{L}_{\text{skinning}}. \tag{13}
262
+ $$
263
+
264
+ # 4 Experiments
265
+
266
+ # 4.1 Implementation Details
267
+
268
+ Our input point cloud consists of 1024 points, with the maximum number of joints per sample set to 64. The point cloud and joint tokenizers are implemented as two-layer MLPs with hidden dimensions of 512 and 1024. For both parent and skinning prediction modules, we employ two-layer MLPs with hidden dimensions of 1024.
269
+
270
+ The implemented transformer consists of 12 layers with a hidden dimension of 1024. Following the implementation in [Zhang et al. 2025], each transformer block incorporates a multi-head self-attention layer with 16 heads and a two-layered MLP with a hidden dimension of 4096 and a GeLU activation. We employ Pre-Layer Normalization, Layer Normalization (LN), and residual connections consistent with the reference implementation. During training, we employ a hybrid attention masking strategy: shape tokens perform self-attention to effectively capture geometric information, while skeleton tokens use causal attention, attending only to their ancestor skeleton tokens within the sequence to facilitate auto-regressive generation. Additionally, skeleton tokens attend to all shape tokens. During inference, the network processes shape tokens as input and generates skeleton tokens in an auto-regressive manner. We apply KV caching in the transformer blocks to boost the inference speed.
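+
+ For convenience, the hyperparameters reported in this section can be gathered in a single configuration; a sketch is below (field names are ours, values are the reported ones).
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class RigAnythingConfig:
+     # Input and output sizes reported above
+     num_surface_points: int = 1024
+     max_joints: int = 64
+     # Transformer backbone
+     num_layers: int = 12
+     hidden_dim: int = 1024
+     num_heads: int = 16
+     mlp_hidden_dim: int = 4096         # two-layer MLP with GeLU in each block
+     # Joint diffusion
+     diffusion_train_steps: int = 1000  # cosine noise schedule
+     diffusion_sample_steps: int = 50   # resampling steps at inference
+
+ config = RigAnythingConfig()
+ ```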
271
+
272
+ ![](images/9ec38e59ba586d0b5864478a43b8d7086e349db11e324441f1cf4a351aec1aa6.jpg)
273
+ Fig. 8. (Left) Joint diffusion modeling prevents joint collapse to mean positions, capturing diverse modalities. (Right) Pose augmentation improves generalization to unseen poses, ensuring well-aligned skeletons and avoiding excessive joints.
274
+
275
+ The joint diffusion process follows [Li et al. 2024; Nichol and Dhariwal 2021], which has a cosine noise scheduler with 1000 training steps and 50 resampling steps during inference. The denoising MLP is conditioned on the transformer-outputted joint tokens, where these tokens are incorporated into the noise scheduler's time embedding through AdaLN [Peebles and Xie 2023] within the Layer Normalization layers.
276
+
277
+ The fusing module is a two-layer MLP with an input size of 3072 and hidden dimensions of 2048 and 1024. During inference, after obtaining the next joint position via diffusion sampling, a shape tokenizer generates a latent shape token (dimension 1024), which is concatenated with previous context tokens (dimension 1024) and positional embeddings. The fusing module's output serves as the updated context for connectivity and skinning prediction. Both the connectivity and skinning modules share a similar architecture with the fusing module, except that their input size is 2048.
278
+
279
+ In the autoregressive skeleton generation process, we maintain a learnable start token (BOS) at the beginning of each skeleton sequence to indicate the start of generation. As we traverse the skeleton in BFS order, the joint sampled from the start token is considered the root joint. The stop condition is determined by checking the parent of the current joint—if a joint's parent is itself, it indicates that the sequence has reached its end.
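+
+ Putting the pieces together, the inference loop can be sketched as below; `sample_next_joint` and `predict_parent` stand in for the joint diffusion and connectivity modules and are assumptions for illustration.
+
+ ```python
+ def generate_skeleton(sample_next_joint, predict_parent, max_joints=64):
+     """Autoregressively grow a skeleton as a list of (position, parent_index).
+
+     sample_next_joint(sequence) -> 3D position of the next joint
+     predict_parent(sequence, position) -> parent index among previous joints, or the
+         index of the new joint itself to signal the end of the sequence
+     """
+     sequence = []                                    # generation starts from the BOS token
+     for k in range(max_joints):
+         position = sample_next_joint(sequence)       # diffusion sampling of the next joint
+         if k == 0:
+             sequence.append((position, 0))           # the first sampled joint is the root
+             continue
+         parent = predict_parent(sequence, position)  # connectivity over preceding joints
+         if parent == k:                              # a joint predicted as its own parent: stop
+             break
+         sequence.append((position, parent))
+     return sequence
+ ```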
280
+
281
+ # 4.2 Dataset
282
+
283
+ Overview. We utilize both the RigNet dataset [Xu et al. 2020] and the Objaverse dataset [Deitke et al. 2023]. The RigNet dataset contains 2,354 high-quality 3D models with ground-truth skeleton and skinning annotations. The Objaverse dataset offers a large collection of 3D models with varying rigging quality. To ensure data reliability, we filtered it down to 9,686 models with consistent and accurate skeleton and skinning information. Our dataset spans a diverse range of categories, including bipedal, quadrupedal, avian, marine, insectoid, and manipulable rigid objects. For each model, we sample point clouds and face normals from the mesh surface.
284
+
285
+ During training, we perform online pose augmentation to the input data by randomly deforming the input point clouds using the ground-truth skeleton and skinning. Specifically, we apply random perturbations to each joint, with the maximum rotation angle constrained to 45 degrees, and then deform the original point cloud
286
+
287
+ using the perturbed skeleton and the ground truth skinning. As shown in our ablation study in Sec. 4.4 and Fig. 8, this augmentation strengthens our method's ability to generalize to objects in different poses.
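+
+ A sketch of this augmentation is shown below: each joint receives a random rotation of at most 45 degrees and the rest-pose points are re-deformed with the ground-truth skinning; the flat, per-joint blending is a simplification of posing the full kinematic chain.
+
+ ```python
+ import numpy as np
+
+ def random_rotation(max_angle_deg=45.0, rng=None):
+     """A random axis-angle rotation matrix with angle <= max_angle_deg (Rodrigues' formula)."""
+     rng = rng or np.random.default_rng()
+     axis = rng.normal(size=3)
+     axis /= np.linalg.norm(axis)
+     angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
+     K = np.array([[0, -axis[2], axis[1]],
+                   [axis[2], 0, -axis[0]],
+                   [-axis[1], axis[0], 0]])
+     return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
+
+ def augment_pose(points, joints, skin_weights, rng=None):
+     """Randomly pose a shape: rotate points about each joint and blend with GT skinning.
+
+     points:       (L, 3) rest-pose surface points
+     joints:       (K, 3) rest-pose joint positions
+     skin_weights: (L, K) ground-truth skinning weights
+     """
+     rng = rng or np.random.default_rng()
+     L, K = skin_weights.shape
+     deformed = np.zeros_like(points)
+     for k in range(K):
+         R = random_rotation(rng=rng)
+         moved = (points - joints[k]) @ R.T + joints[k]      # rotate points about joint k
+         deformed += skin_weights[:, [k]] * moved            # blend with the ground-truth weights
+     return deformed
+ ```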
288
+
289
+ Data Filtering. We apply a thorough data filtering process to ensure the quality and validity of the dataset. Both manual checks and automated scripts are used to maintain consistency and quality. The filtering is based on the following rules:
290
+
291
+ - Shapes with more than 64 joints are excluded.
292
+ - Shapes with invalid skeletons (e.g., skeleton hierarchy not forming a proper tree) are excluded.
293
+ - Shapes whose skeletons do not align well with the geometry are excluded.
294
+ - Overly simplified or indistinct shapes (e.g., consisting of very few vertices and faces) are excluded.
295
+
296
+ The Objaverse dataset originally contains 21,622 shapes with rigging annotations. We exclude 811 shapes with overly complex skeletons (more than 64 joints), which are often related to facial or hair rigs. Additionally, 10,471 shapes are removed due to low-quality rigging or skinning annotations. After filtering, we obtain a refined set of 12,040 shapes with reliable rigging information.
297
+
298
+ Data Statistics. We organize the shapes in the Objaverse dataset into six categories based on their labels: humanoid/bipedal, quadruped, insectoid, avian, marine, and other. The "other" category mainly includes manipulable articulated rigid objects, such as suitcases, cabinets, and similar items. The number of shapes in each category is summarized in Table 2. We also analyze the distribution of joint counts per shape and present the results in Fig. 9. Our dataset contains more shapes with joint counts falling in the ranges [25, 30], [50, 55], and [60, 64].
299
+
300
+ <table><tr><td></td><td>Number</td></tr><tr><td>Humanoid/Bipedal</td><td>7459</td></tr><tr><td>Quadruped</td><td>543</td></tr><tr><td>Insectoid</td><td>129</td></tr><tr><td>Avian</td><td>176</td></tr><tr><td>Marine</td><td>251</td></tr><tr><td>Other</td><td>830</td></tr><tr><td>Total</td><td>9388</td></tr></table>
301
+
302
+ Table 2. Category statistics of the filtered Objaverse dataset.
303
+
304
+ # 4.3 Evaluation and Baseline Comparisons
305
+
306
+ 4.3.1 Skeleton Prediction. We provide qualitative visualizations of the reconstructed skeletons in comparison with the ground truth and RigNet in Fig. 7. Our method demonstrates superior performance, producing more accurate and satisfying skeletons across various shape categories. In contrast, RigNet [Xu et al. 2020] struggles to recover reasonable skeletons for less common shapes, such as characters with tails or wings, and frequently generates an excessive number of joints. In comparison, our method generates a reasonable number of joints, with the reconstructed skeletons well-aligned to the underlying shape, ensuring better structural consistency and
307
+
308
+ ![](images/26488597b746aaf8caa1ceba85fa9b1dd313f858baafb0f17774f6b772a18a89.jpg)
309
+ Fig. 9. Distribution of joint numbers across shapes in our dataset.
310
+
311
+ fidelity. More skeleton prediction results of our method can be found in Fig. 11.
312
+
313
+ <table><tr><td></td><td>IoU↑</td><td>Prec.↑</td><td>Rec.↑</td><td>CD-J2J↓</td><td>CD-J2B↓</td><td>CD-B2B↓</td></tr><tr><td>RigNet</td><td>0.456</td><td>0.424</td><td>0.591</td><td>0.048</td><td>0.042</td><td>0.030</td></tr><tr><td>RigAnything</td><td>0.768</td><td>0.789</td><td>0.766</td><td>0.034</td><td>0.035</td><td>0.020</td></tr></table>
314
+
315
+ Table 3. Quantitative comparison of skeleton prediction on the RigNet + Objaverse dataset. Our predicted skeletons align better with the ground truth.
316
+
317
+ To quantitatively evaluate the performance of skeleton prediction, we measure the similarity between the predicted skeletons and the ground truth using metrics proposed by RigNet [Xu et al. 2020]: Intersection over Union (IoU), Precision, and Recall for bone matching, as well as Chamfer distances for joints (CD-J2J), bone line segments (CD-B2B), and joint-to-bone line segments (CD-J2B). We train and test both our method and the baseline on the same RigNet + Objaverse dataset. Table 3 presents a comparison with RigNet [Xu et al. 2020] across these metrics. Note that we revised the original RigNet implementation to accommodate non-symmetric models. The results show that our method significantly outperforms the baselines, producing skeletons that align more closely with GT.
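+
+ For reference, a symmetric Chamfer distance between predicted and ground-truth joint sets, in the spirit of CD-J2J, can be sketched as follows; the exact normalization used in the benchmark may differ.
+
+ ```python
+ import numpy as np
+
+ def chamfer_joint_to_joint(pred_joints, gt_joints):
+     """Symmetric Chamfer distance between two joint sets.
+
+     pred_joints: (P, 3), gt_joints: (G, 3)
+     Averages the nearest-neighbor distance in both directions.
+     """
+     d = np.linalg.norm(pred_joints[:, None, :] - gt_joints[None, :, :], axis=-1)  # (P, G) pairwise distances
+     return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
+ ```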
318
+
319
+ We compare our method with humanoid auto-rigging approaches, including TARig [Ma and Zhang 2023] and Neural Blend Shapes (NBS) [Li et al. 2021], and present the results in Table 4. All methods are evaluated on the humanoid subset of the original evaluation sets. The results show that our method produces significantly better skeletons. Note that TARig and NBS cannot be finetuned, as TARig does not provide training scripts and NBS did not release data processing scripts.
320
+
321
+ <table><tr><td></td><td>IoU ↑</td><td>Prec. ↑</td><td>Rec. ↑</td><td>CD-J2J ↓</td><td>CD-J2B ↓</td><td>CD-B2B ↓</td></tr><tr><td>NBS</td><td>0.337</td><td>0.313</td><td>0.377</td><td>0.124</td><td>0.107</td><td>0.377</td></tr><tr><td>TARig</td><td>0.603</td><td>0.591</td><td>0.637</td><td>0.100</td><td>0.090</td><td>0.079</td></tr><tr><td>RigAnything</td><td>0.886</td><td>0.904</td><td>0.884</td><td>0.030</td><td>0.033</td><td>0.018</td></tr></table>
322
+
323
+ Table 4. Comparison of skeleton prediction on the humanoid subset. Our method significantly outperforms other humanoid auto-rigging approaches.
324
+
325
+ 4.3.2 Connectivity Prediction. We evaluate the connectivity prediction performance when the given joints are from ground truth instead of prediction. We measure the binary classification accuracy (Class. Acc.) for assessing joint pair connections, as well as the CD-B2B and edit distance (ED), which measure the geometric and topological difference between the predicted and reference skeletons. As shown in Table 5, our method significantly outperforms RigNet across all metrics.
326
+
327
+ <table><tr><td></td><td>Class. Acc. ↑</td><td>CD-B2B ↓</td><td>ED ↓</td></tr><tr><td>RigNet</td><td>0.358</td><td>0.017</td><td>12.663</td></tr><tr><td>RigAnything</td><td>0.965</td><td>0.001</td><td>2.150</td></tr></table>
328
+
329
+ Table 5. Connectivity prediction on the RigNet + Objaverse dataset. Our method significantly outperforms RigNet across all metrics.
330
+
331
+ 4.3.3 Skinning Prediction. We train both the baseline and our method on the new RigAnything dataset and quantitatively evaluate skinning weight prediction using the metrics proposed in the original RigNet and its evaluation set, with results shown in Table 6. Our method achieves higher precision and lower average L1 error without relying on geometric priors. We also qualitatively compare our method with RigNet and Blender's built-in automatic skinning, which assigns weights based on the shortest Euclidean distance to bones. For fairness, we use the ground truth skeleton during inference. As shown in Fig. 12, our method produces more accurate and consistent weights, especially when differentiating regions that are close in Euclidean space but far in geodesic distance, where baselines fail.
332
+
333
+ <table><tr><td></td><td>Prec.↑</td><td>Rec.↑</td><td>Avg. L1↓</td></tr><tr><td>RigNet</td><td>0.755</td><td>0.828</td><td>0.485</td></tr><tr><td>RigAnything</td><td>0.825</td><td>0.798</td><td>0.432</td></tr></table>
334
+
335
+ Table 6. Quantitative evaluation of skinning weights prediction.
336
+
337
+ # 4.4 Ablation Study
338
+
339
+ We analyze various components of our method and compare their performance with the final model.
340
+
341
+ 4.4.1 Joint Diffusion. In our full model, the joint diffusion module predicts the probability of the next joint position based on preceding joints in a skeleton sequence. This probabilistic approach effectively resolves structural ambiguities in skeleton tree representations, such as equivalent sibling node orderings, by accounting for their equivalence. In an ablation study, we replaced the joint diffusion loss with a deterministic L2 joint position loss. As shown in Fig. 8, using L2 loss leads to joints collapsing toward the middle axis, representing the mean position across samples due to sibling ambiguities within the skeleton sequence. In contrast, our method captures diverse joint position modalities, producing reasonable and accurate joint placements instead of averaged positions. Quantitative results in Tab. 7 further confirm that joint diffusion modeling significantly
342
+
343
+ ![](images/0d954b5bb57ea4f70795d800ea56ff1490fee9d4daffd924cfaffcbc1794f657.jpg)
344
+
345
+ ![](images/bb8fcfc7ab44f426923631ff6117f3b1d49e5a2cebade921446c538baa0e59b7.jpg)
346
+
347
+ ![](images/01c7803ea6318e28dd66aa08b2c0f25bec043077863f421e00ffd735132c8e55.jpg)
348
+ Fig. 10. Skeleton results on shapes from real casual images. We use off-the-shelf image-to-3D model pipeline [Liu et al. 2024c] to generate the shapes from real images and apply RigAnything to predict their skeletons.
349
+
350
+ ![](images/4384a2846b89da72d0353287d16825fc5244d1c786efa8b35e3d907d24cac97b.jpg)
351
+
352
+ ![](images/77ceb5d46ca986664ae9d8add7c21d9a3cf0a92cdab0319807bdc0330c7a2215.jpg)
353
+
354
+ ![](images/61165c6390c639bc8e5b870691be049b751d5a262a884dc3567976d1c45da03f.jpg)
355
+
356
+ ![](images/4cc0c4e60579dfdb1b04e60c5cd738bc3011997ed79632a24fb817125c197377.jpg)
357
+
358
+ ![](images/f5eec288cc76159e8b562058d11baeb5f1fb07d5dff6c8a025f568beeab80f8a.jpg)
359
+
360
+ ![](images/a85b77513faec71b3a397fdb2f0fe4a1d3f70b5cf8c04176e3ea6fd37ccb35ed.jpg)
361
+
362
+ ![](images/32fd3015230605cb0a583cc9554e3d8c15ab0070008d4bb7ec041ac75e07de1d.jpg)
363
+
364
+ ![](images/b2935c7281fdb580d3c4521392716c74534136b924b27ef69cee2c552afe565b.jpg)
365
+
366
+ ![](images/e11490ffd9448dc65406edf841bd811d8754996012e14adfb2e26e6e6a60eda2.jpg)
367
+
368
+ improves our method's performance, boosting the skeleton IoU by almost two times.
369
+
370
+ <table><tr><td></td><td>IoU ↑</td><td>Prec. ↑</td><td>Rec. ↑</td><td>CD-J2J ↓</td><td>CD-J2B ↓</td><td>CD-B2B ↓</td></tr><tr><td>Ours w./o. joint diffusion</td><td>0.308</td><td>0.277</td><td>0.364</td><td>0.068</td><td>0.059</td><td>0.046</td></tr><tr><td>Ours w./o. injecting normal</td><td>0.559</td><td>0.603</td><td>0.547</td><td>0.053</td><td>0.048</td><td>0.034</td></tr><tr><td>Ours w./o. pose aug.</td><td>0.741</td><td>0.768</td><td>0.732</td><td>0.037</td><td>0.037</td><td>0.022</td></tr><tr><td>Ours full model</td><td>0.765</td><td>0.786</td><td>0.765</td><td>0.033</td><td>0.034</td><td>0.019</td></tr></table>
371
+
372
+ Table 7. Ablation study results showing the impact of joint diffusion, normal injection, and pose augmentation on skeleton prediction.
373
+
374
+ 4.4.2 Normal Injecting. To evaluate the impact of incorporating point normals into the shape tokens, we conducted a comparison experiment without point normals as input. The numerical results in Tab. 7 show a significant decline in skeleton performance when normal information is excluded, highlighting the importance of point normals as geometric information for improving performance. We further studied the effect of normal injection on the performance of skinning prediction and provide the results in Table 8. The results demonstrate that normal information improves geodesic inference and skinning prediction, as surfaces with similar normals tend to be geodesically close. This helps the network infer connectivity information that is otherwise missing in point cloud representations compared to meshes.
375
+
376
+ <table><tr><td></td><td>Prec. ↑</td><td>Rec. ↑</td><td>Avg. L1 ↓</td></tr><tr><td>W./o. injecting normal</td><td>0.830</td><td>0.841</td><td>0.413</td></tr><tr><td>W./ injecting normal</td><td>0.836</td><td>0.821</td><td>0.397</td></tr></table>
377
+
378
+ Table 8. Skinning prediction performance with and without normal injection, indicating normal information improves geodesic inference and skinning prediction.
379
+
380
+ 4.4.3 Online Pose Augmentation. We analyze the effect of online data augmentation by randomly deforming input point clouds using the ground-truth skeleton and skinning. As shown in the numerical results in Sec. 4.4, pose augmentation improves skeleton prediction performance. Additionally, Fig. 8 compares results with and without pose augmentation on a character with a random skeleton pose not present in the dataset. Our full model generates a significantly better-aligned skeleton structure, whereas the model trained without pose augmentation fails to produce skeletons aligned with the shape and generates excessive joints. This augmentation enhances our method's ability to generalize to objects in diverse poses. Furthermore, as demonstrated in Fig. 10, our method achieves high-quality skeletons even when the input shapes are obtained from real-world data and the targets are in arbitrary poses.
381
+
382
+ ![](images/b82a625cc874a85c076c721bfafbb3c8ad843d9659ed350e14086be33f94cd0e.jpg)
383
+ Fig. 11. More results on the RigNet dataset. Please refer to the supplementary video for more 360-degree video results on both the RigNet and Objaverse datasets.
384
+
385
+ # 5 Limitation and Future Work
386
+
387
+ Although our method can automatically rig a variety of objects, there are several limitations and potential avenues for future work.
388
+
389
+ ![](images/b3b259779268f603279f013da4079478390e20b54fad3cd29d2764898f52caf8.jpg)
390
+ Fig. 12. Comparison of skinning weight predictions. Our method produces more accurate and consistent weights, especially in challenging cases with large geodesic distances.
391
+
392
+ First, our current approach does not allow for control over the level of detail in the rigs. Specifically, artists may need varying levels of detail in different parts of an object to achieve different degrees of motion control. To support this, more detailed rigging data would need to be collected to enable finer rigging (e.g., in the head and hand areas), and a condition could be introduced into the network to provide control over the level of detail. In addition, our method relies solely on geometric information to infer rigs, which can sometimes lack sufficient cues for the rig structure, leading to ambiguities. To address this, texture information could be incorporated as an additional cue for automatic rigging in future iterations. Furthermore, our skinning weight prediction does not account for different motion styles, such as those influenced by materials. This limitation could be alleviated by incorporating dynamic data into the training process. However, high-quality dynamic data are scarce, and it would be interesting to explore the possibility of collecting such data to further improve rigging performance.
393
+
394
+ # 6 Conclusion
395
+
396
+ In this work, we present RigAnything, an autoregressive transformer-based method that automatically predicts rigs for 3D assets. To address the challenges posed by objects with diverse topologies and eliminate the inherent ambiguities in rigging, RigAnything probabilistically predicts the skeletons and assigns skinning weights without relying on any templates. This approach allows RigAnything to be trained end-to-end on both RigNet and the diverse Objaverse dataset, ensuring its versatility. Extensive experiments highlight the superiority of RigAnything across a wide range of object categories, showcasing its effectiveness and generalizability.
397
+
398
+ # Acknowledgments
399
+
400
+ We thank Miyuki Richardson for helping create the teaser animation. This work was done when Isabella Liu was a research intern at Adobe Research. Prof. Xiaolong Wang was supported in part by gifts from Amazon and Meta. This work was also partially supported by Hillbot Inc., with Prof. Hao Su officially serving as its CTO.
401
+
402
+ # References
403
+
404
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
405
+ Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, and David B Lindell. 2024. 4d-fy: Text-to-4d generation using hybrid score distillation sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
406
+ Ilya Baran and Jovan Popovic. 2007. Automatic rigging and animation of 3d characters. ACM Transactions on graphics (TOG) (2007).
407
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems (2020).
408
+ Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020. Generative pretraining from pixels. In International conference on machine learning. PMLR, 1691-1703.
409
+ An-Chieh Cheng, Xueting Li, Sifei Liu, Min Sun, and Ming-Hsuan Yang. 2022. Autoregressive 3d shape generation via canonical mapping. In European Conference on Computer Vision.
410
+ Zedong Chu, Feng Xiong, Meiduo Liu, Jinzhi Zhang, Mingqi Shao, Zhaoxu Sun, Di Wang, and Mu Xu. 2024. HumanRig: Learning Automatic Rigging for Humanoid Character in a Large Scale Dataset. arXiv preprint arXiv:2412.02317 (2024).
411
+ Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. 2023. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13142-13153.
412
+ Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34 (2021), 8780-8794.
413
+ Patrick Esser, Robin Rombach, and Bjorn Ommer. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.
414
+ Zhiyang Guo, Jinxu Xiang, Kai Ma, Wengang Zhou, Houqiang Li, and Ran Zhang. 2024. Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters. arXiv preprint arXiv:2411.18197 (2024).
415
+ Bo Han, Hao Peng, Minjing Dong, Yi Ren, Yixuan Shen, and Chang Xu. 2024. AMD: Autoregressive Motion Diffusion. In Proceedings of the AAAI Conference on Artificial Intelligence.
416
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems 33 (2020), 6840-6851.
417
+ Yicong Hong, Kai Zhang, Jiuxiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. 2023. Lrm: Large reconstruction model for single image to 3d. arXiv preprint arXiv:2311.04400 (2023).
418
+ Moritz Ibing, Gregor Kobsik, and Leif Kobbelt. 2023. Octree transformer: Autoregressive 3d shape generation on hierarchically structured sequences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
419
+ Heewoo Jun and Alex Nichol. 2023. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463 (2023).
420
+ Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi. 2023. Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model. arXiv preprint arXiv:2311.06214 (2023).
421
+ Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, and Baoquan Chen. 2021. Learning skeletal articulations with neural blend shapes. ACM Transactions on Graphics (TOG) (2021).
422
+ Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. 2024. Autoregressive Image Generation without Vector Quantization. arXiv preprint arXiv:2406.11838 (2024).
423
+ Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel Van De Panne. 2020. Character controllers using motion vaes. ACM Transactions on Graphics (TOG) 39, 4 (2020), 40-1.
424
+ Isabella Liu, Hao Su, and Xiaolong Wang. 2024a. Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos. arXiv preprint arXiv:2404.12379 (2024).
425
+ Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, and Hao Su. 2024b. One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. Advances in Neural Information Processing Systems (2024).
426
+ Minghua Liu, Chong Zeng, Xinyue Wei, Ruoxi Shi, Linghao Chen, Chao Xu, Mengqi Zhang, Zhaoning Wang, Xiaoshuai Zhang, Isabella Liu, et al. 2024c. Meshformer: High-quality mesh generation with 3d-guided reconstruction model. arXiv preprint arXiv:2408.10198 (2024).
427
+ Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 2023. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision.
428
+
429
+ Jing Ma and Dongliang Zhang. 2023. TARig: Adaptive template-aware neural rigging for humanoid characters. Computers & Graphics (2023).
430
+ Paritosh Mittal, Yen-Chi Cheng, Maneesh Singh, and Shubham Tulsiani. 2022. Autosdf: Shape priors for 3d completion, reconstruction and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
431
+ Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. 2022. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751 (2022).
432
+ Alexander Quinn Nichol and Prafulla Dhariwal. 2021. Improved denoising diffusion probabilistic models. In International conference on machine learning. PMLR, 8162-8171.
433
+ Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image transformer. In International conference on machine learning.
434
+ William Peebles and Saining Xie. 2023. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 4195-4205.
435
+ Xuelin Qian, Yu Wang, Simian Luo, Yinda Zhang, Ying Tai, Zhenyu Zhang, Chengjie Wang, Xiangyang Xue, Bo Zhao, Tiejun Huang, et al. 2024. Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability. arXiv preprint arXiv:2402.12225 (2024).
436
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog (2019).
437
+ Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. 2021. Humor: 3d human motion model for robust pose estimation. In Proceedings of the IEEE/CVF international conference on computer vision. 11488-11499.
438
+ Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, and Ziwei Liu. 2023. Dreamgaussian4d: Generative 4d gaussian splatting. arXiv preprint arXiv:2312.17142 (2023).
439
+ Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. 2023. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512 (2023).
440
+
441
+ Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. 2023. Text-to-4d dynamic scene generation. arXiv preprint arXiv:2301.11280 (2023).
442
+ A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A Gomez, L Kaiser, and I Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems.
443
+ Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, et al. 2023. Dmv3d: Denoising multiview diffusion using 3d large reconstruction model. arXiv preprint arXiv:2311.09217 (2023).
444
+ Zhan Xu, Yang Zhou, Evangelos Kalogerakis, Chris Landreth, and Karan Singh. 2020. Rignet: Neural rigging for articulated characters. arXiv preprint arXiv:2005.00559 (2020).
445
+ Xingguang Yan, Liqiang Lin, Niloy J Mitra, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. 2022. Shapeformer: Transformer-based shape completion via sparse representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
446
+ Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, and Yunchao Wei. 2023. 4dgen: Grounded 4d content generation with spatial-temporal consistency. arXiv preprint arXiv:2312.17225 (2023).
447
+ Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. 2023. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.
448
+ Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. 2025. Gs-lrm: Large reconstruction model for 3d gaussian splatting. In European Conference on Computer Vision. Springer, 1-19.
449
+ Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, and Gim Hee Lee. 2023. animate124: Animating one image to 4d dynamic scene. arXiv preprint arXiv:2311.14603 (2023).
2502.09xxx/2502.09615/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42e33c9b63a438ad4560cc45c1f7d14db48399aa6a412c88bacfb50679793e1c
3
+ size 850358
2502.09xxx/2502.09615/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09620/f9f8cded-8061-4168-a5cb-77edf7ff41c8_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:93f4ebe6bfa81376d836a8eac49391f7be23330e6c28f1fb646ec2668044757e
3
+ size 3264469
2502.09xxx/2502.09620/full.md ADDED
@@ -0,0 +1,423 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # EXPLORING THE POTENTIAL OF ENCODER-FREE ARCHITECTURES IN 3D LMMs
2
+
3
+ Yiwen Tang $^{1,2*}$ , Zoey Guo $^{3*}$ , Zhuhao Wang $^{4*}$ , Ray Zhang $^{3*}$ , Qizhi Chen $^{2}$ , Junli Liu $^{1}$ , Delin Qu $^{2}$
4
+
5
+ Zhigang Wang $^{2}$ , Dong Wang $^{2}$ , Bin Zhao $^{1,2}$ & Xuelong Li $^{5}$
6
+
7
+ $^{1}$ Northwestern Polytechnical University $^{2}$ Shanghai AI Laboratory
8
+
9
+ <sup>3</sup>The Chinese University of Hong Kong <sup>4</sup>Tsinghua University <sup>5</sup>Tele AI
10
+
11
+ # ABSTRACT
12
+
13
+ Encoder-free architectures have been preliminarily explored in the 2D Large Multimodal Models (LMMs), yet it remains an open question whether they can be effectively applied to 3D understanding scenarios. In this paper, we present the first comprehensive investigation into the potential of encoder-free architectures to alleviate the challenges of encoder-based 3D LMMs. These long-standing challenges include the failure to adapt to varying point cloud resolutions during inference and the point features from the encoder not meeting the semantic needs of Large Language Models (LLMs). We identify key aspects for 3D LMMs to remove the pre-trained encoder and enable the LLM to assume the role of the 3D encoder: 1) We propose the LLM-embedded Semantic Encoding strategy in the pre-training stage, exploring the effects of various point cloud self-supervised losses, and we present the Hybrid Semantic Loss to extract high-level semantics. 2) We introduce the Hierarchical Geometry Aggregation strategy in the instruction tuning stage. This incorporates inductive bias into the LLM layers to focus on the local details of the point clouds. To this end, we present the first Encoder-free 3D LMM, ENEL. Our 7B model rivals the state-of-the-art model, PointLLM-PiSA-13B, achieving $57.91\%$, $61.0\%$, and $55.20\%$ on the captioning, classification, and VQA tasks, respectively. Our results show that the encoder-free architecture is highly promising for replacing encoder-based architectures in the field of 3D understanding. The code is released at https://github.com/Ivan-Tang-3D/ENEL.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Large Language Models (LLMs) Touvron et al. (2023); Bai et al. (2023) have gained unprecedented attention for their proficiency in understanding and generating complex language scenarios. Building upon these advances, many recent efforts have been made to develop Large Multimodal Models (LMMs), empowering LLMs with the capability to interpret multimodal information, such as 2D images Li et al. (2024), 3D point clouds Chen et al. (2025); Guo et al. (2023); Xu et al. (2025); Wang et al. (2025) and visual generation Tong et al. (2025); Jiang et al. (2025); Guo et al. (2025b).
18
+
19
+ Mainstream LMMs are typically encoder-based, relying on heavyweight yet powerful pre-trained encoders (e.g., CLIP Radford et al. (2021) for 2D and I2P-MAE Zhang et al. (2023a) for 3D). While these pre-trained encoders offer robust multimodal embeddings enriched with pre-existing knowledge, they also introduce challenges that could limit the future advancement of multimodal understanding. To mitigate the limitations introduced by visual encoders in VLMs—such as resolution, aspect ratio, and semantic priors—many encoder-free LMM studies Li et al. (2025); Diao et al. (2024a; 2025); Lei et al. (2025); Luo et al. (2025) have explored the possibility of training without pre-trained encoders.
20
+
21
+ Specifically for 3D LMMs, the encoder-based architecture has the following potential drawbacks: (1) Point Cloud Resolution Limitation. 3D encoders are often pre-trained on point cloud data at a fixed resolution, such as 8,192 points for Point-BERT Yu et al. (2022) in PointLLM Xu et al. (2025). However, during inference, the resolution of point clouds may vary (e.g., 12,000 or 4,000 points). This difference between training and inference resolutions can result in the loss of spatial information
22
+
23
+ ![](images/94890dd0a9d804cdc1b2776a3b47f17c54b66cc87bef4e66a962e7c071314f08.jpg)
24
+ (a) Point Cloud Resolution Limitation
25
+ when extracting 3D embeddings, making the resulting embeddings difficult for LLMs to comprehend, as showcased in Figure 1 (a). (2) Embedding Semantic Discrepancy. 3D encoders are typically pre-trained using self-supervised methods like MAE Pang et al. (2022); Tang et al. (2024a;b) and contrastive learning Xie et al. (2020); Qi et al. (2023), but these training objectives may not align with the specific semantic needs of LLMs. In other words, they may not capture the most relevant semantics for LLMs to understand 3D objects, as visualized in Figure 1 (b). Even when a projection layer is used to connect 3D encoders with LLMs, simple MLPs are often insufficient for a complete semantic transformation. Given these issues, we ask: Is it possible to explore an encoder-free architecture for 3D LMMs, eliminating the 3D encoder and instead integrating its functionality directly within the LLM itself?
26
+
27
+ ![](images/c7ed2d650ca9819692a8bb0918dd8c6f65aef07527d4a1b4dd1c2145da7c7242.jpg)
28
+ (b) Embedding Semantic Discrepancy
29
+ Figure 1: Issues of encoder-based 3D LMMs. (a) Point Cloud Resolution Limitation. During training, the point cloud size (P.C. Size) and point token size (P.T. Size) are fixed at 8192 and 512, respectively. We then vary these two sizes during inference, with the point cloud size ranging from 2K to 16K and the corresponding point token size from 128 to 2048. We evaluate them on the captioning task of the Objaverse benchmark using GPT-4 score as the evaluation metric. (b) Embedding Semantic Discrepancy. We visualize the attention scores of the average text token to the point tokens, where red indicates higher values. The point tokens in the encoder-free architecture exhibit stronger textual semantic relevance needed for the LLM.
30
+
31
+ In this paper, we present the first systematic investigation into the potential of an encoder-free architecture for 3D LMMs. To minimize external influences and ensure clarity, we use the pioneering and sufficiently concise PointLLM Xu et al. (2025) as our encoder-based baseline, which consists of two progressive training stages: pre-training and instruction tuning. We evaluate the performance on 3D classification Deitke et al. (2023), 3D captioning Deitke et al. (2023) and 3D VQA Deitke et al. (2023) tasks. Specifically, to remove the encoder while mitigating any performance degradation, we explore solutions to the following two key questions:
32
+
33
+ (1) How can we compensate for the high-level 3D semantics originally extracted by the 3D encoder? In 3D LMMs, the raw point cloud input is first passed through a token embedding module for low-level tokenization, before being processed by the main 3D encoder, usually a Transformer Vaswani (2017), to generate high-level embeddings. Skipping the encoder entirely poses a challenge in capturing the complex spatial structures of 3D point clouds. To address this, we propose a strategy called LLM-embedded Semantic Encoding in the pre-training stage. First, we adopt a simple yet effective token embedding module that captures as much informative semantic content as possible. These 3D tokens are then directly fed into the LLM. Next, we aim to shift the responsibility of capturing high-level 3D semantics to the LLM itself. To guide this process, we explore various 3D self-supervised loss functions, such as masked modeling loss and distillation loss, and ultimately propose the Hybrid Semantic Loss as the most effective choice. Further, we make the early layers of the LLM learnable, allowing them to specialize in multimodal alignment.
34
+ (2) How can we integrate inductive bias into LLMs for better perception of 3D geometric structures? Pre-trained 3D encoders typically embed explicit inductive bias into their architectures to progressively capture multi-level 3D geometries. For instance, models like Point-M2AE Zhang et al. (2022) use a local-to-global hierarchy, which is a concept also common in convolutional layers for 2D image processing He et al. (2016). In contrast, LLMs employ standard Transformer architectures, where each layer processes the same number of tokens, representing the same semantic level across the network. In the absence of the encoder, we introduce the approach of Hierarchical Geometry Aggregation during the fine-tuning stage. In the early layers of the LLM, we aggregate 3D tokens based on their geometric distribution using Dynamic Grid Sampling. This approach enables the LLM to gradually integrate detailed 3D semantics and develop a more holistic understanding of the 3D
35
+
36
+ object. In the later layers, we reverse this aggregation, propagating the tokens back to their original distribution to maintain the fine-grained representation necessary for complex tasks.
37
+
38
+ Through a series of experimental investigations, we have uncovered the strong potential of applying encoder-free architecture to the 3D LMM domain. Building on our insights, we introduce ENEL, an ENcoder-freE 3D LMM evolved from Vicuna-7B Chiang et al. (2023) using the same training dataset from PointLLM. Notably, without any 3D encoders, ENEL-7B achieves comparable performance to the state-of-the-art PointLLM-PiSA-13B Guo et al. (2025a). We hope ENEL may provide the community with an effective path for adapting the encoder-free architecture to 3D scenarios.
39
+
40
+ Our main contributions are summarized as follows:
41
+
42
+ - We present the first comprehensive empirical study of applying encoder-free architectures to the 3D LMM domain, offering valuable insights for the field.
43
+ - We aim to transfer the original roles of 3D encoders to the LLM itself, and propose the LLM-embedded Semantic Encoding and Hierarchical Geometry Aggregation strategy, both of which have been validated as effective.
44
+ - We further introduce ENEL, a concise and well-performed encoder-free 3D LMM, which, at the 7B parameter scale, achieves $57.91\%$ , $61.0\%$ , and $55.20\%$ on 3D captioning, classification, and 3D VQA tasks, respectively, on par with existing encoder-based models.
45
+
46
+ # 2 RELATED WORK
47
+
48
+ 3D LMM. Recent advancements in integrating large language models (LLMs) with 3D data have led to significant progress in both object-level and scene-level understanding. At the object level, early approaches like Hong et al. (2024) utilize 2D rendering to leverage 2D LLMs, but this sacrifices geometric details. More recent models, including Point-Bind LLM Guo et al. (2023), PointLLM Xu et al. (2023b) and ShapeLLM Qi et al. (2024), directly encode point clouds and align them with LLMs, by combining the 3D encoder with a powerful language model, effectively fusing geometric, appearance, and linguistic information. MiniGPT-3D Tang et al. (2024c) efficiently aligns 3D point clouds with LLMs by leveraging 2D priors from 2D-LLMs, employing a four-stage cascaded training strategy along with a Mixture of Query Experts (MoQE) module. GreenPLM Tang et al. (2025) is an energy-efficient framework that directly translates monolingual pre-trained language models into other languages using bilingual lexicons. At the scene level, models like Chat-3D Wang et al. (2023) and Scene-LLM Fu et al. (2024) focus on understanding complex spatial relationships through dialogue and tasks like captioning. Scene-LLM Fu et al. (2024) enhances embodied agents' abilities in interactive 3D indoor environments by integrating both scene-level and egocentric 3D information. Grounded 3D-LLM Chen et al. (2024b) utilizes referent tokens to reference specific objects within 3D scenes, enabling tasks such as object detection and language grounding. However, conventional encoder-based 3D LMMs commonly suffer from limitations, specifically Point Cloud Resolution Limitation and Embedding Semantic Discrepancy, which stem from the inductive bias inherent in the 3D pre-trained encoder. Our ENEL alleviates these restrictions by removing the encoder and utilizing a lightweight architecture to significantly boost performance.
49
+
50
+ Encoder-free Vision-Language Models. Traditional vision-language models (VLMs) often rely on vision encoders to extract visual features before processing them with language models, integrating image encoders like CLIP Radford et al. (2021) and DINO V2 Oquab et al. (2023). However, recent efforts have explored encoder-free VLMs for their simplicity. Approaches like Chameleon Team (2024); Xie et al. (2024) use VQ tokenizers Esser et al. (2021) or linear projection layers Diao et al. (2024a); Chen et al. (2024a) to represent images. Fuyu-8B Bavishi et al. (2023), a pure decoder-only model, directly processes image patches through linear projections, handling high-resolution images but showing only average performance. The EVE series Diao et al. (2024b; 2025) eliminates the need for a separate vision encoder by bridging vision-language representation within a unified decoder and enhancing visual recognition capabilities through additional supervision. The Mono-InternVL series Luo et al. (2024; 2025) leverages visual experts and progressive visual pre-training (EViP/EViP++) to achieve stable optimization and competitive performance. The SAIL series Lei et al. (2025) directly encodes raw pixels and decodes language within a single architecture, achieving competitive vision-language performance without pre-trained vision encoders. The key idea behind ENEL is enabling the LLM to assume the functionality of the encoder by effective and efficient methods. This approach
51
+
52
+ Table 1: Token Embedding. Performance on Objaverse with PointLLM-7B as the baseline. 'Cls'/'Cap': classification/captioning tasks. 'Avg': accuracy under prompts "What is this?" and "This is an object of." 'S-BERT': Sentence-BERT. 'T.E.': our designed token embedding module.
53
+
54
+ <table><tr><td rowspan="2">Method</td><td>Cls (Avg)</td><td colspan="2">Cap</td></tr><tr><td>GPT-4</td><td>GPT-4</td><td>S-BERT</td></tr><tr><td>PointLLM-7B</td><td>53.00</td><td>44.85</td><td>47.47</td></tr><tr><td>- Encoder</td><td>35.50</td><td>33.37</td><td>41.19</td></tr><tr><td>+ 2-layer T.E.</td><td>40.60</td><td>38.85</td><td>43.25</td></tr><tr><td>+ 3-layer T.E.</td><td>45.55</td><td>41.36</td><td>44.82</td></tr><tr><td>+ 4-layer T.E.</td><td>43.00</td><td>40.47</td><td>43.50</td></tr></table>
55
+
56
+ Table 2: Learnable Layers. We set the LLM early layers to be learnable. 'LR' represents the learning rate during the pre-training stage, with the original learning rate set to 2e-3.
57
+
58
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">LR</td><td>Cls (Avg)</td><td colspan="2">Cap</td></tr><tr><td>GPT-4</td><td>GPT-4</td><td>S-BERT</td></tr><tr><td>PointLLM-7B</td><td>2e-3</td><td>53.00</td><td>44.85</td><td>47.47</td></tr><tr><td rowspan="2">+ 2 learnable layers</td><td>2e-3</td><td>40.00</td><td>40.20</td><td>44.82</td></tr><tr><td>4e-4</td><td>44.00</td><td>42.62</td><td>46.30</td></tr><tr><td rowspan="2">+ 4 learnable layers</td><td>2e-3</td><td>43.75</td><td>40.13</td><td>45.76</td></tr><tr><td>4e-4</td><td>47.90</td><td>43.50</td><td>46.70</td></tr><tr><td rowspan="2">+ 8 learnable layers</td><td>2e-3</td><td>42.35</td><td>37.91</td><td>41.28</td></tr><tr><td>4e-4</td><td>46.70</td><td>42.80</td><td>46.14</td></tr><tr><td rowspan="2">+ 12 learnable layers</td><td>2e-3</td><td>41.55</td><td>40.05</td><td>41.40</td></tr><tr><td>4e-4</td><td>46.15</td><td>42.39</td><td>46.00</td></tr></table>
59
+
60
+ diverges from 2D encoder-free LMMs, which tend to focus on larger datasets and more complex structures for better results.
61
+
62
+ # 3 INVESTIGATION OF ENCODER-FREE 3D LMM
63
+
64
+ # 3.1 PRELIMINARY
65
+
66
+ Encoder-free in 2D LMMs. ELVA Li et al. (2025) is an encoder-free Video-LLM that directly models nuanced video-language interactions without relying on a vision encoder. EVE Diao et al. (2024a) and its successor EVEv2 Diao et al. (2025) are designed as efficient encoder-free vision-language models. SAIL Lei et al. (2025) serves as a unified transformer for vision and language, while MonoInternVL Luo et al. (2025) represents a monolithic multimodal LLM. In parallel, Fuyu-8B Bavishi et al. (2023), a decoder-only transformer developed by Adept AI, has gained substantial community adoption. A common characteristic across these works is the adoption of a lightweight, randomly initialized token embedding layer to convert inputs into tokens for the LLM. This design eliminates the need for a dedicated vision encoder and enables end-to-end training and inference.
67
+
68
+ Pre-trained Encoders in 3D LMMs. Traditionally, 3D pre-trained encoders are characterized by two properties: (1) independent pretraining on point cloud tasks (e.g., reconstruction), and (2) structural decoupling, where they are connected to the language model through projection layers. In 3D LMMs, commonly adopted encoders refer to pre-trained models such as PointMAE Pang et al. (2022), PointBERT Yu et al. (2022), and Uni3D Zhou et al. (2023). Related work is in Appendix A.1.
69
+
70
+ Overall Architecture. We select PointLLM as the baseline model for the exploration and evaluate the performance of different strategies on the Objaverse dataset Deitke et al. (2023), using GPT-4 scores combined with traditional metrics as our evaluation metrics. Point Embedding Layer. As shown in Figure 2, we first remove the encoder of PointLLM and adopt the original token embedding Yu et al. (2022). However, the coarse structural design results in a significant performance degradation, as observed in Table 1, where the GPT-4 scores for the classification and captioning tasks decrease by $17.5\%$ and $10.48\%$ , respectively. To mitigate excessive information loss and provide refined local features to the LLM, we adopt a small network with a limited number of parameters, which is a lightweight variant of Point-PN Zhang et al. (2023b). Specifically, for the input $\{P_i\}_{i=1}^N$ , we apply Farthest Point Sampling (FPS) for downsampling the number of points, k-Nearest Neighbors (k-NN) with group size $k$ for local aggregation, and learnable linear layers for feature encoding. After a series of repetitive operations and the projection layer, we transform the point clouds into high-dimensional vectors $\{F_i\}_{i=1}^M \in \mathbb{R}^{M \times D_1}$ . In Table 1, we experiment with token embedding at different depths and find that three layers yield the best performance. 3D Encoding & Alignment. We discover that the absence of the encoder results in a lack of context modeling in point cloud feature processing. Therefore, we attempt to have the early layers of the LLM take on the encoder's role in capturing global interactions of features, further encoding the point cloud features. In the pre-training stage, we set the first K layers of the frozen LLM to be learnable. Within the shared semantic space, 3D tokens and text tokens interact and align naturally. Early Fusion provides a more practical way to achieve modality alignment between 3D and textual semantic spaces. Meanwhile, we experiment
71
+
72
+ ![](images/1982e136af3ae37bdb5857d1cdeef19e045105eaefe3d66bcd364ad731949922.jpg)
73
+ Figure 2: Overall Pipeline of ENEL. The training is divided into two stages: the pre-training stage and the instruction tuning stage. In the first stage, we set the first $K$ layers to be learnable and apply the proposed Hybrid Semantic Loss to embed high-level semantics into the LLM. In the second stage, we adopt the Hierarchical Geometric Aggregation strategy to capture local structures of point clouds.
74
+
75
+ ![](images/6f8ce16b31ebe8af3a1215a6855731eb44de8b12b326f0a7793797215fd71570.jpg)
76
+
77
+ with different learning rates. As shown in Table 2, a smaller learning rate yields better results by stabilizing early layer optimization. Based on the designed token embedding module, setting the first four layers to be learnable yields the best results.
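To make the description above concrete, here is a minimal PyTorch sketch of this kind of lightweight token embedding: FPS downsampling, k-NN grouping, and linear encoding over three stages, followed by a projection into the LLM width. The channel widths, group size, and the 4096-dimensional LLM hidden size are illustrative assumptions, not the exact ENEL configuration.

```python
import torch
import torch.nn as nn

def farthest_point_sample(xyz: torch.Tensor, m: int) -> torch.Tensor:
    """Greedy FPS over an (N, 3) point cloud, returning indices of m samples."""
    n = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long, device=xyz.device)
    dist = torch.full((n,), float("inf"), device=xyz.device)
    far = int(torch.randint(n, (1,)))
    for i in range(m):
        idx[i] = far
        dist = torch.minimum(dist, ((xyz - xyz[far]) ** 2).sum(-1))
        far = int(dist.argmax())
    return idx

class TokenEmbeddingStage(nn.Module):
    """One FPS + k-NN + linear stage: halve the point count, widen the channels."""
    def __init__(self, c_in: int, c_out: int, k: int = 16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(c_in + 3, c_out), nn.GELU(), nn.Linear(c_out, c_out))

    def forward(self, xyz, feat):
        centers = farthest_point_sample(xyz, xyz.shape[0] // 2)
        c_xyz = xyz[centers]
        knn = ((c_xyz[:, None] - xyz[None]) ** 2).sum(-1).topk(self.k, largest=False).indices
        rel = xyz[knn] - c_xyz[:, None]                       # (M, k, 3) relative coordinates
        grouped = torch.cat([feat[knn], rel], dim=-1)         # (M, k, c_in + 3)
        return c_xyz, self.mlp(grouped).max(dim=1).values     # max-pool each neighborhood

class TokenEmbedding(nn.Module):
    """Three stacked stages followed by a projection into the LLM hidden size."""
    def __init__(self, llm_dim: int = 4096):
        super().__init__()
        self.stages = nn.ModuleList(
            [TokenEmbeddingStage(6, 128), TokenEmbeddingStage(128, 256), TokenEmbeddingStage(256, 512)]
        )
        self.proj = nn.Linear(512, llm_dim)

    def forward(self, xyz, feat):                              # feat: (N, 6), e.g. xyz + rgb
        for stage in self.stages:
            xyz, feat = stage(xyz, feat)
        return xyz, self.proj(feat)                            # point tokens fed to the LLM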
78
+
79
+ # 3.2 LLM-EMBEDDED SEMANTIC ENCODING
80
+
81
+ The lack of the 3D encoder results in insufficient encoding of point cloud semantic information, which greatly hinders the LLM from understanding the structural details of point clouds. Most existing 3D encoders use self-supervised losses to embed the high-level semantics of point clouds into the transformer, primarily categorized into four types: Masked Modeling Loss Pang et al. (2022), Reconstruction Loss Qi et al. (2023), Contrastive Loss Khosla et al. (2020), and Knowledge Distillation Loss Zhang et al. (2023a). Based on the proposed token embedding module and LLM learnable early layers, we implement and evaluate the effects of these losses on the encoder-free 3D LMM in the pre-training stage, as described in Figure 3. Finally, we propose the Hybrid Semantic Loss, which assists the LLM in learning the relationships among local spatial structures in the point clouds and grasping the high-level 3D semantics.
82
+
83
+ Masked Modeling Loss. In the pre-training stage, we apply the Masked Modeling Loss to the point tokens processed by the LLM, as shown in Figure 3 (a). Through the token embedding module, the point clouds $\{P_i\}_{i=1}^N$ are divided into point patches $\{G_i\}_{i=1}^M \in \mathbb{R}^{M \times k \times 3}$ and the corresponding point tokens $\{F_i\}_{i=1}^M$ . We randomly mask the point tokens with a masking ratio $r$ , and replace them with learnable tokens. The masked feature tokens can be denoted as $\{F_{\mathrm{gt}_i}\}_{i=1}^{M*r}$ , which serve as the ground truth for the loss computation. After the masked tokens are replaced with learnable tokens and processed by the LLM, a linear layer is applied to predict the point tokens $\{F_{\mathrm{pre}_i}\}_{i=1}^{M*r} \in \mathbb{R}^{M*r \times D_1}$ , and the Mean Squared Error (MSE) is computed between $F_{\mathrm{pre}}$ and $F_{\mathrm{gt}}$ . The optimization is:
84
+
85
+ $$
86
+ \mathcal{L}_{\mathrm{mask}} = \frac{1}{M * r} \sum_{i=1}^{M * r} \left\| F_{\mathrm{pre}_i} - F_{\mathrm{gt}_i} \right\|_2^2. \tag{1}
87
+ $$
88
+
89
+ The specific process of applying Masked Modeling to point patches $G$ is detailed in Appendix A.3.1.
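A minimal sketch of the feature-level variant of this objective (Eq. 1) is shown below: randomly mask a fraction of the point tokens, substitute a shared learnable token, run the sequence through the LLM, and regress the original features with MSE. The `llm` callable, the linear head, and detaching the target are placeholders and assumptions for illustration.

```python
import torch
import torch.nn as nn

def masked_modeling_loss(point_tokens, llm, head, mask_token, ratio=0.3):
    """point_tokens: (M, D) features from the token embedding.
    llm:        callable mapping (M, D) -> (M, D) hidden states (stand-in for the LLM pass).
    head:       nn.Linear(D, D) predicting point features from hidden states.
    mask_token: (D,) learnable parameter substituted at masked positions."""
    M = point_tokens.shape[0]
    n_mask = int(M * ratio)
    masked_idx = torch.randperm(M, device=point_tokens.device)[:n_mask]
    target = point_tokens[masked_idx].detach()        # F_gt, detached so the target stays fixed
    corrupted = point_tokens.clone()
    corrupted[masked_idx] = mask_token                # replace masked tokens with the learnable token
    pred = head(llm(corrupted)[masked_idx])           # F_pre predicted at masked positions
    return nn.functional.mse_loss(pred, target)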
90
+
91
+ Reconstruction Loss. After the point feature tokens $\{F_i\}_{i=1}^M$ are encoded by the LLM, the tokens are transformed to the point patches $\{G_{\mathrm{pre}_i}\}_{i=1}^M \in \mathbb{R}^{M \times k \times 3}$ through a linear layer. We utilize the
92
+
93
+ ![](images/53abf172937658902248e451fe0570451778f2a44a419e576e142ddb4a5a46fd.jpg)
94
+ Figure 3: Point Cloud Self-Supervised Learning Losses. In the pre-training stage, we explore common self-supervised learning losses for the encoder-free 3D LMM: (a) Masked Modeling Loss, (b) Reconstruction Loss, (c) Contrastive Loss, and (d) Knowledge Distillation Loss. The (e) represents our proposed Hybrid Semantic Loss, specifically designed for the encoder-free architecture.
95
+
96
+ ![](images/c5216d3e2bb68f7e923407a9e11e383dd77487a7e98c5586808c7b61e108edfd.jpg)
97
+
98
+ ![](images/81b52f61eca490f97eb9e1aad82af2f70cd34c0cac9ae34bbda07c48b0e9b5be.jpg)
99
+
100
+ ![](images/bf73a66ada5b506e9f320070cf81975b0d789b6396cf76b92c84d0ef30f0723f.jpg)
101
+
102
+ ![](images/15c6622c2ebf0b6dab34f75300584e165069e732893bf0b4495f031a586a6fe4.jpg)
103
+
104
+ $l_{2}$ chamfer distance to align the predicted $G_{\mathrm{pre}}$ with the ground truth $G$ , reconstructing the original spatial information, as illustrated in Figure 3 (b). This approach encourages the LLM to learn the high-level semantics of the point cloud while preserving the critical structure and key features of the point cloud input. The optimization target $L_{\mathrm{recon}}$ can be written as
105
+
106
+ $$
107
+ \frac{1}{M} \sum_{i=1}^{M} \left( \min_{j} \left\| a_i - b_j \right\|_2^2 + \min_{j} \left\| b_i - a_j \right\|_2^2 \right), \tag{2}
108
+ $$
109
+
110
+ where $a = G_{\mathrm{pre}}$ , $b = G$ . The procedure for reconstructing feature $\mathbf{F}$ is detailed in Appendix A.3.1.
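For reference, a compact PyTorch sketch of the symmetric $l_2$ Chamfer term used here is given below, written for (M, k, 3) point patches; averaging within each patch is an implementation choice rather than something the paper specifies.

```python
import torch

def chamfer_l2(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric squared-L2 Chamfer distance between (M, k, 3) point patches."""
    d = ((pred[:, :, None, :] - gt[:, None, :, :]) ** 2).sum(-1)   # (M, k, k) pairwise distances
    forward = d.min(dim=2).values.mean(dim=1)                      # pred -> nearest gt point
    backward = d.min(dim=1).values.mean(dim=1)                     # gt -> nearest pred point
    return (forward + backward).mean()                             # average over the M patches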
111
+
112
+ Contrastive Loss. We conduct contrastive learning Khosla et al. (2020) at the point cloud level, where we contrast two transformed versions of the point cloud in the Figure 3 (c). Given a sampled point cloud $\{P_i\}_{i=1}^N$ , we apply two random geometric transformations $T_1$ and $T_2$ , including rotation and translation, to obtain $P_{T1}$ and $P_{T2}$ . The two augmented point clouds are separately paired with the original text query and processed through the LLM to obtain their respective feature tokens $F_{T1} \in \mathbb{R}^{M \times D_1}$ and $F_{T2} \in \mathbb{R}^{M \times D_1}$ . Within the mini-batch, the two feature tokens derived from the same point cloud serve as positive pairs, while they are considered negative pairs with other point clouds. Using NCESoftmaxLoss, we aim to maximize the similarity of positive pairs and minimize the similarity of negative pairs, encouraging the LLM to learn geometric equivariance of point clouds. The $\mathcal{L}_{\mathrm{contrast}}$ is shown as below, where B stands for the training batch size.
113
+
114
+ $$
115
+ \frac{1}{B} \sum_{i=1}^{B} \left( -\log \frac{\exp\left(\mathbf{F}_{T1_i} \cdot \mathbf{F}_{T2_i} / \tau\right)}{\sum_{j=1}^{B} \exp\left(\mathbf{F}_{T1_i} \cdot \mathbf{F}_{T2_j} / \tau\right)} \right). \tag{3}
116
+ $$
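A minimal sketch of this in-batch NCE-softmax objective over per-cloud features of the two augmented views follows; pooling the point tokens into a single vector per cloud, the L2 normalization, and the temperature value are assumptions not fixed by Eq. (3).

```python
import torch
import torch.nn.functional as F

def nce_softmax_loss(f1: torch.Tensor, f2: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """f1, f2: (B, D) per-cloud features of the two augmented views
    (e.g. mean-pooled point tokens); positives are the diagonal pairs."""
    f1, f2 = F.normalize(f1, dim=-1), F.normalize(f2, dim=-1)
    logits = f1 @ f2.t() / tau                              # (B, B) cross-view similarities
    labels = torch.arange(f1.shape[0], device=f1.device)
    return F.cross_entropy(logits, labels)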
117
+
118
+ Knowledge Distillation Loss. We select the powerful Uni3D-L Zhou et al. (2023) as the teacher encoder, input the point cloud into the 3D encoder, and obtain the output feature $F_{\text{teacher}} \in \mathbb{R}^{M \times D_2}$ . The Mean Squared Error (MSE) between the LLM output tokens $F_{\text{student}}$ and $F_{\text{teacher}}$ is computed to align $F_{\text{student}}$ as closely as possible to $F_{\text{teacher}}$ , thereby transferring the knowledge embedded in the 3D encoder to the LLM. By obtaining additional supervision from the Uni3D, the LLM better captures the complex structures in the point cloud data, as displayed in Figure 3 (d). The objective function is:
119
+
120
+ $$
121
+ \mathcal{L}_{\mathrm{KD}} = \frac{1}{M} \sum_{i=1}^{M} \left\| F_{\mathrm{student}_i} - F_{\mathrm{teacher}_i} \right\|_2^2. \tag{4}
122
+ $$
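A tiny sketch of this distillation term is shown below. Eq. (4) assumes matching dimensions, so the bridging projection between the student and teacher widths is an assumption added for generality.

```python
import torch.nn as nn

def knowledge_distillation_loss(student_tokens, teacher_feats, proj):
    """student_tokens: (M, D1) LLM outputs at the point-token positions.
    teacher_feats:  (M, D2) features from the frozen teacher encoder.
    proj:           nn.Linear(D1, D2) bridging the two widths."""
    return nn.functional.mse_loss(proj(student_tokens), teacher_feats.detach())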
123
+
124
+ Experiments and Insights. As shown in Table 3, we compare the effects of common self-supervised learning losses in the pre-training stage, where they are summed with the LLM cross-entropy loss Touvron et al. (2023), each with a coefficient of 1. The observations are summarized as follows:
125
+
126
+ - The point cloud self-supervised losses generally benefit the encoder-free 3D LMM. Compared to previous experimental results, where the GPT scores for the classification and captioning tasks are $47.90\%$ and $43.50\%$ , the self-supervised losses bring significant improvements. This is because each self-supervised loss imposes a transformation task on the complex point clouds, which encourages the LLM not to simply memorize specific point cloud data but to learn the underlying geometric relationships and high-level semantic information.
127
+
128
+ Table 3: LLM-embedded Semantic Encoding. In pre-training, we explore the effects of different self-supervised learning losses targeting point tokens. $\Psi$ and $\Phi$ denote mask ratios of $60\%$ and $30\%$ , respectively. Subscripts patch and feat indicate loss targets. For Hybrid Semantic Loss, the subscripts patch and feat refer to the masked modeling target, with reconstruction targeting the corresponding feat and patch.
129
+
130
+ <table><tr><td rowspan="2">Method</td><td>Cls (Avg)</td><td colspan="2">Cap</td></tr><tr><td>GPT-4</td><td>GPT-4</td><td>S-BERT</td></tr><tr><td>PointLLM-7B</td><td>53.00</td><td>44.85</td><td>47.47</td></tr><tr><td>Masked Modeling Lossψ patch</td><td>47.00</td><td>43.64</td><td>45.36</td></tr><tr><td>Masked Modeling LossΦ patch</td><td>49.00</td><td>45.20</td><td>46.29</td></tr><tr><td>Masked Modeling Lossfeat Ψ</td><td>48.50</td><td>43.90</td><td>45.30</td></tr><tr><td>Masked Modeling Lossfeat Φ</td><td>48.50</td><td>45.85</td><td>46.93</td></tr><tr><td>Reconstruction Losspatch</td><td>48.00</td><td>45.56</td><td>46.33</td></tr><tr><td>Reconstruction Lossfeat</td><td>47.50</td><td>44.05</td><td>46.18</td></tr><tr><td>Contrastive Loss</td><td>42.50</td><td>41.21</td><td>43.77</td></tr><tr><td>Knowledge Distillation Loss</td><td>48.00</td><td>43.87</td><td>46.09</td></tr><tr><td>Hybrid Semantic Losspatch</td><td>50.00</td><td>45.24</td><td>46.59</td></tr><tr><td>Hybrid Semantic Lossfeat</td><td>52.00</td><td>47.65</td><td>47.30</td></tr></table>
131
+
132
+ - Among the self-supervised learning losses, the Masked Modeling Loss demonstrates the strongest performance improvement. It achieves GPT-4 scores of $48.5\%$ and $45.85\%$ for classification and captioning tasks, respectively. The application of the masked modeling to the point features facilitates the embedding of high-level semantics from point clouds into the LLM. However, a higher mask ratio increases training difficulty, with $60\%$ performing worse than $30\%$ . In addition, explicitly reconstructing point patches helps capture complex structures and critical details in point clouds. Knowledge Distillation Loss falls short compared to the first two losses. Finally, Contrastive Loss, which fails to extract the detailed semantics, achieves the lowest performance.
133
+
134
+ Hybrid Semantic Loss. Based on the experimental results above, we propose the self-supervised learning loss specifically designed for the encoder-free 3D LMM—Hybrid Semantic Loss, as shown in Figure 3 (e). We apply a masking ratio $r$ to randomly mask point tokens from the token embedding. The masked tokens and the corresponding patches are referred to as $\{F_{\mathrm{mask}_i}\}_{i=1}^{M*r}$ and $\{G_{\mathrm{mask}_i}\}_{i=1}^{M*r}$ , respectively. The remaining tokens are denoted as $\{F_{\mathrm{vis}_i}\}_{i=1}^{M*(1-r)}$ and $\{G_{\mathrm{vis}_i}\}_{i=1}^{M*(1-r)}$ . Considering the autoregressive nature of the LLM and the unordered attribute of point clouds, we directly concatenate learnable tokens $\{F_{\mathrm{learn}_i}\}_{i=1}^{M*r}$ to the end of $F_{\mathrm{vis}}$ , replacing the masked tokens. For the masked portion, we adopt masked modeling, and for the visible portion, we use the reconstruction strategy. After passing point tokens through the LLM, we compute the MSE between $F_{\mathrm{learn}}$ and $F_{\mathrm{mask}}$ . The visible features $F_{\mathrm{vis}}$ are transformed into $G_{\mathrm{pred}}$ , and the $L_2$ Chamfer distance is computed between $G_{\mathrm{pred}}$ and $G_{\mathrm{vis}}$ . These two are added to the original cross-entropy loss with coefficients all equal to 1. This approach not only embeds high-level semantics into the LLM but also ensures geometric consistency throughout the point cloud learning process. With a 30% mask ratio and per-layer positional encoding of point tokens, it achieves 52.00% and 47.65% on the classification and captioning tasks, respectively. The inverse modeling process is described in Appendix A.3.1.
135
+
136
+ Our motivation arises from the observation that complex objectives, such as KD and contrastive learning, impose significant computational overhead yet often yield marginal gains compared to intrinsic data modeling losses like masked modeling. To address this, we propose the Hybrid Semantic Loss, which resolves the structural mismatch between 3D data and LLMs by exploiting two key properties: (1) the permutation invariance of point clouds, allowing learnable tokens to be appended after visible tokens without positional restoration; and (2) the encoder-free architecture, where 3D tokens are integrated into a causally-masked LLM instead of a bidirectionally-masked 3D encoder, fundamentally altering information flow between visible and masked tokens, enabling visible tokens to learn harder objectives while learnable tokens focus on lightweight reconstruction.
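Putting the two terms together, a condensed sketch of the Hybrid Semantic Loss follows: visible tokens stay in place and are decoded back to their point patches with a Chamfer term, while learnable tokens appended at the end of the sequence are regressed toward the masked features with MSE. The shapes, the `llm` callable, both heads, and the reuse of the `chamfer_l2` sketch above are placeholders, not the exact ENEL implementation.

```python
import torch
import torch.nn as nn

def hybrid_semantic_loss(tokens, patches, llm, feat_head, patch_head, mask_token, ratio=0.3, k=32):
    """tokens: (M, D) point tokens; patches: (M, k, 3) their corresponding point patches.
    feat_head : nn.Linear(D, D) regressing masked features from the learnable-token outputs.
    patch_head: nn.Linear(D, k * 3) decoding visible-token outputs back to point patches."""
    M, D = tokens.shape
    n_mask = int(M * ratio)
    perm = torch.randperm(M, device=tokens.device)
    mask_idx, vis_idx = perm[:n_mask], perm[n_mask:]
    # Visible tokens keep their places; learnable tokens are simply appended at the end,
    # which is valid because point clouds are permutation-invariant.
    seq = torch.cat([tokens[vis_idx], mask_token.expand(n_mask, D)], dim=0)
    hidden = llm(seq)                                                     # (M, D) causal LLM pass
    loss_mask = nn.functional.mse_loss(                                   # masked-modeling term
        feat_head(hidden[M - n_mask:]), tokens[mask_idx].detach())
    pred_patch = patch_head(hidden[:M - n_mask]).view(-1, k, 3)           # reconstruction term
    loss_recon = chamfer_l2(pred_patch, patches[vis_idx])
    return loss_mask + loss_recon                                         # added to the LM loss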
137
+
138
+ # 3.3 HIERARCHICAL GEOMETRY AGGREGATION
139
+
140
+ 3D encoders are designed with specific structures tailored for point clouds, such as local-to-global hierarchy Zhang et al. (2022) for exploring the geometric structure of the point cloud. However, in
141
+
142
+ ![](images/4337dcd88b36f5e5c16700b931356bf839e32c94797933598aa76c1be5d86262.jpg)
143
+ Figure 4: Hierarchical Geometry Aggregation Strategy. In the instruction tuning stage, we apply aggregation and propagation operations to the point tokens to capture the local structural details.
144
+
145
+ Table 4: Hierarchical Geometry Aggregation. In the instruction tuning stage, we conduct the experiments of Hierarchical Geometry Aggregation strategy. $l$ represents the number of aggregation and propagation operations. $H$ refers to the LLM layers between $l$ aggregation and $l$ propagation operations. + Self-Attn. represents the incorporation of the gated self-attention in the aggregation.
146
+
147
+ <table><tr><td rowspan="2">Method</td><td>Cls (Avg)</td><td colspan="2">Cap</td></tr><tr><td>GPT-4</td><td>GPT-4</td><td>S-BERT</td></tr><tr><td>PointLLM-7B</td><td>53.00</td><td>44.85</td><td>47.47</td></tr><tr><td>l=1</td><td>52.50</td><td>48.70</td><td>48.07</td></tr><tr><td>l=2</td><td>51.00</td><td>46.67</td><td>48.12</td></tr><tr><td>l=3</td><td>53.00</td><td>48.93</td><td>48.06</td></tr><tr><td>l=4</td><td>45.00</td><td>45.48</td><td>46.90</td></tr><tr><td>H=2</td><td>54.25</td><td>49.56</td><td>48.52</td></tr><tr><td>H=4</td><td>52.50</td><td>48.61</td><td>47.81</td></tr><tr><td>H=8</td><td>52.25</td><td>48.95</td><td>47.90</td></tr><tr><td>+ Self-Attn.</td><td>55.55</td><td>51.03</td><td>48.79</td></tr></table>
148
+
149
+ encoder-free architectures, the LLM itself does not have an explicit local modeling module. The self-attention mechanism is intended for modeling global interactions. Therefore, building upon the proposed Hybrid Semantic Loss, we explore in the instruction tuning stage how to enable the LLM to actively perceive 3D local details and complement the learned global semantics. To this end, we propose the Hierarchical Geometry Aggregation strategy.
150
+
151
+ Implementation Details. As depicted in Figure 4, from the LLM second layer, the input point tokens $\{F_{\mathrm{input}_i}\}_{i = 1}^M$ , based on their corresponding coordinates $\{P_{\mathrm{input}_i}\}_{i = 1}^M$ , are grouped by Dynamic Grid Sampling. The grid size follows a cumulative scaling strategy across aggregation layers. At the $i$ -th aggregation layer, the grid size is:
152
+
153
+ $$
154
+ s_i = \alpha \cdot e^{\sum_{j=1}^{i} \beta_j}, \quad \beta_j = \gamma \cdot \tanh\left(\theta_j\right) + \beta_{\mathrm{ctr}}, \tag{5}
155
+ $$
156
+
157
+ where $\alpha = 0.02\mathrm{m}$ and $s_i\in [s_{\mathrm{min}},s_{\mathrm{max}}] = [0.02,1]\mathrm{m}$ . To ensure the cumulative scaling stays within bounds across $l$ aggregation layers, we set:
158
+
159
+ $$
160
+ \gamma = \frac{\ln\left(\frac{s_{\max}}{\alpha}\right) - \ln\left(\frac{s_{\min}}{\alpha}\right)}{2l}, \quad \beta_{\mathrm{ctr}} = \frac{\ln\left(\frac{s_{\max}}{\alpha}\right) + \ln\left(\frac{s_{\min}}{\alpha}\right)}{2l}, \tag{6}
161
+ $$
162
+
163
+ where $l$ is the total number of aggregation layers. Each $\theta_{j}$ is randomly initialized from a standard normal distribution. Points within the same grid cell form local neighbors, with the set of all neighbors denoted as $\mathcal{G}_i$ having cardinality $M_{i}$ . The neighborhood features $F_{\mathrm{input}}^n\in \mathbb{R}^{M_i\times k\times D_1}$ are then collected, where $k$ denotes the maximum number of points across all cells. To handle varying point numbers across grid cells, we employ a padding strategy: for cells with fewer than $k$ points, we compute the mean-pooled feature of existing points and concatenate it repeatedly until reaching $k$ points per cell. For $F_{\mathrm{input}}^n$ , we employ the gated self-attention mechanism for intra-group interactions, grasping the local geometric structure. We multiply the self-attention output by a learnable parameter initialized from zero to adaptively adjust the required knowledge. We formulate it as
164
+
165
+ $$
166
+ F_{\mathrm{input}}^{n'} = \tanh(\alpha) \cdot \text{Self-Attn}\left(F_{\mathrm{input}}^{n}\right) + F_{\mathrm{input}}^{n}. \tag{7}
167
+ $$
168
+
169
+ On top of this, we apply pooling to fuse the features $F_{\mathrm{input}}^{n'}$ within each neighbor, yielding aggregated tokens $\{F_{\mathrm{agg}_j}^i\}_{j=1}^{M_i}$ , formulated as
170
+
171
+ $$
172
+ F_{\mathrm{agg}}^{i} = \text{MeanPooling}\left(F_{\mathrm{input}}^{n'}\right). \tag{8}
173
+ $$
174
+
175
+ We perform $l$ iterations of geometry aggregation, resulting in $\{F_{\mathrm{agg}_i}^l\}_{i = 1}^{M_l}$ . To ensure that the LLM fully extracts the local information, we choose to perform further semantic modeling using $H$ LLM
176
+
177
+ Table 5: Comparison of different models on various 3D understanding tasks. A primary focus is placed on GPT-4 evaluation, along with data-driven metrics (Sentence-BERT). The * indicates the Qwen2.5-7B LLM base and the ShapeLLM training data. The $\alpha$ denotes reproduced results. The † denotes that the model is implemented based on the ShapeLLM baseline.
178
+
179
+ <table><tr><td rowspan="2">Model</td><td colspan="6">Cap</td><td>Cls (Avg)</td><td>QA</td></tr><tr><td>GPT-4</td><td>Sentence-BERT</td><td>SimCSE</td><td>BLEU-1</td><td>ROUGE-L</td><td>METEOR</td><td>GPT-4</td><td>GPT-4</td></tr><tr><td>InstructBLIP-7BDai et al. (2023)</td><td>45.34</td><td>47.41</td><td>48.48</td><td>4.27</td><td>8.28</td><td>12.99</td><td>43.50</td><td>-</td></tr><tr><td>InstructBLIP-13BDai et al. (2023)</td><td>44.97</td><td>45.90</td><td>48.86</td><td>4.65</td><td>8.85</td><td>13.23</td><td>34.25</td><td>-</td></tr><tr><td>LLaVA-7BLiu et al. (2024)</td><td>46.71</td><td>45.61</td><td>47.10</td><td>3.64</td><td>7.70</td><td>12.14</td><td>50.00</td><td>-</td></tr><tr><td>LLaVA-13BLiu et al. (2024)</td><td>38.28</td><td>46.37</td><td>45.90</td><td>4.02</td><td>8.15</td><td>12.58</td><td>51.75</td><td>47.90</td></tr><tr><td>PointGPTChen et al. (2023)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>11.60</td><td>-</td></tr><tr><td>Uni3DZhou et al. (2023)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>47.20</td><td>-</td></tr><tr><td>3D-LLMHong et al. (2023)</td><td>33.42</td><td>44.48</td><td>43.68</td><td>16.91</td><td>19.48</td><td>19.73</td><td>45.25</td><td>-</td></tr><tr><td>PointLLM-7BXu et al. (2023b)</td><td>44.85</td><td>47.47</td><td>48.55</td><td>3.87</td><td>7.30</td><td>11.92</td><td>53.00</td><td>41.20</td></tr><tr><td>PointLLM-13BXu et al. (2023b)</td><td>48.15</td><td>47.91</td><td>49.12</td><td>3.83</td><td>7.23</td><td>12.26</td><td>54.00</td><td>46.60</td></tr><tr><td>ShapeLLM-7BQi et al. (2024)</td><td>46.92</td><td>48.20</td><td>49.23</td><td>-</td><td>-</td><td>-</td><td>54.50</td><td>47.40</td></tr><tr><td>ShapeLLM-13BQi et al. (2024)</td><td>48.94</td><td>48.52</td><td>49.98</td><td>-</td><td>-</td><td>-</td><td>54.00</td><td>53.10</td></tr><tr><td>MiniGPT-3Dα Tang et al. (2024c)</td><td>52.49</td><td>48.73</td><td>49.26</td><td>-</td><td>-</td><td>-</td><td>54.50</td><td>43.60</td></tr><tr><td>PointLLM-PiSA-7BGuo et al. (2025a)</td><td>48.63</td><td>48.47</td><td>49.08</td><td>3.80</td><td>7.25</td><td>12.38</td><td>54.50</td><td>42.90</td></tr><tr><td>PointLLM-PiSA-13BGuo et al. (2025a)</td><td>50.52</td><td>48.60</td><td>49.64</td><td>3.75</td><td>7.84</td><td>12.56</td><td>55.00</td><td>46.80</td></tr><tr><td>ENEL-7B</td><td>51.03</td><td>48.79</td><td>49.52</td><td>3.91</td><td>7.20</td><td>12.68</td><td>55.55</td><td>43.80</td></tr><tr><td>ENEL-7B†</td><td>53.26</td><td>48.75</td><td>49.94</td><td>-</td><td>-</td><td>-</td><td>56.00</td><td>48.90</td></tr><tr><td>ENEL-13B</td><td>53.24</td><td>48.92</td><td>50.17</td><td>3.72</td><td>7.89</td><td>12.31</td><td>56.00</td><td>48.50</td></tr><tr><td>ENEL-13B†</td><td>54.78</td><td>49.37</td><td>50.69</td><td>-</td><td>-</td><td>-</td><td>56.00</td><td>54.80</td></tr><tr><td>ENEL-7B*</td><td>57.91</td><td>49.90</td><td>51.84</td><td>5.32</td><td>8.58</td><td>13.98</td><td>61.00</td><td>55.20</td></tr></table>
180
+
181
+ layers after aggregation operations. This allows the model to learn the interactions between local information while preventing the loss of fine-grained geometric details. Subsequently, from the $L$ th layer, we perform $l$ iterations of geometry propagation. Following the grid unpooling strategy, we use the point-to-grid mappings to propagate the aggregated features $F_{\mathrm{agg}}^{l}$ from each grid cell back to its corresponding set of points, generating $\{F_{\mathrm{pro}_i}^1\}_{i=1}^{M_{l-1}}$ . After $l$ iterations, we obtain point tokens of length $M$ , which are then processed by the remaining LLM layers. After processing through H additional LLM layers, the geometry aggregation and propagation process is repeated.
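A minimal sketch of one aggregation step under the description above: compute the grid size from the cumulative schedule of Eqs. (5)-(6), hash points into grid cells, apply gated self-attention within each cell (Eq. 7), and mean-pool to one token per cell (Eq. 8), keeping the point-to-cell mapping for the later propagation (unpooling) step. The hashing scheme, the attention configuration, and the per-cell Python loop are illustrative simplifications, not the ENEL implementation.

```python
import math
import torch
import torch.nn as nn

def grid_sizes(l, alpha=0.02, s_min=0.02, s_max=1.0, thetas=None):
    """Cumulative grid-size schedule of Eqs. (5)-(6) for l aggregation layers."""
    gamma = (math.log(s_max / alpha) - math.log(s_min / alpha)) / (2 * l)
    beta_ctr = (math.log(s_max / alpha) + math.log(s_min / alpha)) / (2 * l)
    thetas = torch.randn(l) if thetas is None else thetas
    betas = gamma * torch.tanh(thetas) + beta_ctr
    return alpha * torch.exp(torch.cumsum(betas, dim=0))        # (l,) grid size per layer

class GatedCellAggregation(nn.Module):
    """Gated self-attention inside each grid cell followed by mean pooling (Eqs. (7)-(8))."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))                # tanh(0) = 0: starts as identity

    def forward(self, xyz, feat, grid_size):
        cell = torch.div(xyz - xyz.min(dim=0).values, grid_size, rounding_mode="floor").long()
        cell_id = cell[:, 0] * 1_000_000 + cell[:, 1] * 1_000 + cell[:, 2]  # assumes <1000 cells/axis
        uniq, inverse = torch.unique(cell_id, return_inverse=True)
        agg_xyz, agg_feat = [], []
        for c in range(uniq.shape[0]):                          # simple per-cell loop for clarity
            members = (inverse == c).nonzero(as_tuple=True)[0]
            f = feat[members].unsqueeze(0)                      # (1, n_c, D) tokens in this cell
            f = torch.tanh(self.gate) * self.attn(f, f, f)[0] + f
            agg_xyz.append(xyz[members].mean(dim=0))
            agg_feat.append(f.squeeze(0).mean(dim=0))           # one pooled token per cell
        # `inverse` maps every original point to its cell and is reused later to
        # propagate (unpool) the aggregated features back onto the points.
        return torch.stack(agg_xyz), torch.stack(agg_feat), inverse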
182
+
183
+ Experiments and Insights. We conduct step-by-step experiments on the Hierarchical Geometry Aggregation strategy, sequentially evaluating the impacts of the number of aggregation and propagation operations $(l)$ , the number of LLM layers between aggregation and propagation $(H)$ , and the incorporation of the gated self-attention mechanism.
184
+
185
+ - The best performance is achieved when $l$ is set to 3. As shown in Table 4, performing three aggregation and propagation operations achieves $48.93\%$ and $53.00\%$ performance on captioning and classification tasks, respectively. Fewer aggregation layers limit the capture of local geometric information, while too many layers oversimplify spatial relationships. Setting $l = 3$ achieves balanced modeling of local and global structures and realizes a sampling ratio of approximately $1/8$ .
186
+ - Compared to setting $H$ to 4 or 8, the highest performance is achieved when $H$ is set to 2. It reaches $54.25\%$ and $49.56\%$ on the classification and captioning tasks, respectively. The excessive number of LLM layers between aggregation and propagation can lead to the oversmoothing of the aggregated local information, resulting in the loss of local structural details.
187
+ - The gated self-attention mechanism effectively improves performance, reaching $55.55\%$ and $51.03\%$ on classification and captioning tasks, respectively. The adaptive control of attention output ensures that global contextual information is utilized only when necessary, preventing it from disrupting local geometric structures. Additionally, it allows the model to adjust to different tasks.
188
+
189
+ # 4 RESULTS AND VISUALIZATION
190
+
191
+ Results. In Table 5, on the Objaverse benchmark Deitke et al. (2023), ENEL-7B achieves a GPT score of $51.03\%$ for 3D object captioning, setting a new SOTA. In traditional metrics, Sentence-BERT and SimCSE reach $48.79\%$ and $49.52\%$ , respectively, comparable to PointLLM-PiSA-13B. For 3D object classification, ENEL-7B outperforms prior encoder-based 3D LMMs with a GPT score of $55.55\%$ . Given the same training dataset as PointLLM, these results validate the effectiveness of our proposed LLM-embedded Semantic Encoding and Hierarchical Geometry Aggregation strategies for the encoder-free architecture. Additionally, on the 3D-VQA task of the 3D MM-Vet dataset Qi
192
+
193
+ ![](images/7089d73890f3323f87fda6dade9ffc67bb6fc6186e9c539a36ef0ca3df8224d4.jpg)
194
+ PointLLM
195
+
196
+ ![](images/ee75d8958355e4deb2571d04edfbf936436abf3d7168d700c9d2746865e0c99a.jpg)
197
+ ENEL (Ours)
198
+ (a)
199
+
200
+ ![](images/0430ba9148576c63e82fb0c29045fe4dbcd1ae2eeba44bbfd674ce98d2f8a3e6.jpg)
201
+
202
+ ![](images/11fe59b31689c51af2869300c63a809ca3dbfed3b13f6e5b9ab34feb02eee252.jpg)
203
+ (b)
204
+
205
+ ![](images/9321fbde98a71162f13cfafd8ffc232f8bd6efdc5a8ed5cb57c31a2390bbfbe9.jpg)
206
+
207
+ ![](images/46369c23ce34eb0db8dcbaa87ed328fbdb33412f2a8280253bc4eb32f81dc43b.jpg)
208
+ (c)
209
+
210
+ ![](images/ec13bfcb2bdaf67b49555b091d9413ff5a73d6ff7455ecbf37c379afea2b37b1.jpg)
211
+ Figure 5: Difference in Semantic Encoding. By visualizing the attention scores of the average text token to the point tokens on the Objaverse dataset, we compare the semantic encoding potential of encoder-based and encoder-free architectures, where red indicates higher values; (a) shows chairs, (b) airplanes, and (c) lamps.
212
+ Figure 6: ENEL Response Examples. We demonstrate that ENEL is capable of providing accurate responses across different types of tasks, such as captioning and question answering, by effectively addressing a wide range of objects, including race cars, buildings, insects, and others.
213
+
214
+ et al. (2024), despite the lack of spatial and embodied interaction-related data in the training set, ENEL achieves a GPT score of $43.8\%$, surpassing PointLLM-7B by $2.6\%$. Replacing the 7B Vicuna with the 13B Vicuna, ENEL-13B achieves substantial performance gains across tasks. When replacing Vicuna-7B with Qwen2.5-7B and using the ShapeLLM training data, ENEL-7B achieves over $6\%$ improvements across benchmarks. Details of the evaluation metrics and the classification performance on the ModelNet dataset are provided in Appendix A.3.2 and A.3.3, respectively.
215
+
216
+ Visualization. In Figure 5, we visualize the attention scores between the average text token and the point tokens in the last layer of both PointLLM and ENEL. Three object categories, the chair, the airplane, and the desk lamp, are selected from the Objaverse dataset Deitke et al. (2023); red indicates higher values. We observe that in encoder-based 3D LMMs, the semantic relevance between the text tokens and the processed point tokens is relatively low. In contrast, ENEL, with its encoder-free architecture, achieves a high correlation between the features of the two modalities, with the average text token focusing on key geometric structures of the objects, such as the backrest of the chair, the wings of the airplane, and the lampshade of the desk lamp.
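A hypothetical sketch of how such an attention map could be extracted, assuming access to the last layer's per-head attention weights and known token positions; the names and index ranges below are illustrative:

```python
# Hypothetical extraction of the visualized scores: average the last-layer
# attention paid by text-token queries to point-token keys.
import torch


def text_to_point_attention(attn: torch.Tensor,
                            pt_start: int, pt_end: int,
                            txt_start: int, txt_end: int) -> torch.Tensor:
    """attn: (num_heads, seq_len, seq_len) last-layer attention weights."""
    per_head = attn[:, txt_start:txt_end, pt_start:pt_end]   # (H, T_txt, T_pt)
    scores = per_head.mean(dim=(0, 1))                        # (T_pt,) one score per point token
    # normalize to [0, 1] so the scores can be mapped to a red colormap
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)


# toy usage: 32 heads, 200-token sequence; point tokens at 10..138, text at 140..200
attn = torch.rand(32, 200, 200).softmax(dim=-1)
heat = text_to_point_attention(attn, 10, 138, 140, 200)
```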
217
+
218
+ Response Visualization. In Figure 6, we present a visualization of ENEL's responses for both captioning and question answering (QA) formats. We observe that in the captioning task, ENEL can even accurately identify fine-grained categories such as a cricket. Moreover, in the QA task, ENEL effectively handles visual challenges such as general object recognition (e.g., reasoning about numbers on dice) and spatial reasoning (e.g., accurately interpreting building structures).
219
+
220
+ # 5 CONCLUSION
221
+
222
+ In this study, we investigate the potential of the encoder-free architecture in 3D understanding. Through a systematic analysis, we demonstrate that transferring the functionality of the 3D encoder to the LLM itself can effectively compensate for the performance degradation caused by the removal of the 3D encoder. To achieve this, we introduce the LLM-embedded Semantic Encoding strategy and the Hierarchical Geometry Aggregation strategy in the pre-training and instruction tuning stages. These strategies enable the encoding of high-level point cloud semantics while capturing critical local information. Our experiments highlight the promising prospects of the encoder-free architecture.
223
+
224
+ # REFERENCES
225
+
226
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
227
+ Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
228
+ Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. Introducing our multimodal models, 2023. URL https://www.adept.ai/blog/fuyu-8b.
229
+ ChameleonTeam. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024.
230
+ Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, and Yufeng Yue. Pointgpt: Auto-regressively generative pre-training from point clouds. Advances in Neural Information Processing Systems, 36:29667-29679, 2023.
231
+ Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025.
232
+ Yangyi Chen, Xingyao Wang, Hao Peng, and Heng Ji. A single transformer for scalable vision-language modeling. arXiv preprint arXiv:2407.06438, 2024a.
233
+ Yilun Chen, Shuai Yang, Haifeng Huang, Tai Wang, Ruiyuan Lyu, Runsen Xu, Dahua Lin, and Jiangmiao Pang. Grounded 3d-llm with referent tokens. arXiv preprint arXiv:2405.10370, 2024b.
234
+ Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with $90\%$ * chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2(3):6, 2023.
235
+ Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
236
+ Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13142-13153, 2023.
237
+ Haiwen Diao, Yufeng Cui, Xiaotong Li, Yueze Wang, Huchuan Lu, and Xinlong Wang. Unveiling encoder-free vision-language models. arXiv preprint arXiv:2406.11832, 2024a.
238
+ Haiwen Diao, Yufeng Cui, Xiaotong Li, Yueze Wang, Huchuan Lu, and Xinlong Wang. Unveiling encoder-free vision-language models. arXiv preprint arXiv:2406.11832, 2024b.
239
+ Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, and Xinlong Wang. Eve2: Improved baselines for encoder-free vision-language models. arXiv preprint arXiv:2502.06788, 2025.
240
+ Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873-12883, 2021.
241
+ Rao Fu, Jingyu Liu, Xilun Chen, Yixin Nie, and Wenhan Xiong. Scene-llm: Extending language model for 3d visual understanding and reasoning. arXiv preprint arXiv:2403.11401, 2024.
242
+ Zilu Guo, Hongbin Lin, Zhihao Yuan, Chaoda Zheng, Pengshuo Qiu, Dongzhi Jiang, Renrui Zhang, Chun-Mei Feng, and Zhen Li. Pisa: A self-augmented data engine and training strategy for 3d understanding with large models. arXiv preprint arXiv:2503.10529, 2025a.
243
+
244
+ Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, et al. Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding, generation, and instruction following. arXiv preprint arXiv:2309.00615, 2023.
245
+ Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Rui Huang, Haoquan Zhang, Manyuan Zhang, Jiaming Liu, Shanghang Zhang, Peng Gao, et al. Can we generate images with cot? let's verify and reinforce image generation step by step. arXiv preprint arXiv:2501.13926, 2025b.
246
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
247
+ Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36:20482-20494, 2023.
248
+ Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 3d-llm: Injecting the 3d world into large language models. Advances in Neural Information Processing Systems, 36, 2024.
249
+ Dongzhi Jiang, Ziyu Guo, Renrui Zhang, Zhuofan Zong, Hao Li, Le Zhuo, Shilin Yan, Pheng-Ann Heng, and Hongsheng Li. T2i-r1: Reinforcing image generation with collaborative semantic-level and token-level cot. arXiv preprint arXiv:2505.00703, 2025.
250
+ Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in neural information processing systems, 33:18661-18673, 2020.
251
+ Weixian Lei, Jiacong Wang, Haochen Wang, Xiangtai Li, Jun Hao Liew, Jiashi Feng, and Zilong Huang. The scalability of simplicity: Empirical analysis of vision-language learning with a single transformer. arXiv preprint arXiv:2504.10462, 2025.
252
+ Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024.
253
+ Handong Li, Yiyuan Zhang, Longteng Guo, Xiangyu Yue, and Jing Liu. Breaking the encoder barrier for seamless video-language understanding. arXiv preprint arXiv:2503.18422, 2025.
254
+ Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024.
255
+ Gen Luo, Xue Yang, Wenhan Dou, Zhaokai Wang, Jifeng Dai, Yu Qiao, and Xizhou Zhu. Mono-internvl: Pushing the boundaries of monolithic multimodal large language models with endogenous visual pre-training. arXiv preprint arXiv:2410.08202, 2024.
256
+ Gen Luo, Wenhan Dou, Wenhao Li, Zhaokai Wang, Xue Yang, Changyao Tian, Hao Li, Weiyun Wang, Wenhai Wang, Xizhou Zhu, et al. Mono-internvl-1.5: Towards cheaper and faster monolithic multimodal large language models. arXiv preprint arXiv:2507.12566, 2025.
257
+ Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
258
+ Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan. Masked autoencoders for point cloud self-supervised learning. In European conference on computer vision, pp. 604-621. Springer, 2022.
259
+ Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, and Li Yi. Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining. In International Conference on Machine Learning, pp. 28223-28243. PMLR, 2023.
260
+
261
+ Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, and Kaisheng Ma. Shapellm: Universal 3d object understanding for embodied interaction. arXiv preprint arXiv:2402.17766, 2024.
262
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, volume 139, pp. 8748-8763, 2021.
263
+ Yiwen Tang, Ray Zhang, Zoey Guo, Xianzheng Ma, Bin Zhao, Zhigang Wang, Dong Wang, and Xuelong Li. Point-peft: Parameter-efficient fine-tuning for 3d pre-trained models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 5171-5179, 2024a.
264
+ Yiwen Tang, Ray Zhang, Jiaming Liu, Zoey Guo, Bin Zhao, Zhigang Wang, Peng Gao, Hongsheng Li, Dong Wang, and Xuelong Li. Any2point: Empowering any-modality large models for efficient 3d understanding. In European Conference on Computer Vision, pp. 456-473. Springer, 2024b.
265
+ Yuan Tang, Xu Han, Xianzhi Li, Qiao Yu, Yixue Hao, Long Hu, and Min Chen. Minigpt-3d: Efficiently aligning 3d point clouds with large language models using 2d priors. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 6617-6626, 2024c.
266
+ Yuan Tang, Xu Han, Xianzhi Li, Qiao Yu, Jinfeng Xu, Yixue Hao, Long Hu, and Min Chen. More text, less point: Towards 3d data-efficient point-language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 7284-7292, 2025.
267
+ Chengzhuo Tong, Ziyu Guo, Renrui Zhang, Wenyu Shan, Xinyu Wei, Zhenghao Xing, Hongsheng Li, and Pheng-Ann Heng. Delving into rl for image generation with cot: A study on dpo vs. grpo. arXiv preprint arXiv:2505.17017, 2025.
268
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
269
+ A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
270
+ Jiaze Wang, Yi Wang, Ziyu Guo, Renrui Zhang, Donghao Zhou, Guangyong Chen, Anfeng Liu, and Pheng-Ann Heng. Mm-mixing: Multi-modal mixing alignment for 3d understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 7744-7752, 2025.
271
+ Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang, and Zhou Zhao. Chat-3d: Data-efficiently tuning large language model for universal dialogue of 3d scenes. arXiv preprint arXiv:2308.08769, 2023.
272
+ Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024.
273
+ Saining Xie, Jiatao Gu, Demi Guo, Charles R Qi, Leonidas Guibas, and Or Litany. Pointcontrast: Unsupervised pre-training for 3d point cloud understanding. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part III 16, pp. 574-591. Springer, 2020.
274
+ Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. CoRR, abs/2308.16911, 2023a.
275
+ Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. arXiv preprint arXiv:2308.16911, 2023b.
276
+ Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In European Conference on Computer Vision, pp. 131-147. Springer, 2025.
277
+
278
+ Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pretraining 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 19313-19322, 2022.
279
+ Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in neural information processing systems, 35:27061-27074, 2022.
280
+ Renrui Zhang, Liuhui Wang, Yu Qiao, Peng Gao, and Hongsheng Li. Learning 3d representations from 2d pre-trained models via image-to-point masked autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21769-21780, 2023a.
281
+ Renrui Zhang, Liuhui Wang, Yali Wang, Peng Gao, Hongsheng Li, and Jianbo Shi. Starting from non-parametric networks for 3d point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5344-5353, 2023b.
282
+ Junsheng Zhou, Jinsheng Wang, Baorui Ma, Yu-Shen Liu, Tiejun Huang, and Xinlong Wang. Uni3d: Exploring unified 3d representation at scale. arXiv preprint arXiv:2310.06773, 2023.
283
+
284
+ # A APPENDIX
285
+
286
+ # A.1 RELATED WORK
287
+
288
+ 3D LMM. Recent advancements in integrating large language models (LLMs) with 3D data have led to significant progress in both object-level and scene-level understanding. At the object level, early approaches like Hong et al. (2024) utilize 2D rendering to leverage 2D LLMs, but this sacrifices geometric details. More recent models, including Point-Bind LLM Guo et al. (2023), PointLLM Xu et al. (2023b) and ShapeLLM Qi et al. (2024), directly encode point clouds and align them with LLMs, by combining the 3D encoder with a powerful language model, effectively fusing geometric, appearance, and linguistic information. At the scene level, models like Chat-3D Wang et al. (2023) and Scene-LLM Fu et al. (2024) focus on understanding complex spatial relationships through dialogue and tasks like captioning. Scene-LLM Fu et al. (2024) enhances embodied agents' abilities in interactive 3D indoor environments by integrating both scene-level and egocentric 3D information. Grounded 3D-LLM Chen et al. (2024b) utilizes referent tokens to reference specific objects within 3D scenes, enabling tasks such as object detection and language grounding.
289
+
290
+ Encoder-free Vision-Language Models. Traditional vision-language models (VLMs) often rely on vision encoders to extract visual features before processing them with language models, integrating image encoders like CLIP Radford et al. (2021) and DINO V2 Oquab et al. (2023). However, recent efforts have explored encoder-free VLMs for their simplicity. Approaches like ChameleonTeam (2024); Xie et al. (2024) use VQ tokenizers Esser et al. (2021) or linear projection layers Diao et al. (2024a); Chen et al. (2024a) to represent images. Fuyu-8B Bavishi et al. (2023), a pure decoder-only model, directly processes image patches through linear projections, handling high-resolution images but showing only average performance. The EVE series Diao et al. (2024b; 2025) eliminates the need for a separate vision encoder by bridging vision-language representation within a unified decoder and enhancing visual recognition capabilities through additional supervision. Mono-InternVL series Luo et al. (2024; 2025) leverage visual experts and progressive visual pre-training (EViP/EViP++) to achieve stable optimization and competitive performance. SAIL series Lei et al. (2025) directly encode raw pixels and decodes language within a single architecture, achieving competitive vision-language performance without pre-trained vision encoders.
291
+
292
+ ![](images/461d3e812ebea5fd29ae3418462a357609b4a7e400e5ac983b0019b4321fdde5.jpg)
293
+ (a)
294
+
295
+ ![](images/08dfedd01ab6b6dcb9728c955a74d23904363f50321c87fe27461e1e9d489f94.jpg)
296
+ (b)
297
+ Figure 7: Variants of Point Cloud Self-Supervised Learning Losses. (a) The Variant of Masked Modeling Loss, (b) The Variant of Reconstruction Loss, (c) The Variant of Hybrid Semantic Loss.
298
+
299
+ ![](images/7983bef9f269cb0f6d808894f932d37850f356267496bc11bf5a1b0f3a43b5e9.jpg)
300
+ (c)
301
+
302
+ # A.2 EXPERIMENTAL SETTINGS
303
+
304
+ Implementation Details. We use the LLaMA model Touvron et al. (2023) as our LLM backbone, with the 7B Vicuna-v1.1 Chiang et al. (2023) checkpoint as the default setting. In the token embedding layer, the point cloud is first processed by a linear layer to expand its dimension from 6 to 288. The input point cloud initially consists of 8192 points; three iterations of farthest point sampling (FPS) then reduce the size to 512, 256, and 128, respectively. After each FPS operation, k-Nearest Neighbors (k-NN) is applied with a cluster size of 81, and geometric features are extracted using triangular encoding, followed by linear layers that progressively increase the dimension to 576, 1152, and 2304. Finally, the projection layer maps the features to the LLM dimension of 4096. In the pre-training stage, we unfreeze the first four LLM layers. Within the LLM-embedded Semantic Encoding strategy, the Hybrid Semantic Loss applies masked modeling to $30\%$ of the tokens and
305
+
306
+ Table 6: Ablation Experiments. We begin the ablation experiments from ENEL, changing a single module configuration at a time. $\Psi$ represents a mask ratio of $60\%$, while $\Phi$ represents a mask ratio of $30\%$. For the Hybrid Semantic Loss, the subscripts patch and feat denote the masked modeling target, while the reconstruction target is the corresponding feat or patch. $l$ represents the number of aggregation and propagation operations. $H$ refers to the number of LLM layers between the $l$ aggregation and $l$ propagation operations. $O$ refers to the number of LLM layers between two consecutive aggregation or propagation operations.
307
+
308
+ <table><tr><td rowspan="2">Model</td><td colspan="6">Cap</td><td>Cls</td></tr><tr><td>GPT-4</td><td>Sentence-BERT</td><td>SimCSE</td><td>BLEU-1</td><td>ROUGE-L</td><td>METEOR</td><td>GPT-4</td></tr><tr><td>ENEL-7B</td><td>50.92</td><td>48.61</td><td>49.31</td><td>3.88</td><td>7.20</td><td>12.50</td><td>55.00</td></tr><tr><td>- Hybrid Semantic Loss</td><td>47.19</td><td>48.07</td><td>48.31</td><td>3.46</td><td>7.41</td><td>11.84</td><td>50.61</td></tr><tr><td>Hybrid Semantic Loss\( _{patch} \)Φ</td><td>49.05</td><td>48.82</td><td>49.20</td><td>4.01</td><td>7.25</td><td>12.38</td><td>52.20</td></tr><tr><td>Hybrid Semantic Loss\( _{patch} \)Ψ</td><td>48.96</td><td>48.38</td><td>49.00</td><td>3.66</td><td>6.97</td><td>11.98</td><td>52.00</td></tr><tr><td>Hybrid Semantic Loss\( _{feat} \)Ψ</td><td>49.63</td><td>48.00</td><td>48.62</td><td>3.78</td><td>6.88</td><td>12.33</td><td>51.50</td></tr><tr><td>-gate mechanism</td><td>49.26</td><td>48.41</td><td>48.93</td><td>3.71</td><td>7.12</td><td>12.47</td><td>53.50</td></tr><tr><td>l=2,H=2,O=0</td><td>48.81</td><td>48.10</td><td>48.57</td><td>3.70</td><td>6.99</td><td>12.01</td><td>51.50</td></tr><tr><td>l=2,H=4,O=0</td><td>49.02</td><td>48.47</td><td>48.61</td><td>3.65</td><td>7.10</td><td>12.31</td><td>52.00</td></tr><tr><td>l=2,H=2,O=2</td><td>48.96</td><td>47.96</td><td>48.89</td><td>3.80</td><td>7.05</td><td>12.55</td><td>52.00</td></tr><tr><td>l=2,H=4,O=2</td><td>49.58</td><td>48.70</td><td>48.84</td><td>3.84</td><td>7.56</td><td>12.76</td><td>53.00</td></tr></table>
309
+
310
+ reconstructs the patches for the remaining $70\%$ of visible tokens. During instruction tuning, geometric aggregation is applied at the end of the 1st, 2nd, and 3rd LLM layers to reduce the number of point tokens. MaxMean pooling is used to retain more information. After two LLM layers, geometric propagation is applied at the end of the 6th, 7th, and 8th layers to restore the number of point tokens to 128. After two more LLM layers, geometric aggregation is applied at the 11th-13th layers, followed by geometric propagation at the 16th-18th layers.
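A hypothetical sketch of how this schedule could be wired into the decoder forward pass; the layer indices follow the text above, while the function names and signatures are assumptions:

```python
# Hypothetical wiring of the schedule above into the decoder forward pass
# (1-indexed layers): aggregate after layers 1-3 and 11-13, propagate after
# layers 6-8 and 16-18. `aggregate` / `propagate` follow the earlier sketch.
AGGREGATE_AFTER = {1, 2, 3, 11, 12, 13}
PROPAGATE_AFTER = {6, 7, 8, 16, 17, 18}


def forward_with_hga(layers, point_tokens, aggregate, propagate):
    """layers: decoder blocks; aggregate returns (pooled_tokens, mapping)."""
    stack = []                                  # point-to-cell mappings, LIFO
    x = point_tokens
    for i, layer in enumerate(layers, start=1):
        x = layer(x)
        if i in AGGREGATE_AFTER:
            x, mapping = aggregate(x)
            stack.append(mapping)
        elif i in PROPAGATE_AFTER:
            x = propagate(x, stack.pop())       # undo the most recent aggregation
    return x
```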
311
+
312
+ Training and Evaluation Details. During the two-stage training, each stage utilizes the same dataset and preprocessing method as PointLLM. All training is conducted on $4 \times 80\mathrm{G}$ A100 GPUs in BF16 precision, utilizing FlashAttention, the AdamW optimizer, and a cosine learning rate schedule. During the pre-training stage, the model is trained for three epochs with a batch size of 128 and a learning rate of 4e-4. The instruction tuning stage is conducted for three epochs with a batch size of 32 and a learning rate of 2e-5. The GPT-4 model Achiam et al. (2023) used to evaluate the classification and captioning tasks is the "gpt-4-0613" version, consistent with PointLLM Xu et al. (2023b). In contrast, the GPT-4 model employed for QA performance evaluation is the "gpt-4-0125" version, aligning with ShapeLLM Qi et al. (2024). Additionally, the GPT evaluation prompts for classification and captioning are identical to those used in PointLLM, while the prompts for QA follow those in ShapeLLM.
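For concreteness, a minimal sketch of the stated optimization setup (AdamW with a cosine learning-rate schedule); the warmup, total step count, BF16 autocast, and the stand-in model are assumptions added for illustration only:

```python
# Minimal sketch of the stated setup: AdamW + cosine schedule; warmup,
# BF16 autocast, and the real model are omitted (the Linear is a stand-in).
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = nn.Linear(4096, 4096)                     # stand-in for the trainable parameters
optimizer = AdamW(model.parameters(), lr=4e-4)    # 4e-4 pre-training, 2e-5 instruction tuning
scheduler = CosineAnnealingLR(optimizer, T_max=10_000)   # T_max ~ total optimizer steps

for step in range(100):                           # toy loop
    loss = model(torch.randn(8, 4096)).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```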
313
+
314
+ # A.3 MORE EXPERIMENTS
315
+
316
+ # A.3.1 VARIANTS OF POINT CLOUD SELF-SUPERVISED LEARNING LOSSES.
317
+
318
+ In Figure 7, we exhibit the other variants of the Masked Modeling Loss, the Reconstruction Loss, and the Hybrid Semantic Loss.
319
+
320
+ As seen in Figure 7 (a), in the Masked Modeling Loss, after the learnable tokens are processed by the LLM, the tokens are transformed into the point patches $\{G_{\mathrm{pre}_i}\}_{i = 1}^{M*r} \in \mathbb{R}^{M*r \times k \times 3}$ through a linear layer. We utilize the $L_2$ Chamfer distance to align the predicted $G_{\mathrm{pre}}$ with the point patches $G_{\mathrm{mask}}$ corresponding to the masked tokens, reconstructing the spatial information. The optimization is:
321
+
322
+ $$
323
+ \frac{1}{M * r} \sum_{i=1}^{M * r} \left( \min_{j} \left\| a_i - b_j \right\|_2^2 + \min_{j} \left\| b_i - a_j \right\|_2^2 \right), \tag{9}
324
+ $$
325
+
326
+ where $a = G_{\mathrm{pre}}$ and $b = G_{\mathrm{mask}}$.
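A minimal PyTorch sketch of the $L_2$ Chamfer distance in Eq. (9), computed per patch between predicted and masked ground-truth patches; the tensor names and the averaging convention are illustrative:

```python
# Sketch of the per-patch L2 Chamfer distance in Eq. (9).
import torch


def l2_chamfer(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """pred, gt: (num_patches, k, 3) point patches. Returns a scalar loss."""
    d = torch.cdist(pred, gt, p=2) ** 2            # pairwise squared distances, (P, k, k)
    # nearest ground-truth point for every predicted point, and vice versa
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


# toy usage: 16 masked patches of 32 points each
loss = l2_chamfer(torch.randn(16, 32, 3), torch.randn(16, 32, 3))
```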
327
+
328
+ As shown in Figure 7 (b), after the point feature tokens $\{F_i\}_{i=1}^M$ are encoded by the LLM, the Mean Squared Error (MSE) is computed between the predicted $F_{\mathrm{pre}}$ and the ground truth $F$ . The
329
+
330
+ Table 7: Comparison of computational complexity between PointLLM-7B and ENEL-7B. S1 and S2 refer to the pre-training and instruction tuning stages, respectively. Conv. Steps indicate the number of steps required for loss convergence.
331
+
332
+ <table><tr><td>Method</td><td>Time (h)</td><td>Memory (S1/S2)</td><td>FLOPs</td><td>Conv. Steps (S1/S2)</td></tr><tr><td>PointLLM-7B</td><td>31.6</td><td>67G / 57G</td><td>2.0 × 10<sup>18</sup></td><td>10100 / 4300</td></tr><tr><td>ENEL-7B</td><td>22.2</td><td>56G / 42G</td><td>1.59 × 10<sup>18</sup></td><td>9790 / 3700</td></tr><tr><td>Improvement</td><td>29.7%</td><td>16.4% / 26.3%</td><td>20.5%</td><td>2.9% / 14.0%</td></tr></table>
333
+
334
+ optimization can be written as
335
+
336
+ $$
337
+ \mathcal{L}_{\mathrm{mask}} = \frac{1}{M} \sum_{i=1}^{M} \left\| F_{\mathrm{pre}_i} - F_i \right\|_2^2. \tag{10}
338
+ $$
339
+
340
+ Finally, in the Figure 7 (c) variant of the Hybrid Semantic Loss, the masked tokens and their corresponding patches are referred to as $\{F_{\mathrm{mask}_i}\}_{i=1}^{M*r}$ and $\{G_{\mathrm{mask}_i}\}_{i=1}^{M*r}$, respectively. The remaining tokens and patches are denoted as $\{F_{\mathrm{vis}_i}\}_{i=1}^{M*(1-r)}$ and $\{G_{\mathrm{vis}_i}\}_{i=1}^{M*(1-r)}$. After passing the point tokens through the LLM, we compute the MSE between $F_{\mathrm{pre}}$ and $F_{\mathrm{vis}}$. The learnable tokens $F_{\mathrm{learn}}$ are transformed into $G_{\mathrm{pred}}$, and the $L_2$ Chamfer distance is computed between $G_{\mathrm{pred}}$ and $G_{\mathrm{mask}}$. These two terms are added to the original cross-entropy loss, with all coefficients equal to 1.
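A hypothetical composition of this hybrid objective, reusing the `l2_chamfer` sketch above; the tensor names and shapes are illustrative, and the cross-entropy term stands in for the standard language-modeling loss:

```python
# Hypothetical composition of the hybrid objective, coefficients all 1.
# Reuses `l2_chamfer` from the earlier sketch; shapes are illustrative:
# f_pre/f_vis: (N_vis, C), g_pred/g_mask: (N_mask, k, 3),
# lm_logits: (T, V), lm_labels: (T,).
import torch.nn.functional as F


def hybrid_semantic_loss(f_pre, f_vis, g_pred, g_mask, lm_logits, lm_labels):
    mse_term = F.mse_loss(f_pre, f_vis)              # visible tokens predict features
    chamfer_term = l2_chamfer(g_pred, g_mask)        # learnable tokens reconstruct patches
    ce_term = F.cross_entropy(lm_logits, lm_labels)  # standard language-modeling loss
    return ce_term + mse_term + chamfer_term
```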
341
+
342
+ # A.3.2 METRIC ANALYSIS
343
+
344
+ GPT-4 Evaluation is an LLM-as-a-judge framework based on custom prompts. Given a model-generated description and a human reference, GPT-4 identifies key attributes from the reference, measures how many are accurately or partially matched in the model output, and returns a score from 0 to 100 with a brief explanation. It offers a more comprehensive and human-aligned evaluation.
345
+
346
+ Traditional metrics like BLEU measure n-gram precision, ROUGE-L uses the longest common subsequence, and METEOR combines unigram precision and recall with lemmatization and synonym matching. However, these metrics struggle with semantic similarity and tend to favor shorter outputs.
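In contrast, the Sentence-BERT and SimCSE columns in Table 5 compare captions in embedding space. A minimal sketch of such an embedding-based similarity, assuming the widely used `all-MiniLM-L6-v2` checkpoint (not necessarily the one used for the reported scores):

```python
# Illustrative embedding-based caption similarity (Sentence-BERT style).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")    # checkpoint name is an assumption
generated = "The 3D model portrays a white cartoon airplane, styled simply."
reference = "This 3D object is an airplane with distinct wings and a tail."

emb = model.encode([generated, reference], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()        # higher = more semantically similar
print(f"similarity: {score:.3f}")
```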
347
+
348
+ Reasons for low traditional metrics: 3D-LLM, which attains high traditional metric scores, generates captions averaging 20 words, much shorter than those of ENEL and other methods. However, this does not indicate better output quality, and 3D-LLM performs worse in human evaluations. Traditional metrics often fail to assess the quality of detailed LLM outputs, as they favor shorter responses and struggle to capture semantic similarity. The GPT-4 score offers stronger semantic understanding, greater diversity, and better generalization.
349
+
350
+ Examples: Here is a typical example where GPT-4 gives high scores but traditional metrics give low scores. Given a point cloud of an airplane, the model outputs:
351
+
352
+ "The 3D model portrays a white cartoon airplane, styled in a simplistic and charming fashion... This model can be inferred to be used in animated children's media or as a playful element in a game or learning application design."
353
+
354
+ The ground truth:
355
+
356
+ "This 3D object is an airplane with distinct wings and a tail. It has a long fuselage with glass windows at the front and sides. The round-shaped wings are located in the middle."
357
+
358
+ The model correctly identifies the object as an airplane and captures key style features like simplicity, cartoon form, and whiteness. It also reasonably infers use in children's media, showing strong understanding. However, traditional metrics rely on n-gram overlaps. Phrases like "airplane body and wings" differ from the ground truth "fuselage with glass windows," leading to mismatches. The output is also longer and more descriptive than the concise, factual ground truth, and it includes extra details like "white cartoon airplane," all of which contribute to low traditional scores.
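To make the failure mode concrete, BLEU-1 can be computed directly on such a pair of texts with NLTK; the snippet below uses abridged versions of the captions above and a simple whitespace tokenization, which may differ from the paper's evaluation script:

```python
# Illustrative BLEU-1 computation showing how low n-gram overlap penalizes a
# semantically reasonable caption; texts are abridged from the example above.
from nltk.translate.bleu_score import sentence_bleu

hypothesis = ("the 3d model portrays a white cartoon airplane styled in a "
              "simplistic and charming fashion").split()
reference = ("this 3d object is an airplane with distinct wings and a tail "
             "it has a long fuselage with glass windows at the front").split()

bleu1 = sentence_bleu([reference], hypothesis, weights=(1.0, 0, 0, 0))
print(f"BLEU-1: {bleu1:.3f}")   # low despite the caption clearly describing an airplane
```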
359
+
360
+ Table 8: ModelNet40 classification results under instruction-typed and completion-typed prompts. The instruction-typed (I) prompt is "What is this?" and the completion-typed (C) prompt is "This is an object of."
361
+
362
+ <table><tr><td>Model</td><td>ModelNet (I)</td><td>ModelNet (C)</td><td>ModelNet-Avg</td></tr><tr><td>PointLLM-7B</td><td>53.44</td><td>51.82</td><td>52.63</td></tr><tr><td>PointLLM-13B</td><td>53.00</td><td>52.55</td><td>52.78</td></tr><tr><td>ShapeLLM-7B</td><td>-</td><td>-</td><td>53.08</td></tr><tr><td>ShapeLLM-13B</td><td>-</td><td>-</td><td>52.96</td></tr><tr><td>PointLLM-PiSA-7B</td><td>54.58</td><td>52.60</td><td>53.59</td></tr><tr><td>PointLLM-PiSA-13B</td><td>55.03</td><td>53.81</td><td>54.42</td></tr><tr><td>ENEL-7B</td><td>54.82</td><td>53.69</td><td>54.26</td></tr><tr><td>ENEL-13B</td><td>55.59</td><td>54.38</td><td>55.00</td></tr><tr><td>ENEL-7B*</td><td>61.25</td><td>60.47</td><td>60.86</td></tr></table>
363
+
364
+ # A.3.3 MODELNET CLASSIFICATION TASK
365
+
366
+ As shown in Table 8, ENEL-7B achieves an average accuracy of $54.26\%$ , surpassing PointLLM-7B $(52.63\%)$ , ShapeLLM-7B $(53.08\%)$ and PointLLM-PiSA-7B $(53.59\%)$ . Similarly, ENEL-13B reaches $55.00\%$ , outperforming both ShapeLLM-13B $(52.96\%)$ and PointLLM-PiSA-13B $(54.42\%)$ . These results demonstrate the effectiveness of the encoder-free design in 3D object understanding.
367
+
368
+ # A.3.4 COMPLEXITY ANALYSIS
369
+
370
+ In Table 7, compared to PointLLM-7B, ENEL-7B demonstrates significant improvements while using the same training dataset. It reduces training time by $29.7\%$, reduces GPU memory usage by $16.4\%$ and $26.3\%$ in Stage 1 and Stage 2, respectively, lowers training FLOPs by $20.5\%$, and accelerates convergence by $2.9\%$ (Stage 1) and $14.0\%$ (Stage 2).
371
+
372
+ # A.3.5 ENCODER-FREE ARCHITECTURE CLAIM.
373
+
374
+ Following the consensus in recent Large Multimodal Model (LMM) literature, we strictly define an architecture as "encoder-free" based on two criteria: (1) the absence of a heavy, independently pretrained visual backbone, and (2) the utilization of end-to-end training from scratch. Unlike traditional 3D LMMs that rely on decoupled, pretrained encoders (e.g., Point-BERT Yu et al. (2022)) for semantic extraction, our design integrates a lightweight, randomly initialized embedding layer trained jointly with the LLM.
375
+
376
+ Alignment with Community Standards. This design philosophy parallels established encoder-free paradigms in the 2D image and video domains. For instance, EVE Diao et al. (2024a) utilizes a token embedding layer based on convolution and cross-attention ( $\sim$ 16M parameters), while ELVA Li et al. (2025) employs a spatio-temporal attention layer ( $\sim$ 9M parameters) for video framing. Similarly, Mono-InternVL Luo et al. (2024) relies on a lightweight stack of convolutions ( $\sim$ 10M parameters). As detailed in Table 9, our proposed point embedding layer comprises only 3M parameters. This is not only significantly more lightweight than its 2D counterparts but also orders of magnitude smaller than typical 3D encoders (e.g., $\sim$ 88M for PointBERT used in PointLLM). Our module functions strictly as a tokenizer rather than a visual encoder.
377
+
378
+ Table 9: Comparison of Tokenizer Parameters across Domains.
379
+
380
+ <table><tr><td>Method</td><td>Domain</td><td>Tokenizer Structure</td><td>Tokenizer Params</td><td>Ratio (Tok./Total)</td></tr><tr><td>EVE / EVEv2 Diao et al. (2024a; 2025)</td><td>Image</td><td>Conv + Cross-Attn</td><td>16 M</td><td>~0.23%</td></tr><tr><td>Mono-InternVL Luo et al. (2024)</td><td>Image</td><td>Stacked Conv</td><td>10 M</td><td>~0.14%</td></tr><tr><td>ELVA Li et al. (2025)</td><td>Video</td><td>Spatio-temporal Attn</td><td>9 M</td><td>~0.13%</td></tr><tr><td>PointLLM Xu et al. (2023a)</td><td>3D</td><td>PointBERT Encoder</td><td>~88 M</td><td>~1.24%</td></tr><tr><td>Ours</td><td>3D</td><td>Point Embedding</td><td>3 M</td><td>~0.04%</td></tr></table>
381
+
382
+ Structural Formatting vs. Semantic Encoding. We explicitly distinguish the structural operations used in our embedding layer—specifically Farthest Point Sampling (FPS) and $k$ -Nearest Neighbors ( $k$ -NN)—from semantic encoding. Due to the data irregularity of unstructured 3D point clouds, FPS
383
+
384
+ and $k$ -NN serve as the mathematically necessary equivalents of the "patchify" or "stride" operations used in 2D Vision Transformers. They are required to group raw data points into processable tokens. Crucially, these operations are parameter-free. The subsequent learnable MLPs serve only to project these local geometric groupings into the feature dimension required by the LLM.
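Minimal, parameter-free sketches of these two structural operations, farthest point sampling and k-NN grouping, illustrating that no learned weights are involved; the implementations are simplified for clarity and are not the paper's exact code:

```python
# Parameter-free sketches of FPS and k-NN grouping (simplified for clarity).
import torch


def farthest_point_sampling(xyz: torch.Tensor, num_samples: int) -> torch.Tensor:
    """xyz: (N, 3) points. Returns indices of num_samples well-spread points."""
    N = xyz.size(0)
    selected = torch.zeros(num_samples, dtype=torch.long)
    dist = torch.full((N,), float("inf"))
    farthest = 0                                     # start from an arbitrary point
    for i in range(num_samples):
        selected[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)  # squared distance to newest center
        dist = torch.minimum(dist, d)                # distance to nearest selected center
        farthest = int(dist.argmax())                # next center: farthest remaining point
    return selected


def knn_group(xyz: torch.Tensor, centers: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest points for each center: (num_centers, k)."""
    return torch.cdist(centers, xyz).topk(k, largest=False).indices


# toy usage mirroring the text: 8192 points -> 512 centers, 81 neighbors each
pts = torch.randn(8192, 3)
centers_idx = farthest_point_sampling(pts, 512)
groups = knn_group(pts, pts[centers_idx], k=81)
```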
385
+
386
+ # A.3.6 MORE ABLATION EXPERIMENTS
387
+
388
+ We begin the ablation experiments from ENEL-7B, which is the reverse order of the experiments in the main text, as shown in Table 6.
389
+
390
+ The Effects of the LLM-embedded Semantic Encoding Strategy. In Table 6, on the basis of ENEL, removing the Hybrid Semantic Loss during the pre-training stage significantly degrades performance. The GPT-4 score for the captioning task drops from $51.03\%$ to $47.15\%$, and the GPT-4 score for the classification task decreases to $50.50\%$. This is because the proposed self-supervised learning function for point clouds effectively captures the detailed structures and high-level semantics.
391
+
392
+ Based on ENEL-7B, we find that setting the mask ratio in the Hybrid Semantic Loss to $30\%$ consistently yields better results than $60\%$. Additionally, the configuration where the masked token part predicts features while the visible token part reconstructs patches outperforms the reverse setting, where the masked token part predicts patches and the visible token part reconstructs features. This phenomenon can be explained as follows: a mask ratio of $30\%$ retains critical information while helping the model effectively utilize the visible tokens to derive the masked parts. When the mask ratio is set too high, the model fails to adequately learn the global context. Moreover, when the masked token part is tasked with predicting features, the model focuses on learning high-level contextual semantics, while the patch reconstruction aids in accurately capturing low-level details. In contrast, when the masked token part predicts patches, the model becomes excessively dependent on local features during semantic reconstruction.
393
+
394
+ The Effects of Hierarchical Geometry Aggregation Strategy. After removing the gating mechanism in the self-attention of the aggregation operation, the performance drops to $49.61\%$ and $53.60\%$ on the captioning and classification tasks, respectively. The gating mechanism helps the model to adaptively filter information, allowing it to focus on more discriminative features. Without the dynamic adjustment to focus on different parts of the input, the generated text from the LLM lacks accuracy and coherence, leading to a decrease in performance.
395
+
396
+ As the number of aggregation and propagation operations decreases, overall performance declines, mainly due to insufficient layers failing to adequately model complex spatial relationships in point clouds. We observe that increasing the number of LLM layers between the final aggregation operation and the first propagation operation leads to improved performance. This suggests that fewer cascaded aggregation operations require deeper network architectures for high-level feature abstraction; otherwise, insufficient depth may lead to degraded hierarchical representations. Furthermore, the presence of LLM layers between each aggregation or propagation operation enhances performance by allowing the model to process and transform compressed information. Through self-attention mechanisms, these intermediate layers can recapture and restore details lost during the aggregation process.
397
+
398
+ # A.4 USE OF LLM
399
+
400
+ In this paper, the Large Language Model (LLM) is mainly used for grammar checking and spelling error detection.
401
+
402
+ # A.5 MODEL OUTPUT
403
+
404
+ In Figure 8, we showcase more model outputs, where our ENEL provides precise and diverse responses to multi-modal 3D instruction inputs.
405
+
406
+ ![](images/edb0a318af7d51c32606571b43c1a0d56bc23f075e211827532e50fa0d42395d.jpg)
407
+
408
+ ![](images/ffdc5e7f832a77b75e7471a2c87a82f16318a57df95a6765a664af90f434d2b0.jpg)
409
+
410
+ ![](images/50a435340609f91dd4c98c4ecf8711281af0ef65e08a8c4bd629953a72bb8374.jpg)
411
+
412
+ ![](images/a48232b3e88a90287fd40969967d05b70d801ca0636915032ef4f5e83fa3cabb.jpg)
413
+
414
+ ![](images/64a02e4523d40824bdc03a078de4b936945b63e55adcea622df4982e32d1c394.jpg)
415
+
416
+ ![](images/c74edd24335aa4f482dadaed1edc7d5e782f5e478aba6a2d121977809885dc75.jpg)
417
+
418
+ ![](images/8ce80578e3f7a917580cf9034b4db01f51e64c1636e5f389dcc9b36bd28afd66.jpg)
419
+ Figure 8: ENEL Output Examples. We demonstrate that ENEL provides precise and diverse responses when addressing different problems.
420
+
421
+ ![](images/8008655abeb7d9010cc052d9f6c22cc301f5662cc50bfba02081ec3b3ebdca14.jpg)
422
+
423
+ ![](images/f564d8262f9e9054141c0c4ec4f215eb22d53e6b6e3139156ec6eb9d75dd8561.jpg)
2502.09xxx/2502.09620/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b367276657d73e933eb63117c9fc4e059a9c0ee6c17d45aa3422ca7c26293ec5
3
+ size 867162
2502.09xxx/2502.09620/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2502.09xxx/2502.09621/e78f0acc-7014-41e7-aad6-9f1853dd8aec_content_list.json ADDED
The diff for this file is too large to render. See raw diff