Chelsea707 committed
Commit c70bcba · verified · 1 Parent(s): f424d27

Add Batch b151dce9-56be-4b05-8b1f-af4414cc9da7 data

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +64 -0
  2. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_content_list.json +1588 -0
  3. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_model.json +2288 -0
  4. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_origin.pdf +3 -0
  5. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/full.md +336 -0
  6. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/images.zip +3 -0
  7. 2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/layout.json +0 -0
  8. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_content_list.json +1615 -0
  9. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_model.json +2288 -0
  10. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_origin.pdf +3 -0
  11. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/full.md +330 -0
  12. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/images.zip +3 -0
  13. 2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/layout.json +0 -0
  14. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_content_list.json +1515 -0
  15. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_model.json +0 -0
  16. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_origin.pdf +3 -0
  17. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/full.md +291 -0
  18. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/images.zip +3 -0
  19. 2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/layout.json +0 -0
  20. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_content_list.json +1650 -0
  21. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_model.json +0 -0
  22. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_origin.pdf +3 -0
  23. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/full.md +347 -0
  24. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/images.zip +3 -0
  25. 2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/layout.json +0 -0
  26. 2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_content_list.json +1907 -0
  27. 2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_model.json +0 -0
  28. 2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_origin.pdf +3 -0
  29. 2025/SynCity_ Training-Free Generation of 3D Worlds/full.md +371 -0
  30. 2025/SynCity_ Training-Free Generation of 3D Worlds/images.zip +3 -0
  31. 2025/SynCity_ Training-Free Generation of 3D Worlds/layout.json +0 -0
  32. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_content_list.json +0 -0
  33. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_model.json +0 -0
  34. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_origin.pdf +3 -0
  35. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/full.md +385 -0
  36. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/images.zip +3 -0
  37. 2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/layout.json +0 -0
  38. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_content_list.json +1813 -0
  39. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_model.json +0 -0
  40. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_origin.pdf +3 -0
  41. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/full.md +350 -0
  42. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/images.zip +3 -0
  43. 2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/layout.json +0 -0
  44. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_content_list.json +0 -0
  45. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_model.json +0 -0
  46. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_origin.pdf +3 -0
  47. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/full.md +386 -0
  48. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/images.zip +3 -0
  49. 2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/layout.json +0 -0
  50. 2025/Synchronization of Multiple Videos/af4605e7-9aa0-4a23-926a-33856d420d35_content_list.json +1441 -0
.gitattributes CHANGED
@@ -2332,3 +2332,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
2332
  2025/Supercharging[[:space:]]Floorplan[[:space:]]Localization[[:space:]]with[[:space:]]Semantic[[:space:]]Rays/834ff1eb-78d1-464a-8d20-1e5cbed96504_origin.pdf filter=lfs diff=lfs merge=lfs -text
2333
  2025/Superpowering[[:space:]]Open-Vocabulary[[:space:]]Object[[:space:]]Detectors[[:space:]]for[[:space:]]X-ray[[:space:]]Vision/60987d09-4da2-484e-8cc1-bea5f240eaf8_origin.pdf filter=lfs diff=lfs merge=lfs -text
2334
  2025/Supervised[[:space:]]Exploratory[[:space:]]Learning[[:space:]]for[[:space:]]Long-Tailed[[:space:]]Visual[[:space:]]Recognition/3fb70b61-418e-4890-be11-1e54f33e6902_origin.pdf filter=lfs diff=lfs merge=lfs -text
2335
+ 2025/SurfaceSplat_[[:space:]]Connecting[[:space:]]Surface[[:space:]]Reconstruction[[:space:]]and[[:space:]]Gaussian[[:space:]]Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2336
+ 2025/SweetTok_[[:space:]]Semantic-Aware[[:space:]]Spatial-Temporal[[:space:]]Tokenizer[[:space:]]for[[:space:]]Compact[[:space:]]Video[[:space:]]Discretization/394730b6-2240-4ddc-9556-9a00295dec81_origin.pdf filter=lfs diff=lfs merge=lfs -text
2337
+ 2025/Switch-a-View_[[:space:]]View[[:space:]]Selection[[:space:]]Learned[[:space:]]from[[:space:]]Unlabeled[[:space:]]In-the-wild[[:space:]]Videos/679bbc29-4d07-48b9-a239-1f0c15065380_origin.pdf filter=lfs diff=lfs merge=lfs -text
2338
+ 2025/SynAD_[[:space:]]Enhancing[[:space:]]Real-World[[:space:]]End-to-End[[:space:]]Autonomous[[:space:]]Driving[[:space:]]Models[[:space:]]through[[:space:]]Synthetic[[:space:]]Data[[:space:]]Integration/98f750ca-b29d-44cf-a005-9c501051463b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2339
+ 2025/SynCity_[[:space:]]Training-Free[[:space:]]Generation[[:space:]]of[[:space:]]3D[[:space:]]Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_origin.pdf filter=lfs diff=lfs merge=lfs -text
2340
+ 2025/SynFER_[[:space:]]Towards[[:space:]]Boosting[[:space:]]Facial[[:space:]]Expression[[:space:]]Recognition[[:space:]]with[[:space:]]Synthetic[[:space:]]Data/1b1abe0d-673c-4757-97e7-7262600b5102_origin.pdf filter=lfs diff=lfs merge=lfs -text
2341
+ 2025/SynTag_[[:space:]]Enhancing[[:space:]]the[[:space:]]Geometric[[:space:]]Robustness[[:space:]]of[[:space:]]Inversion-based[[:space:]]Generative[[:space:]]Image[[:space:]]Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2342
+ 2025/SyncDiff_[[:space:]]Synchronized[[:space:]]Motion[[:space:]]Diffusion[[:space:]]for[[:space:]]Multi-Body[[:space:]]Human-Object[[:space:]]Interaction[[:space:]]Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_origin.pdf filter=lfs diff=lfs merge=lfs -text
2343
+ 2025/Synchronization[[:space:]]of[[:space:]]Multiple[[:space:]]Videos/af4605e7-9aa0-4a23-926a-33856d420d35_origin.pdf filter=lfs diff=lfs merge=lfs -text
2344
+ 2025/Synchronizing[[:space:]]Task[[:space:]]Behavior_[[:space:]]Aligning[[:space:]]Multiple[[:space:]]Tasks[[:space:]]during[[:space:]]Test-Time[[:space:]]Training/60f0e363-9a39-4a78-803f-8a7a065d29c4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2345
+ 2025/Synergistic[[:space:]]Prompting[[:space:]]for[[:space:]]Robust[[:space:]]Visual[[:space:]]Recognition[[:space:]]with[[:space:]]Missing[[:space:]]Modalities/23443d08-f2d3-4457-b33f-3d22e53ffdbb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2346
+ 2025/Synthesizing[[:space:]]Near-Boundary[[:space:]]OOD[[:space:]]Samples[[:space:]]for[[:space:]]Out-of-Distribution[[:space:]]Detection/c3c0a966-f219-49a1-a5df-dd3bae8aeed7_origin.pdf filter=lfs diff=lfs merge=lfs -text
2347
+ 2025/Synthetic[[:space:]]Video[[:space:]]Enhances[[:space:]]Physical[[:space:]]Fidelity[[:space:]]in[[:space:]]Video[[:space:]]Synthesis/59f9a5d1-fc8f-43b3-a954-394a680aed8c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2348
+ 2025/T2Bs_[[:space:]]Text-to-Character[[:space:]]Blendshapes[[:space:]]via[[:space:]]Video[[:space:]]Generation/9a45a79c-5c5c-41c3-b749-b74beda00010_origin.pdf filter=lfs diff=lfs merge=lfs -text
2349
+ 2025/T2I-Copilot_[[:space:]]A[[:space:]]Training-Free[[:space:]]Multi-Agent[[:space:]]Text-to-Image[[:space:]]System[[:space:]]for[[:space:]]Enhanced[[:space:]]Prompt[[:space:]]Interpretation[[:space:]]and[[:space:]]Interactive[[:space:]]Generation/211d05eb-e5de-42fe-bf8d-1fbe51b9a0c1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2350
+ 2025/TAB_[[:space:]]Transformer[[:space:]]Attention[[:space:]]Bottlenecks[[:space:]]enable[[:space:]]User[[:space:]]Intervention[[:space:]]and[[:space:]]Debugging[[:space:]]in[[:space:]]Vision-Language[[:space:]]Models/c47e8229-4bc7-4c48-877e-ff009c7d300e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2351
+ 2025/TACO_[[:space:]]Taming[[:space:]]Diffusion[[:space:]]for[[:space:]]in-the-wild[[:space:]]Video[[:space:]]Amodal[[:space:]]Completion/9aef2cca-9deb-41e8-af6b-17686afaf49b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2352
+ 2025/TAD-E2E_[[:space:]]A[[:space:]]Large-scale[[:space:]]End-to-end[[:space:]]Autonomous[[:space:]]Driving[[:space:]]Dataset/e34dd6ef-6c84-4df1-9a84-f3853c7b546a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2353
+ 2025/TAG-WM_[[:space:]]Tamper-Aware[[:space:]]Generative[[:space:]]Image[[:space:]]Watermarking[[:space:]]via[[:space:]]Diffusion[[:space:]]Inversion[[:space:]]Sensitivity/5c418f3b-3a84-4af8-86f8-844d737b1ba9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2354
+ 2025/TAPNext_[[:space:]]Tracking[[:space:]]Any[[:space:]]Point[[:space:]](TAP)[[:space:]]as[[:space:]]Next[[:space:]]Token[[:space:]]Prediction/1570d2ea-8c3a-4e4f-8816-939792eaee46_origin.pdf filter=lfs diff=lfs merge=lfs -text
2355
+ 2025/TAR3D_[[:space:]]Creating[[:space:]]High-Quality[[:space:]]3D[[:space:]]Assets[[:space:]]via[[:space:]]Next-Part[[:space:]]Prediction/710fdc15-e281-4215-ba00-df2c8d33d122_origin.pdf filter=lfs diff=lfs merge=lfs -text
2356
+ 2025/TARO_[[:space:]]Timestep-Adaptive[[:space:]]Representation[[:space:]]Alignment[[:space:]]with[[:space:]]Onset-Aware[[:space:]]Conditioning[[:space:]]for[[:space:]]Synchronized[[:space:]]Video-to-Audio[[:space:]]Synthesis/878e391d-0d53-4519-a75e-ad1361bd9aa4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2357
+ 2025/TARS_[[:space:]]Traffic-Aware[[:space:]]Radar[[:space:]]Scene[[:space:]]Flow[[:space:]]Estimation/117d0b3b-bb15-46f7-a2e5-b92f9ed6f2e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
2358
+ 2025/TAViS_[[:space:]]Text-bridged[[:space:]]Audio-Visual[[:space:]]Segmentation[[:space:]]with[[:space:]]Foundation[[:space:]]Models/af82698e-0c3b-4eac-9384-3d90cd78e153_origin.pdf filter=lfs diff=lfs merge=lfs -text
2359
+ 2025/TCFG_[[:space:]]Truncated[[:space:]]Classifier-Free[[:space:]]Guidance[[:space:]]for[[:space:]]Efficient[[:space:]]and[[:space:]]Scalable[[:space:]]Text-to-Image[[:space:]]Acceleration/1100162b-a47e-425a-9cfc-342eb2a6a1a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2360
+ 2025/TESPEC_[[:space:]]Temporally-Enhanced[[:space:]]Self-Supervised[[:space:]]Pretraining[[:space:]]for[[:space:]]Event[[:space:]]Cameras/ac958e4a-297e-411f-8fa0-24835cab7ed2_origin.pdf filter=lfs diff=lfs merge=lfs -text
2361
+ 2025/TF-TI2I_[[:space:]]Training-Free[[:space:]]Text-and-Image-to-Image[[:space:]]Generation[[:space:]]via[[:space:]]Multi-Modal[[:space:]]Implicit-Context[[:space:]]Learning[[:space:]]In[[:space:]]Text-to-Image[[:space:]]Models/ed387077-7133-4437-b0c4-28989035bd93_origin.pdf filter=lfs diff=lfs merge=lfs -text
2362
+ 2025/TIP-I2V_[[:space:]]A[[:space:]]Million-Scale[[:space:]]Real[[:space:]]Text[[:space:]]and[[:space:]]Image[[:space:]]Prompt[[:space:]]Dataset[[:space:]]for[[:space:]]Image-to-Video[[:space:]]Generation/9814fe15-47cc-4a76-b295-d61762f914fd_origin.pdf filter=lfs diff=lfs merge=lfs -text
2363
+ 2025/TITAN-Guide_[[:space:]]Taming[[:space:]]Inference-Time[[:space:]]Alignment[[:space:]]for[[:space:]]Guided[[:space:]]Text-to-Video[[:space:]]Diffusion[[:space:]]Models/5700097a-6a0d-4b54-99b1-75c9828be4bf_origin.pdf filter=lfs diff=lfs merge=lfs -text
2364
+ 2025/TITAN_[[:space:]]Query-Token[[:space:]]based[[:space:]]Domain[[:space:]]Adaptive[[:space:]]Adversarial[[:space:]]Learning/db6df430-7dc9-4f3a-93a3-c318df96384a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2365
+ 2025/TLB-VFI_[[:space:]]Temporal-Aware[[:space:]]Latent[[:space:]]Brownian[[:space:]]Bridge[[:space:]]Diffusion[[:space:]]for[[:space:]]Video[[:space:]]Frame[[:space:]]Interpolation/eb4d704c-7bb0-44dd-80a8-10d5c4565d3a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2366
+ 2025/TOGA_[[:space:]]Temporally[[:space:]]Grounded[[:space:]]Open-Ended[[:space:]]Video[[:space:]]QA[[:space:]]with[[:space:]]Weak[[:space:]]Supervision/b6f43100-2638-4281-902e-bb9d7cbb9974_origin.pdf filter=lfs diff=lfs merge=lfs -text
2367
+ 2025/TOTP_[[:space:]]Transferable[[:space:]]Online[[:space:]]Pedestrian[[:space:]]Trajectory[[:space:]]Prediction[[:space:]]with[[:space:]]Temporal-Adaptive[[:space:]]Mamba[[:space:]]Latent[[:space:]]Diffusion/f1896b8b-1f67-4e76-b35a-1a9aca644b3f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2368
+ 2025/TPG-INR_[[:space:]]Target[[:space:]]Prior-Guided[[:space:]]Implicit[[:space:]]3D[[:space:]]CT[[:space:]]Reconstruction[[:space:]]for[[:space:]]Enhanced[[:space:]]Sparse-view[[:space:]]Imaging/69e9f8c1-06bd-4da4-a169-66ee558c851b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2369
+ 2025/TR-PTS_[[:space:]]Task-Relevant[[:space:]]Parameter[[:space:]]and[[:space:]]Token[[:space:]]Selection[[:space:]]for[[:space:]]Efficient[[:space:]]Tuning/d171e406-f183-40a1-bfdb-c56ccea215c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
2370
+ 2025/TRACE_[[:space:]]Learning[[:space:]]3D[[:space:]]Gaussian[[:space:]]Physical[[:space:]]Dynamics[[:space:]]from[[:space:]]Multi-view[[:space:]]Videos/35c39f29-bbde-4dfe-a438-43609b544789_origin.pdf filter=lfs diff=lfs merge=lfs -text
2371
+ 2025/TRCE_[[:space:]]Towards[[:space:]]Reliable[[:space:]]Malicious[[:space:]]Concept[[:space:]]Erasure[[:space:]]in[[:space:]]Text-to-Image[[:space:]]Diffusion[[:space:]]Models/1fcd7e4c-5bc9-42a5-a64f-2b909c7996cb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2372
+ 2025/TREAD_[[:space:]]Token[[:space:]]Routing[[:space:]]for[[:space:]]Efficient[[:space:]]Architecture-agnostic[[:space:]]Diffusion[[:space:]]Training/f8e97f29-5ea0-4d4b-ad7e-c79cf279dcc5_origin.pdf filter=lfs diff=lfs merge=lfs -text
2373
+ 2025/TRKT_[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Dynamic[[:space:]]Scene[[:space:]]Graph[[:space:]]Generation[[:space:]]with[[:space:]]Temporal-enhanced[[:space:]]Relation-aware[[:space:]]Knowledge[[:space:]]Transferring/7904d212-7f29-4708-9b23-841f48cb9b41_origin.pdf filter=lfs diff=lfs merge=lfs -text
2374
+ 2025/TRNAS_[[:space:]]A[[:space:]]Training-Free[[:space:]]Robust[[:space:]]Neural[[:space:]]Architecture[[:space:]]Search/c7b9475c-1445-489b-a85e-6a1917945609_origin.pdf filter=lfs diff=lfs merge=lfs -text
2375
+ 2025/TWIST[[:space:]]&[[:space:]]SCOUT_[[:space:]]Grounding[[:space:]]Multimodal[[:space:]]LLM-Experts[[:space:]]by[[:space:]]Forget-Free[[:space:]]Tuning/134e44bc-92ea-480b-962b-d0feced2636a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2376
+ 2025/Talking[[:space:]]to[[:space:]]DINO_[[:space:]]Bridging[[:space:]]Self-Supervised[[:space:]]Vision[[:space:]]Backbones[[:space:]]with[[:space:]]Language[[:space:]]for[[:space:]]Open-Vocabulary[[:space:]]Segmentation/2d318053-4e42-4a22-ab3d-3d9554363088_origin.pdf filter=lfs diff=lfs merge=lfs -text
2377
+ 2025/Taming[[:space:]]Flow[[:space:]]Matching[[:space:]]with[[:space:]]Unbalanced[[:space:]]Optimal[[:space:]]Transport[[:space:]]into[[:space:]]Fast[[:space:]]Pansharpening/3daa6182-60b7-4cc5-8a3e-4b4f5dd23c2f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2378
+ 2025/Taming[[:space:]]the[[:space:]]Untamed_[[:space:]]Graph-Based[[:space:]]Knowledge[[:space:]]Retrieval[[:space:]]and[[:space:]]Reasoning[[:space:]]for[[:space:]]MLLMs[[:space:]]to[[:space:]]Conquer[[:space:]]the[[:space:]]Unknown/072f85e2-8c40-4112-83d1-b7bcbaae60d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
2379
+ 2025/Target[[:space:]]Bias[[:space:]]Is[[:space:]]All[[:space:]]You[[:space:]]Need_[[:space:]]Zero-Shot[[:space:]]Debiasing[[:space:]]of[[:space:]]Vision-Language[[:space:]]Models[[:space:]]with[[:space:]]Bias[[:space:]]Corpus/ccbcadb7-2438-45c2-999a-7847b7b6dc5d_origin.pdf filter=lfs diff=lfs merge=lfs -text
2380
+ 2025/Task[[:space:]]Vector[[:space:]]Quantization[[:space:]]for[[:space:]]Memory-Efficient[[:space:]]Model[[:space:]]Merging/12770e03-a2a2-413c-af25-77ecbf310064_origin.pdf filter=lfs diff=lfs merge=lfs -text
2381
+ 2025/Task-Aware[[:space:]]Prompt[[:space:]]Gradient[[:space:]]Projection[[:space:]]for[[:space:]]Parameter-Efficient[[:space:]]Tuning[[:space:]]Federated[[:space:]]Class-Incremental[[:space:]]Learning/330ea2dc-46be-4eea-a815-ff2520dce576_origin.pdf filter=lfs diff=lfs merge=lfs -text
2382
+ 2025/Task-Decoupled[[:space:]]Bezier[[:space:]]Surface[[:space:]]Constraint[[:space:]]for[[:space:]]Uneven[[:space:]]Low-Light[[:space:]]Image[[:space:]]Enhancement/cf609566-93da-426b-b3e8-e870273eba8a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2383
+ 2025/Task-Oriented[[:space:]]Human[[:space:]]Grasp[[:space:]]Synthesis[[:space:]]via[[:space:]]Context-[[:space:]]and[[:space:]]Task-Aware[[:space:]]Diffusers/78084cbf-2e36-4026-8fb6-9a359db5d4a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2384
+ 2025/Task-Specific[[:space:]]Zero-shot[[:space:]]Quantization-Aware[[:space:]]Training[[:space:]]for[[:space:]]Object[[:space:]]Detection/f7694a0b-af92-4f31-9e1b-1cd427138e9c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2385
+ 2025/TaxaDiffusion_[[:space:]]Progressively[[:space:]]Trained[[:space:]]Diffusion[[:space:]]Model[[:space:]]for[[:space:]]Fine-Grained[[:space:]]Species[[:space:]]Generation/43f015a6-2b0b-4c2c-b0aa-5872afb41d47_origin.pdf filter=lfs diff=lfs merge=lfs -text
2386
+ 2025/TeEFusion_[[:space:]]Blending[[:space:]]Text[[:space:]]Embeddings[[:space:]]to[[:space:]]Distill[[:space:]]Classifier-Free[[:space:]]Guidance/7b4a5389-5346-4954-8bae-346372dd4918_origin.pdf filter=lfs diff=lfs merge=lfs -text
2387
+ 2025/TeRA_[[:space:]]Rethinking[[:space:]]Text-guided[[:space:]]Realistic[[:space:]]3D[[:space:]]Avatar[[:space:]]Generation/208df374-ff90-4c93-976c-5d7baa48a4c1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2388
+ 2025/Teaching[[:space:]]AI[[:space:]]the[[:space:]]Anatomy[[:space:]]Behind[[:space:]]the[[:space:]]Scan_[[:space:]]Addressing[[:space:]]Anatomical[[:space:]]Flaws[[:space:]]in[[:space:]]Medical[[:space:]]Image[[:space:]]Segmentation[[:space:]]with[[:space:]]Learnable[[:space:]]Prior/1d92ab61-2930-488a-9688-8a5e3d69cd5f_origin.pdf filter=lfs diff=lfs merge=lfs -text
2389
+ 2025/Teaching[[:space:]]VLMs[[:space:]]to[[:space:]]Localize[[:space:]]Specific[[:space:]]Objects[[:space:]]from[[:space:]]In-context[[:space:]]Examples/95bd80f4-1de1-433d-84f9-403f9f70c8ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
2390
+ 2025/Teeth[[:space:]]Reconstruction[[:space:]]and[[:space:]]Performance[[:space:]]Capture[[:space:]]Using[[:space:]]a[[:space:]]Phone[[:space:]]Camera/18c89b72-0459-47cb-9df0-d7f62a471bec_origin.pdf filter=lfs diff=lfs merge=lfs -text
2391
+ 2025/TeethGenerator_[[:space:]]A[[:space:]]two-stage[[:space:]]framework[[:space:]]for[[:space:]]paired[[:space:]]pre-[[:space:]]and[[:space:]]post-orthodontic[[:space:]]3D[[:space:]]dental[[:space:]]data[[:space:]]generation/b2d1d147-d6e7-49c7-973a-f45e15a0c12b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2392
+ 2025/Teleportraits_[[:space:]]Training-Free[[:space:]]People[[:space:]]Insertion[[:space:]]into[[:space:]]Any[[:space:]]Scene/e1167a33-c43a-4294-8dbf-bb5f71014f4c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2393
+ 2025/TemCoCo_[[:space:]]Temporally[[:space:]]Consistent[[:space:]]Multi-modal[[:space:]]Video[[:space:]]Fusion[[:space:]]with[[:space:]]Visual-Semantic[[:space:]]Collaboration/68c19cd2-9acb-4a05-84fd-482318ad0922_origin.pdf filter=lfs diff=lfs merge=lfs -text
2394
+ 2025/Temperature[[:space:]]in[[:space:]]Cosine-based[[:space:]]Softmax[[:space:]]Loss/88498a8a-8353-489f-9c49-393c2ca7e682_origin.pdf filter=lfs diff=lfs merge=lfs -text
2395
+ 2025/Temporal[[:space:]]Overlapping[[:space:]]Prediction_[[:space:]]A[[:space:]]Self-supervised[[:space:]]Pre-training[[:space:]]Method[[:space:]]for[[:space:]]LiDAR[[:space:]]Moving[[:space:]]Object[[:space:]]Segmentation/a26d0920-1ecc-4a62-9e6f-658f417c86a9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2396
+ 2025/Temporal[[:space:]]Rate[[:space:]]Reduction[[:space:]]Clustering[[:space:]]for[[:space:]]Human[[:space:]]Motion[[:space:]]Segmentation/f14c6fc6-9149-4e8b-a6dd-a5de071b6ce5_origin.pdf filter=lfs diff=lfs merge=lfs -text
2397
+ 2025/Temporal[[:space:]]Unlearnable[[:space:]]Examples_[[:space:]]Preventing[[:space:]]Personal[[:space:]]Video[[:space:]]Data[[:space:]]from[[:space:]]Unauthorized[[:space:]]Exploitation[[:space:]]by[[:space:]]Object[[:space:]]Tracking/fe5a296a-27d1-4006-9167-632435e6a7a3_origin.pdf filter=lfs diff=lfs merge=lfs -text
2398
+ 2025/Temporal-aware[[:space:]]Query[[:space:]]Routing[[:space:]]for[[:space:]]Real-time[[:space:]]Video[[:space:]]Instance[[:space:]]Segmentation/6956edf3-7339-4603-a3e5-6531a6c8894e_origin.pdf filter=lfs diff=lfs merge=lfs -text
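The entries above follow Git attributes pattern syntax, with spaces in paths written as the POSIX class `[[:space:]]` so each new PDF is routed through Git LFS. As a rough illustration (a hypothetical helper, not part of this commit), such lines could be generated like this:

```python
# Hypothetical helper (not part of this commit): emit a .gitattributes rule that
# tracks a path with Git LFS, escaping spaces the way the entries above do.
def lfs_rule(path: str) -> str:
    escaped = path.replace(" ", "[[:space:]]")
    return f"{escaped} filter=lfs diff=lfs merge=lfs -text"

if __name__ == "__main__":
    print(lfs_rule("2025/Synchronization of Multiple Videos/"
                   "af4605e7-9aa0-4a23-926a-33856d420d35_origin.pdf"))
```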
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_content_list.json ADDED
@@ -0,0 +1,1588 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "SurfaceSplat: Connecting Surface Reconstruction and Gaussian Splatting",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 125,
8
+ 130,
9
+ 872,
10
+ 152
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zihui Gao $^{1,3*}$ , Jia-Wang Bian $^{2*}$ , Guosheng Lin $^{3}$ , Hao Chen $^{1,\\dagger}$ , Chunhua Shen $^{1}$ $^{1}$ Zhejiang University, China \n $^{2}$ ByteDance Seed \n $^{3}$ Nanyang Technological University, Singapore \n*Equal contribution †Corresponding author",
17
+ "bbox": [
18
+ 109,
19
+ 178,
20
+ 887,
21
+ 241
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "image",
27
+ "img_path": "images/7425cd0e4e59e0429e9307924a60856a725b765284f8a001420ee39cf2007ac1.jpg",
28
+ "image_caption": [
29
+ "Figure 1. Sparse view reconstruction and rendering comparison. Left: Qualitative results from 10 images evenly sampled from a casually captured 360-degree video. Right: Quantitative analysis of 5, 10, and 20 input views, averaged across the selected 9 MobileBrick test scenes. 3DGS-based methods (e.g., GOF) achieve superior novel view rendering than SDF-based methods (e.g., Voxurf) due to their sparse representations, which capture fine details. However, SDF-based methods outperform the former in mesh reconstruction, as their dense representations better preserve global geometry. Our approach combines the strengths of both, achieving optimal performance."
30
+ ],
31
+ "image_footnote": [],
32
+ "bbox": [
33
+ 93,
34
+ 287,
35
+ 906,
36
+ 398
37
+ ],
38
+ "page_idx": 0
39
+ },
40
+ {
41
+ "type": "text",
42
+ "text": "Abstract",
43
+ "text_level": 1,
44
+ "bbox": [
45
+ 246,
46
+ 489,
47
+ 326,
48
+ 505
49
+ ],
50
+ "page_idx": 0
51
+ },
52
+ {
53
+ "type": "text",
54
+ "text": "Surface reconstruction and novel view rendering from sparse-view images are challenging. Signed Distance Function (SDF)-based methods struggle with fine details, while 3D Gaussian Splatting (3DGS)-based approaches lack global geometry coherence. We propose a novel hybrid method that combines the strengths of both approaches: SDF captures coarse geometry to enhance 3DGS-based rendering, while newly rendered images from 3DGS refine the details of SDF for accurate surface reconstruction. As a result, our method surpasses state-of-the-art approaches in surface reconstruction and novel view synthesis on the DTU and MobileBrick datasets. Code will be released at: https://github.com/aim-uofa/SurfaceSplat.",
55
+ "bbox": [
56
+ 88,
57
+ 523,
58
+ 485,
59
+ 720
60
+ ],
61
+ "page_idx": 0
62
+ },
63
+ {
64
+ "type": "text",
65
+ "text": "1. Introduction",
66
+ "text_level": 1,
67
+ "bbox": [
68
+ 89,
69
+ 753,
70
+ 220,
71
+ 768
72
+ ],
73
+ "page_idx": 0
74
+ },
75
+ {
76
+ "type": "text",
77
+ "text": "3D reconstruction from multi-view images is a core problem in computer vision with applications in virtual reality, robotics, and autonomous driving. Recent advances in Neural Radiance Fields (NeRF) [25] and 3D Gaussian Splatting (3DGS) [17] have significantly advanced the field. However, their performance degrades under sparse-view conditions, a common real-world challenge. This paper tackles sparse-view reconstruction to bridge this gap. Unlike ap-",
78
+ "bbox": [
79
+ 88,
80
+ 779,
81
+ 482,
82
+ 902
83
+ ],
84
+ "page_idx": 0
85
+ },
86
+ {
87
+ "type": "text",
88
+ "text": "proaches that leverage generative models [10, 39, 40, 43] or learn geometry priors through large-scale pretraining [6, 21, 29, 49], we focus on identifying the optimal 3D representations for surface reconstruction and novel view synthesis.",
89
+ "bbox": [
90
+ 511,
91
+ 491,
92
+ 906,
93
+ 551
94
+ ],
95
+ "page_idx": 0
96
+ },
97
+ {
98
+ "type": "text",
99
+ "text": "Surface reconstruction methods primarily use the Signed Distance Function (SDF) or 3DGS-based representations. Here, SDF-based approaches, such as NeuS [36] and Voxsurf [41], model scene geometry continuously with dense representations and optimize them via differentiable volume rendering [25]. In contrast, 3DGS-based methods like GOF [53] and 2DGS [15] leverage a pre-computed sparse point cloud for image rendering and progressively densify and refine it through differentiable rasterization. Due to their dense representations, SDF-based methods capture global structures well but lack fine details, while the sparse nature of 3DGS-based methods enables high-frequency detail preservation but compromises global coherence. As a result, both approaches struggle with poor reconstruction quality under sparse-view conditions. Typically, SDF-based methods outperform 3DGS in surface reconstruction, while 3DGS excels in image rendering, as illustrated in Fig. 1.",
100
+ "bbox": [
101
+ 509,
102
+ 551,
103
+ 908,
104
+ 809
105
+ ],
106
+ "page_idx": 0
107
+ },
108
+ {
109
+ "type": "text",
110
+ "text": "Recognizing the complementary strengths of SDF-based (dense) and 3DGS-based (sparse) representations, we propose a novel hybrid approach, SurfaceSplat, as illustrated in Fig. 2. Our method is built on two key ideas: (i) SDF for Improved 3DGS: To address the limitation of 3DGS in learning global geometry, we first fit the global struc",
111
+ "bbox": [
112
+ 509,
113
+ 810,
114
+ 908,
115
+ 901
116
+ ],
117
+ "page_idx": 0
118
+ },
119
+ {
120
+ "type": "header",
121
+ "text": "CVF",
122
+ "bbox": [
123
+ 106,
124
+ 2,
125
+ 181,
126
+ 42
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "header",
132
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
133
+ "bbox": [
134
+ 238,
135
+ 0,
136
+ 807,
137
+ 46
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "page_number",
143
+ "text": "28525",
144
+ "bbox": [
145
+ 478,
146
+ 944,
147
+ 517,
148
+ 955
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "text",
154
+ "text": "ture using an SDF-based representation, rapidly generating a smooth yet coarse mesh. We then initialize 3DGS by sampling point clouds from the mesh surface, ensuring global consistency while allowing 3DGS to refine fine details during training. (ii) 3DGS for Enhanced SDF: To compensate for the inability of SDF-based methods to capture fine details under sparse-view settings, we leverage the improved 3DGS from the first step to render additional novel viewpoint images, expanding the dataset. This enriched supervision helps the SDF-based method learn finer structural details, leading to improved reconstruction quality.",
155
+ "bbox": [
156
+ 89,
157
+ 90,
158
+ 480,
159
+ 256
160
+ ],
161
+ "page_idx": 1
162
+ },
163
+ {
164
+ "type": "text",
165
+ "text": "We conduct experiments on two real-world datasets, DTU [16] and MobileBrick [19]. Our method, SurfaceSplat, achieves state-of-the-art performance in sparse-view novel view rendering and 3D mesh reconstruction. In summary, we make the following contributions:",
166
+ "bbox": [
167
+ 89,
168
+ 257,
169
+ 480,
170
+ 333
171
+ ],
172
+ "page_idx": 1
173
+ },
174
+ {
175
+ "type": "list",
176
+ "sub_type": "text",
177
+ "list_items": [
178
+ "- We propose SurfaceSplat, which synergistically combines the strengths of SDF-based and 3DGS-based representations to achieve optimal global geometry preservation while capturing fine local details.",
179
+ "- We conducted a comprehensive evaluation and ablations on DTU and MobileBrick datasets. SurfaceSplat achieves state-of-the-art performance in novel view synthesis and mesh reconstruction under sparse-view conditions."
180
+ ],
181
+ "bbox": [
182
+ 89,
183
+ 335,
184
+ 480,
185
+ 455
186
+ ],
187
+ "page_idx": 1
188
+ },
189
+ {
190
+ "type": "text",
191
+ "text": "2. Related work",
192
+ "text_level": 1,
193
+ "bbox": [
194
+ 89,
195
+ 473,
196
+ 227,
197
+ 489
198
+ ],
199
+ "page_idx": 1
200
+ },
201
+ {
202
+ "type": "text",
203
+ "text": "2.1. Novel View Synthesis from Sparse Inputs",
204
+ "text_level": 1,
205
+ "bbox": [
206
+ 89,
207
+ 498,
208
+ 442,
209
+ 513
210
+ ],
211
+ "page_idx": 1
212
+ },
213
+ {
214
+ "type": "text",
215
+ "text": "Neural Radiance Fields (NeRFs)-based methods [2, 3, 5, 7, 12, 25, 26, 33, 35, 45, 55] have revolutionized novel view synthesis with implicit neural representations, and 3DGS-based methods [17, 23, 32, 34, 44, 47, 52] enable efficient training and real-time rendering through explicit 3D point clouds. However, both approaches suffer from performance degradation in sparse-view settings. To address this issue, recent methods have explored generative models [10, 39, 40, 43] or leveraged large-scale training to learn geometric priors [6, 21, 29, 49]. Unlike these approaches, we argue that the key challenge lies in the lack of effective geometric initialization for 3DGS. To overcome this, we investigate how neural surface reconstruction methods can enhance its performance.",
216
+ "bbox": [
217
+ 89,
218
+ 520,
219
+ 482,
220
+ 733
221
+ ],
222
+ "page_idx": 1
223
+ },
224
+ {
225
+ "type": "text",
226
+ "text": "2.2. Neural Surface Reconstruction",
227
+ "text_level": 1,
228
+ "bbox": [
229
+ 89,
230
+ 742,
231
+ 364,
232
+ 757
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "text",
238
+ "text": "SDF-based methods, such as NeuS [36], VolSDF [46], Neuralangelo [20], and PoRF [4] use dense neural representations and differentiable volume rendering to achieve high-quality reconstructions with 3D supervision. However, they suffer from long optimization times and require dense viewpoint images. Recent methods, such as 2DGS [15] and GOF [53], extend 3DGS [17] by leveraging modified Gaussians and depth correction to accelerate geometry extraction. While 3DGS-based methods [1, 8, 13, 15, 18, 37, 53,",
239
+ "bbox": [
240
+ 89,
241
+ 763,
242
+ 482,
243
+ 900
244
+ ],
245
+ "page_idx": 1
246
+ },
247
+ {
248
+ "type": "text",
249
+ "text": "[54] excel at capturing fine local details, their sparse representations struggle to maintain global geometry, leading to incomplete and fragmented reconstructions. This paper focuses on integrating the strengths of both representations to achieve optimal neural surface reconstruction.",
250
+ "bbox": [
251
+ 511,
252
+ 90,
253
+ 903,
254
+ 167
255
+ ],
256
+ "page_idx": 1
257
+ },
258
+ {
259
+ "type": "text",
260
+ "text": "2.3. Combing 3DGS and SDF",
261
+ "text_level": 1,
262
+ "bbox": [
263
+ 511,
264
+ 178,
265
+ 743,
266
+ 193
267
+ ],
268
+ "page_idx": 1
269
+ },
270
+ {
271
+ "type": "text",
272
+ "text": "Several recent approaches have integrated SDF-based [27, 28] and 3DGS-based representations to improve surface reconstruction. NeuSG [9] and GSDF [50] jointly optimize SDF and 3DGS, enforcing geometric consistency (e.g., depths and normals) to improve surface detail [14]. Similarly, 3DGSR [24] combines SDF values with Gaussian opacity in a joint optimization framework for better geometry. While effective in dense-view settings, these methods struggle to reconstruct high-quality structures under sparse-view conditions, as shown in our experiments in Sec. 4. Our approach specifically targets sparse-view scenarios by leveraging a complementary structure to enhance both rendering and reconstruction quality.",
273
+ "bbox": [
274
+ 511,
275
+ 200,
276
+ 906,
277
+ 398
278
+ ],
279
+ "page_idx": 1
280
+ },
281
+ {
282
+ "type": "text",
283
+ "text": "3. Method",
284
+ "text_level": 1,
285
+ "bbox": [
286
+ 511,
287
+ 412,
288
+ 604,
289
+ 428
290
+ ],
291
+ "page_idx": 1
292
+ },
293
+ {
294
+ "type": "text",
295
+ "text": "Our method takes sparse viewpoint images with camera poses as input, aiming to reconstruct 3D geometry and color for novel view synthesis and mesh extraction. Fig. 2 provides an overview of SurfaceSplat. In the following sections, we first introduce the preliminaries in Sec. 3.1, then explain how SDF-based mesh reconstruction improves 3DGS for novel view synthesis in Sec. 3.2, and finally describe how 3DGS-based rendering enhances SDF-based surface reconstruction quality in Sec. 3.3.",
296
+ "bbox": [
297
+ 511,
298
+ 439,
299
+ 906,
300
+ 575
301
+ ],
302
+ "page_idx": 1
303
+ },
304
+ {
305
+ "type": "text",
306
+ "text": "3.1. Preliminaries",
307
+ "text_level": 1,
308
+ "bbox": [
309
+ 511,
310
+ 587,
311
+ 653,
312
+ 602
313
+ ],
314
+ "page_idx": 1
315
+ },
316
+ {
317
+ "type": "text",
318
+ "text": "SDF-based representation. NeuS [36] proposes to model scene coordinates as signed distance function (SDF) values and optimize using differentiable volume rendering, similar to NeRF [25]. After optimization, object surfaces are extracted using the marching cubes algorithm [22]. To render a pixel, a ray is cast from the camera center $o$ through the pixel along the viewing direction $v$ as $\\{p(t) = o + tv|t\\geq 0\\}$ , and the pixel color is computed by integrating $N$ sampled points along the ray $\\{p_i = o + t_iv|i = 1,\\dots,N,t_i < t_{i + 1}\\}$ using volume rendering:",
319
+ "bbox": [
320
+ 511,
321
+ 609,
322
+ 905,
323
+ 761
324
+ ],
325
+ "page_idx": 1
326
+ },
327
+ {
328
+ "type": "equation",
329
+ "text": "\n$$\n\\hat {C} (r) = \\sum_ {i = 1} ^ {N} T _ {i} \\alpha_ {i} c _ {i}, T _ {i} = \\prod_ {j = 1} ^ {i - 1} (1 - \\alpha_ {j}), \\qquad (1)\n$$\n",
330
+ "text_format": "latex",
331
+ "bbox": [
332
+ 575,
333
+ 773,
334
+ 906,
335
+ 818
336
+ ],
337
+ "page_idx": 1
338
+ },
339
+ {
340
+ "type": "text",
341
+ "text": "where $\\alpha_{i}$ represents opacity and $T_{i}$ is the accumulated transmittance. It is computed as:",
342
+ "bbox": [
343
+ 511,
344
+ 828,
345
+ 903,
346
+ 859
347
+ ],
348
+ "page_idx": 1
349
+ },
350
+ {
351
+ "type": "equation",
352
+ "text": "\n$$\n\\alpha_ {i} = \\max \\left(\\frac {\\Phi_ {s} (f (p (t _ {i}))) - \\Phi_ {s} (f (p (t _ {i + 1})))}{\\Phi_ {s} (f (p (t _ {i})))}, 0\\right), \\quad (2)\n$$\n",
353
+ "text_format": "latex",
354
+ "bbox": [
355
+ 531,
356
+ 869,
357
+ 906,
358
+ 906
359
+ ],
360
+ "page_idx": 1
361
+ },
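As a hedged aside, the volume-rendering weights in Eqs. (1)-(2) can be sketched as follows, assuming NumPy and treating Phi_s as the sigmoid with learned sharpness s (defined in the following paragraph); all names are illustrative:

```python
import numpy as np

# Rough sketch of Eqs. (1)-(2): NeuS-style alpha values and transmittance along one ray.
# sdf_vals: (N+1,) SDF samples f(p(t_i)); colors: (N, 3) per-sample colors; s: sharpness.
def render_ray_color(sdf_vals, colors, s):
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))                          # Phi_s(f(p(t_i)))
    alpha = np.maximum((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0)  # Eq. (2)
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))          # T_i = prod_{j<i}(1 - alpha_j)
    return np.sum((T * alpha)[:, None] * colors, axis=0)               # Eq. (1): sum_i T_i alpha_i c_i
```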
362
+ {
363
+ "type": "page_number",
364
+ "text": "28526",
365
+ "bbox": [
366
+ 478,
367
+ 945,
368
+ 519,
369
+ 955
370
+ ],
371
+ "page_idx": 1
372
+ },
373
+ {
374
+ "type": "image",
375
+ "img_path": "images/864fc9a4b81b720525d6915e06f47ec878f368449e2c2f764959287c3cf100f3.jpg",
376
+ "image_caption": [
377
+ "Figure 2. Overview of the proposed SurfaceSplat. (A) We reconstruct a coarse mesh using an SDF-based representation. (B) Point clouds are sampled from the mesh surface to initialize 3DGS. (C) 3DGS renders new viewpoint images to expand the training set, refining the mesh. (D) Steps B and C can be repeated for iterative optimization, progressively improving performance."
378
+ ],
379
+ "image_footnote": [],
380
+ "bbox": [
381
+ 133,
382
+ 89,
383
+ 867,
384
+ 460
385
+ ],
386
+ "page_idx": 2
387
+ },
388
+ {
389
+ "type": "text",
390
+ "text": "where $f(x)$ is the SDF function and $\\Phi_s(x) = (1 + e^{-sx})^{-1}$ is the Sigmoid function, with $s$ learned during training. Based on this, Voxurf [41] proposes a hybrid representation that combines a voxel grid with a shallow MLP to reconstruct the implicit SDF field. In the coarse stage, Voxurf [41] optimizes for a better overall shape by using 3D convolution and interpolation to estimate SDF values. In the fine stage, it increases the voxel grid resolution and employs a dual-color MLP architecture, consisting of two networks: $g_{geo}$ , which takes hierarchical geometry features as input, and $g_{feat}$ , which receives local features from $V^{(\\mathrm{feat})}$ along with surface normals. We incorporate Voxurf in this work due to its effective balance between accuracy and efficiency.",
391
+ "bbox": [
392
+ 88,
393
+ 537,
394
+ 486,
395
+ 739
396
+ ],
397
+ "page_idx": 2
398
+ },
399
+ {
400
+ "type": "text",
401
+ "text": "3DGS-based representation. 3DGS [17] models a set of 3D Gaussians to represent the scene, which is similar to point clouds. Each Gaussian ellipse has a color and an opacity and is defined by its centered position $x$ (mean), and a full covariance matrix $\\Sigma: G(x) = e^{-\\frac{1}{2} x^T \\Sigma^{-1} x}$ . When projecting 3D Gaussians to 2D for rendering, the splattering method is used to position the Gaussians on 2D planes, which involves a new covariance matrix $\\Sigma'$ in camera coordinates defined as: $\\Sigma' = J W \\Sigma W^T J^T$ , where $W$ denotes",
402
+ "bbox": [
403
+ 88,
404
+ 762,
405
+ 486,
406
+ 902
407
+ ],
408
+ "page_idx": 2
409
+ },
410
+ {
411
+ "type": "text",
412
+ "text": "a given viewing transformation matrix and $J$ is the Jacobian of the affine approximation of the projective transformation. To enable differentiable optimization, $\\Sigma$ is further decomposed into a scaling matrix $S$ and a rotation matrix $R$ : $\\Sigma = R S T^T R^T$ .",
413
+ "bbox": [
414
+ 511,
415
+ 539,
416
+ 908,
417
+ 616
418
+ ],
419
+ "page_idx": 2
420
+ },
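To make the covariance algebra above concrete, here is a minimal sketch (illustrative names, assuming NumPy) of building Sigma = R S S^T R^T and projecting it with Sigma' = J W Sigma W^T J^T:

```python
import numpy as np

# Illustrative sketch of the 3DGS covariance construction and projection (Sec. 3.1).
def gaussian_covariance(R: np.ndarray, scales: np.ndarray) -> np.ndarray:
    S = np.diag(scales)                # scaling matrix from per-axis scales
    return R @ S @ S.T @ R.T           # Sigma = R S S^T R^T

def project_covariance(Sigma: np.ndarray, W: np.ndarray, J: np.ndarray) -> np.ndarray:
    return J @ W @ Sigma @ W.T @ J.T   # Sigma' = J W Sigma W^T J^T
```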
421
+ {
422
+ "type": "text",
423
+ "text": "3.2. SDF for Improved 3DGS",
424
+ "text_level": 1,
425
+ "bbox": [
426
+ 511,
427
+ 626,
428
+ 743,
429
+ 643
430
+ ],
431
+ "page_idx": 2
432
+ },
433
+ {
434
+ "type": "text",
435
+ "text": "3DGS [17] typically initializes with sparse point clouds estimated by COLMAP [31], which are often inaccurate or missing in low-texture or little over-lapping regions. To address this, we propose initializing 3DGS by uniformly sampling points from a mesh surface derived from a SDF representation, ensuring high-quality novel view rendering while preserving global geometry. Below, we detail our proposed method for mesh reconstruction, mesh cleaning, and point cloud sampling. A visual example of the reconstructed meshes and sampled points is shown in Fig. 3.",
436
+ "bbox": [
437
+ 511,
438
+ 648,
439
+ 908,
440
+ 801
441
+ ],
442
+ "page_idx": 2
443
+ },
444
+ {
445
+ "type": "text",
446
+ "text": "Coarse mesh reconstruction. Given $M$ sparse images $\\{\\mathcal{I}\\}$ and their camera poses $\\{\\pi\\}$ , our objective is to reconstruct a 3D surface for sampling points. As our focus is on robust global geometry rather than highly accurate surfaces, and to ensure efficient mesh reconstruction,",
447
+ "bbox": [
448
+ 511,
449
+ 825,
450
+ 911,
451
+ 902
452
+ ],
453
+ "page_idx": 2
454
+ },
455
+ {
456
+ "type": "page_number",
457
+ "text": "28527",
458
+ "bbox": [
459
+ 478,
460
+ 944,
461
+ 521,
462
+ 957
463
+ ],
464
+ "page_idx": 2
465
+ },
466
+ {
467
+ "type": "image",
468
+ "img_path": "images/3428cfe527d2a9f41057c4af43ab41f4c9840de87f526892c615e41d3ccf082c.jpg",
469
+ "image_caption": [
470
+ "(a) Reference image"
471
+ ],
472
+ "image_footnote": [],
473
+ "bbox": [
474
+ 94,
475
+ 98,
476
+ 194,
477
+ 183
478
+ ],
479
+ "page_idx": 3
480
+ },
481
+ {
482
+ "type": "image",
483
+ "img_path": "images/81bcf7aba56ce80e7c1ca18ad8c5d6d7aee7aa7954d88388b3237dc60bf1fe08.jpg",
484
+ "image_caption": [
485
+ "(b) Coarse mesh"
486
+ ],
487
+ "image_footnote": [],
488
+ "bbox": [
489
+ 205,
490
+ 88,
491
+ 333,
492
+ 181
493
+ ],
494
+ "page_idx": 3
495
+ },
496
+ {
497
+ "type": "image",
498
+ "img_path": "images/9f735c3e405e2761b225386571feeab7ceaa3298eb1810892bca9b79a74e503d.jpg",
499
+ "image_caption": [
500
+ "(c) Coarse mesh w/ normal"
501
+ ],
502
+ "image_footnote": [],
503
+ "bbox": [
504
+ 344,
505
+ 88,
506
+ 467,
507
+ 181
508
+ ],
509
+ "page_idx": 3
510
+ },
511
+ {
512
+ "type": "image",
513
+ "img_path": "images/12d82b3a567f559e418c56b5a6388ced39cee9ea44f7c5428b6777fb745271c0.jpg",
514
+ "image_caption": [
515
+ "(d) Coarse mesh w/ normal and clean"
516
+ ],
517
+ "image_footnote": [],
518
+ "bbox": [
519
+ 101,
520
+ 207,
521
+ 192,
522
+ 273
523
+ ],
524
+ "page_idx": 3
525
+ },
526
+ {
527
+ "type": "image",
528
+ "img_path": "images/d9169facfb9cbac91ae540a6147773f84038626207e2cedbc22dd241b6ff396d.jpg",
529
+ "image_caption": [
530
+ "(e) Color points sampling"
531
+ ],
532
+ "image_footnote": [],
533
+ "bbox": [
534
+ 220,
535
+ 207,
536
+ 313,
537
+ 273
538
+ ],
539
+ "page_idx": 3
540
+ },
541
+ {
542
+ "type": "image",
543
+ "img_path": "images/9177d658f6f1962ff6008441838d00a44f3495d0d3fa2b6a562aabdc09fe62cb.jpg",
544
+ "image_caption": [
545
+ "(f) COLMAP sparse points",
546
+ "Figure 3. Visualization of our mesh reconstruction, cleaning, and point sampling. (b) Naïve coarse mesh reconstruction following Voxurf [41]. (c) Coarse mesh reconstructed with our proposed normal loss, reducing floaters. (d) Post-processed mesh with both normal loss and our cleaning methods. (e) Our sampled point clouds used for initializing 3DGS. (f) COLMAP-estimated point clouds, typically used for 3DGS initialization."
547
+ ],
548
+ "image_footnote": [],
549
+ "bbox": [
550
+ 333,
551
+ 208,
552
+ 478,
553
+ 280
554
+ ],
555
+ "page_idx": 3
556
+ },
557
+ {
558
+ "type": "text",
559
+ "text": "we adopt the coarse-stage surface reconstruction from Vox-urf [41]. Specifically, we use a grid-based SDF representation $V^{(\\mathrm{sdf})}$ for efficient mesh reconstruction. For each sampled 3D point $\\mathbf{x} \\in \\mathbb{R}^3$ , the grid outputs the corresponding SDF value: $V^{(\\mathrm{sdf})}: \\mathbb{R}^3 \\to \\mathbb{R}$ . We use differentiable volume rendering to render image pixels $\\hat{C}(r)$ and employs image reconstruction loss to supervise. The loss function $\\mathcal{L}$ is formulated as:",
560
+ "bbox": [
561
+ 89,
562
+ 425,
563
+ 483,
564
+ 545
565
+ ],
566
+ "page_idx": 3
567
+ },
568
+ {
569
+ "type": "equation",
570
+ "text": "\n$$\n\\mathcal {L} = \\mathcal {L} _ {\\text {r e c o n}} + \\mathcal {L} _ {T V} \\left(V ^ {(\\mathrm {s d f})}\\right) + \\mathcal {L} _ {\\text {s m o o t h}} \\left(\\nabla V ^ {(\\mathrm {s d f})}\\right), \\tag {3}\n$$\n",
571
+ "text_format": "latex",
572
+ "bbox": [
573
+ 104,
574
+ 549,
575
+ 482,
576
+ 574
577
+ ],
578
+ "page_idx": 3
579
+ },
580
+ {
581
+ "type": "text",
582
+ "text": "where the reconstruction loss $\\mathcal{L}_{\\mathrm{recon}}$ calculates photometric image rendering loss, originating from both the $g_{geo}$ and $g_{feat}$ branches. The $\\mathcal{L}_{TV}$ encourages a continuous and compact geometry, while the smoothness regularization $\\mathcal{L}_{\\mathrm{smooth}}$ promotes local smoothness of the geometric surface. We refer to Voxurf [41] for the detailed implementation of the loss functions. The coarse reconstruction typically completes in 15 minutes in our experiments.",
583
+ "bbox": [
584
+ 89,
585
+ 579,
586
+ 482,
587
+ 699
588
+ ],
589
+ "page_idx": 3
590
+ },
591
+ {
592
+ "type": "text",
593
+ "text": "Due to the limited number of training views, the learned grid often exhibits floating artifacts, as shown in Fig. 3 (b), which leads to incorrect point sampling. To mitigate this, we introduce a normal consistency loss to improve training stability, effectively reducing floaters and smoothing the geometric surface. Our approach leverages the predicted monocular surface normal $\\hat{N} (\\mathbf{r})$ from the Metric3D model [48] to supervise the volume-rendered normal $\\bar{N} (\\mathbf{r})$ in the same coordinate system. The formulation is:",
594
+ "bbox": [
595
+ 89,
596
+ 700,
597
+ 482,
598
+ 835
599
+ ],
600
+ "page_idx": 3
601
+ },
602
+ {
603
+ "type": "equation",
604
+ "text": "\n$$\n\\mathcal {L} _ {\\text {n o r m a l}} = \\sum \\left(\\| \\hat {N} (\\mathbf {r}) - \\bar {N} (\\mathbf {r}) \\| _ {1}\\right). \\tag {4}\n$$\n",
605
+ "text_format": "latex",
606
+ "bbox": [
607
+ 169,
608
+ 843,
609
+ 482,
610
+ 864
611
+ ],
612
+ "page_idx": 3
613
+ },
614
+ {
615
+ "type": "text",
616
+ "text": "We integrate this loss with Eqn. (3) during training to effectively remove floaters. Fig. 3 (c) shows a coarse mesh re",
617
+ "bbox": [
618
+ 89,
619
+ 869,
620
+ 482,
621
+ 901
622
+ ],
623
+ "page_idx": 3
624
+ },
625
+ {
626
+ "type": "text",
627
+ "text": "constructed with the normal loss, demonstrating improved surface smoothness and reduced artifacts.",
628
+ "bbox": [
629
+ 511,
630
+ 90,
631
+ 906,
632
+ 119
633
+ ],
634
+ "page_idx": 3
635
+ },
636
+ {
637
+ "type": "text",
638
+ "text": "Mesh cleaning. Even though the proposed normal loss significantly reduces floaters, some still persist, adding noise to the subsequent 3DGS initialization. To mitigate this, we apply a mesh cleaning step that refines the coarse mesh by removing non-main components. Specifically, we first use Marching Cube algorithm [38] to extract triangle mesh $\\mathcal{M} = (\\mathcal{V},\\mathcal{F})$ from SDF grid $V^{(\\mathrm{sdf})}$ . Then we cluster the connected mesh triangles to $\\{\\mathcal{F}_i\\}$ , identify the largest cluster index: $|\\mathcal{F}_{i_{\\max}}| = \\max (|\\mathcal{F}_i|)$ and get remove parts",
639
+ "bbox": [
640
+ 511,
641
+ 141,
642
+ 906,
643
+ 279
644
+ ],
645
+ "page_idx": 3
646
+ },
647
+ {
648
+ "type": "equation",
649
+ "text": "\n$$\n\\mathcal {F} _ {\\text {r e m o v e}} = \\{f \\in \\mathcal {F} \\mid f \\notin \\mathcal {F} _ {i _ {\\max }} \\}. \\tag {5}\n$$\n",
650
+ "text_format": "latex",
651
+ "bbox": [
652
+ 604,
653
+ 290,
654
+ 906,
655
+ 308
656
+ ],
657
+ "page_idx": 3
658
+ },
659
+ {
660
+ "type": "text",
661
+ "text": "Finally, we filter the floaters $\\mathcal{F}_{\\mathrm{remove}}$ from $\\mathcal{M}$ , resulting in $\\mathcal{M}_1 = \\mathcal{M} \\setminus \\mathcal{F}_{\\mathrm{remove}}$ . Fig. 3 (d) illustrates the refined mesh after applying our cleaning method.",
662
+ "bbox": [
663
+ 511,
664
+ 319,
665
+ 906,
666
+ 364
667
+ ],
668
+ "page_idx": 3
669
+ },
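One possible implementation of this largest-component filtering, assuming Open3D's TriangleMesh API (a sketch, not the authors' code):

```python
import numpy as np
import open3d as o3d

# Sketch of Eq. (5): keep only the largest cluster of connected triangles.
def keep_largest_component(mesh: o3d.geometry.TriangleMesh) -> o3d.geometry.TriangleMesh:
    clusters, cluster_sizes, _ = mesh.cluster_connected_triangles()
    clusters = np.asarray(clusters)
    largest = int(np.argmax(np.asarray(cluster_sizes)))
    mesh.remove_triangles_by_mask(clusters != largest)   # F_remove: triangles outside the largest cluster
    mesh.remove_unreferenced_vertices()
    return mesh
```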
670
+ {
671
+ "type": "text",
672
+ "text": "Sampling surface points for 3DGS. Since the mesh obtained from Marching Cubes includes regions that are invisible from the training views, directly sampling points from the mesh surface can introduce noise into 3DGS. To mitigate this, we propose a depth-based sampling strategy. First, we project the reconstructed mesh onto the training views using their known camera poses to generate depth maps $\\{\\mathcal{D}\\}$ . Since these depth maps originate from a 3D mesh, they maintain multi-view consistency. We then randomly sample points from valid depth regions, ensuring they correspond to visible object surfaces. The sampled pixels $(u,v)$ , along with their depth values $d(u,v)$ , are back-projected to colorized 3D points $\\mathbf{P} = \\{(x_i,y_i,z_i) \\mid i = 1,2,\\dots,N\\}$ using the following formulation:",
673
+ "bbox": [
674
+ 511,
675
+ 385,
676
+ 906,
677
+ 597
678
+ ],
679
+ "page_idx": 3
680
+ },
681
+ {
682
+ "type": "equation",
683
+ "text": "\n$$\n\\left[ \\begin{array}{l l l} x _ {i} & y _ {i} & z _ {i} \\end{array} \\right] = \\boldsymbol {\\pi} _ {\\boldsymbol {k}} \\mathbf {K} ^ {- 1} \\left[ \\begin{array}{l l l} d \\cdot u & d \\cdot v & d \\end{array} \\right] ^ {T}. \\tag {6}\n$$\n",
684
+ "text_format": "latex",
685
+ "bbox": [
686
+ 560,
687
+ 608,
688
+ 905,
689
+ 630
690
+ ],
691
+ "page_idx": 3
692
+ },
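A compact sketch of Eq. (6), assuming NumPy, a 3x3 intrinsics matrix K, and a 4x4 camera-to-world pose standing in for pi_k (names are illustrative):

```python
import numpy as np

# Sketch of Eq. (6): back-project a pixel (u, v) with depth d to a 3D point in world coordinates.
def backproject(u: float, v: float, d: float, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    p_cam = np.linalg.inv(K) @ np.array([d * u, d * v, d])   # K^{-1} [d*u, d*v, d]^T
    p_world = cam_to_world @ np.append(p_cam, 1.0)           # apply the pose pi_k in homogeneous form
    return p_world[:3]
```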
693
+ {
694
+ "type": "text",
695
+ "text": "This approach ensures that the sampled points are uniformly distributed on the object's surface while remaining visible in the training views, leading to a more stable and accurate 3DGS initialization. As our reconstructed mesh primarily covers foreground regions, we combine our sampled point cloud with COLMAP sparse points when rendering background regions, serving as the initialization for 3DGS. Fig. 3 (e) and (f) illustrate our sampled point clouds and COLMAP-estimated point clouds, respectively.",
696
+ "bbox": [
697
+ 511,
698
+ 641,
699
+ 906,
700
+ 777
701
+ ],
702
+ "page_idx": 3
703
+ },
704
+ {
705
+ "type": "text",
706
+ "text": "3.3. 3DGS for Enhanced SDF",
707
+ "text_level": 1,
708
+ "bbox": [
709
+ 511,
710
+ 787,
711
+ 741,
712
+ 801
713
+ ],
714
+ "page_idx": 3
715
+ },
716
+ {
717
+ "type": "text",
718
+ "text": "We argue that the primary bottleneck for SDF-based mesh reconstruction is insufficient supervision due to limited training views. To address this, we generate additional novel viewpoint images using a 3DGS-based method and combine them with the original sparse views to enhance the training of SDF-based reconstruction.",
719
+ "bbox": [
720
+ 511,
721
+ 809,
722
+ 905,
723
+ 900
724
+ ],
725
+ "page_idx": 3
726
+ },
727
+ {
728
+ "type": "page_number",
729
+ "text": "28528",
730
+ "bbox": [
731
+ 478,
732
+ 944,
733
+ 517,
734
+ 955
735
+ ],
736
+ "page_idx": 3
737
+ },
738
+ {
739
+ "type": "image",
740
+ "img_path": "images/65e865b72c212a0c00d0b6705503acca33ae5b7c051a666a248d7b0f51cd2080.jpg",
741
+ "image_caption": [
742
+ "(a) Camera position perturbation",
743
+ "Figure 4. Top-view visualization of pose expansion strategies."
744
+ ],
745
+ "image_footnote": [],
746
+ "bbox": [
747
+ 96,
748
+ 89,
749
+ 279,
750
+ 231
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "image",
756
+ "img_path": "images/a0b7eb9f94a07d21f7cb812f5f22d13f3f67e2f67a8f85c864345d64896ce5f3.jpg",
757
+ "image_caption": [
758
+ "(b) Camera poses interpolation"
759
+ ],
760
+ "image_footnote": [],
761
+ "bbox": [
762
+ 305,
763
+ 90,
764
+ 488,
765
+ 229
766
+ ],
767
+ "page_idx": 4
768
+ },
769
+ {
770
+ "type": "text",
771
+ "text": "Rendering novel viewpoint images. We utilize the improved 3DGS, initialized with our proposed mesh-based point sampling method, to render images. Thanks to our robust and dense point initialization, the 3D Gaussian $\\mathcal{G}$ can converge after $7k$ iterations in just 5 minutes, yielding $\\mathcal{G} = f(\\mathbf{P},\\{I\\},\\{\\pi\\})$ . Given new camera poses $\\{\\pi_{\\mathrm{new}}\\}$ , the 3D Gaussian $\\mathcal{G}$ can be projected to generate novel-view images as follows:",
772
+ "bbox": [
773
+ 89,
774
+ 308,
775
+ 483,
776
+ 429
777
+ ],
778
+ "page_idx": 4
779
+ },
780
+ {
781
+ "type": "equation",
782
+ "text": "\n$$\n\\{\\mathcal {I} _ {\\text {n e w}} \\} = \\operatorname {S p l a t} (\\mathcal {G}, \\left\\{\\pi_ {\\text {n e w}} \\right\\}). \\tag {7}\n$$\n",
783
+ "text_format": "latex",
784
+ "bbox": [
785
+ 194,
786
+ 440,
787
+ 480,
788
+ 458
789
+ ],
790
+ "page_idx": 4
791
+ },
792
+ {
793
+ "type": "text",
794
+ "text": "The newly rendered images $\\{\\mathcal{I}_{\\mathrm{new}}\\}$ are combined with the input images $\\{\\mathcal{I}\\}$ to train the SDF-based mesh reconstruction. The key challenge lies in selecting new camera viewpoints $\\{\\pi_{\\mathrm{new}}\\}$ that best enhance surface reconstruction:",
795
+ "bbox": [
796
+ 89,
797
+ 468,
798
+ 483,
799
+ 529
800
+ ],
801
+ "page_idx": 4
802
+ },
803
+ {
804
+ "type": "equation",
805
+ "text": "\n$$\n\\left\\{\\boldsymbol {\\pi} _ {\\text {n e w}} \\right\\} = g \\left(\\left\\{\\boldsymbol {\\pi} \\right\\}\\right) \\tag {8}\n$$\n",
806
+ "text_format": "latex",
807
+ "bbox": [
808
+ 223,
809
+ 540,
810
+ 480,
811
+ 558
812
+ ],
813
+ "page_idx": 4
814
+ },
815
+ {
816
+ "type": "text",
817
+ "text": "where $g$ is our pose expansion strategy. To ensure new viewpoints remain consistent with the original pose distribution and avoid excessive deviation that could blur or diminish the foreground, we explore two methods for generating new camera poses. Fig. 4 shows the generated pose position.",
818
+ "bbox": [
819
+ 89,
820
+ 568,
821
+ 483,
822
+ 643
823
+ ],
824
+ "page_idx": 4
825
+ },
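A compact sketch of how the pose expansion g of Eq. (8) feeds the splatting step of Eq. (7); `expand_poses` and `splat` are placeholders standing in for the pose expansion strategy and the Gaussian rasterizer, not specific APIs.

```python
# Illustrative composition of Eq. (8) and Eq. (7); both callables are
# placeholders for components described in the text.
def render_expanded_views(gaussians, poses, expand_poses, splat):
    new_poses = expand_poses(poses)                          # {pi_new} = g({pi})
    new_images = [splat(gaussians, p) for p in new_poses]    # {I_new} = Splat(G, {pi_new})
    return new_poses, new_images
```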
826
+ {
827
+ "type": "text",
828
+ "text": "Camera position perturbation. To generate new camera positions while preserving proximity to the original distribution, a perturbation $\\Delta \\mathbf{p}$ is applied to the initial camera positions $\\{\\pmb {c}\\}$ . The new camera centers $\\{\\pmb {c}_m^{\\prime}\\}$ are computed:",
829
+ "bbox": [
830
+ 89,
831
+ 662,
832
+ 483,
833
+ 723
834
+ ],
835
+ "page_idx": 4
836
+ },
837
+ {
838
+ "type": "equation",
839
+ "text": "\n$$\n\\boldsymbol {c} _ {m} ^ {\\prime} = \\boldsymbol {c} + \\Delta \\mathbf {p}, \\tag {9}\n$$\n",
840
+ "text_format": "latex",
841
+ "bbox": [
842
+ 233,
843
+ 733,
844
+ 482,
845
+ 751
846
+ ],
847
+ "page_idx": 4
848
+ },
849
+ {
850
+ "type": "text",
851
+ "text": "where $\\Delta \\mathbf{p} = (\\Delta x, \\Delta y, \\Delta z)$ represents a controlled offset vector designed to modulate the new viewpoints.",
852
+ "bbox": [
853
+ 89,
854
+ 761,
855
+ 482,
856
+ 792
857
+ ],
858
+ "page_idx": 4
859
+ },
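A minimal sketch of the position perturbation in Eq. (9); the offset range and uniform sampling are assumed hyper-parameters, not values from the paper.

```python
# Illustrative sketch: perturb the original camera centers by a controlled offset.
import numpy as np

def perturb_camera_centers(centers, max_offset=0.05, seed=0):
    """centers: (M, 3) original camera positions {c}; returns {c'_m} = c + delta_p."""
    rng = np.random.default_rng(seed)
    delta_p = rng.uniform(-max_offset, max_offset, size=centers.shape)  # (dx, dy, dz)
    return centers + delta_p
```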
860
+ {
861
+ "type": "text",
862
+ "text": "Camera pose interpolation. Our method takes a set of camera rotation matrices $\\{\\mathbf{R}\\}$ and camera positions $\\{e\\}$ as input. To generate smooth transitions between viewpoints, we employ cubic spline interpolation [11]. This approach interpolates both camera positions and orientations, producing interpolated camera centers $\\{c_m'\\}$ and rotation matrices",
863
+ "bbox": [
864
+ 89,
865
+ 810,
866
+ 483,
867
+ 902
868
+ ],
869
+ "page_idx": 4
870
+ },
871
+ {
872
+ "type": "text",
873
+ "text": "$\\{\\mathbf{R}_m^{\\prime}\\}$ that ensure visual continuity and positional coherence. By maintaining these properties, the newly generated camera poses facilitate high-quality transitions, making them well-suited for 3D mesh reconstruction. The visualizations of the images generated from new viewpoints can be found in Fig. 2 of the supplementary material.",
874
+ "bbox": [
875
+ 511,
876
+ 90,
877
+ 905,
878
+ 181
879
+ ],
880
+ "page_idx": 4
881
+ },
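The cubic-spline pose interpolation can be sketched with SciPy; interpolating positions with a cubic spline and orientations with spherical linear interpolation is an assumption about how [11] is applied here, not a detail taken from the paper.

```python
# Illustrative sketch: interpolate camera centers with a cubic spline and
# rotations with Slerp to obtain smooth in-between poses.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(rotations, centers, num_new):
    """rotations: list of (3, 3) matrices {R}; centers: (M, 3) positions {c}.
    Returns ({R'_m}, {c'_m}) sampled at num_new evenly spaced parameters."""
    t = np.arange(len(centers))
    t_new = np.linspace(0, len(centers) - 1, num_new)
    pos_spline = CubicSpline(t, centers, axis=0)              # smooth positions
    slerp = Slerp(t, Rotation.from_matrix(np.stack(rotations)))
    return slerp(t_new).as_matrix(), pos_spline(t_new)        # ({R'_m}, {c'_m})
```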
882
+ {
883
+ "type": "text",
884
+ "text": "Refining surface reconstruction. We reuse the reconstructed coarse mesh and refine it with the original and expanded novel viewpoint images. Following the fine-stage reconstruction of Voxurf [41], we increase the grid resolution and introduce a dual color network and hierarchical geometry features for detailed surface reconstruction.",
885
+ "bbox": [
886
+ 511,
887
+ 200,
888
+ 905,
889
+ 292
890
+ ],
891
+ "page_idx": 4
892
+ },
893
+ {
894
+ "type": "text",
895
+ "text": "3.4. Cyclic Optimization",
896
+ "text_level": 1,
897
+ "bbox": [
898
+ 511,
899
+ 301,
900
+ 705,
901
+ 316
902
+ ],
903
+ "page_idx": 4
904
+ },
905
+ {
906
+ "type": "text",
907
+ "text": "We propose an interactive optimization process, which begins by generating an initial coarse mesh $\\mathcal{M}^{(0)}$ . Then, in each iteration $n$ , the process follows two steps:",
908
+ "bbox": [
909
+ 511,
910
+ 323,
911
+ 905,
912
+ 368
913
+ ],
914
+ "page_idx": 4
915
+ },
916
+ {
917
+ "type": "text",
918
+ "text": "1. Rendering Step: We optimize a 3DGS model for rendering novel view images, which is initialized by sampling points from the current coarse mesh $\\mathcal{M}_c^{(n)}$ , represented by:",
919
+ "bbox": [
920
+ 511,
921
+ 369,
922
+ 905,
923
+ 416
924
+ ],
925
+ "page_idx": 4
926
+ },
927
+ {
928
+ "type": "equation",
929
+ "text": "\n$$\n\\mathcal {I} ^ {(n)} = \\mathcal {R} \\left(\\mathcal {M} _ {c} ^ {(n)}\\right) \\tag {10}\n$$\n",
930
+ "text_format": "latex",
931
+ "bbox": [
932
+ 647,
933
+ 426,
934
+ 903,
935
+ 452
936
+ ],
937
+ "page_idx": 4
938
+ },
939
+ {
940
+ "type": "text",
941
+ "text": "2. Meshing Step: We refine the current mesh by finetuning it using both the newly rendered images and the original input images:",
942
+ "bbox": [
943
+ 511,
944
+ 462,
945
+ 905,
946
+ 508
947
+ ],
948
+ "page_idx": 4
949
+ },
950
+ {
951
+ "type": "equation",
952
+ "text": "\n$$\n\\mathcal {M} _ {f} ^ {(n)} = \\mathcal {O} \\left(\\mathcal {M} _ {c} ^ {(n)}, \\mathcal {I} ^ {(n)}\\right) \\tag {11}\n$$\n",
953
+ "text_format": "latex",
954
+ "bbox": [
955
+ 625,
956
+ 518,
957
+ 903,
958
+ 546
959
+ ],
960
+ "page_idx": 4
961
+ },
962
+ {
963
+ "type": "text",
964
+ "text": "where $\\mathcal{O}$ represents the SDF grid optimization. Then, we update the refined mesh:",
965
+ "bbox": [
966
+ 511,
967
+ 555,
968
+ 905,
969
+ 585
970
+ ],
971
+ "page_idx": 4
972
+ },
973
+ {
974
+ "type": "equation",
975
+ "text": "\n$$\n\\mathcal {M} _ {c} ^ {(n + 1)} = \\mathcal {M} _ {f} ^ {(n)}. \\tag {12}\n$$\n",
976
+ "text_format": "latex",
977
+ "bbox": [
978
+ 648,
979
+ 595,
980
+ 903,
981
+ 619
982
+ ],
983
+ "page_idx": 4
984
+ },
985
+ {
986
+ "type": "text",
987
+ "text": "By iterating this process, our method allows SDF-based reconstruction and 3DGS-based rendering to complement each other, improving both reconstruction accuracy and novel view synthesis. To balance efficiency and accuracy, we typically perform only one iteration.",
988
+ "bbox": [
989
+ 511,
990
+ 628,
991
+ 905,
992
+ 704
993
+ ],
994
+ "page_idx": 4
995
+ },
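The cyclic optimization in Eqs. (10)-(12) reduces to a short loop; the three callables below are placeholders for the 3DGS fitting, novel-view splatting, and SDF fine-stage refinement described above, not a real API.

```python
# Illustrative skeleton of the rendering/meshing cycle (Eqs. 10-12).
def cyclic_optimization(coarse_mesh, images, poses, new_poses,
                        fit_3dgs_from_mesh, render_views, refine_sdf_mesh,
                        num_cycles=1):
    mesh, gaussians = coarse_mesh, None                       # M_c^(0)
    for _ in range(num_cycles):
        # Rendering step (Eq. 10): 3DGS initialized from the current coarse mesh.
        gaussians = fit_3dgs_from_mesh(mesh, images, poses)
        rendered = render_views(gaussians, new_poses)          # I^(n)
        # Meshing step (Eq. 11): refine the SDF with original + rendered views.
        mesh = refine_sdf_mesh(mesh, images + rendered)        # M_f^(n) -> M_c^(n+1), Eq. (12)
    return mesh, gaussians
```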
996
+ {
997
+ "type": "text",
998
+ "text": "4. Experiments",
999
+ "text_level": 1,
1000
+ "bbox": [
1001
+ 511,
1002
+ 717,
1003
+ 645,
1004
+ 734
1005
+ ],
1006
+ "page_idx": 4
1007
+ },
1008
+ {
1009
+ "type": "text",
1010
+ "text": "4.1. Experimental Setup",
1011
+ "text_level": 1,
1012
+ "bbox": [
1013
+ 511,
1014
+ 742,
1015
+ 702,
1016
+ 758
1017
+ ],
1018
+ "page_idx": 4
1019
+ },
1020
+ {
1021
+ "type": "text",
1022
+ "text": "Datasets. We conduct a comprehensive evaluation of the proposed method on the MobileBrick [19] and DTU [16] datasets. MobileBrick is a multi-view RGB-D dataset captured on a mobile device, providing precise 3D annotations for detailed 3D object reconstruction. Unlike the DTU dataset, which is captured in a controlled lab environment, MobileBrick represents more challenging, real-world conditions, making it more reflective of everyday scenarios. Following previous methods [19, 36, 41], we use 15 test",
1023
+ "bbox": [
1024
+ 511,
1025
+ 763,
1026
+ 906,
1027
+ 901
1028
+ ],
1029
+ "page_idx": 4
1030
+ },
1031
+ {
1032
+ "type": "page_number",
1033
+ "text": "28529",
1034
+ "bbox": [
1035
+ 478,
1036
+ 944,
1037
+ 517,
1038
+ 955
1039
+ ],
1040
+ "page_idx": 4
1041
+ },
1042
+ {
1043
+ "type": "table",
1044
+ "img_path": "images/c5b28db7e06f81d34767fc1f0568968907a59c612256707b5e54d325b3396cbd.jpg",
1045
+ "table_caption": [
1046
+ "Table 1. Surface reconstruction and novel view synthesis results on MobileBrick. The results are averaged over all 18 test scenes with an initial input of 10 images per scene. PSNR-F is computed only on foreground regions. The best results are bolded."
1047
+ ],
1048
+ "table_footnote": [],
1049
+ "table_body": "<table><tr><td></td><td colspan=\"7\">Mesh Reconstruction</td><td colspan=\"2\">Rendering</td><td rowspan=\"3\">Time</td></tr><tr><td></td><td colspan=\"3\">σ = 2.5mm</td><td colspan=\"3\">σ = 5mm</td><td rowspan=\"2\">CD (mm)↓</td><td rowspan=\"2\">PSNR↑</td><td rowspan=\"2\">PSNR-F↑</td></tr><tr><td></td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td></tr><tr><td>Voxurf [41]</td><td>62.89</td><td>62.54</td><td>62.42</td><td>80.93</td><td>80.61</td><td>80.38</td><td>13.3</td><td>14.34</td><td>18.34</td><td>55 mins</td></tr><tr><td>MonoSDF [51]</td><td>41.56</td><td>32.47</td><td>36.22</td><td>57.88</td><td>48.19</td><td>52.21</td><td>37.7</td><td>14.71</td><td>15.42</td><td>6 hrs</td></tr><tr><td>2DGS [15]</td><td>49.83</td><td>45.32</td><td>47.10</td><td>72.65</td><td>64.88</td><td>67.96</td><td>14.8</td><td>17.12</td><td>18.52</td><td>10 mins</td></tr><tr><td>GOF [53]</td><td>50.24</td><td>61.11</td><td>54.96</td><td>74.99</td><td>82.68</td><td>78.16</td><td>11.0</td><td>16.52</td><td>18.36</td><td>50 mins</td></tr><tr><td>3DGS [17]</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>17.19</td><td>19.12</td><td>10 mins</td></tr><tr><td>SparseGS [42]</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>16.93</td><td>18.74</td><td>30 mins</td></tr><tr><td>Ours</td><td>68.36</td><td>69.79</td><td>68.97</td><td>86.79</td><td>86.82</td><td>86.65</td><td>9.7</td><td>17.48</td><td>20.45</td><td>1 hr</td></tr><tr><td>Ours (Two cycles)</td><td>69.61</td><td>68.89</td><td>69.14</td><td>87.79</td><td>85.93</td><td>86.74</td><td>9.9</td><td>17.58</td><td>20.55</td><td>1.6 hr</td></tr></table>",
1050
+ "bbox": [
1051
+ 117,
1052
+ 128,
1053
+ 883,
1054
+ 358
1055
+ ],
1056
+ "page_idx": 5
1057
+ },
1058
+ {
1059
+ "type": "table",
1060
+ "img_path": "images/474569506360d2c6a0016b332de80a55a405058aebf2102ee2305dc0fa9ddfdf.jpg",
1061
+ "table_caption": [
1062
+ "Table 2. Surface reconstruction results on DTU with 5 input views. Values indicate Chamfer Distance in millimeters (mm). -\" denotes failure cases where COLMAP could not generate point clouds for 3DGS initialization. GSDF-10 is reported with 10 input images, as it fails in sparser settings. The best results are bolded, while the second-best are underlined."
1063
+ ],
1064
+ "table_footnote": [],
1065
+ "table_body": "<table><tr><td>Scan</td><td>24</td><td>37</td><td>40</td><td>55</td><td>63</td><td>65</td><td>69</td><td>83</td><td>97</td><td>105</td><td>106</td><td>110</td><td>114</td><td>118</td><td>122</td><td>Mean</td><td>Time</td></tr><tr><td>Voxurf [41]</td><td>2.74</td><td>4.50</td><td>3.39</td><td>1.52</td><td>2.24</td><td>2.00</td><td>2.94</td><td>1.29</td><td>2.49</td><td>1.28</td><td>2.45</td><td>4.69</td><td>0.93</td><td>2.74</td><td>1.29</td><td>2.43</td><td>50 mins</td></tr><tr><td>MonoSDF [51]</td><td>1.30</td><td>3.45</td><td>1.45</td><td>0.61</td><td>1.43</td><td>1.17</td><td>1.07</td><td>1.42</td><td>1.49</td><td>0.79</td><td>3.06</td><td>2.60</td><td>0.60</td><td>2.21</td><td>2.87</td><td>1.70</td><td>6 hrs</td></tr><tr><td>SparseNeuS [21]</td><td>3.57</td><td>3.73</td><td>3.11</td><td>1.50</td><td>2.36</td><td>2.89</td><td>1.91</td><td>2.10</td><td>2.89</td><td>2.01</td><td>2.08</td><td>3.44</td><td>1.21</td><td>2.19</td><td>2.11</td><td>2.43</td><td>Pretrain + 2 hrs ft</td></tr><tr><td>2DGS [15]</td><td>4.26</td><td>4.80</td><td>5.53</td><td>1.50</td><td>3.01</td><td>1.99</td><td>2.66</td><td>3.65</td><td>3.06</td><td>2.54</td><td>2.15</td><td>-</td><td>0.96</td><td>2.17</td><td>1.31</td><td>2.84</td><td>6 mins</td></tr><tr><td>GOF (TSDF) [53]</td><td>7.30</td><td>5.80</td><td>6.03</td><td>2.79</td><td>4.23</td><td>3.41</td><td>3.44</td><td>4.37</td><td>3.75</td><td>2.99</td><td>3.19</td><td>-</td><td>2.64</td><td>3.67</td><td>2.25</td><td>4.03</td><td>50 mins</td></tr><tr><td>GOF [53]</td><td>4.37</td><td>3.68</td><td>3.84</td><td>2.29</td><td>4.40</td><td>3.28</td><td>2.84</td><td>4.64</td><td>3.40</td><td>3.76</td><td>3.56</td><td>-</td><td>3.06</td><td>2.95</td><td>2.91</td><td>3.55</td><td>50 mins</td></tr><tr><td>GSDF-10 [50]</td><td>6.89</td><td>6.82</td><td>7.97</td><td>6.54</td><td>5.22</td><td>1.91</td><td>5.56</td><td>4.38</td><td>7.01</td><td>3.69</td><td>6.33</td><td>6.33</td><td>3.95</td><td>6.30</td><td>2.09</td><td>5.40</td><td>3 hrs</td></tr><tr><td>Ours</td><td>1.55</td><td>2.64</td><td>1.52</td><td>1.40</td><td>1.51</td><td>1.46</td><td>1.23</td><td>1.43</td><td>1.82</td><td>1.19</td><td>1.49</td><td>1.80</td><td>0.54</td><td>1.19</td><td>1.04</td><td>1.45</td><td>1 hr</td></tr></table>",
1066
+ "bbox": [
1067
+ 93,
1068
+ 422,
1069
+ 903,
1070
+ 584
1071
+ ],
1072
+ "page_idx": 5
1073
+ },
1074
+ {
1075
+ "type": "text",
1076
+ "text": "scenes from DTU and 18 test scenes from MobileBrick for evaluation. In the MobileBrick dataset, each scene consists of 360-degree multi-view images, from which we sample 10 images with $10\\%$ overlap for sparse view reconstruction. In contrast, the DTU dataset, with higher overlap, is sampled with 5 frames per scene. We also present reconstruction results for the little-overlapping 3-view setting in the supplementary materials. For fair comparison, 3DGS-based methods are initialized using point clouds from COLMAP[30] with ground-truth poses. The selected images and poses are used for 3D reconstruction, while the remaining images serve as a test set for evaluating novel view rendering.",
1077
+ "bbox": [
1078
+ 88,
1079
+ 608,
1080
+ 480,
1081
+ 791
1082
+ ],
1083
+ "page_idx": 5
1084
+ },
1085
+ {
1086
+ "type": "text",
1087
+ "text": "Baselines. We compare our proposed method with both SDF-based and 3DGS-based approaches for surface reconstruction. The SDF-based methods include MonoSDF[51], Voxurf[41], and SparseNeuS [21], which is pre-trained on large-scale data. The 3DGS-based methods include 2DGS[15] and GOF[53]. Additionally, we compare with",
1088
+ "bbox": [
1089
+ 88,
1090
+ 809,
1091
+ 482,
1092
+ 902
1093
+ ],
1094
+ "page_idx": 5
1095
+ },
1096
+ {
1097
+ "type": "text",
1098
+ "text": "GSDF [50], which integrates both SDF and 3DGS, similar to our approach, but is designed for dense-view settings. For novel view rendering, we evaluate all these methods along with 3DGS[17] and SparseGS[42].",
1099
+ "bbox": [
1100
+ 511,
1101
+ 608,
1102
+ 906,
1103
+ 670
1104
+ ],
1105
+ "page_idx": 5
1106
+ },
1107
+ {
1108
+ "type": "text",
1109
+ "text": "Evaluation metrics. We follow the official evaluation metrics on MobileBrick, reporting Chamfer Distance, precision, recall, and F1 score at two thresholds: $2.5mm$ and $5mm$ . For the DTU dataset, we use Chamfer Distance as the primary metric for surface reconstruction. To evaluate novel view rendering performance, we report PSNR for full images and PSNR-F, which is computed only over foreground regions. In each scene, we train models using sparse input images and test on all remaining views. The final result is averaged over all evaluation images.",
1110
+ "bbox": [
1111
+ 511,
1112
+ 686,
1113
+ 908,
1114
+ 839
1115
+ ],
1116
+ "page_idx": 5
1117
+ },
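PSNR-F (foreground-only PSNR) can be computed by restricting the mean squared error to a foreground mask; the exact masking protocol below is an assumption for illustration.

```python
# Illustrative sketch of full-image PSNR and foreground-only PSNR-F.
import numpy as np

def psnr(pred, gt, mask=None, max_val=1.0):
    """pred, gt: (H, W, 3) images in [0, max_val]; mask: optional (H, W) bool
    foreground mask. With a mask, the error is averaged over foreground
    pixels only, giving PSNR-F."""
    err = (pred - gt) ** 2
    if mask is not None:
        err = err[mask]                 # keep foreground pixels only
    return 10.0 * np.log10(max_val ** 2 / float(err.mean()))
```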
1118
+ {
1119
+ "type": "text",
1120
+ "text": "Implementation details. We set the voxel grid resolution to $96^3$ during coarse mesh training, requiring approximately 15 minutes for 10k iterations. The weight of the proposed",
1121
+ "bbox": [
1122
+ 511,
1123
+ 854,
1124
+ 908,
1125
+ 902
1126
+ ],
1127
+ "page_idx": 5
1128
+ },
1129
+ {
1130
+ "type": "page_number",
1131
+ "text": "28530",
1132
+ "bbox": [
1133
+ 478,
1134
+ 944,
1135
+ 519,
1136
+ 957
1137
+ ],
1138
+ "page_idx": 5
1139
+ },
1140
+ {
1141
+ "type": "table",
1142
+ "img_path": "images/1243fc8ff5a9f78d24c652af460e36e3a34b9f6dee122062ec0e67bacf9bbabc.jpg",
1143
+ "table_caption": [
1144
+ "Table 3. Surface reconstruction results with varying numbers of input views on MobileBrick (porsche) and DTU (scan69). The Baseline represents a pure SDF-based reconstruction without the assistance from 3DGS. $\\delta$ indicates the improvement."
1145
+ ],
1146
+ "table_footnote": [],
1147
+ "table_body": "<table><tr><td rowspan=\"2\">Input</td><td colspan=\"3\">MobileBrick / F1 score</td><td colspan=\"3\">DTU / CD</td></tr><tr><td>Baseline</td><td>Ours</td><td>δ</td><td>Baseline</td><td>Ours</td><td>δ</td></tr><tr><td>5</td><td>33.50</td><td>43.11</td><td>+9.61</td><td>2.940</td><td>1.230</td><td>-1.710</td></tr><tr><td>10</td><td>59.66</td><td>62.37</td><td>+2.71</td><td>1.362</td><td>1.165</td><td>-0.197</td></tr><tr><td>20</td><td>63.18</td><td>63.88</td><td>+0.7</td><td>1.043</td><td>0.965</td><td>-0.078</td></tr></table>",
1148
+ "bbox": [
1149
+ 114,
1150
+ 155,
1151
+ 460,
1152
+ 244
1153
+ ],
1154
+ "page_idx": 6
1155
+ },
1156
+ {
1157
+ "type": "text",
1158
+ "text": "normal loss is set to 0.05, while all other parameters follow Voxurf [41]. Next, we train 3DGS [17] for 7k iterations, which takes around 5 minutes, and render 10 new viewpoint images within 30 seconds. After expanding the training images, we increase the voxel grid resolution to $256^{3}$ and train for 20k iterations, taking approximately 40 minutes. Thus, a complete optimization cycle takes roughly 1 hour.",
1159
+ "bbox": [
1160
+ 89,
1161
+ 268,
1162
+ 483,
1163
+ 377
1164
+ ],
1165
+ "page_idx": 6
1166
+ },
1167
+ {
1168
+ "type": "text",
1169
+ "text": "4.2. Comparisons",
1170
+ "text_level": 1,
1171
+ "bbox": [
1172
+ 89,
1173
+ 385,
1174
+ 230,
1175
+ 402
1176
+ ],
1177
+ "page_idx": 6
1178
+ },
1179
+ {
1180
+ "type": "text",
1181
+ "text": "Results on MobileBrick. Table 1 presents a quantitative comparison of our method against previous approaches. The results show that Voxurf [41], which utilizes an SDF-based representation, outperforms 2DGS [15] and GOF [53] (both 3DGS-based methods) in surface reconstruction metrics, particularly in terms of the F1 score. However, all 3DGS-based methods achieve notably better novel view rendering performance, as evidenced by their higher PSNR values compared to Voxurf. A visual comparison is illustrated in Fig. 5 and Fig. 6. BBy leveraging the strengths of both SDF and 3DGS representations, our method achieves state-of-the-art performance in surface reconstruction and novel view synthesis. To balance efficiency and performance, we adopt a single-cycle approach in practice.",
1182
+ "bbox": [
1183
+ 89,
1184
+ 407,
1185
+ 483,
1186
+ 619
1187
+ ],
1188
+ "page_idx": 6
1189
+ },
1190
+ {
1191
+ "type": "text",
1192
+ "text": "Results on DTU. Table 2 presents surface reconstruction results on the DTU dataset, which is particularly challenging due to the use of only 5 uniformly sampled frames for reconstruction. SparseNeuS [21] is a pre-trained model that requires an additional 2 hours of fine-tuning. COLMAP fails to generate sparse point clouds for scene 110, preventing 3DGS initialization. GSDF [50] struggles in sparse-view settings, so we train it on 10 images. Despite these challenges, our method achieves robust reconstruction and significantly outperforms other approaches.",
1193
+ "bbox": [
1194
+ 89,
1195
+ 640,
1196
+ 483,
1197
+ 792
1198
+ ],
1199
+ "page_idx": 6
1200
+ },
1201
+ {
1202
+ "type": "text",
1203
+ "text": "4.3. Ablations",
1204
+ "text_level": 1,
1205
+ "bbox": [
1206
+ 89,
1207
+ 801,
1208
+ 202,
1209
+ 816
1210
+ ],
1211
+ "page_idx": 6
1212
+ },
1213
+ {
1214
+ "type": "text",
1215
+ "text": "Efficacy of 3DGS for Improving SDF. Table 3 compares our method with a pure SDF-based reconstruction baseline at different sparsity levels, using up to 20 images per scene. The results on MobileBrick and DTU validate the effectiveness of our 3DGS-assisted SDF approach. More results are",
1216
+ "bbox": [
1217
+ 89,
1218
+ 824,
1219
+ 483,
1220
+ 902
1221
+ ],
1222
+ "page_idx": 6
1223
+ },
1224
+ {
1225
+ "type": "table",
1226
+ "img_path": "images/3dc0a45afc6ef8256b7fb3b5d810d354af2a8e0f260019119bab31ed973201ca.jpg",
1227
+ "table_caption": [
1228
+ "Table 4. 3DGS rendering results with different initializations, averaged across all 18 MobileBrick test scenes."
1229
+ ],
1230
+ "table_footnote": [],
1231
+ "table_body": "<table><tr><td>Method</td><td>Foreground PSNR</td></tr><tr><td>3DGS (COLMAP)</td><td>19.13</td></tr><tr><td>3DGS w/ mesh clean</td><td>19.88</td></tr><tr><td>3DGS w/ normal and mesh clean</td><td>20.45</td></tr></table>",
1232
+ "bbox": [
1233
+ 545,
1234
+ 128,
1235
+ 874,
1236
+ 200
1237
+ ],
1238
+ "page_idx": 6
1239
+ },
1240
+ {
1241
+ "type": "table",
1242
+ "img_path": "images/97973fbf276aa85b35723cc2064a9e90934e18709c42adc6b20a243e8303dc22.jpg",
1243
+ "table_caption": [
1244
+ "Table 5. Ablation study on pose expansion strategies for in MobileBrick (aston) with 10 input images."
1245
+ ],
1246
+ "table_footnote": [],
1247
+ "table_body": "<table><tr><td></td><td>F1↑</td><td>Recall(%)↑</td><td>CD (mm)↓</td></tr><tr><td>Baseline</td><td>55.8</td><td>49.9</td><td>8.7</td></tr><tr><td>Camera position perturbation</td><td>59.9</td><td>57.4</td><td>6.6</td></tr><tr><td>Camera poses interpolation</td><td>60.8</td><td>59.1</td><td>6.4</td></tr></table>",
1248
+ "bbox": [
1249
+ 524,
1250
+ 253,
1251
+ 901,
1252
+ 333
1253
+ ],
1254
+ "page_idx": 6
1255
+ },
1256
+ {
1257
+ "type": "text",
1258
+ "text": "provided in the supplementary material.",
1259
+ "bbox": [
1260
+ 511,
1261
+ 358,
1262
+ 779,
1263
+ 375
1264
+ ],
1265
+ "page_idx": 6
1266
+ },
1267
+ {
1268
+ "type": "image",
1269
+ "img_path": "images/98f9192688a6ecee21a676bc469d401813d2180877d44440bae719bfd53bef1f.jpg",
1270
+ "image_caption": [
1271
+ "Figure 7. econstruction quality with varying numbers of 3DGS-rendered novel view images from expanded poses, averaged across all 18 MobileBrick test scenes, with an initial input of 10 images."
1272
+ ],
1273
+ "image_footnote": [],
1274
+ "bbox": [
1275
+ 531,
1276
+ 391,
1277
+ 885,
1278
+ 556
1279
+ ],
1280
+ "page_idx": 6
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "text": "Efficacy of SDF for enhancing 3DGS. Table 4 compares the novel view rendering results for 3DGS using point clouds initialized with different sampling strategies. The results demonstrate that our proposed mesh cleaning and normal supervision notably improve 3DGS performance.",
1285
+ "bbox": [
1286
+ 511,
1287
+ 633,
1288
+ 906,
1289
+ 712
1290
+ ],
1291
+ "page_idx": 6
1292
+ },
1293
+ {
1294
+ "type": "text",
1295
+ "text": "Number of newly rendered views. Fig. 7 illustrates the impact of the number of newly rendered images on surface reconstruction. On MobileBrick, rendering 10 novel views significantly improves Chamfer Distance $(26.5\\%)$ and F1 $(7.2\\%)$ . As the number of novel views increases, accuracy gains gradually diminish. This suggests that while additional renderings refine reconstruction, the majority of benefits are achieved with the first 10 rendered images.",
1296
+ "bbox": [
1297
+ 511,
1298
+ 729,
1299
+ 908,
1300
+ 852
1301
+ ],
1302
+ "page_idx": 6
1303
+ },
1304
+ {
1305
+ "type": "text",
1306
+ "text": "Different pose expansion strategies. Table 5 summarizes the reconstruction performance with expansion images",
1307
+ "bbox": [
1308
+ 511,
1309
+ 869,
1310
+ 908,
1311
+ 902
1312
+ ],
1313
+ "page_idx": 6
1314
+ },
1315
+ {
1316
+ "type": "page_number",
1317
+ "text": "28531",
1318
+ "bbox": [
1319
+ 478,
1320
+ 944,
1321
+ 517,
1322
+ 955
1323
+ ],
1324
+ "page_idx": 6
1325
+ },
1326
+ {
1327
+ "type": "image",
1328
+ "img_path": "images/ac1735dc640facf888705f357ae92374d7c76550ce1403c1e2a1051e629377c3.jpg",
1329
+ "image_caption": [
1330
+ "Figure 5. Qualitative mesh reconstruction comparisons on MobileBrick. See more visual results in supplementary material."
1331
+ ],
1332
+ "image_footnote": [],
1333
+ "bbox": [
1334
+ 116,
1335
+ 89,
1336
+ 890,
1337
+ 401
1338
+ ],
1339
+ "page_idx": 7
1340
+ },
1341
+ {
1342
+ "type": "image",
1343
+ "img_path": "images/0362c2caddfb443c8ba7575d01665c0a3a62eb537c5b74ce8804c586df3eed42.jpg",
1344
+ "image_caption": [
1345
+ "Figure 6. Qualitative novel view synthesis comparisons on MobileBrick."
1346
+ ],
1347
+ "image_footnote": [],
1348
+ "bbox": [
1349
+ 106,
1350
+ 441,
1351
+ 890,
1352
+ 648
1353
+ ],
1354
+ "page_idx": 7
1355
+ },
1356
+ {
1357
+ "type": "text",
1358
+ "text": "from different strategies. We double the number of original input camera poses, generating new viewpoints and rendering additional images accordingly. The two strategies significantly enhance surface reconstruction quality, with camera pose interpolation yielding the greatest improvement.",
1359
+ "bbox": [
1360
+ 89,
1361
+ 700,
1362
+ 482,
1363
+ 777
1364
+ ],
1365
+ "page_idx": 7
1366
+ },
1367
+ {
1368
+ "type": "text",
1369
+ "text": "5. Conclusion",
1370
+ "text_level": 1,
1371
+ "bbox": [
1372
+ 89,
1373
+ 787,
1374
+ 209,
1375
+ 801
1376
+ ],
1377
+ "page_idx": 7
1378
+ },
1379
+ {
1380
+ "type": "text",
1381
+ "text": "This paper introduces a novel framework for sparse-view reconstruction, where SDF-based and 3DGS-based representations complement each other to enhance both surface reconstruction and novel view rendering. Specifically, our method leverages SDF for modeling global geometry and 3DGS for capturing fine details, achieving significant im",
1382
+ "bbox": [
1383
+ 89,
1384
+ 810,
1385
+ 483,
1386
+ 901
1387
+ ],
1388
+ "page_idx": 7
1389
+ },
1390
+ {
1391
+ "type": "text",
1392
+ "text": "provements over state-of-the-art methods on two widely used real-world datasets.",
1393
+ "bbox": [
1394
+ 511,
1395
+ 700,
1396
+ 906,
1397
+ 729
1398
+ ],
1399
+ "page_idx": 7
1400
+ },
1401
+ {
1402
+ "type": "text",
1403
+ "text": "Limitation and future work. Although our method can theoretically be generalized to any SDF and novel view rendering approaches, our current implementation is built on Voxurf and 3DGS, which were selected for their efficiency-performance trade-off. As a result, our method is currently limited to object-level scenes and struggles with extremely sparse inputs, such as only two images. In the future, we aim to extend our approach to handle more diverse scenes and further improve its robustness to sparse inputs.",
1404
+ "bbox": [
1405
+ 511,
1406
+ 763,
1407
+ 906,
1408
+ 902
1409
+ ],
1410
+ "page_idx": 7
1411
+ },
1412
+ {
1413
+ "type": "page_number",
1414
+ "text": "28532",
1415
+ "bbox": [
1416
+ 478,
1417
+ 944,
1418
+ 517,
1419
+ 955
1420
+ ],
1421
+ "page_idx": 7
1422
+ },
1423
+ {
1424
+ "type": "text",
1425
+ "text": "Acknowledgments",
1426
+ "text_level": 1,
1427
+ "bbox": [
1428
+ 91,
1429
+ 90,
1430
+ 236,
1431
+ 107
1432
+ ],
1433
+ "page_idx": 8
1434
+ },
1435
+ {
1436
+ "type": "text",
1437
+ "text": "This work was supported by the National Natural Science Foundation of China (No. 62206244).",
1438
+ "bbox": [
1439
+ 89,
1440
+ 112,
1441
+ 482,
1442
+ 142
1443
+ ],
1444
+ "page_idx": 8
1445
+ },
1446
+ {
1447
+ "type": "text",
1448
+ "text": "References",
1449
+ "text_level": 1,
1450
+ "bbox": [
1451
+ 91,
1452
+ 154,
1453
+ 187,
1454
+ 169
1455
+ ],
1456
+ "page_idx": 8
1457
+ },
1458
+ {
1459
+ "type": "list",
1460
+ "sub_type": "ref_text",
1461
+ "list_items": [
1462
+ "[1] Jiayang Bai, Letian Huang, Jie Guo, Wen Gong, Yuanqi Li, and Yanwen Guo. 360-gs: Layout-guided panoramic gaussian splatting for indoor roaming. In International Conference on 3D Vision 2025. 2",
1463
+ "[2] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Int. Conf. Comput. Vis., pages 5855–5864, 2021. 2",
1464
+ "[3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5470-5479, 2022. 2",
1465
+ "[4] Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, and Philip Torr. Porf: Pose residual field for accurate neural surface reconstruction. In ICLR, 2024. 2",
1466
+ "[5] Wenjing Bian, Zirui Wang, Kejie Li, Jiawang Bian, and Victor Adrian Prisacariu. Nope-nerf: Optimising neural radiance field with no pose prior. 2023. 2",
1467
+ "[6] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Int. Conf. Comput. Vis., pages 14124-14133, 2021. 1, 2",
1468
+ "[7] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In Eur. Conf. Comput. Vis., pages 333-350. Springer, 2022. 2",
1469
+ "[8] Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, and Guofeng Zhang. Pgsr: Planar-based gaussian splatting for efficient and high-fidelity surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 2024. 2",
1470
+ "[9] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023. 2",
1471
+ "[10] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 1, 2",
1472
+ "[11] C De Boor. A practical guide to splines. Springer-Verlag google schola, 2:4135-4195, 1978. 5",
1473
+ "[12] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5501-5510, 2022. 2",
1474
+ "[13] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5354-5363, 2024. 2",
1475
+ "[14] Siming He, Zach Osman, and Pratik Chaudhari. From nerfs to gaussian splats, and back. arXiv preprint arXiv:2405.09717, 2024. 2"
1476
+ ],
1477
+ "bbox": [
1478
+ 93,
1479
+ 179,
1480
+ 482,
1481
+ 898
1482
+ ],
1483
+ "page_idx": 8
1484
+ },
1485
+ {
1486
+ "type": "list",
1487
+ "sub_type": "ref_text",
1488
+ "list_items": [
1489
+ "[15] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splattering for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers. Association for Computing Machinery, 2024. 1, 2, 6, 7",
1490
+ "[16] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engil Tola, and Henrik Aanæs. Large scale multi-view stereopsis evaluation. In IEEE Conf. Comput. Vis. Pattern Recog., 2014. 2, 5",
1491
+ "[17] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 2023. 1, 2, 3, 6, 7",
1492
+ "[18] Jiyeop Kim and Jongwoo Lim. Integrating meshes and 3d gaussians for indoor scene reconstruction with sam mask guidance. arXiv preprint arXiv:2407.16173, 2024. 2",
1493
+ "[19] Kejie Li, Jia-Wang Bian, Robert Castle, Philip HS Torr, and Victor Adrian Prisacariu. Mobilebrick: Building legs for 3d reconstruction on mobile devices. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4892-4901, 2023. 2, 5",
1494
+ "[20] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8456-8465, 2023. 2",
1495
+ "[21] Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, and Wenping Wang. Sparseneus: Fast generalizable neural surface reconstruction from sparse views. In Eur. Conf. Comput. Vis., pages 210-227. Springer, 2022. 1, 2, 6, 7",
1496
+ "[22] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. In Semin al graphics: pioneering efforts that shaped the field, pages 347-353, 1998. 2",
1497
+ "[23] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20654-20664, 2024. 2",
1498
+ "[24] Xiaoyang Lyu, Yang-Tian Sun, Yi-Hua Huang, Xiuzhe Wu, Ziyi Yang, Yilun Chen, Jiangmiao Pang, and Xiaojuan Qi. 3dgsr: Implicit surface reconstruction with 3d gaussian splatting. ACM Transactions on Graphics (TOG), 43(6):1-12, 2024. 2",
1499
+ "[25] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis., 2020. 1, 2",
1500
+ "[26] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):1-15, 2022. 2",
1501
+ "[27] Stanley Osher, Ronald Fedkiw, Stanley Osher, and Ronald Fedkiw. Constructing signed distance functions. Level set methods and dynamic implicit surfaces, pages 63-74, 2003. 2",
1502
+ "[28] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 165-174, 2019. 2"
1503
+ ],
1504
+ "bbox": [
1505
+ 516,
1506
+ 92,
1507
+ 905,
1508
+ 898
1509
+ ],
1510
+ "page_idx": 8
1511
+ },
1512
+ {
1513
+ "type": "page_number",
1514
+ "text": "28533",
1515
+ "bbox": [
1516
+ 478,
1517
+ 944,
1518
+ 517,
1519
+ 955
1520
+ ],
1521
+ "page_idx": 8
1522
+ },
1523
+ {
1524
+ "type": "list",
1525
+ "sub_type": "ref_text",
1526
+ "list_items": [
1527
+ "[29] Yufan Ren, Fangjinhua Wang, Tong Zhang, Marc Pollefeys, and Sabine Susstrunk. Volrecon: Volume rendering of signed ray distance functions for generalizable multi-view reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 16685-16695, 2023. 1, 2",
1528
+ "[30] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In IEEE Conf. Comput. Vis. Pattern Recog., 2016. 6",
1529
+ "[31] Noah Snavely, Steven M Seitz, and Richard Szeliski. Photo tourism: exploring photo collections in 3d. In ACM SIGgraph, pages 835-846, 2006. 3",
1530
+ "[32] Xiaowei Song, Jv Zheng, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, and Hao Zhao. Saags: Scale-adaptive gaussian splatting for training-free antialIASing. arXiv preprint arXiv:2403.19615, 2024. 2",
1531
+ "[33] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5459-5469, 2022. 2",
1532
+ "[34] Hao Sun, Junping Qin, Lei Wang, Kai Yan, Zheng Liu, Xinglong Jia, and Xiaole Shi. 3dgs-hd: Elimination of unrealistic artifacts in 3d gaussian splatting. In 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), pages 696-702. IEEE, 2024. 2",
1533
+ "[35] Matias Turkulainen, Xuqian Ren, Jaroslav Melekhov, Otto Seiskari, Esa Rahtu, and Juho Kannala. Dn-splatter: Depth and normal priors for gaussian splatting and meshing. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2421-2431. IEEE, 2025. 2",
1534
+ "[36] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021. 1, 2, 5",
1535
+ "[37] Ruizhe Wang, Chunliang Hua, Tomakayev Shingys, Mengyuan Niu, Qingxin Yang, Lizhong Gao, Yi Zheng, Junyan Yang, and Qiao Wang. Enhancement of 3d gaussian splatting using raw mesh for photorealistic recreation of architectures. arXiv preprint arXiv:2407.15435, 2024. 2",
1536
+ "[38] LORENSEN WE. Marching cubes: A high resolution 3d surface construction algorithm. Computer graphics, 21(1): 7-12, 1987. 4",
1537
+ "[39] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality meshes. arXiv preprint arXiv:2404.12385, 2024. 1, 2",
1538
+ "[40] Shuang Wu, Youtian Lin, Feihu Zhang, Yifei Zeng, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859-121881, 2024. 1, 2",
1539
+ "[41] Tong Wu, Jiaqi Wang, Xingang Pan, Xudong Xu, Christian Theobalt, Ziwei Liu, and Dahua Lin. Voxurf: Voxel-based efficient and accurate neural surface reconstruction. In Int. Conf. Learn. Represent., 2023. 1, 3, 4, 5, 6, 7",
1540
+ "[42] Haolin Xiong, Sairisheek Muttukuru, Rishi Upadhyay, Pradyumna Chari, and Achuta Kadambi. Sparsegs: Real-"
1541
+ ],
1542
+ "bbox": [
1543
+ 91,
1544
+ 90,
1545
+ 480,
1546
+ 900
1547
+ ],
1548
+ "page_idx": 9
1549
+ },
1550
+ {
1551
+ "type": "list",
1552
+ "sub_type": "ref_text",
1553
+ "list_items": [
1554
+ "time $360^{\\circ}$ sparse view synthesis using gaussian splatting. Arxiv, 2023. 6",
1555
+ "[43] Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan. Instantmesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models. arXiv preprint arXiv:2404.07191, 2024. 1, 2",
1556
+ "[44] Runyi Yang, Zhenxin Zhu, Zhou Jiang, Baijun Ye, Xiaoxue Chen, Yifei Zhang, Yuantao Chen, Jian Zhao, and Hao Zhao. Spectrally pruned gaussian fields with neural compensation. arXiv preprint arXiv:2405.00676, 2024. 2",
1557
+ "[45] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20331-20341, 2024. 2",
1558
+ "[46] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. Adv. Neural Inform. Process. Syst., 34:4805-4815, 2021. 2",
1559
+ "[47] Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, et al. gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research, 26(34):1-17, 2025. 2",
1560
+ "[48] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In Int. Conf. Comput. Vis., pages 9043-9053, 2023. 4",
1561
+ "[49] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4578-4587, 2021. 1, 2",
1562
+ "[50] Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, and Bo Dai. Gsdf: 3dgs meets sdf for improved neural rendering and reconstruction. Advances in Neural Information Processing Systems, 37:129507-129530, 2024. 2, 6, 7",
1563
+ "[51] Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction. Adv. Neural Inform. Process. Syst., 2022. 6",
1564
+ "[52] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splattering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 19447-19456, 2024. 2",
1565
+ "[53] Zehao Yu, Torsten Sattler, and Andreas Geiger. Gaussian opacity fields: Efficient adaptive surface reconstruction in unbounded scenes. ACM Trans. Graph., 2024. 1, 2, 6, 7",
1566
+ "[54] Baowen Zhang, Chuan Fang, Rakesh Shrestha, Yixun Liang, Xiaoxiao Long, and Ping Tan. Rade-gs: Rasterizing depth in gaussian splatting. arXiv preprint arXiv:2406.01467, 2024. 2",
1567
+ "[55] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2"
1568
+ ],
1569
+ "bbox": [
1570
+ 516,
1571
+ 92,
1572
+ 903,
1573
+ 856
1574
+ ],
1575
+ "page_idx": 9
1576
+ },
1577
+ {
1578
+ "type": "page_number",
1579
+ "text": "28534",
1580
+ "bbox": [
1581
+ 478,
1582
+ 945,
1583
+ 519,
1584
+ 955
1585
+ ],
1586
+ "page_idx": 9
1587
+ }
1588
+ ]
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_model.json ADDED
@@ -0,0 +1,2288 @@
 
1
+ [
2
+ [
3
+ {
4
+ "type": "header",
5
+ "bbox": [
6
+ 0.107,
7
+ 0.003,
8
+ 0.182,
9
+ 0.043
10
+ ],
11
+ "angle": 0,
12
+ "content": "CVF"
13
+ },
14
+ {
15
+ "type": "header",
16
+ "bbox": [
17
+ 0.239,
18
+ 0.001,
19
+ 0.808,
20
+ 0.047
21
+ ],
22
+ "angle": 0,
23
+ "content": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore."
24
+ },
25
+ {
26
+ "type": "title",
27
+ "bbox": [
28
+ 0.126,
29
+ 0.131,
30
+ 0.873,
31
+ 0.154
32
+ ],
33
+ "angle": 0,
34
+ "content": "SurfaceSplat: Connecting Surface Reconstruction and Gaussian Splatting"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.111,
40
+ 0.179,
41
+ 0.888,
42
+ 0.242
43
+ ],
44
+ "angle": 0,
45
+ "content": "Zihui Gao\\(^{1,3*}\\), Jia-Wang Bian\\(^{2*}\\), Guosheng Lin\\(^{3}\\), Hao Chen\\(^{1,\\dagger}\\), Chunhua Shen\\(^{1}\\) \n\\(^{1}\\)Zhejiang University, China \n\\(^{2}\\)ByteDance Seed \n\\(^{3}\\)Nanyang Technological University, Singapore \n*Equal contribution †Corresponding author"
46
+ },
47
+ {
48
+ "type": "image",
49
+ "bbox": [
50
+ 0.094,
51
+ 0.288,
52
+ 0.907,
53
+ 0.399
54
+ ],
55
+ "angle": 0,
56
+ "content": null
57
+ },
58
+ {
59
+ "type": "image_caption",
60
+ "bbox": [
61
+ 0.089,
62
+ 0.408,
63
+ 0.907,
64
+ 0.48
65
+ ],
66
+ "angle": 0,
67
+ "content": "Figure 1. Sparse view reconstruction and rendering comparison. Left: Qualitative results from 10 images evenly sampled from a casually captured 360-degree video. Right: Quantitative analysis of 5, 10, and 20 input views, averaged across the selected 9 MobileBrick test scenes. 3DGS-based methods (e.g., GOF) achieve superior novel view rendering than SDF-based methods (e.g., Voxurf) due to their sparse representations, which capture fine details. However, SDF-based methods outperform the former in mesh reconstruction, as their dense representations better preserve global geometry. Our approach combines the strengths of both, achieving optimal performance."
68
+ },
69
+ {
70
+ "type": "title",
71
+ "bbox": [
72
+ 0.248,
73
+ 0.491,
74
+ 0.327,
75
+ 0.506
76
+ ],
77
+ "angle": 0,
78
+ "content": "Abstract"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.089,
84
+ 0.524,
85
+ 0.486,
86
+ 0.721
87
+ ],
88
+ "angle": 0,
89
+ "content": "Surface reconstruction and novel view rendering from sparse-view images are challenging. Signed Distance Function (SDF)-based methods struggle with fine details, while 3D Gaussian Splatting (3DGS)-based approaches lack global geometry coherence. We propose a novel hybrid method that combines the strengths of both approaches: SDF captures coarse geometry to enhance 3DGS-based rendering, while newly rendered images from 3DGS refine the details of SDF for accurate surface reconstruction. As a result, our method surpasses state-of-the-art approaches in surface reconstruction and novel view synthesis on the DTU and MobileBrick datasets. Code will be released at: https://github.com/aim-uofa/SurfaceSplat."
90
+ },
91
+ {
92
+ "type": "title",
93
+ "bbox": [
94
+ 0.091,
95
+ 0.754,
96
+ 0.222,
97
+ 0.769
98
+ ],
99
+ "angle": 0,
100
+ "content": "1. Introduction"
101
+ },
102
+ {
103
+ "type": "text",
104
+ "bbox": [
105
+ 0.089,
106
+ 0.78,
107
+ 0.483,
108
+ 0.903
109
+ ],
110
+ "angle": 0,
111
+ "content": "3D reconstruction from multi-view images is a core problem in computer vision with applications in virtual reality, robotics, and autonomous driving. Recent advances in Neural Radiance Fields (NeRF) [25] and 3D Gaussian Splatting (3DGS) [17] have significantly advanced the field. However, their performance degrades under sparse-view conditions, a common real-world challenge. This paper tackles sparse-view reconstruction to bridge this gap. Unlike ap-"
112
+ },
113
+ {
114
+ "type": "text",
115
+ "bbox": [
116
+ 0.512,
117
+ 0.492,
118
+ 0.907,
119
+ 0.553
120
+ ],
121
+ "angle": 0,
122
+ "content": "proaches that leverage generative models [10, 39, 40, 43] or learn geometry priors through large-scale pretraining [6, 21, 29, 49], we focus on identifying the optimal 3D representations for surface reconstruction and novel view synthesis."
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.511,
128
+ 0.553,
129
+ 0.909,
130
+ 0.81
131
+ ],
132
+ "angle": 0,
133
+ "content": "Surface reconstruction methods primarily use the Signed Distance Function (SDF) or 3DGS-based representations. Here, SDF-based approaches, such as NeuS [36] and Voxsurf [41], model scene geometry continuously with dense representations and optimize them via differentiable volume rendering [25]. In contrast, 3DGS-based methods like GOF [53] and 2DGS [15] leverage a pre-computed sparse point cloud for image rendering and progressively densify and refine it through differentiable rasterization. Due to their dense representations, SDF-based methods capture global structures well but lack fine details, while the sparse nature of 3DGS-based methods enables high-frequency detail preservation but compromises global coherence. As a result, both approaches struggle with poor reconstruction quality under sparse-view conditions. Typically, SDF-based methods outperform 3DGS in surface reconstruction, while 3DGS excels in image rendering, as illustrated in Fig. 1."
134
+ },
135
+ {
136
+ "type": "text",
137
+ "bbox": [
138
+ 0.511,
139
+ 0.811,
140
+ 0.909,
141
+ 0.902
142
+ ],
143
+ "angle": 0,
144
+ "content": "Recognizing the complementary strengths of SDF-based (dense) and 3DGS-based (sparse) representations, we propose a novel hybrid approach, SurfaceSplat, as illustrated in Fig. 2. Our method is built on two key ideas: (i) SDF for Improved 3DGS: To address the limitation of 3DGS in learning global geometry, we first fit the global struc"
145
+ },
146
+ {
147
+ "type": "page_number",
148
+ "bbox": [
149
+ 0.479,
150
+ 0.945,
151
+ 0.519,
152
+ 0.957
153
+ ],
154
+ "angle": 0,
155
+ "content": "28525"
156
+ }
157
+ ],
158
+ [
159
+ {
160
+ "type": "text",
161
+ "bbox": [
162
+ 0.09,
163
+ 0.092,
164
+ 0.482,
165
+ 0.257
166
+ ],
167
+ "angle": 0,
168
+ "content": "ture using an SDF-based representation, rapidly generating a smooth yet coarse mesh. We then initialize 3DGS by sampling point clouds from the mesh surface, ensuring global consistency while allowing 3DGS to refine fine details during training. (ii) 3DGS for Enhanced SDF: To compensate for the inability of SDF-based methods to capture fine details under sparse-view settings, we leverage the improved 3DGS from the first step to render additional novel viewpoint images, expanding the dataset. This enriched supervision helps the SDF-based method learn finer structural details, leading to improved reconstruction quality."
169
+ },
170
+ {
171
+ "type": "text",
172
+ "bbox": [
173
+ 0.09,
174
+ 0.258,
175
+ 0.482,
176
+ 0.334
177
+ ],
178
+ "angle": 0,
179
+ "content": "We conduct experiments on two real-world datasets, DTU [16] and MobileBrick [19]. Our method, SurfaceSplat, achieves state-of-the-art performance in sparse-view novel view rendering and 3D mesh reconstruction. In summary, we make the following contributions:"
180
+ },
181
+ {
182
+ "type": "text",
183
+ "bbox": [
184
+ 0.091,
185
+ 0.336,
186
+ 0.481,
187
+ 0.395
188
+ ],
189
+ "angle": 0,
190
+ "content": "- We propose SurfaceSplat, which synergistically combines the strengths of SDF-based and 3DGS-based representations to achieve optimal global geometry preservation while capturing fine local details."
191
+ },
192
+ {
193
+ "type": "text",
194
+ "bbox": [
195
+ 0.091,
196
+ 0.397,
197
+ 0.481,
198
+ 0.456
199
+ ],
200
+ "angle": 0,
201
+ "content": "- We conducted a comprehensive evaluation and ablations on DTU and MobileBrick datasets. SurfaceSplat achieves state-of-the-art performance in novel view synthesis and mesh reconstruction under sparse-view conditions."
202
+ },
203
+ {
204
+ "type": "list",
205
+ "bbox": [
206
+ 0.091,
207
+ 0.336,
208
+ 0.481,
209
+ 0.456
210
+ ],
211
+ "angle": 0,
212
+ "content": null
213
+ },
214
+ {
215
+ "type": "title",
216
+ "bbox": [
217
+ 0.091,
218
+ 0.474,
219
+ 0.228,
220
+ 0.49
221
+ ],
222
+ "angle": 0,
223
+ "content": "2. Related work"
224
+ },
225
+ {
226
+ "type": "title",
227
+ "bbox": [
228
+ 0.091,
229
+ 0.499,
230
+ 0.443,
231
+ 0.515
232
+ ],
233
+ "angle": 0,
234
+ "content": "2.1. Novel View Synthesis from Sparse Inputs"
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.09,
240
+ 0.521,
241
+ 0.483,
242
+ 0.734
243
+ ],
244
+ "angle": 0,
245
+ "content": "Neural Radiance Fields (NeRFs)-based methods [2, 3, 5, 7, 12, 25, 26, 33, 35, 45, 55] have revolutionized novel view synthesis with implicit neural representations, and 3DGS-based methods [17, 23, 32, 34, 44, 47, 52] enable efficient training and real-time rendering through explicit 3D point clouds. However, both approaches suffer from performance degradation in sparse-view settings. To address this issue, recent methods have explored generative models [10, 39, 40, 43] or leveraged large-scale training to learn geometric priors [6, 21, 29, 49]. Unlike these approaches, we argue that the key challenge lies in the lack of effective geometric initialization for 3DGS. To overcome this, we investigate how neural surface reconstruction methods can enhance its performance."
246
+ },
247
+ {
248
+ "type": "title",
249
+ "bbox": [
250
+ 0.091,
251
+ 0.743,
252
+ 0.365,
253
+ 0.758
254
+ ],
255
+ "angle": 0,
256
+ "content": "2.2. Neural Surface Reconstruction"
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.09,
262
+ 0.765,
263
+ 0.483,
264
+ 0.901
265
+ ],
266
+ "angle": 0,
267
+ "content": "SDF-based methods, such as NeuS [36], VolSDF [46], Neuralangelo [20], and PoRF [4] use dense neural representations and differentiable volume rendering to achieve high-quality reconstructions with 3D supervision. However, they suffer from long optimization times and require dense viewpoint images. Recent methods, such as 2DGS [15] and GOF [53], extend 3DGS [17] by leveraging modified Gaussians and depth correction to accelerate geometry extraction. While 3DGS-based methods [1, 8, 13, 15, 18, 37, 53,"
268
+ },
269
+ {
270
+ "type": "text",
271
+ "bbox": [
272
+ 0.512,
273
+ 0.092,
274
+ 0.905,
275
+ 0.168
276
+ ],
277
+ "angle": 0,
278
+ "content": "[54] excel at capturing fine local details, their sparse representations struggle to maintain global geometry, leading to incomplete and fragmented reconstructions. This paper focuses on integrating the strengths of both representations to achieve optimal neural surface reconstruction."
279
+ },
280
+ {
281
+ "type": "title",
282
+ "bbox": [
283
+ 0.513,
284
+ 0.179,
285
+ 0.744,
286
+ 0.194
287
+ ],
288
+ "angle": 0,
289
+ "content": "2.3. Combing 3DGS and SDF"
290
+ },
291
+ {
292
+ "type": "text",
293
+ "bbox": [
294
+ 0.512,
295
+ 0.201,
296
+ 0.907,
297
+ 0.399
298
+ ],
299
+ "angle": 0,
300
+ "content": "Several recent approaches have integrated SDF-based [27, 28] and 3DGS-based representations to improve surface reconstruction. NeuSG [9] and GSDF [50] jointly optimize SDF and 3DGS, enforcing geometric consistency (e.g., depths and normals) to improve surface detail [14]. Similarly, 3DGSR [24] combines SDF values with Gaussian opacity in a joint optimization framework for better geometry. While effective in dense-view settings, these methods struggle to reconstruct high-quality structures under sparse-view conditions, as shown in our experiments in Sec. 4. Our approach specifically targets sparse-view scenarios by leveraging a complementary structure to enhance both rendering and reconstruction quality."
301
+ },
302
+ {
303
+ "type": "title",
304
+ "bbox": [
305
+ 0.513,
306
+ 0.414,
307
+ 0.605,
308
+ 0.429
309
+ ],
310
+ "angle": 0,
311
+ "content": "3. Method"
312
+ },
313
+ {
314
+ "type": "text",
315
+ "bbox": [
316
+ 0.512,
317
+ 0.44,
318
+ 0.907,
319
+ 0.577
320
+ ],
321
+ "angle": 0,
322
+ "content": "Our method takes sparse viewpoint images with camera poses as input, aiming to reconstruct 3D geometry and color for novel view synthesis and mesh extraction. Fig. 2 provides an overview of SurfaceSplat. In the following sections, we first introduce the preliminaries in Sec. 3.1, then explain how SDF-based mesh reconstruction improves 3DGS for novel view synthesis in Sec. 3.2, and finally describe how 3DGS-based rendering enhances SDF-based surface reconstruction quality in Sec. 3.3."
323
+ },
324
+ {
325
+ "type": "title",
326
+ "bbox": [
327
+ 0.513,
328
+ 0.588,
329
+ 0.655,
330
+ 0.603
331
+ ],
332
+ "angle": 0,
333
+ "content": "3.1. Preliminaries"
334
+ },
335
+ {
336
+ "type": "text",
337
+ "bbox": [
338
+ 0.512,
339
+ 0.61,
340
+ 0.906,
341
+ 0.762
342
+ ],
343
+ "angle": 0,
344
+ "content": "SDF-based representation. NeuS [36] proposes to model scene coordinates as signed distance function (SDF) values and optimize using differentiable volume rendering, similar to NeRF [25]. After optimization, object surfaces are extracted using the marching cubes algorithm [22]. To render a pixel, a ray is cast from the camera center \\(o\\) through the pixel along the viewing direction \\(v\\) as \\(\\{p(t) = o + tv|t\\geq 0\\}\\), and the pixel color is computed by integrating \\(N\\) sampled points along the ray \\(\\{p_i = o + t_iv|i = 1,\\dots,N,t_i < t_{i + 1}\\}\\) using volume rendering:"
345
+ },
346
+ {
347
+ "type": "equation",
348
+ "bbox": [
349
+ 0.576,
350
+ 0.774,
351
+ 0.907,
352
+ 0.819
353
+ ],
354
+ "angle": 0,
355
+ "content": "\\[\n\\hat {C} (r) = \\sum_ {i = 1} ^ {N} T _ {i} \\alpha_ {i} c _ {i}, T _ {i} = \\prod_ {j = 1} ^ {i - 1} (1 - \\alpha_ {j}), \\qquad (1)\n\\]"
356
+ },
357
+ {
358
+ "type": "text",
359
+ "bbox": [
360
+ 0.513,
361
+ 0.829,
362
+ 0.905,
363
+ 0.86
364
+ ],
365
+ "angle": 0,
366
+ "content": "where \\(\\alpha_{i}\\) represents opacity and \\(T_{i}\\) is the accumulated transmittance. It is computed as:"
367
+ },
368
+ {
369
+ "type": "equation",
370
+ "bbox": [
371
+ 0.532,
372
+ 0.871,
373
+ 0.907,
374
+ 0.907
375
+ ],
376
+ "angle": 0,
377
+ "content": "\\[\n\\alpha_ {i} = \\max \\left(\\frac {\\Phi_ {s} (f (p (t _ {i}))) - \\Phi_ {s} (f (p (t _ {i + 1})))}{\\Phi_ {s} (f (p (t _ {i})))}, 0\\right), \\quad (2)\n\\]"
378
+ },
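+ As a concrete reading of Eqs. (1)-(2), the following minimal sketch composites one ray in NumPy; the sharpness s, the array shapes, and the epsilon guard are illustrative assumptions rather than values from the paper.
+ ```python
+ import numpy as np
+ 
+ def render_ray(sdf_vals, colors, s=64.0, eps=1e-8):
+     """Composite one pixel color from samples along a ray (Eqs. 1-2).
+ 
+     sdf_vals: (N+1,) SDF values f(p(t_i)) at consecutive samples
+     colors:   (N, 3)  per-sample colors c_i
+     s:        sharpness of the logistic CDF Phi_s (learned in NeuS)
+     """
+     phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))  # Phi_s(f(p(t_i)))
+     # Eq. (2): opacity from the decrease of Phi_s between adjacent samples
+     alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + eps), 0.0, None)
+     # Eq. (1): transmittance T_i = prod_{j<i} (1 - alpha_j)
+     T = np.concatenate(([1.0], np.cumprod(1.0 - alpha[:-1])))
+     return ((T * alpha)[:, None] * colors).sum(axis=0)
+ ```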
379
+ {
380
+ "type": "page_number",
381
+ "bbox": [
382
+ 0.48,
383
+ 0.946,
384
+ 0.52,
385
+ 0.957
386
+ ],
387
+ "angle": 0,
388
+ "content": "28526"
389
+ }
390
+ ],
391
+ [
392
+ {
393
+ "type": "image",
394
+ "bbox": [
395
+ 0.134,
396
+ 0.09,
397
+ 0.868,
398
+ 0.462
399
+ ],
400
+ "angle": 0,
401
+ "content": null
402
+ },
403
+ {
404
+ "type": "image_caption",
405
+ "bbox": [
406
+ 0.089,
407
+ 0.47,
408
+ 0.908,
409
+ 0.515
410
+ ],
411
+ "angle": 0,
412
+ "content": "Figure 2. Overview of the proposed SurfaceSplat. (A) We reconstruct a coarse mesh using an SDF-based representation. (B) Point clouds are sampled from the mesh surface to initialize 3DGS. (C) 3DGS renders new viewpoint images to expand the training set, refining the mesh. (D) Steps B and C can be repeated for iterative optimization, progressively improving performance."
413
+ },
414
+ {
415
+ "type": "text",
416
+ "bbox": [
417
+ 0.089,
418
+ 0.539,
419
+ 0.487,
420
+ 0.741
421
+ ],
422
+ "angle": 0,
423
+ "content": "where \\( f(x) \\) is the SDF function and \\( \\Phi_s(x) = (1 + e^{-sx})^{-1} \\) is the Sigmoid function, with \\( s \\) learned during training. Based on this, Voxurf [41] proposes a hybrid representation that combines a voxel grid with a shallow MLP to reconstruct the implicit SDF field. In the coarse stage, Voxurf [41] optimizes for a better overall shape by using 3D convolution and interpolation to estimate SDF values. In the fine stage, it increases the voxel grid resolution and employs a dual-color MLP architecture, consisting of two networks: \\( g_{geo} \\), which takes hierarchical geometry features as input, and \\( g_{feat} \\), which receives local features from \\( V^{(\\mathrm{feat})} \\) along with surface normals. We incorporate Voxurf in this work due to its effective balance between accuracy and efficiency."
424
+ },
425
+ {
426
+ "type": "text",
427
+ "bbox": [
428
+ 0.089,
429
+ 0.763,
430
+ 0.487,
431
+ 0.904
432
+ ],
433
+ "angle": 0,
434
+ "content": "3DGS-based representation. 3DGS [17] models a set of 3D Gaussians to represent the scene, which is similar to point clouds. Each Gaussian ellipse has a color and an opacity and is defined by its centered position \\( x \\) (mean), and a full covariance matrix \\( \\Sigma: G(x) = e^{-\\frac{1}{2} x^T \\Sigma^{-1} x} \\). When projecting 3D Gaussians to 2D for rendering, the splattering method is used to position the Gaussians on 2D planes, which involves a new covariance matrix \\( \\Sigma' \\) in camera coordinates defined as: \\( \\Sigma' = J W \\Sigma W^T J^T \\), where \\( W \\) denotes"
435
+ },
436
+ {
437
+ "type": "text",
438
+ "bbox": [
439
+ 0.512,
440
+ 0.54,
441
+ 0.909,
442
+ 0.617
443
+ ],
444
+ "angle": 0,
445
+ "content": "a given viewing transformation matrix and \\(J\\) is the Jacobian of the affine approximation of the projective transformation. To enable differentiable optimization, \\(\\Sigma\\) is further decomposed into a scaling matrix \\(S\\) and a rotation matrix \\(R\\): \\(\\Sigma = R S T^T R^T\\)."
446
+ },
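+ A minimal sketch of the covariance algebra above (NumPy; shapes and argument names are our assumptions):
+ ```python
+ import numpy as np
+ 
+ def splat_covariance(R, scales, W, J):
+     """2D splat covariance of one Gaussian.
+ 
+     R: (3,3) rotation, scales: (3,) per-axis scales,
+     W: (3,3) viewing transformation, J: (2,3) Jacobian of the
+     affine approximation of the projective transformation.
+     """
+     S = np.diag(scales)
+     Sigma = R @ S @ S.T @ R.T          # Sigma = R S S^T R^T (PSD by construction)
+     return J @ W @ Sigma @ W.T @ J.T   # Sigma' = J W Sigma W^T J^T
+ ```
+ Parameterizing \\(\\Sigma\\) through \\(R\\) and \\(S\\) keeps it positive semi-definite throughout gradient-based optimization, which is why 3DGS optimizes per-axis scales and a rotation quaternion rather than \\(\\Sigma\\) directly.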
447
+ {
448
+ "type": "title",
449
+ "bbox": [
450
+ 0.513,
451
+ 0.627,
452
+ 0.745,
453
+ 0.645
454
+ ],
455
+ "angle": 0,
456
+ "content": "3.2. SDF for Improved 3DGS"
457
+ },
458
+ {
459
+ "type": "text",
460
+ "bbox": [
461
+ 0.512,
462
+ 0.65,
463
+ 0.909,
464
+ 0.803
465
+ ],
466
+ "angle": 0,
467
+ "content": "3DGS [17] typically initializes with sparse point clouds estimated by COLMAP [31], which are often inaccurate or missing in low-texture or little over-lapping regions. To address this, we propose initializing 3DGS by uniformly sampling points from a mesh surface derived from a SDF representation, ensuring high-quality novel view rendering while preserving global geometry. Below, we detail our proposed method for mesh reconstruction, mesh cleaning, and point cloud sampling. A visual example of the reconstructed meshes and sampled points is shown in Fig. 3."
468
+ },
469
+ {
470
+ "type": "text",
471
+ "bbox": [
472
+ 0.512,
473
+ 0.826,
474
+ 0.912,
475
+ 0.903
476
+ ],
477
+ "angle": 0,
478
+ "content": "Coarse mesh reconstruction. Given \\(M\\) sparse images \\(\\{\\mathcal{I}\\}\\) and their camera poses \\(\\{\\pi\\}\\), our objective is to reconstruct a 3D surface for sampling points. As our focus is on robust global geometry rather than highly accurate surfaces, and to ensure efficient mesh reconstruction,"
479
+ },
480
+ {
481
+ "type": "page_number",
482
+ "bbox": [
483
+ 0.479,
484
+ 0.945,
485
+ 0.522,
486
+ 0.958
487
+ ],
488
+ "angle": 0,
489
+ "content": "28527"
490
+ }
491
+ ],
492
+ [
493
+ {
494
+ "type": "image",
495
+ "bbox": [
496
+ 0.095,
497
+ 0.099,
498
+ 0.195,
499
+ 0.184
500
+ ],
501
+ "angle": 0,
502
+ "content": null
503
+ },
504
+ {
505
+ "type": "image_caption",
506
+ "bbox": [
507
+ 0.104,
508
+ 0.188,
509
+ 0.192,
510
+ 0.199
511
+ ],
512
+ "angle": 0,
513
+ "content": "(a) Reference image"
514
+ },
515
+ {
516
+ "type": "image",
517
+ "bbox": [
518
+ 0.207,
519
+ 0.089,
520
+ 0.334,
521
+ 0.183
522
+ ],
523
+ "angle": 0,
524
+ "content": null
525
+ },
526
+ {
527
+ "type": "image_caption",
528
+ "bbox": [
529
+ 0.234,
530
+ 0.188,
531
+ 0.305,
532
+ 0.198
533
+ ],
534
+ "angle": 0,
535
+ "content": "(b) Coarse mesh"
536
+ },
537
+ {
538
+ "type": "image",
539
+ "bbox": [
540
+ 0.346,
541
+ 0.089,
542
+ 0.468,
543
+ 0.183
544
+ ],
545
+ "angle": 0,
546
+ "content": null
547
+ },
548
+ {
549
+ "type": "image_caption",
550
+ "bbox": [
551
+ 0.359,
552
+ 0.188,
553
+ 0.474,
554
+ 0.198
555
+ ],
556
+ "angle": 0,
557
+ "content": "(c) Coarse mesh w/ normal"
558
+ },
559
+ {
560
+ "type": "image",
561
+ "bbox": [
562
+ 0.102,
563
+ 0.208,
564
+ 0.194,
565
+ 0.275
566
+ ],
567
+ "angle": 0,
568
+ "content": null
569
+ },
570
+ {
571
+ "type": "image_caption",
572
+ "bbox": [
573
+ 0.107,
574
+ 0.277,
575
+ 0.192,
576
+ 0.296
577
+ ],
578
+ "angle": 0,
579
+ "content": "(d) Coarse mesh w/ normal and clean"
580
+ },
581
+ {
582
+ "type": "image",
583
+ "bbox": [
584
+ 0.221,
585
+ 0.208,
586
+ 0.314,
587
+ 0.275
588
+ ],
589
+ "angle": 0,
590
+ "content": null
591
+ },
592
+ {
593
+ "type": "image_caption",
594
+ "bbox": [
595
+ 0.215,
596
+ 0.281,
597
+ 0.325,
598
+ 0.292
599
+ ],
600
+ "angle": 0,
601
+ "content": "(e) Color points sampling"
602
+ },
603
+ {
604
+ "type": "image",
605
+ "bbox": [
606
+ 0.334,
607
+ 0.209,
608
+ 0.48,
609
+ 0.281
610
+ ],
611
+ "angle": 0,
612
+ "content": null
613
+ },
614
+ {
615
+ "type": "image_caption",
616
+ "bbox": [
617
+ 0.359,
618
+ 0.281,
619
+ 0.474,
620
+ 0.292
621
+ ],
622
+ "angle": 0,
623
+ "content": "(f) COLMAP sparse points"
624
+ },
625
+ {
626
+ "type": "image_caption",
627
+ "bbox": [
628
+ 0.09,
629
+ 0.306,
630
+ 0.484,
631
+ 0.404
632
+ ],
633
+ "angle": 0,
634
+ "content": "Figure 3. Visualization of our mesh reconstruction, cleaning, and point sampling. (b) Naïve coarse mesh reconstruction following Voxurf [41]. (c) Coarse mesh reconstructed with our proposed normal loss, reducing floaters. (d) Post-processed mesh with both normal loss and our cleaning methods. (e) Our sampled point clouds used for initializing 3DGS. (f) COLMAP-estimated point clouds, typically used for 3DGS initialization."
635
+ },
636
+ {
637
+ "type": "text",
638
+ "bbox": [
639
+ 0.09,
640
+ 0.426,
641
+ 0.484,
642
+ 0.546
643
+ ],
644
+ "angle": 0,
645
+ "content": "we adopt the coarse-stage surface reconstruction from Vox-urf [41]. Specifically, we use a grid-based SDF representation \\(V^{(\\mathrm{sdf})}\\) for efficient mesh reconstruction. For each sampled 3D point \\(\\mathbf{x} \\in \\mathbb{R}^3\\), the grid outputs the corresponding SDF value: \\(V^{(\\mathrm{sdf})}: \\mathbb{R}^3 \\to \\mathbb{R}\\). We use differentiable volume rendering to render image pixels \\(\\hat{C}(r)\\) and employs image reconstruction loss to supervise. The loss function \\(\\mathcal{L}\\) is formulated as:"
646
+ },
647
+ {
648
+ "type": "equation",
649
+ "bbox": [
650
+ 0.105,
651
+ 0.55,
652
+ 0.483,
653
+ 0.575
654
+ ],
655
+ "angle": 0,
656
+ "content": "\\[\n\\mathcal {L} = \\mathcal {L} _ {\\text {r e c o n}} + \\mathcal {L} _ {T V} \\left(V ^ {(\\mathrm {s d f})}\\right) + \\mathcal {L} _ {\\text {s m o o t h}} \\left(\\nabla V ^ {(\\mathrm {s d f})}\\right), \\tag {3}\n\\]"
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.09,
662
+ 0.58,
663
+ 0.483,
664
+ 0.7
665
+ ],
666
+ "angle": 0,
667
+ "content": "where the reconstruction loss \\(\\mathcal{L}_{\\mathrm{recon}}\\) calculates photometric image rendering loss, originating from both the \\(g_{geo}\\) and \\(g_{feat}\\) branches. The \\(\\mathcal{L}_{TV}\\) encourages a continuous and compact geometry, while the smoothness regularization \\(\\mathcal{L}_{\\mathrm{smooth}}\\) promotes local smoothness of the geometric surface. We refer to Voxurf [41] for the detailed implementation of the loss functions. The coarse reconstruction typically completes in 15 minutes in our experiments."
668
+ },
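+ For illustration, one common form of the TV term on a dense voxel grid is sketched below (PyTorch); the exact weighting and the smoothness term follow Voxurf and are not reproduced here.
+ ```python
+ import torch
+ 
+ def tv_loss(sdf_grid: torch.Tensor) -> torch.Tensor:
+     """Total-variation regularizer on an (X, Y, Z) SDF voxel grid
+     (the L_TV term in Eq. 3): penalizes differences between adjacent
+     voxels so the recovered geometry stays continuous and compact."""
+     dx = (sdf_grid[1:, :, :] - sdf_grid[:-1, :, :]).abs().mean()
+     dy = (sdf_grid[:, 1:, :] - sdf_grid[:, :-1, :]).abs().mean()
+     dz = (sdf_grid[:, :, 1:] - sdf_grid[:, :, :-1]).abs().mean()
+     return dx + dy + dz
+ ```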
669
+ {
670
+ "type": "text",
671
+ "bbox": [
672
+ 0.09,
673
+ 0.701,
674
+ 0.483,
675
+ 0.837
676
+ ],
677
+ "angle": 0,
678
+ "content": "Due to the limited number of training views, the learned grid often exhibits floating artifacts, as shown in Fig. 3 (b), which leads to incorrect point sampling. To mitigate this, we introduce a normal consistency loss to improve training stability, effectively reducing floaters and smoothing the geometric surface. Our approach leverages the predicted monocular surface normal \\(\\hat{N} (\\mathbf{r})\\) from the Metric3D model [48] to supervise the volume-rendered normal \\(\\bar{N} (\\mathbf{r})\\) in the same coordinate system. The formulation is:"
679
+ },
680
+ {
681
+ "type": "equation",
682
+ "bbox": [
683
+ 0.17,
684
+ 0.844,
685
+ 0.483,
686
+ 0.866
687
+ ],
688
+ "angle": 0,
689
+ "content": "\\[\n\\mathcal {L} _ {\\text {n o r m a l}} = \\sum \\left(\\| \\hat {N} (\\mathbf {r}) - \\bar {N} (\\mathbf {r}) \\| _ {1}\\right). \\tag {4}\n\\]"
690
+ },
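+ A minimal PyTorch sketch of Eq. (4); whether the per-ray L1 norms are summed or averaged over the batch is an implementation choice we assume here.
+ ```python
+ import torch
+ 
+ def normal_loss(pred: torch.Tensor, rendered: torch.Tensor) -> torch.Tensor:
+     """L1 consistency between monocular normals N_hat(r), e.g. from
+     Metric3D, and volume-rendered normals N_bar(r), per Eq. (4).
+     Both tensors are (num_rays, 3) in the same coordinate system."""
+     return (pred - rendered).abs().sum(dim=-1).mean()
+ ```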
691
+ {
692
+ "type": "text",
693
+ "bbox": [
694
+ 0.09,
695
+ 0.871,
696
+ 0.483,
697
+ 0.902
698
+ ],
699
+ "angle": 0,
700
+ "content": "We integrate this loss with Eqn. (3) during training to effectively remove floaters. Fig. 3 (c) shows a coarse mesh re"
701
+ },
702
+ {
703
+ "type": "text",
704
+ "bbox": [
705
+ 0.513,
706
+ 0.092,
707
+ 0.907,
708
+ 0.121
709
+ ],
710
+ "angle": 0,
711
+ "content": "constructed with the normal loss, demonstrating improved surface smoothness and reduced artifacts."
712
+ },
713
+ {
714
+ "type": "text",
715
+ "bbox": [
716
+ 0.512,
717
+ 0.142,
718
+ 0.907,
719
+ 0.28
720
+ ],
721
+ "angle": 0,
722
+ "content": "Mesh cleaning. Even though the proposed normal loss significantly reduces floaters, some still persist, adding noise to the subsequent 3DGS initialization. To mitigate this, we apply a mesh cleaning step that refines the coarse mesh by removing non-main components. Specifically, we first use Marching Cube algorithm [38] to extract triangle mesh \\(\\mathcal{M} = (\\mathcal{V},\\mathcal{F})\\) from SDF grid \\(V^{(\\mathrm{sdf})}\\). Then we cluster the connected mesh triangles to \\(\\{\\mathcal{F}_i\\}\\), identify the largest cluster index: \\(|\\mathcal{F}_{i_{\\max}}| = \\max (|\\mathcal{F}_i|)\\) and get remove parts"
723
+ },
724
+ {
725
+ "type": "equation",
726
+ "bbox": [
727
+ 0.605,
728
+ 0.291,
729
+ 0.907,
730
+ 0.309
731
+ ],
732
+ "angle": 0,
733
+ "content": "\\[\n\\mathcal {F} _ {\\text {r e m o v e}} = \\{f \\in \\mathcal {F} \\mid f \\notin \\mathcal {F} _ {i _ {\\max }} \\}. \\tag {5}\n\\]"
734
+ },
735
+ {
736
+ "type": "text",
737
+ "bbox": [
738
+ 0.513,
739
+ 0.32,
740
+ 0.907,
741
+ 0.366
742
+ ],
743
+ "angle": 0,
744
+ "content": "Finally, we filter the floaters \\(\\mathcal{F}_{\\mathrm{remove}}\\) from \\(\\mathcal{M}\\), resulting in \\(\\mathcal{M}_1 = \\mathcal{M} \\setminus \\mathcal{F}_{\\mathrm{remove}}\\). Fig. 3 (d) illustrates the refined mesh after applying our cleaning method."
745
+ },
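+ A sketch of this cleaning step using trimesh's connected-component split (the paper's implementation may differ):
+ ```python
+ import trimesh
+ 
+ def keep_largest_component(mesh: trimesh.Trimesh) -> trimesh.Trimesh:
+     """Eq. (5): cluster connected faces {F_i}, keep the largest cluster
+     F_{i_max}, and discard the rest (the floaters F_remove)."""
+     parts = mesh.split(only_watertight=False)  # connected clusters {F_i}
+     return max(parts, key=lambda part: len(part.faces))
+ ```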
746
+ {
747
+ "type": "text",
748
+ "bbox": [
749
+ 0.512,
750
+ 0.386,
751
+ 0.907,
752
+ 0.598
753
+ ],
754
+ "angle": 0,
755
+ "content": "Sampling surface points for 3DGS. Since the mesh obtained from Marching Cubes includes regions that are invisible from the training views, directly sampling points from the mesh surface can introduce noise into 3DGS. To mitigate this, we propose a depth-based sampling strategy. First, we project the reconstructed mesh onto the training views using their known camera poses to generate depth maps \\(\\{\\mathcal{D}\\}\\). Since these depth maps originate from a 3D mesh, they maintain multi-view consistency. We then randomly sample points from valid depth regions, ensuring they correspond to visible object surfaces. The sampled pixels \\((u,v)\\), along with their depth values \\(d(u,v)\\), are back-projected to colorized 3D points \\(\\mathbf{P} = \\{(x_i,y_i,z_i) \\mid i = 1,2,\\dots,N\\}\\) using the following formulation:"
756
+ },
757
+ {
758
+ "type": "equation",
759
+ "bbox": [
760
+ 0.561,
761
+ 0.609,
762
+ 0.906,
763
+ 0.631
764
+ ],
765
+ "angle": 0,
766
+ "content": "\\[\n\\left[ \\begin{array}{l l l} x _ {i} & y _ {i} & z _ {i} \\end{array} \\right] = \\boldsymbol {\\pi} _ {\\boldsymbol {k}} \\mathbf {K} ^ {- 1} \\left[ \\begin{array}{l l l} d \\cdot u & d \\cdot v & d \\end{array} \\right] ^ {T}. \\tag {6}\n\\]"
767
+ },
768
+ {
769
+ "type": "text",
770
+ "bbox": [
771
+ 0.512,
772
+ 0.642,
773
+ 0.907,
774
+ 0.779
775
+ ],
776
+ "angle": 0,
777
+ "content": "This approach ensures that the sampled points are uniformly distributed on the object's surface while remaining visible in the training views, leading to a more stable and accurate 3DGS initialization. As our reconstructed mesh primarily covers foreground regions, we combine our sampled point cloud with COLMAP sparse points when rendering background regions, serving as the initialization for 3DGS. Fig. 3 (e) and (f) illustrate our sampled point clouds and COLMAP-estimated point clouds, respectively."
778
+ },
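+ A NumPy sketch of this depth-based sampling for one view, following Eq. (6); the depth/intrinsics conventions and the sample budget are assumptions, and point colors are omitted for brevity.
+ ```python
+ import numpy as np
+ 
+ def sample_surface_points(depth, K, cam_to_world, n_samples=20000, seed=0):
+     """Back-project randomly sampled valid depth pixels to 3D points.
+ 
+     depth:        (H, W) mesh-rendered depth, 0 marks invalid pixels
+     K:            (3, 3) camera intrinsics
+     cam_to_world: (4, 4) pose pi_k, camera -> world
+     """
+     rng = np.random.default_rng(seed)
+     v, u = np.nonzero(depth > 0)                   # visible surface pixels
+     idx = rng.choice(len(u), size=min(n_samples, len(u)), replace=False)
+     u, v = u[idx], v[idx]
+     d = depth[v, u]
+     pix = np.stack([d * u, d * v, d], axis=-1)     # [d*u, d*v, d]
+     cam = pix @ np.linalg.inv(K).T                 # apply K^{-1} row-wise
+     homo = np.concatenate([cam, np.ones((len(cam), 1))], axis=1)
+     return (homo @ cam_to_world.T)[:, :3]          # world-space (x_i, y_i, z_i)
+ ```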
779
+ {
780
+ "type": "title",
781
+ "bbox": [
782
+ 0.513,
783
+ 0.788,
784
+ 0.743,
785
+ 0.803
786
+ ],
787
+ "angle": 0,
788
+ "content": "3.3. 3DGS for Enhanced SDF"
789
+ },
790
+ {
791
+ "type": "text",
792
+ "bbox": [
793
+ 0.512,
794
+ 0.81,
795
+ 0.906,
796
+ 0.901
797
+ ],
798
+ "angle": 0,
799
+ "content": "We argue that the primary bottleneck for SDF-based mesh reconstruction is insufficient supervision due to limited training views. To address this, we generate additional novel viewpoint images using a 3DGS-based method and combine them with the original sparse views to enhance the training of SDF-based reconstruction."
800
+ },
801
+ {
802
+ "type": "page_number",
803
+ "bbox": [
804
+ 0.479,
805
+ 0.945,
806
+ 0.519,
807
+ 0.957
808
+ ],
809
+ "angle": 0,
810
+ "content": "28528"
811
+ }
812
+ ],
813
+ [
814
+ {
815
+ "type": "image",
816
+ "bbox": [
817
+ 0.098,
818
+ 0.09,
819
+ 0.28,
820
+ 0.232
821
+ ],
822
+ "angle": 0,
823
+ "content": null
824
+ },
825
+ {
826
+ "type": "image_caption",
827
+ "bbox": [
828
+ 0.097,
829
+ 0.24,
830
+ 0.283,
831
+ 0.253
832
+ ],
833
+ "angle": 0,
834
+ "content": "(a) Camera position perturbation"
835
+ },
836
+ {
837
+ "type": "image",
838
+ "bbox": [
839
+ 0.306,
840
+ 0.091,
841
+ 0.49,
842
+ 0.231
843
+ ],
844
+ "angle": 0,
845
+ "content": null
846
+ },
847
+ {
848
+ "type": "image_caption",
849
+ "bbox": [
850
+ 0.308,
851
+ 0.24,
852
+ 0.483,
853
+ 0.253
854
+ ],
855
+ "angle": 0,
856
+ "content": "(b) Camera poses interpolation"
857
+ },
858
+ {
859
+ "type": "image_caption",
860
+ "bbox": [
861
+ 0.102,
862
+ 0.269,
863
+ 0.47,
864
+ 0.283
865
+ ],
866
+ "angle": 0,
867
+ "content": "Figure 4. Top-view visualization of pose expansion strategies."
868
+ },
869
+ {
870
+ "type": "text",
871
+ "bbox": [
872
+ 0.09,
873
+ 0.309,
874
+ 0.484,
875
+ 0.43
876
+ ],
877
+ "angle": 0,
878
+ "content": "Rendering novel viewpoint images. We utilize the improved 3DGS, initialized with our proposed mesh-based point sampling method, to render images. Thanks to our robust and dense point initialization, the 3D Gaussian \\(\\mathcal{G}\\) can converge after \\(7k\\) iterations in just 5 minutes, yielding \\(\\mathcal{G} = f(\\mathbf{P},\\{I\\},\\{\\pi\\})\\). Given new camera poses \\(\\{\\pi_{\\mathrm{new}}\\}\\), the 3D Gaussian \\(\\mathcal{G}\\) can be projected to generate novel-view images as follows:"
879
+ },
880
+ {
881
+ "type": "equation",
882
+ "bbox": [
883
+ 0.196,
884
+ 0.441,
885
+ 0.482,
886
+ 0.459
887
+ ],
888
+ "angle": 0,
889
+ "content": "\\[\n\\{\\mathcal {I} _ {\\text {n e w}} \\} = \\operatorname {S p l a t} (\\mathcal {G}, \\left\\{\\pi_ {\\text {n e w}} \\right\\}). \\tag {7}\n\\]"
890
+ },
891
+ {
892
+ "type": "text",
893
+ "bbox": [
894
+ 0.09,
895
+ 0.469,
896
+ 0.484,
897
+ 0.53
898
+ ],
899
+ "angle": 0,
900
+ "content": "The newly rendered images \\(\\{\\mathcal{I}_{\\mathrm{new}}\\}\\) are combined with the input images \\(\\{\\mathcal{I}\\}\\) to train the SDF-based mesh reconstruction. The key challenge lies in selecting new camera viewpoints \\(\\{\\pi_{\\mathrm{new}}\\}\\) that best enhance surface reconstruction:"
901
+ },
902
+ {
903
+ "type": "equation",
904
+ "bbox": [
905
+ 0.225,
906
+ 0.541,
907
+ 0.482,
908
+ 0.559
909
+ ],
910
+ "angle": 0,
911
+ "content": "\\[\n\\left\\{\\boldsymbol {\\pi} _ {\\text {n e w}} \\right\\} = g \\left(\\left\\{\\boldsymbol {\\pi} \\right\\}\\right) \\tag {8}\n\\]"
912
+ },
913
+ {
914
+ "type": "text",
915
+ "bbox": [
916
+ 0.09,
917
+ 0.569,
918
+ 0.484,
919
+ 0.645
920
+ ],
921
+ "angle": 0,
922
+ "content": "where \\( g \\) is our pose expansion strategy. To ensure new viewpoints remain consistent with the original pose distribution and avoid excessive deviation that could blur or diminish the foreground, we explore two methods for generating new camera poses. Fig. 4 shows the generated pose position."
923
+ },
924
+ {
925
+ "type": "text",
926
+ "bbox": [
927
+ 0.09,
928
+ 0.663,
929
+ 0.484,
930
+ 0.724
931
+ ],
932
+ "angle": 0,
933
+ "content": "Camera position perturbation. To generate new camera positions while preserving proximity to the original distribution, a perturbation \\(\\Delta \\mathbf{p}\\) is applied to the initial camera positions \\(\\{\\pmb {c}\\}\\). The new camera centers \\(\\{\\pmb {c}_m^{\\prime}\\}\\) are computed:"
934
+ },
935
+ {
936
+ "type": "equation",
937
+ "bbox": [
938
+ 0.235,
939
+ 0.734,
940
+ 0.483,
941
+ 0.752
942
+ ],
943
+ "angle": 0,
944
+ "content": "\\[\n\\boldsymbol {c} _ {m} ^ {\\prime} = \\boldsymbol {c} + \\Delta \\mathbf {p}, \\tag {9}\n\\]"
945
+ },
946
+ {
947
+ "type": "text",
948
+ "bbox": [
949
+ 0.09,
950
+ 0.762,
951
+ 0.483,
952
+ 0.793
953
+ ],
954
+ "angle": 0,
955
+ "content": "where \\(\\Delta \\mathbf{p} = (\\Delta x, \\Delta y, \\Delta z)\\) represents a controlled offset vector designed to modulate the new viewpoints."
956
+ },
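+ A minimal sketch of Eq. (9); drawing the offset relative to the extent of the input camera positions, and the rel_scale value, are our assumptions.
+ ```python
+ import numpy as np
+ 
+ def perturb_centers(centers, rel_scale=0.05, seed=0):
+     """Eq. (9): c'_m = c + dp, with a controlled random offset dp.
+     centers: (M, 3) input camera positions."""
+     rng = np.random.default_rng(seed)
+     extent = centers.max(axis=0) - centers.min(axis=0)
+     dp = rng.uniform(-rel_scale, rel_scale, size=centers.shape) * extent
+     return centers + dp
+ ```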
957
+ {
958
+ "type": "text",
959
+ "bbox": [
960
+ 0.09,
961
+ 0.811,
962
+ 0.484,
963
+ 0.903
964
+ ],
965
+ "angle": 0,
966
+ "content": "Camera pose interpolation. Our method takes a set of camera rotation matrices \\(\\{\\mathbf{R}\\}\\) and camera positions \\(\\{e\\}\\) as input. To generate smooth transitions between viewpoints, we employ cubic spline interpolation [11]. This approach interpolates both camera positions and orientations, producing interpolated camera centers \\(\\{c_m'\\}\\) and rotation matrices"
967
+ },
968
+ {
969
+ "type": "text",
970
+ "bbox": [
971
+ 0.512,
972
+ 0.091,
973
+ 0.906,
974
+ 0.183
975
+ ],
976
+ "angle": 0,
977
+ "content": "\\(\\{\\mathbf{R}_m^{\\prime}\\}\\) that ensure visual continuity and positional coherence. By maintaining these properties, the newly generated camera poses facilitate high-quality transitions, making them well-suited for 3D mesh reconstruction. The visualizations of the images generated from new viewpoints can be found in Fig. 2 of the supplementary material."
978
+ },
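+ A sketch of the pose interpolation using SciPy; positions follow the cubic spline of [11], while orientations here use spherical linear interpolation as a simple stand-in for the paper's scheme.
+ ```python
+ import numpy as np
+ from scipy.interpolate import CubicSpline
+ from scipy.spatial.transform import Rotation, Slerp
+ 
+ def interpolate_poses(centers, rot_mats, n_new):
+     """centers: (M, 3), rot_mats: (M, 3, 3); returns n_new interpolated
+     camera centers {c'_m} and rotation matrices {R'_m}."""
+     t = np.arange(len(centers), dtype=float)
+     t_new = np.linspace(0.0, len(centers) - 1.0, n_new)
+     new_centers = CubicSpline(t, centers, axis=0)(t_new)
+     new_rots = Slerp(t, Rotation.from_matrix(rot_mats))(t_new)
+     return new_centers, new_rots.as_matrix()
+ ```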
979
+ {
980
+ "type": "text",
981
+ "bbox": [
982
+ 0.512,
983
+ 0.202,
984
+ 0.906,
985
+ 0.293
986
+ ],
987
+ "angle": 0,
988
+ "content": "Refining surface reconstruction. We reuse the reconstructed coarse mesh and refine it with the original and expanded novel viewpoint images. Following the fine-stage reconstruction of Voxurf [41], we increase the grid resolution and introduce a dual color network and hierarchical geometry features for detailed surface reconstruction."
989
+ },
990
+ {
991
+ "type": "title",
992
+ "bbox": [
993
+ 0.513,
994
+ 0.302,
995
+ 0.706,
996
+ 0.318
997
+ ],
998
+ "angle": 0,
999
+ "content": "3.4. Cyclic Optimization"
1000
+ },
1001
+ {
1002
+ "type": "text",
1003
+ "bbox": [
1004
+ 0.512,
1005
+ 0.324,
1006
+ 0.906,
1007
+ 0.369
1008
+ ],
1009
+ "angle": 0,
1010
+ "content": "We propose an interactive optimization process, which begins by generating an initial coarse mesh \\(\\mathcal{M}^{(0)}\\). Then, in each iteration \\(n\\), the process follows two steps:"
1011
+ },
1012
+ {
1013
+ "type": "text",
1014
+ "bbox": [
1015
+ 0.512,
1016
+ 0.37,
1017
+ 0.906,
1018
+ 0.417
1019
+ ],
1020
+ "angle": 0,
1021
+ "content": "1. Rendering Step: We optimize a 3DGS model for rendering novel view images, which is initialized by sampling points from the current coarse mesh \\(\\mathcal{M}_c^{(n)}\\), represented by:"
1022
+ },
1023
+ {
1024
+ "type": "equation",
1025
+ "bbox": [
1026
+ 0.648,
1027
+ 0.428,
1028
+ 0.905,
1029
+ 0.453
1030
+ ],
1031
+ "angle": 0,
1032
+ "content": "\\[\n\\mathcal {I} ^ {(n)} = \\mathcal {R} \\left(\\mathcal {M} _ {c} ^ {(n)}\\right) \\tag {10}\n\\]"
1033
+ },
1034
+ {
1035
+ "type": "text",
1036
+ "bbox": [
1037
+ 0.513,
1038
+ 0.463,
1039
+ 0.906,
1040
+ 0.509
1041
+ ],
1042
+ "angle": 0,
1043
+ "content": "2. Meshing Step: We refine the current mesh by finetuning it using both the newly rendered images and the original input images:"
1044
+ },
1045
+ {
1046
+ "type": "equation",
1047
+ "bbox": [
1048
+ 0.627,
1049
+ 0.52,
1050
+ 0.905,
1051
+ 0.547
1052
+ ],
1053
+ "angle": 0,
1054
+ "content": "\\[\n\\mathcal {M} _ {f} ^ {(n)} = \\mathcal {O} \\left(\\mathcal {M} _ {c} ^ {(n)}, \\mathcal {I} ^ {(n)}\\right) \\tag {11}\n\\]"
1055
+ },
1056
+ {
1057
+ "type": "text",
1058
+ "bbox": [
1059
+ 0.512,
1060
+ 0.556,
1061
+ 0.906,
1062
+ 0.587
1063
+ ],
1064
+ "angle": 0,
1065
+ "content": "where \\(\\mathcal{O}\\) represents the SDF grid optimization. Then, we update the refined mesh:"
1066
+ },
1067
+ {
1068
+ "type": "equation",
1069
+ "bbox": [
1070
+ 0.65,
1071
+ 0.597,
1072
+ 0.905,
1073
+ 0.62
1074
+ ],
1075
+ "angle": 0,
1076
+ "content": "\\[\n\\mathcal {M} _ {c} ^ {(n + 1)} = \\mathcal {M} _ {f} ^ {(n)}. \\tag {12}\n\\]"
1077
+ },
1078
+ {
1079
+ "type": "text",
1080
+ "bbox": [
1081
+ 0.512,
1082
+ 0.629,
1083
+ 0.906,
1084
+ 0.705
1085
+ ],
1086
+ "angle": 0,
1087
+ "content": "By iterating this process, our method allows SDF-based reconstruction and 3DGS-based rendering to complement each other, improving both reconstruction accuracy and novel view synthesis. To balance efficiency and accuracy, we typically perform only one iteration."
1088
+ },
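+ The whole loop, as a sketch; coarse_sdf_mesh, expand_poses, render_with_3dgs, and refine_sdf_mesh are hypothetical stand-ins for the components described in Secs. 3.2-3.3.
+ ```python
+ def cyclic_optimization(images, poses, n_cycles=1):
+     """Alternate the rendering and meshing steps of Sec. 3.4."""
+     mesh = coarse_sdf_mesh(images, poses)                    # M_c^(0)
+     for _ in range(n_cycles):                                # one cycle by default
+         new_poses = expand_poses(poses)                      # Eq. (8)
+         new_images = render_with_3dgs(mesh, images, poses, new_poses)  # Eqs. (7), (10)
+         mesh = refine_sdf_mesh(mesh, images + new_images)    # Eq. (11)
+     return mesh                                              # M_c^(n+1) = M_f^(n), Eq. (12)
+ ```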
1089
+ {
1090
+ "type": "title",
1091
+ "bbox": [
1092
+ 0.513,
1093
+ 0.718,
1094
+ 0.646,
1095
+ 0.735
1096
+ ],
1097
+ "angle": 0,
1098
+ "content": "4. Experiments"
1099
+ },
1100
+ {
1101
+ "type": "title",
1102
+ "bbox": [
1103
+ 0.513,
1104
+ 0.743,
1105
+ 0.704,
1106
+ 0.76
1107
+ ],
1108
+ "angle": 0,
1109
+ "content": "4.1. Experimental Setup"
1110
+ },
1111
+ {
1112
+ "type": "text",
1113
+ "bbox": [
1114
+ 0.512,
1115
+ 0.765,
1116
+ 0.907,
1117
+ 0.902
1118
+ ],
1119
+ "angle": 0,
1120
+ "content": "Datasets. We conduct a comprehensive evaluation of the proposed method on the MobileBrick [19] and DTU [16] datasets. MobileBrick is a multi-view RGB-D dataset captured on a mobile device, providing precise 3D annotations for detailed 3D object reconstruction. Unlike the DTU dataset, which is captured in a controlled lab environment, MobileBrick represents more challenging, real-world conditions, making it more reflective of everyday scenarios. Following previous methods [19, 36, 41], we use 15 test"
1121
+ },
1122
+ {
1123
+ "type": "page_number",
1124
+ "bbox": [
1125
+ 0.479,
1126
+ 0.945,
1127
+ 0.519,
1128
+ 0.957
1129
+ ],
1130
+ "angle": 0,
1131
+ "content": "28529"
1132
+ }
1133
+ ],
1134
+ [
1135
+ {
1136
+ "type": "table_caption",
1137
+ "bbox": [
1138
+ 0.09,
1139
+ 0.089,
1140
+ 0.908,
1141
+ 0.119
1142
+ ],
1143
+ "angle": 0,
1144
+ "content": "Table 1. Surface reconstruction and novel view synthesis results on MobileBrick. The results are averaged over all 18 test scenes with an initial input of 10 images per scene. PSNR-F is computed only on foreground regions. The best results are bolded."
1145
+ },
1146
+ {
1147
+ "type": "table",
1148
+ "bbox": [
1149
+ 0.118,
1150
+ 0.13,
1151
+ 0.885,
1152
+ 0.359
1153
+ ],
1154
+ "angle": 0,
1155
+ "content": "<table><tr><td></td><td colspan=\"7\">Mesh Reconstruction</td><td colspan=\"2\">Rendering</td><td rowspan=\"3\">Time</td></tr><tr><td></td><td colspan=\"3\">σ = 2.5mm</td><td colspan=\"3\">σ = 5mm</td><td rowspan=\"2\">CD (mm)↓</td><td rowspan=\"2\">PSNR↑</td><td rowspan=\"2\">PSNR-F↑</td></tr><tr><td></td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td></tr><tr><td>Voxurf [41]</td><td>62.89</td><td>62.54</td><td>62.42</td><td>80.93</td><td>80.61</td><td>80.38</td><td>13.3</td><td>14.34</td><td>18.34</td><td>55 mins</td></tr><tr><td>MonoSDF [51]</td><td>41.56</td><td>32.47</td><td>36.22</td><td>57.88</td><td>48.19</td><td>52.21</td><td>37.7</td><td>14.71</td><td>15.42</td><td>6 hrs</td></tr><tr><td>2DGS [15]</td><td>49.83</td><td>45.32</td><td>47.10</td><td>72.65</td><td>64.88</td><td>67.96</td><td>14.8</td><td>17.12</td><td>18.52</td><td>10 mins</td></tr><tr><td>GOF [53]</td><td>50.24</td><td>61.11</td><td>54.96</td><td>74.99</td><td>82.68</td><td>78.16</td><td>11.0</td><td>16.52</td><td>18.36</td><td>50 mins</td></tr><tr><td>3DGS [17]</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>17.19</td><td>19.12</td><td>10 mins</td></tr><tr><td>SparseGS [42]</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>\\</td><td>16.93</td><td>18.74</td><td>30 mins</td></tr><tr><td>Ours</td><td>68.36</td><td>69.79</td><td>68.97</td><td>86.79</td><td>86.82</td><td>86.65</td><td>9.7</td><td>17.48</td><td>20.45</td><td>1 hr</td></tr><tr><td>Ours (Two cycles)</td><td>69.61</td><td>68.89</td><td>69.14</td><td>87.79</td><td>85.93</td><td>86.74</td><td>9.9</td><td>17.58</td><td>20.55</td><td>1.6 hr</td></tr></table>"
1156
+ },
1157
+ {
1158
+ "type": "table_caption",
1159
+ "bbox": [
1160
+ 0.09,
1161
+ 0.371,
1162
+ 0.908,
1163
+ 0.414
1164
+ ],
1165
+ "angle": 0,
1166
+ "content": "Table 2. Surface reconstruction results on DTU with 5 input views. Values indicate Chamfer Distance in millimeters (mm). -\" denotes failure cases where COLMAP could not generate point clouds for 3DGS initialization. GSDF-10 is reported with 10 input images, as it fails in sparser settings. The best results are bolded, while the second-best are underlined."
1167
+ },
1168
+ {
1169
+ "type": "table",
1170
+ "bbox": [
1171
+ 0.094,
1172
+ 0.424,
1173
+ 0.905,
1174
+ 0.585
1175
+ ],
1176
+ "angle": 0,
1177
+ "content": "<table><tr><td>Scan</td><td>24</td><td>37</td><td>40</td><td>55</td><td>63</td><td>65</td><td>69</td><td>83</td><td>97</td><td>105</td><td>106</td><td>110</td><td>114</td><td>118</td><td>122</td><td>Mean</td><td>Time</td></tr><tr><td>Voxurf [41]</td><td>2.74</td><td>4.50</td><td>3.39</td><td>1.52</td><td>2.24</td><td>2.00</td><td>2.94</td><td>1.29</td><td>2.49</td><td>1.28</td><td>2.45</td><td>4.69</td><td>0.93</td><td>2.74</td><td>1.29</td><td>2.43</td><td>50 mins</td></tr><tr><td>MonoSDF [51]</td><td>1.30</td><td>3.45</td><td>1.45</td><td>0.61</td><td>1.43</td><td>1.17</td><td>1.07</td><td>1.42</td><td>1.49</td><td>0.79</td><td>3.06</td><td>2.60</td><td>0.60</td><td>2.21</td><td>2.87</td><td>1.70</td><td>6 hrs</td></tr><tr><td>SparseNeuS [21]</td><td>3.57</td><td>3.73</td><td>3.11</td><td>1.50</td><td>2.36</td><td>2.89</td><td>1.91</td><td>2.10</td><td>2.89</td><td>2.01</td><td>2.08</td><td>3.44</td><td>1.21</td><td>2.19</td><td>2.11</td><td>2.43</td><td>Pretrain + 2 hrs ft</td></tr><tr><td>2DGS [15]</td><td>4.26</td><td>4.80</td><td>5.53</td><td>1.50</td><td>3.01</td><td>1.99</td><td>2.66</td><td>3.65</td><td>3.06</td><td>2.54</td><td>2.15</td><td>-</td><td>0.96</td><td>2.17</td><td>1.31</td><td>2.84</td><td>6 mins</td></tr><tr><td>GOF (TSDF) [53]</td><td>7.30</td><td>5.80</td><td>6.03</td><td>2.79</td><td>4.23</td><td>3.41</td><td>3.44</td><td>4.37</td><td>3.75</td><td>2.99</td><td>3.19</td><td>-</td><td>2.64</td><td>3.67</td><td>2.25</td><td>4.03</td><td>50 mins</td></tr><tr><td>GOF [53]</td><td>4.37</td><td>3.68</td><td>3.84</td><td>2.29</td><td>4.40</td><td>3.28</td><td>2.84</td><td>4.64</td><td>3.40</td><td>3.76</td><td>3.56</td><td>-</td><td>3.06</td><td>2.95</td><td>2.91</td><td>3.55</td><td>50 mins</td></tr><tr><td>GSDF-10 [50]</td><td>6.89</td><td>6.82</td><td>7.97</td><td>6.54</td><td>5.22</td><td>1.91</td><td>5.56</td><td>4.38</td><td>7.01</td><td>3.69</td><td>6.33</td><td>6.33</td><td>3.95</td><td>6.30</td><td>2.09</td><td>5.40</td><td>3 hrs</td></tr><tr><td>Ours</td><td>1.55</td><td>2.64</td><td>1.52</td><td>1.40</td><td>1.51</td><td>1.46</td><td>1.23</td><td>1.43</td><td>1.82</td><td>1.19</td><td>1.49</td><td>1.80</td><td>0.54</td><td>1.19</td><td>1.04</td><td>1.45</td><td>1 hr</td></tr></table>"
1178
+ },
1179
+ {
1180
+ "type": "text",
1181
+ "bbox": [
1182
+ 0.089,
1183
+ 0.609,
1184
+ 0.482,
1185
+ 0.792
1186
+ ],
1187
+ "angle": 0,
1188
+ "content": "scenes from DTU and 18 test scenes from MobileBrick for evaluation. In the MobileBrick dataset, each scene consists of 360-degree multi-view images, from which we sample 10 images with \\(10\\%\\) overlap for sparse view reconstruction. In contrast, the DTU dataset, with higher overlap, is sampled with 5 frames per scene. We also present reconstruction results for the little-overlapping 3-view setting in the supplementary materials. For fair comparison, 3DGS-based methods are initialized using point clouds from COLMAP[30] with ground-truth poses. The selected images and poses are used for 3D reconstruction, while the remaining images serve as a test set for evaluating novel view rendering."
1189
+ },
1190
+ {
1191
+ "type": "text",
1192
+ "bbox": [
1193
+ 0.089,
1194
+ 0.81,
1195
+ 0.483,
1196
+ 0.903
1197
+ ],
1198
+ "angle": 0,
1199
+ "content": "Baselines. We compare our proposed method with both SDF-based and 3DGS-based approaches for surface reconstruction. The SDF-based methods include MonoSDF[51], Voxurf[41], and SparseNeuS [21], which is pre-trained on large-scale data. The 3DGS-based methods include 2DGS[15] and GOF[53]. Additionally, we compare with"
1200
+ },
1201
+ {
1202
+ "type": "text",
1203
+ "bbox": [
1204
+ 0.512,
1205
+ 0.609,
1206
+ 0.907,
1207
+ 0.671
1208
+ ],
1209
+ "angle": 0,
1210
+ "content": "GSDF [50], which integrates both SDF and 3DGS, similar to our approach, but is designed for dense-view settings. For novel view rendering, we evaluate all these methods along with 3DGS[17] and SparseGS[42]."
1211
+ },
1212
+ {
1213
+ "type": "text",
1214
+ "bbox": [
1215
+ 0.512,
1216
+ 0.687,
1217
+ 0.909,
1218
+ 0.84
1219
+ ],
1220
+ "angle": 0,
1221
+ "content": "Evaluation metrics. We follow the official evaluation metrics on MobileBrick, reporting Chamfer Distance, precision, recall, and F1 score at two thresholds: \\(2.5mm\\) and \\(5mm\\). For the DTU dataset, we use Chamfer Distance as the primary metric for surface reconstruction. To evaluate novel view rendering performance, we report PSNR for full images and PSNR-F, which is computed only over foreground regions. In each scene, we train models using sparse input images and test on all remaining views. The final result is averaged over all evaluation images."
1222
+ },
1223
+ {
1224
+ "type": "text",
1225
+ "bbox": [
1226
+ 0.513,
1227
+ 0.856,
1228
+ 0.909,
1229
+ 0.903
1230
+ ],
1231
+ "angle": 0,
1232
+ "content": "Implementation details. We set the voxel grid resolution to \\(96^3\\) during coarse mesh training, requiring approximately 15 minutes for 10k iterations. The weight of the proposed"
1233
+ },
1234
+ {
1235
+ "type": "page_number",
1236
+ "bbox": [
1237
+ 0.479,
1238
+ 0.945,
1239
+ 0.521,
1240
+ 0.958
1241
+ ],
1242
+ "angle": 0,
1243
+ "content": "28530"
1244
+ }
1245
+ ],
1246
+ [
1247
+ {
1248
+ "type": "table_caption",
1249
+ "bbox": [
1250
+ 0.09,
1251
+ 0.089,
1252
+ 0.486,
1253
+ 0.148
1254
+ ],
1255
+ "angle": 0,
1256
+ "content": "Table 3. Surface reconstruction results with varying numbers of input views on MobileBrick (porsche) and DTU (scan69). The Baseline represents a pure SDF-based reconstruction without the assistance from 3DGS. \\(\\delta\\) indicates the improvement."
1257
+ },
1258
+ {
1259
+ "type": "table",
1260
+ "bbox": [
1261
+ 0.116,
1262
+ 0.156,
1263
+ 0.462,
1264
+ 0.245
1265
+ ],
1266
+ "angle": 0,
1267
+ "content": "<table><tr><td rowspan=\"2\">Input</td><td colspan=\"3\">MobileBrick / F1 score</td><td colspan=\"3\">DTU / CD</td></tr><tr><td>Baseline</td><td>Ours</td><td>δ</td><td>Baseline</td><td>Ours</td><td>δ</td></tr><tr><td>5</td><td>33.50</td><td>43.11</td><td>+9.61</td><td>2.940</td><td>1.230</td><td>-1.710</td></tr><tr><td>10</td><td>59.66</td><td>62.37</td><td>+2.71</td><td>1.362</td><td>1.165</td><td>-0.197</td></tr><tr><td>20</td><td>63.18</td><td>63.88</td><td>+0.7</td><td>1.043</td><td>0.965</td><td>-0.078</td></tr></table>"
1268
+ },
1269
+ {
1270
+ "type": "text",
1271
+ "bbox": [
1272
+ 0.09,
1273
+ 0.269,
1274
+ 0.485,
1275
+ 0.378
1276
+ ],
1277
+ "angle": 0,
1278
+ "content": "normal loss is set to 0.05, while all other parameters follow Voxurf [41]. Next, we train 3DGS [17] for 7k iterations, which takes around 5 minutes, and render 10 new viewpoint images within 30 seconds. After expanding the training images, we increase the voxel grid resolution to \\(256^{3}\\) and train for 20k iterations, taking approximately 40 minutes. Thus, a complete optimization cycle takes roughly 1 hour."
1279
+ },
1280
+ {
1281
+ "type": "title",
1282
+ "bbox": [
1283
+ 0.091,
1284
+ 0.386,
1285
+ 0.232,
1286
+ 0.403
1287
+ ],
1288
+ "angle": 0,
1289
+ "content": "4.2. Comparisons"
1290
+ },
1291
+ {
1292
+ "type": "text",
1293
+ "bbox": [
1294
+ 0.09,
1295
+ 0.408,
1296
+ 0.485,
1297
+ 0.621
1298
+ ],
1299
+ "angle": 0,
1300
+ "content": "Results on MobileBrick. Table 1 presents a quantitative comparison of our method against previous approaches. The results show that Voxurf [41], which utilizes an SDF-based representation, outperforms 2DGS [15] and GOF [53] (both 3DGS-based methods) in surface reconstruction metrics, particularly in terms of the F1 score. However, all 3DGS-based methods achieve notably better novel view rendering performance, as evidenced by their higher PSNR values compared to Voxurf. A visual comparison is illustrated in Fig. 5 and Fig. 6. BBy leveraging the strengths of both SDF and 3DGS representations, our method achieves state-of-the-art performance in surface reconstruction and novel view synthesis. To balance efficiency and performance, we adopt a single-cycle approach in practice."
1301
+ },
1302
+ {
1303
+ "type": "text",
1304
+ "bbox": [
1305
+ 0.09,
1306
+ 0.641,
1307
+ 0.484,
1308
+ 0.794
1309
+ ],
1310
+ "angle": 0,
1311
+ "content": "Results on DTU. Table 2 presents surface reconstruction results on the DTU dataset, which is particularly challenging due to the use of only 5 uniformly sampled frames for reconstruction. SparseNeuS [21] is a pre-trained model that requires an additional 2 hours of fine-tuning. COLMAP fails to generate sparse point clouds for scene 110, preventing 3DGS initialization. GSDF [50] struggles in sparse-view settings, so we train it on 10 images. Despite these challenges, our method achieves robust reconstruction and significantly outperforms other approaches."
1312
+ },
1313
+ {
1314
+ "type": "title",
1315
+ "bbox": [
1316
+ 0.091,
1317
+ 0.803,
1318
+ 0.203,
1319
+ 0.818
1320
+ ],
1321
+ "angle": 0,
1322
+ "content": "4.3. Ablations"
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "bbox": [
1327
+ 0.09,
1328
+ 0.825,
1329
+ 0.485,
1330
+ 0.903
1331
+ ],
1332
+ "angle": 0,
1333
+ "content": "Efficacy of 3DGS for Improving SDF. Table 3 compares our method with a pure SDF-based reconstruction baseline at different sparsity levels, using up to 20 images per scene. The results on MobileBrick and DTU validate the effectiveness of our 3DGS-assisted SDF approach. More results are"
1334
+ },
1335
+ {
1336
+ "type": "table_caption",
1337
+ "bbox": [
1338
+ 0.513,
1339
+ 0.089,
1340
+ 0.907,
1341
+ 0.119
1342
+ ],
1343
+ "angle": 0,
1344
+ "content": "Table 4. 3DGS rendering results with different initializations, averaged across all 18 MobileBrick test scenes."
1345
+ },
1346
+ {
1347
+ "type": "table",
1348
+ "bbox": [
1349
+ 0.547,
1350
+ 0.129,
1351
+ 0.875,
1352
+ 0.202
1353
+ ],
1354
+ "angle": 0,
1355
+ "content": "<table><tr><td>Method</td><td>Foreground PSNR</td></tr><tr><td>3DGS (COLMAP)</td><td>19.13</td></tr><tr><td>3DGS w/ mesh clean</td><td>19.88</td></tr><tr><td>3DGS w/ normal and mesh clean</td><td>20.45</td></tr></table>"
1356
+ },
1357
+ {
1358
+ "type": "table_caption",
1359
+ "bbox": [
1360
+ 0.513,
1361
+ 0.215,
1362
+ 0.907,
1363
+ 0.243
1364
+ ],
1365
+ "angle": 0,
1366
+ "content": "Table 5. Ablation study on pose expansion strategies for in MobileBrick (aston) with 10 input images."
1367
+ },
1368
+ {
1369
+ "type": "table",
1370
+ "bbox": [
1371
+ 0.526,
1372
+ 0.254,
1373
+ 0.903,
1374
+ 0.334
1375
+ ],
1376
+ "angle": 0,
1377
+ "content": "<table><tr><td></td><td>F1↑</td><td>Recall(%)↑</td><td>CD (mm)↓</td></tr><tr><td>Baseline</td><td>55.8</td><td>49.9</td><td>8.7</td></tr><tr><td>Camera position perturbation</td><td>59.9</td><td>57.4</td><td>6.6</td></tr><tr><td>Camera poses interpolation</td><td>60.8</td><td>59.1</td><td>6.4</td></tr></table>"
1378
+ },
1379
+ {
1380
+ "type": "text",
1381
+ "bbox": [
1382
+ 0.513,
1383
+ 0.359,
1384
+ 0.78,
1385
+ 0.375
1386
+ ],
1387
+ "angle": 0,
1388
+ "content": "provided in the supplementary material."
1389
+ },
1390
+ {
1391
+ "type": "image",
1392
+ "bbox": [
1393
+ 0.532,
1394
+ 0.392,
1395
+ 0.887,
1396
+ 0.557
1397
+ ],
1398
+ "angle": 0,
1399
+ "content": null
1400
+ },
1401
+ {
1402
+ "type": "image_caption",
1403
+ "bbox": [
1404
+ 0.513,
1405
+ 0.569,
1406
+ 0.907,
1407
+ 0.613
1408
+ ],
1409
+ "angle": 0,
1410
+ "content": "Figure 7. econstruction quality with varying numbers of 3DGS-rendered novel view images from expanded poses, averaged across all 18 MobileBrick test scenes, with an initial input of 10 images."
1411
+ },
1412
+ {
1413
+ "type": "text",
1414
+ "bbox": [
1415
+ 0.512,
1416
+ 0.635,
1417
+ 0.907,
1418
+ 0.713
1419
+ ],
1420
+ "angle": 0,
1421
+ "content": "Efficacy of SDF for enhancing 3DGS. Table 4 compares the novel view rendering results for 3DGS using point clouds initialized with different sampling strategies. The results demonstrate that our proposed mesh cleaning and normal supervision notably improve 3DGS performance."
1422
+ },
1423
+ {
1424
+ "type": "text",
1425
+ "bbox": [
1426
+ 0.512,
1427
+ 0.73,
1428
+ 0.909,
1429
+ 0.853
1430
+ ],
1431
+ "angle": 0,
1432
+ "content": "Number of newly rendered views. Fig. 7 illustrates the impact of the number of newly rendered images on surface reconstruction. On MobileBrick, rendering 10 novel views significantly improves Chamfer Distance \\((26.5\\%)\\) and F1 \\((7.2\\%)\\). As the number of novel views increases, accuracy gains gradually diminish. This suggests that while additional renderings refine reconstruction, the majority of benefits are achieved with the first 10 rendered images."
1433
+ },
1434
+ {
1435
+ "type": "text",
1436
+ "bbox": [
1437
+ 0.513,
1438
+ 0.871,
1439
+ 0.909,
1440
+ 0.903
1441
+ ],
1442
+ "angle": 0,
1443
+ "content": "Different pose expansion strategies. Table 5 summarizes the reconstruction performance with expansion images"
1444
+ },
1445
+ {
1446
+ "type": "page_number",
1447
+ "bbox": [
1448
+ 0.48,
1449
+ 0.945,
1450
+ 0.518,
1451
+ 0.957
1452
+ ],
1453
+ "angle": 0,
1454
+ "content": "28531"
1455
+ }
1456
+ ],
1457
+ [
1458
+ {
1459
+ "type": "image",
1460
+ "bbox": [
1461
+ 0.117,
1462
+ 0.09,
1463
+ 0.891,
1464
+ 0.402
1465
+ ],
1466
+ "angle": 0,
1467
+ "content": null
1468
+ },
1469
+ {
1470
+ "type": "image_caption",
1471
+ "bbox": [
1472
+ 0.134,
1473
+ 0.411,
1474
+ 0.861,
1475
+ 0.427
1476
+ ],
1477
+ "angle": 0,
1478
+ "content": "Figure 5. Qualitative mesh reconstruction comparisons on MobileBrick. See more visual results in supplementary material."
1479
+ },
1480
+ {
1481
+ "type": "image",
1482
+ "bbox": [
1483
+ 0.107,
1484
+ 0.442,
1485
+ 0.892,
1486
+ 0.65
1487
+ ],
1488
+ "angle": 0,
1489
+ "content": null
1490
+ },
1491
+ {
1492
+ "type": "image_caption",
1493
+ "bbox": [
1494
+ 0.283,
1495
+ 0.66,
1496
+ 0.713,
1497
+ 0.675
1498
+ ],
1499
+ "angle": 0,
1500
+ "content": "Figure 6. Qualitative novel view synthesis comparisons on MobileBrick."
1501
+ },
1502
+ {
1503
+ "type": "text",
1504
+ "bbox": [
1505
+ 0.09,
1506
+ 0.701,
1507
+ 0.483,
1508
+ 0.778
1509
+ ],
1510
+ "angle": 0,
1511
+ "content": "from different strategies. We double the number of original input camera poses, generating new viewpoints and rendering additional images accordingly. The two strategies significantly enhance surface reconstruction quality, with camera pose interpolation yielding the greatest improvement."
1512
+ },
1513
+ {
1514
+ "type": "title",
1515
+ "bbox": [
1516
+ 0.091,
1517
+ 0.788,
1518
+ 0.21,
1519
+ 0.803
1520
+ ],
1521
+ "angle": 0,
1522
+ "content": "5. Conclusion"
1523
+ },
1524
+ {
1525
+ "type": "text",
1526
+ "bbox": [
1527
+ 0.09,
1528
+ 0.811,
1529
+ 0.484,
1530
+ 0.902
1531
+ ],
1532
+ "angle": 0,
1533
+ "content": "This paper introduces a novel framework for sparse-view reconstruction, where SDF-based and 3DGS-based representations complement each other to enhance both surface reconstruction and novel view rendering. Specifically, our method leverages SDF for modeling global geometry and 3DGS for capturing fine details, achieving significant im"
1534
+ },
1535
+ {
1536
+ "type": "text",
1537
+ "bbox": [
1538
+ 0.513,
1539
+ 0.702,
1540
+ 0.907,
1541
+ 0.731
1542
+ ],
1543
+ "angle": 0,
1544
+ "content": "provements over state-of-the-art methods on two widely used real-world datasets."
1545
+ },
1546
+ {
1547
+ "type": "text",
1548
+ "bbox": [
1549
+ 0.512,
1550
+ 0.765,
1551
+ 0.907,
1552
+ 0.903
1553
+ ],
1554
+ "angle": 0,
1555
+ "content": "Limitation and future work. Although our method can theoretically be generalized to any SDF and novel view rendering approaches, our current implementation is built on Voxurf and 3DGS, which were selected for their efficiency-performance trade-off. As a result, our method is currently limited to object-level scenes and struggles with extremely sparse inputs, such as only two images. In the future, we aim to extend our approach to handle more diverse scenes and further improve its robustness to sparse inputs."
1556
+ },
1557
+ {
1558
+ "type": "page_number",
1559
+ "bbox": [
1560
+ 0.479,
1561
+ 0.945,
1562
+ 0.519,
1563
+ 0.957
1564
+ ],
1565
+ "angle": 0,
1566
+ "content": "28532"
1567
+ }
1568
+ ],
1569
+ [
1570
+ {
1571
+ "type": "title",
1572
+ "bbox": [
1573
+ 0.092,
1574
+ 0.092,
1575
+ 0.238,
1576
+ 0.108
1577
+ ],
1578
+ "angle": 0,
1579
+ "content": "Acknowledgments"
1580
+ },
1581
+ {
1582
+ "type": "text",
1583
+ "bbox": [
1584
+ 0.091,
1585
+ 0.113,
1586
+ 0.483,
1587
+ 0.143
1588
+ ],
1589
+ "angle": 0,
1590
+ "content": "This work was supported by the National Natural Science Foundation of China (No. 62206244)."
1591
+ },
1592
+ {
1593
+ "type": "title",
1594
+ "bbox": [
1595
+ 0.093,
1596
+ 0.155,
1597
+ 0.188,
1598
+ 0.17
1599
+ ],
1600
+ "angle": 0,
1601
+ "content": "References"
1602
+ },
1603
+ {
1604
+ "type": "ref_text",
1605
+ "bbox": [
1606
+ 0.102,
1607
+ 0.18,
1608
+ 0.483,
1609
+ 0.234
1610
+ ],
1611
+ "angle": 0,
1612
+ "content": "[1] Jiayang Bai, Letian Huang, Jie Guo, Wen Gong, Yuanqi Li, and Yanwen Guo. 360-gs: Layout-guided panoramic gaussian splatting for indoor roaming. In International Conference on 3D Vision 2025. 2"
1613
+ },
1614
+ {
1615
+ "type": "ref_text",
1616
+ "bbox": [
1617
+ 0.101,
1618
+ 0.236,
1619
+ 0.483,
1620
+ 0.303
1621
+ ],
1622
+ "angle": 0,
1623
+ "content": "[2] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Int. Conf. Comput. Vis., pages 5855–5864, 2021. 2"
1624
+ },
1625
+ {
1626
+ "type": "ref_text",
1627
+ "bbox": [
1628
+ 0.102,
1629
+ 0.305,
1630
+ 0.483,
1631
+ 0.359
1632
+ ],
1633
+ "angle": 0,
1634
+ "content": "[3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5470-5479, 2022. 2"
1635
+ },
1636
+ {
1637
+ "type": "ref_text",
1638
+ "bbox": [
1639
+ 0.102,
1640
+ 0.361,
1641
+ 0.482,
1642
+ 0.401
1643
+ ],
1644
+ "angle": 0,
1645
+ "content": "[4] Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, and Philip Torr. Porf: Pose residual field for accurate neural surface reconstruction. In ICLR, 2024. 2"
1646
+ },
1647
+ {
1648
+ "type": "ref_text",
1649
+ "bbox": [
1650
+ 0.102,
1651
+ 0.403,
1652
+ 0.482,
1653
+ 0.442
1654
+ ],
1655
+ "angle": 0,
1656
+ "content": "[5] Wenjing Bian, Zirui Wang, Kejie Li, Jiawang Bian, and Victor Adrian Prisacariu. Nope-nerf: Optimising neural radiance field with no pose prior. 2023. 2"
1657
+ },
1658
+ {
1659
+ "type": "ref_text",
1660
+ "bbox": [
1661
+ 0.102,
1662
+ 0.444,
1663
+ 0.482,
1664
+ 0.498
1665
+ ],
1666
+ "angle": 0,
1667
+ "content": "[6] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Int. Conf. Comput. Vis., pages 14124-14133, 2021. 1, 2"
1668
+ },
1669
+ {
1670
+ "type": "ref_text",
1671
+ "bbox": [
1672
+ 0.102,
1673
+ 0.5,
1674
+ 0.482,
1675
+ 0.54
1676
+ ],
1677
+ "angle": 0,
1678
+ "content": "[7] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In Eur. Conf. Comput. Vis., pages 333-350. Springer, 2022. 2"
1679
+ },
1680
+ {
1681
+ "type": "ref_text",
1682
+ "bbox": [
1683
+ 0.102,
1684
+ 0.541,
1685
+ 0.482,
1686
+ 0.609
1687
+ ],
1688
+ "angle": 0,
1689
+ "content": "[8] Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, and Guofeng Zhang. Pgsr: Planar-based gaussian splatting for efficient and high-fidelity surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 2024. 2"
1690
+ },
1691
+ {
1692
+ "type": "ref_text",
1693
+ "bbox": [
1694
+ 0.102,
1695
+ 0.611,
1696
+ 0.482,
1697
+ 0.651
1698
+ ],
1699
+ "angle": 0,
1700
+ "content": "[9] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023. 2"
1701
+ },
1702
+ {
1703
+ "type": "ref_text",
1704
+ "bbox": [
1705
+ 0.094,
1706
+ 0.652,
1707
+ 0.482,
1708
+ 0.719
1709
+ ],
1710
+ "angle": 0,
1711
+ "content": "[10] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 1, 2"
1712
+ },
1713
+ {
1714
+ "type": "ref_text",
1715
+ "bbox": [
1716
+ 0.094,
1717
+ 0.721,
1718
+ 0.482,
1719
+ 0.748
1720
+ ],
1721
+ "angle": 0,
1722
+ "content": "[11] C De Boor. A practical guide to splines. Springer-Verlag google schola, 2:4135-4195, 1978. 5"
1723
+ },
1724
+ {
1725
+ "type": "ref_text",
1726
+ "bbox": [
1727
+ 0.094,
1728
+ 0.749,
1729
+ 0.482,
1730
+ 0.804
1731
+ ],
1732
+ "angle": 0,
1733
+ "content": "[12] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5501-5510, 2022. 2"
1734
+ },
1735
+ {
1736
+ "type": "ref_text",
1737
+ "bbox": [
1738
+ 0.094,
1739
+ 0.805,
1740
+ 0.482,
1741
+ 0.859
1742
+ ],
1743
+ "angle": 0,
1744
+ "content": "[13] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5354-5363, 2024. 2"
1745
+ },
1746
+ {
1747
+ "type": "ref_text",
1748
+ "bbox": [
1749
+ 0.094,
1750
+ 0.86,
1751
+ 0.482,
1752
+ 0.9
1753
+ ],
1754
+ "angle": 0,
1755
+ "content": "[14] Siming He, Zach Osman, and Pratik Chaudhari. From nerfs to gaussian splats, and back. arXiv preprint arXiv:2405.09717, 2024. 2"
1756
+ },
1757
+ {
1758
+ "type": "list",
1759
+ "bbox": [
1760
+ 0.094,
1761
+ 0.18,
1762
+ 0.483,
1763
+ 0.9
1764
+ ],
1765
+ "angle": 0,
1766
+ "content": null
1767
+ },
1768
+ {
1769
+ "type": "ref_text",
1770
+ "bbox": [
1771
+ 0.517,
1772
+ 0.093,
1773
+ 0.906,
1774
+ 0.147
1775
+ ],
1776
+ "angle": 0,
1777
+ "content": "[15] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splattering for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers. Association for Computing Machinery, 2024. 1, 2, 6, 7"
1778
+ },
1779
+ {
1780
+ "type": "ref_text",
1781
+ "bbox": [
1782
+ 0.518,
1783
+ 0.149,
1784
+ 0.906,
1785
+ 0.203
1786
+ ],
1787
+ "angle": 0,
1788
+ "content": "[16] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engil Tola, and Henrik Aanæs. Large scale multi-view stereopsis evaluation. In IEEE Conf. Comput. Vis. Pattern Recog., 2014. 2, 5"
1789
+ },
1790
+ {
1791
+ "type": "ref_text",
1792
+ "bbox": [
1793
+ 0.518,
1794
+ 0.205,
1795
+ 0.906,
1796
+ 0.259
1797
+ ],
1798
+ "angle": 0,
1799
+ "content": "[17] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 2023. 1, 2, 3, 6, 7"
1800
+ },
1801
+ {
1802
+ "type": "ref_text",
1803
+ "bbox": [
1804
+ 0.517,
1805
+ 0.261,
1806
+ 0.905,
1807
+ 0.302
1808
+ ],
1809
+ "angle": 0,
1810
+ "content": "[18] Jiyeop Kim and Jongwoo Lim. Integrating meshes and 3d gaussians for indoor scene reconstruction with sam mask guidance. arXiv preprint arXiv:2407.16173, 2024. 2"
1811
+ },
1812
+ {
1813
+ "type": "ref_text",
1814
+ "bbox": [
1815
+ 0.517,
1816
+ 0.303,
1817
+ 0.905,
1818
+ 0.357
1819
+ ],
1820
+ "angle": 0,
1821
+ "content": "[19] Kejie Li, Jia-Wang Bian, Robert Castle, Philip HS Torr, and Victor Adrian Prisacariu. Mobilebrick: Building legs for 3d reconstruction on mobile devices. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4892-4901, 2023. 2, 5"
1822
+ },
1823
+ {
1824
+ "type": "ref_text",
1825
+ "bbox": [
1826
+ 0.517,
1827
+ 0.359,
1828
+ 0.905,
1829
+ 0.425
1830
+ ],
1831
+ "angle": 0,
1832
+ "content": "[20] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8456-8465, 2023. 2"
1833
+ },
1834
+ {
1835
+ "type": "ref_text",
1836
+ "bbox": [
1837
+ 0.517,
1838
+ 0.428,
1839
+ 0.905,
1840
+ 0.483
1841
+ ],
1842
+ "angle": 0,
1843
+ "content": "[21] Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, and Wenping Wang. Sparseneus: Fast generalizable neural surface reconstruction from sparse views. In Eur. Conf. Comput. Vis., pages 210-227. Springer, 2022. 1, 2, 6, 7"
1844
+ },
1845
+ {
1846
+ "type": "ref_text",
1847
+ "bbox": [
1848
+ 0.517,
1849
+ 0.485,
1850
+ 0.905,
1851
+ 0.537
1852
+ ],
1853
+ "angle": 0,
1854
+ "content": "[22] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. In Semin al graphics: pioneering efforts that shaped the field, pages 347-353, 1998. 2"
1855
+ },
1856
+ {
1857
+ "type": "ref_text",
1858
+ "bbox": [
1859
+ 0.517,
1860
+ 0.539,
1861
+ 0.905,
1862
+ 0.594
1863
+ ],
1864
+ "angle": 0,
1865
+ "content": "[23] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20654-20664, 2024. 2"
1866
+ },
1867
+ {
1868
+ "type": "ref_text",
1869
+ "bbox": [
1870
+ 0.518,
1871
+ 0.596,
1872
+ 0.905,
1873
+ 0.662
1874
+ ],
1875
+ "angle": 0,
1876
+ "content": "[24] Xiaoyang Lyu, Yang-Tian Sun, Yi-Hua Huang, Xiuzhe Wu, Ziyi Yang, Yilun Chen, Jiangmiao Pang, and Xiaojuan Qi. 3dgsr: Implicit surface reconstruction with 3d gaussian splatting. ACM Transactions on Graphics (TOG), 43(6):1-12, 2024. 2"
1877
+ },
1878
+ {
1879
+ "type": "ref_text",
1880
+ "bbox": [
1881
+ 0.518,
1882
+ 0.665,
1883
+ 0.905,
1884
+ 0.719
1885
+ ],
1886
+ "angle": 0,
1887
+ "content": "[25] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis., 2020. 1, 2"
1888
+ },
1889
+ {
1890
+ "type": "ref_text",
1891
+ "bbox": [
1892
+ 0.518,
1893
+ 0.721,
1894
+ 0.905,
1895
+ 0.774
1896
+ ],
1897
+ "angle": 0,
1898
+ "content": "[26] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):1-15, 2022. 2"
1899
+ },
1900
+ {
1901
+ "type": "ref_text",
1902
+ "bbox": [
1903
+ 0.518,
1904
+ 0.776,
1905
+ 0.905,
1906
+ 0.829
1907
+ ],
1908
+ "angle": 0,
1909
+ "content": "[27] Stanley Osher, Ronald Fedkiw, Stanley Osher, and Ronald Fedkiw. Constructing signed distance functions. Level set methods and dynamic implicit surfaces, pages 63-74, 2003. 2"
1910
+ },
1911
+ {
1912
+ "type": "ref_text",
1913
+ "bbox": [
1914
+ 0.517,
1915
+ 0.832,
1916
+ 0.905,
1917
+ 0.899
1918
+ ],
1919
+ "angle": 0,
1920
+ "content": "[28] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 165-174, 2019. 2"
1921
+ },
1922
+ {
1923
+ "type": "list",
1924
+ "bbox": [
1925
+ 0.517,
1926
+ 0.093,
1927
+ 0.906,
1928
+ 0.899
1929
+ ],
1930
+ "angle": 0,
1931
+ "content": null
1932
+ },
1933
+ {
1934
+ "type": "page_number",
1935
+ "bbox": [
1936
+ 0.48,
1937
+ 0.945,
1938
+ 0.519,
1939
+ 0.957
1940
+ ],
1941
+ "angle": 0,
1942
+ "content": "28533"
1943
+ }
1944
+ ],
1945
+ [
1946
+ {
1947
+ "type": "ref_text",
1948
+ "bbox": [
1949
+ 0.093,
1950
+ 0.092,
1951
+ 0.482,
1952
+ 0.161
1953
+ ],
1954
+ "angle": 0,
1955
+ "content": "[29] Yufan Ren, Fangjinhua Wang, Tong Zhang, Marc Pollefeys, and Sabine Susstrunk. Volrecon: Volume rendering of signed ray distance functions for generalizable multi-view reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 16685-16695, 2023. 1, 2"
1956
+ },
1957
+ {
1958
+ "type": "ref_text",
1959
+ "bbox": [
1960
+ 0.093,
1961
+ 0.163,
1962
+ 0.482,
1963
+ 0.207
1964
+ ],
1965
+ "angle": 0,
1966
+ "content": "[30] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In IEEE Conf. Comput. Vis. Pattern Recog., 2016. 6"
1967
+ },
1968
+ {
1969
+ "type": "ref_text",
1970
+ "bbox": [
1971
+ 0.093,
1972
+ 0.208,
1973
+ 0.482,
1974
+ 0.248
1975
+ ],
1976
+ "angle": 0,
1977
+ "content": "[31] Noah Snavely, Steven M Seitz, and Richard Szeliski. Photo tourism: exploring photo collections in 3d. In ACM SIGgraph, pages 835-846, 2006. 3"
1978
+ },
1979
+ {
1980
+ "type": "ref_text",
1981
+ "bbox": [
1982
+ 0.093,
1983
+ 0.25,
1984
+ 0.482,
1985
+ 0.305
1986
+ ],
1987
+ "angle": 0,
1988
+ "content": "[32] Xiaowei Song, Jv Zheng, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, and Hao Zhao. Saags: Scale-adaptive gaussian splatting for training-free antialIASing. arXiv preprint arXiv:2403.19615, 2024. 2"
1989
+ },
1990
+ {
1991
+ "type": "ref_text",
1992
+ "bbox": [
1993
+ 0.093,
1994
+ 0.307,
1995
+ 0.482,
1996
+ 0.362
1997
+ ],
1998
+ "angle": 0,
1999
+ "content": "[33] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5459-5469, 2022. 2"
2000
+ },
2001
+ {
2002
+ "type": "ref_text",
2003
+ "bbox": [
2004
+ 0.093,
2005
+ 0.364,
2006
+ 0.482,
2007
+ 0.433
2008
+ ],
2009
+ "angle": 0,
2010
+ "content": "[34] Hao Sun, Junping Qin, Lei Wang, Kai Yan, Zheng Liu, Xinglong Jia, and Xiaole Shi. 3dgs-hd: Elimination of unrealistic artifacts in 3d gaussian splatting. In 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), pages 696-702. IEEE, 2024. 2"
2011
+ },
2012
+ {
2013
+ "type": "ref_text",
2014
+ "bbox": [
2015
+ 0.093,
2016
+ 0.435,
2017
+ 0.482,
2018
+ 0.504
2019
+ ],
2020
+ "angle": 0,
2021
+ "content": "[35] Matias Turkulainen, Xuqian Ren, Jaroslav Melekhov, Otto Seiskari, Esa Rahtu, and Juho Kannala. Dn-splatter: Depth and normal priors for gaussian splatting and meshing. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2421-2431. IEEE, 2025. 2"
2022
+ },
2023
+ {
2024
+ "type": "ref_text",
2025
+ "bbox": [
2026
+ 0.093,
2027
+ 0.505,
2028
+ 0.482,
2029
+ 0.56
2030
+ ],
2031
+ "angle": 0,
2032
+ "content": "[36] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021. 1, 2, 5"
2033
+ },
2034
+ {
2035
+ "type": "ref_text",
2036
+ "bbox": [
2037
+ 0.093,
2038
+ 0.562,
2039
+ 0.482,
2040
+ 0.63
2041
+ ],
2042
+ "angle": 0,
2043
+ "content": "[37] Ruizhe Wang, Chunliang Hua, Tomakayev Shingys, Mengyuan Niu, Qingxin Yang, Lizhong Gao, Yi Zheng, Junyan Yang, and Qiao Wang. Enhancement of 3d gaussian splatting using raw mesh for photorealistic recreation of architectures. arXiv preprint arXiv:2407.15435, 2024. 2"
2044
+ },
2045
+ {
2046
+ "type": "ref_text",
2047
+ "bbox": [
2048
+ 0.093,
2049
+ 0.632,
2050
+ 0.482,
2051
+ 0.673
2052
+ ],
2053
+ "angle": 0,
2054
+ "content": "[38] LORENSEN WE. Marching cubes: A high resolution 3d surface construction algorithm. Computer graphics, 21(1): 7-12, 1987. 4"
2055
+ },
2056
+ {
2057
+ "type": "ref_text",
2058
+ "bbox": [
2059
+ 0.093,
2060
+ 0.676,
2061
+ 0.482,
2062
+ 0.743
2063
+ ],
2064
+ "angle": 0,
2065
+ "content": "[39] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality meshes. arXiv preprint arXiv:2404.12385, 2024. 1, 2"
2066
+ },
2067
+ {
2068
+ "type": "ref_text",
2069
+ "bbox": [
2070
+ 0.093,
2071
+ 0.746,
2072
+ 0.482,
2073
+ 0.814
2074
+ ],
2075
+ "angle": 0,
2076
+ "content": "[40] Shuang Wu, Youtian Lin, Feihu Zhang, Yifei Zeng, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859-121881, 2024. 1, 2"
2077
+ },
2078
+ {
2079
+ "type": "ref_text",
2080
+ "bbox": [
2081
+ 0.093,
2082
+ 0.817,
2083
+ 0.482,
2084
+ 0.872
2085
+ ],
2086
+ "angle": 0,
2087
+ "content": "[41] Tong Wu, Jiaqi Wang, Xingang Pan, Xudong Xu, Christian Theobalt, Ziwei Liu, and Dahua Lin. Voxurf: Voxel-based efficient and accurate neural surface reconstruction. In Int. Conf. Learn. Represent., 2023. 1, 3, 4, 5, 6, 7"
2088
+ },
2089
+ {
2090
+ "type": "ref_text",
2091
+ "bbox": [
2092
+ 0.093,
2093
+ 0.874,
2094
+ 0.482,
2095
+ 0.901
2096
+ ],
2097
+ "angle": 0,
2098
+ "content": "[42] Haolin Xiong, Sairisheek Muttukuru, Rishi Upadhyay, Pradyumna Chari, and Achuta Kadambi. Sparsegs: Real-"
2099
+ },
2100
+ {
2101
+ "type": "list",
2102
+ "bbox": [
2103
+ 0.093,
2104
+ 0.092,
2105
+ 0.482,
2106
+ 0.901
2107
+ ],
2108
+ "angle": 0,
2109
+ "content": null
2110
+ },
2111
+ {
2112
+ "type": "ref_text",
2113
+ "bbox": [
2114
+ 0.545,
2115
+ 0.093,
2116
+ 0.905,
2117
+ 0.12
2118
+ ],
2119
+ "angle": 0,
2120
+ "content": "time \\(360^{\\circ}\\) sparse view synthesis using gaussian splatting. Arxiv, 2023. 6"
2121
+ },
2122
+ {
2123
+ "type": "ref_text",
2124
+ "bbox": [
2125
+ 0.517,
2126
+ 0.122,
2127
+ 0.905,
2128
+ 0.19
2129
+ ],
2130
+ "angle": 0,
2131
+ "content": "[43] Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan. Instantmesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models. arXiv preprint arXiv:2404.07191, 2024. 1, 2"
2132
+ },
2133
+ {
2134
+ "type": "ref_text",
2135
+ "bbox": [
2136
+ 0.517,
2137
+ 0.192,
2138
+ 0.905,
2139
+ 0.248
2140
+ ],
2141
+ "angle": 0,
2142
+ "content": "[44] Runyi Yang, Zhenxin Zhu, Zhou Jiang, Baijun Ye, Xiaoxue Chen, Yifei Zhang, Yuantao Chen, Jian Zhao, and Hao Zhao. Spectrally pruned gaussian fields with neural compensation. arXiv preprint arXiv:2405.00676, 2024. 2"
2143
+ },
2144
+ {
2145
+ "type": "ref_text",
2146
+ "bbox": [
2147
+ 0.517,
2148
+ 0.25,
2149
+ 0.905,
2150
+ 0.316
2151
+ ],
2152
+ "angle": 0,
2153
+ "content": "[45] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20331-20341, 2024. 2"
2154
+ },
2155
+ {
2156
+ "type": "ref_text",
2157
+ "bbox": [
2158
+ 0.517,
2159
+ 0.319,
2160
+ 0.905,
2161
+ 0.36
2162
+ ],
2163
+ "angle": 0,
2164
+ "content": "[46] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. Adv. Neural Inform. Process. Syst., 34:4805-4815, 2021. 2"
2165
+ },
2166
+ {
2167
+ "type": "ref_text",
2168
+ "bbox": [
2169
+ 0.517,
2170
+ 0.362,
2171
+ 0.905,
2172
+ 0.431
2173
+ ],
2174
+ "angle": 0,
2175
+ "content": "[47] Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, et al. gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research, 26(34):1-17, 2025. 2"
2176
+ },
2177
+ {
2178
+ "type": "ref_text",
2179
+ "bbox": [
2180
+ 0.517,
2181
+ 0.433,
2182
+ 0.905,
2183
+ 0.488
2184
+ ],
2185
+ "angle": 0,
2186
+ "content": "[48] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In Int. Conf. Comput. Vis., pages 9043-9053, 2023. 4"
2187
+ },
2188
+ {
2189
+ "type": "ref_text",
2190
+ "bbox": [
2191
+ 0.517,
2192
+ 0.49,
2193
+ 0.905,
2194
+ 0.544
2195
+ ],
2196
+ "angle": 0,
2197
+ "content": "[49] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4578-4587, 2021. 1, 2"
2198
+ },
2199
+ {
2200
+ "type": "ref_text",
2201
+ "bbox": [
2202
+ 0.517,
2203
+ 0.546,
2204
+ 0.905,
2205
+ 0.601
2206
+ ],
2207
+ "angle": 0,
2208
+ "content": "[50] Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, and Bo Dai. Gsdf: 3dgs meets sdf for improved neural rendering and reconstruction. Advances in Neural Information Processing Systems, 37:129507-129530, 2024. 2, 6, 7"
2209
+ },
2210
+ {
2211
+ "type": "ref_text",
2212
+ "bbox": [
2213
+ 0.517,
2214
+ 0.603,
2215
+ 0.905,
2216
+ 0.657
2217
+ ],
2218
+ "angle": 0,
2219
+ "content": "[51] Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction. Adv. Neural Inform. Process. Syst., 2022. 6"
2220
+ },
2221
+ {
2222
+ "type": "ref_text",
2223
+ "bbox": [
2224
+ 0.517,
2225
+ 0.66,
2226
+ 0.905,
2227
+ 0.713
2228
+ ],
2229
+ "angle": 0,
2230
+ "content": "[52] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splattering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 19447-19456, 2024. 2"
2231
+ },
2232
+ {
2233
+ "type": "ref_text",
2234
+ "bbox": [
2235
+ 0.517,
2236
+ 0.716,
2237
+ 0.905,
2238
+ 0.757
2239
+ ],
2240
+ "angle": 0,
2241
+ "content": "[53] Zehao Yu, Torsten Sattler, and Andreas Geiger. Gaussian opacity fields: Efficient adaptive surface reconstruction in unbounded scenes. ACM Trans. Graph., 2024. 1, 2, 6, 7"
2242
+ },
2243
+ {
2244
+ "type": "ref_text",
2245
+ "bbox": [
2246
+ 0.517,
2247
+ 0.759,
2248
+ 0.905,
2249
+ 0.813
2250
+ ],
2251
+ "angle": 0,
2252
+ "content": "[54] Baowen Zhang, Chuan Fang, Rakesh Shrestha, Yixun Liang, Xiaoxiao Long, and Ping Tan. Rade-gs: Rasterizing depth in gaussian splatting. arXiv preprint arXiv:2406.01467, 2024. 2"
2253
+ },
2254
+ {
2255
+ "type": "ref_text",
2256
+ "bbox": [
2257
+ 0.517,
2258
+ 0.814,
2259
+ 0.905,
2260
+ 0.857
2261
+ ],
2262
+ "angle": 0,
2263
+ "content": "[55] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2"
2264
+ },
2265
+ {
2266
+ "type": "list",
2267
+ "bbox": [
2268
+ 0.517,
2269
+ 0.093,
2270
+ 0.905,
2271
+ 0.857
2272
+ ],
2273
+ "angle": 0,
2274
+ "content": null
2275
+ },
2276
+ {
2277
+ "type": "page_number",
2278
+ "bbox": [
2279
+ 0.48,
2280
+ 0.946,
2281
+ 0.52,
2282
+ 0.957
2283
+ ],
2284
+ "angle": 0,
2285
+ "content": "28534"
2286
+ }
2287
+ ]
2288
+ ]
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/386f37e7-3405-4869-8ba6-9588babfe21c_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c72f1b7a4ca6b370f0de2c28e4486da2dea115638737a63db2fd88376c9359eb
3
+ size 10777529
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/full.md ADDED
@@ -0,0 +1,336 @@
1
+ # SurfaceSplat: Connecting Surface Reconstruction and Gaussian Splatting
2
+
3
+ Zihui Gao $^{1,3*}$, Jia-Wang Bian $^{2*}$, Guosheng Lin $^{3}$, Hao Chen $^{1,\dagger}$, Chunhua Shen $^{1}$
+ $^{1}$ Zhejiang University, China
4
+ $^{2}$ ByteDance Seed
5
+ $^{3}$ Nanyang Technological University, Singapore
6
+ *Equal contribution †Corresponding author
7
+
8
+ ![](images/7425cd0e4e59e0429e9307924a60856a725b765284f8a001420ee39cf2007ac1.jpg)
9
+ Figure 1. Sparse view reconstruction and rendering comparison. Left: Qualitative results from 10 images evenly sampled from a casually captured 360-degree video. Right: Quantitative analysis of 5, 10, and 20 input views, averaged across the selected 9 MobileBrick test scenes. 3DGS-based methods (e.g., GOF) achieve better novel view rendering than SDF-based methods (e.g., Voxurf) due to their sparse representations, which capture fine details. However, SDF-based methods outperform the former in mesh reconstruction, as their dense representations better preserve global geometry. Our approach combines the strengths of both, achieving optimal performance.
10
+
11
+ # Abstract
12
+
13
+ Surface reconstruction and novel view rendering from sparse-view images are challenging. Signed Distance Function (SDF)-based methods struggle with fine details, while 3D Gaussian Splatting (3DGS)-based approaches lack global geometry coherence. We propose a novel hybrid method that combines the strengths of both approaches: SDF captures coarse geometry to enhance 3DGS-based rendering, while newly rendered images from 3DGS refine the details of SDF for accurate surface reconstruction. As a result, our method surpasses state-of-the-art approaches in surface reconstruction and novel view synthesis on the DTU and MobileBrick datasets. Code will be released at: https://github.com/aim-uofa/SurfaceSplat.
14
+
15
+ # 1. Introduction
16
+
17
+ 3D reconstruction from multi-view images is a core problem in computer vision with applications in virtual reality, robotics, and autonomous driving. Recent advances in Neural Radiance Fields (NeRF) [25] and 3D Gaussian Splatting (3DGS) [17] have significantly advanced the field. However, their performance degrades under sparse-view conditions, a common real-world challenge. This paper tackles sparse-view reconstruction to bridge this gap. Unlike
18
+
19
+ approaches that leverage generative models [10, 39, 40, 43] or learn geometry priors through large-scale pretraining [6, 21, 29, 49], we focus on identifying the optimal 3D representations for surface reconstruction and novel view synthesis.
20
+
21
+ Surface reconstruction methods primarily use the Signed Distance Function (SDF) or 3DGS-based representations. Here, SDF-based approaches, such as NeuS [36] and Voxurf [41], model scene geometry continuously with dense representations and optimize them via differentiable volume rendering [25]. In contrast, 3DGS-based methods like GOF [53] and 2DGS [15] leverage a pre-computed sparse point cloud for image rendering and progressively densify and refine it through differentiable rasterization. Due to their dense representations, SDF-based methods capture global structures well but lack fine details, while the sparse nature of 3DGS-based methods enables high-frequency detail preservation but compromises global coherence. As a result, both approaches suffer from poor reconstruction quality under sparse-view conditions. Typically, SDF-based methods outperform 3DGS in surface reconstruction, while 3DGS excels in image rendering, as illustrated in Fig. 1.
22
+
23
+ Recognizing the complementary strengths of SDF-based (dense) and 3DGS-based (sparse) representations, we propose a novel hybrid approach, SurfaceSplat, as illustrated in Fig. 2. Our method is built on two key ideas: (i) SDF for Improved 3DGS: To address the limitation of 3DGS in learning global geometry, we first fit the global
24
+
25
+ structure using an SDF-based representation, rapidly generating a smooth yet coarse mesh. We then initialize 3DGS by sampling point clouds from the mesh surface, ensuring global consistency while allowing 3DGS to refine fine details during training. (ii) 3DGS for Enhanced SDF: To compensate for the inability of SDF-based methods to capture fine details under sparse-view settings, we leverage the improved 3DGS from the first step to render additional novel viewpoint images, expanding the dataset. This enriched supervision helps the SDF-based method learn finer structural details, leading to improved reconstruction quality.
26
+
27
+ We conduct experiments on two real-world datasets, DTU [16] and MobileBrick [19]. Our method, SurfaceSplat, achieves state-of-the-art performance in sparse-view novel view rendering and 3D mesh reconstruction. In summary, we make the following contributions:
28
+
29
+ - We propose SurfaceSplat, which synergistically combines the strengths of SDF-based and 3DGS-based representations to achieve optimal global geometry preservation while capturing fine local details.
30
+ - We conducted a comprehensive evaluation and ablations on DTU and MobileBrick datasets. SurfaceSplat achieves state-of-the-art performance in novel view synthesis and mesh reconstruction under sparse-view conditions.
31
+
32
+ # 2. Related work
33
+
34
+ # 2.1. Novel View Synthesis from Sparse Inputs
35
+
36
+ Neural Radiance Field (NeRF)-based methods [2, 3, 5, 7, 12, 25, 26, 33, 35, 45, 55] have revolutionized novel view synthesis with implicit neural representations, and 3DGS-based methods [17, 23, 32, 34, 44, 47, 52] enable efficient training and real-time rendering through explicit 3D point clouds. However, both approaches suffer from performance degradation in sparse-view settings. To address this issue, recent methods have explored generative models [10, 39, 40, 43] or leveraged large-scale training to learn geometric priors [6, 21, 29, 49]. Unlike these approaches, we argue that the key challenge lies in the lack of effective geometric initialization for 3DGS. To overcome this, we investigate how neural surface reconstruction methods can enhance its performance.
37
+
38
+ # 2.2. Neural Surface Reconstruction
39
+
40
+ SDF-based methods, such as NeuS [36], VolSDF [46], Neuralangelo [20], and PoRF [4] use dense neural representations and differentiable volume rendering to achieve high-quality reconstructions with 3D supervision. However, they suffer from long optimization times and require dense viewpoint images. Recent methods, such as 2DGS [15] and GOF [53], extend 3DGS [17] by leveraging modified Gaussians and depth correction to accelerate geometry extraction. While 3DGS-based methods [1, 8, 13, 15, 18, 37, 53,
41
+
42
+ 54] excel at capturing fine local details, their sparse representations struggle to maintain global geometry, leading to incomplete and fragmented reconstructions. This paper focuses on integrating the strengths of both representations to achieve optimal neural surface reconstruction.
43
+
44
+ # 2.3. Combining 3DGS and SDF
45
+
46
+ Several recent approaches have integrated SDF-based [27, 28] and 3DGS-based representations to improve surface reconstruction. NeuSG [9] and GSDF [50] jointly optimize SDF and 3DGS, enforcing geometric consistency (e.g., depths and normals) to improve surface detail [14]. Similarly, 3DGSR [24] combines SDF values with Gaussian opacity in a joint optimization framework for better geometry. While effective in dense-view settings, these methods struggle to reconstruct high-quality structures under sparse-view conditions, as shown in our experiments in Sec. 4. Our approach specifically targets sparse-view scenarios by leveraging a complementary structure to enhance both rendering and reconstruction quality.
47
+
48
+ # 3. Method
49
+
50
+ Our method takes sparse viewpoint images with camera poses as input, aiming to reconstruct 3D geometry and color for novel view synthesis and mesh extraction. Fig. 2 provides an overview of SurfaceSplat. In the following sections, we first introduce the preliminaries in Sec. 3.1, then explain how SDF-based mesh reconstruction improves 3DGS for novel view synthesis in Sec. 3.2, and finally describe how 3DGS-based rendering enhances SDF-based surface reconstruction quality in Sec. 3.3.
51
+
52
+ # 3.1. Preliminaries
53
+
54
+ SDF-based representation. NeuS [36] proposes to model scene coordinates as signed distance function (SDF) values and optimize using differentiable volume rendering, similar to NeRF [25]. After optimization, object surfaces are extracted using the marching cubes algorithm [22]. To render a pixel, a ray is cast from the camera center $o$ through the pixel along the viewing direction $v$ as $\{p(t) = o + tv|t\geq 0\}$ , and the pixel color is computed by integrating $N$ sampled points along the ray $\{p_i = o + t_iv|i = 1,\dots,N,t_i < t_{i + 1}\}$ using volume rendering:
55
+
56
+ $$
57
+ \hat{C}(r) = \sum_{i = 1}^{N} T_{i} \alpha_{i} c_{i}, \qquad T_{i} = \prod_{j = 1}^{i - 1} (1 - \alpha_{j}), \tag{1}
58
+ $$
59
+
60
+ where $\alpha_{i}$ represents opacity and $T_{i}$ is the accumulated transmittance. It is computed as:
61
+
62
+ $$
63
+ \alpha_{i} = \max\left(\frac{\Phi_{s}(f(p(t_{i}))) - \Phi_{s}(f(p(t_{i + 1})))}{\Phi_{s}(f(p(t_{i})))}, 0\right), \tag{2}
64
+ $$
65
+
66
+ ![](images/864fc9a4b81b720525d6915e06f47ec878f368449e2c2f764959287c3cf100f3.jpg)
67
+ Figure 2. Overview of the proposed SurfaceSplat. (A) We reconstruct a coarse mesh using an SDF-based representation. (B) Point clouds are sampled from the mesh surface to initialize 3DGS. (C) 3DGS renders new viewpoint images to expand the training set, refining the mesh. (D) Steps B and C can be repeated for iterative optimization, progressively improving performance.
68
+
69
+ where $f(x)$ is the SDF function and $\Phi_s(x) = (1 + e^{-sx})^{-1}$ is the Sigmoid function, with $s$ learned during training. Based on this, Voxurf [41] proposes a hybrid representation that combines a voxel grid with a shallow MLP to reconstruct the implicit SDF field. In the coarse stage, Voxurf [41] optimizes for a better overall shape by using 3D convolution and interpolation to estimate SDF values. In the fine stage, it increases the voxel grid resolution and employs a dual-color MLP architecture, consisting of two networks: $g_{geo}$ , which takes hierarchical geometry features as input, and $g_{feat}$ , which receives local features from $V^{(\mathrm{feat})}$ along with surface normals. We incorporate Voxurf in this work due to its effective balance between accuracy and efficiency.
70
+
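+ For concreteness, the compositing in Eqs. (1) and (2) can be sketched in NumPy as follows. This is a minimal illustration with assumed inputs (`sdf_vals`, `colors`, and the sharpness `s`), not the released implementation:
+
+ ```python
+ import numpy as np
+
+ def neus_render_ray(sdf_vals, colors, s):
+     """Composite one ray from N sampled points (Eqs. 1-2).
+
+     sdf_vals: (N,) SDF values f(p(t_i)) at increasing depths t_i.
+     colors:   (N-1, 3) radiance c_i for each interval.
+     s:        learned sharpness of the sigmoid Phi_s.
+     """
+     phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))          # Phi_s(f(p(t_i)))
+     # Interval opacity, clamped at zero (Eq. 2).
+     alpha = np.maximum((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0)
+     # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j).
+     T = np.concatenate([[1.0], np.cumprod(1.0 - alpha[:-1])])
+     weights = T * alpha
+     return (weights[:, None] * colors).sum(axis=0)     # \hat{C}(r), Eq. 1
+ ```
+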
71
+ 3DGS-based representation. 3DGS [17] models a set of 3D Gaussians to represent the scene, which is similar to point clouds. Each Gaussian ellipse has a color and an opacity and is defined by its centered position $x$ (mean), and a full covariance matrix $\Sigma: G(x) = e^{-\frac{1}{2} x^T \Sigma^{-1} x}$ . When projecting 3D Gaussians to 2D for rendering, the splatting method is used to position the Gaussians on 2D planes, which involves a new covariance matrix $\Sigma'$ in camera coordinates defined as: $\Sigma' = J W \Sigma W^T J^T$ , where $W$ denotes
72
+
73
+ a given viewing transformation matrix and $J$ is the Jacobian of the affine approximation of the projective transformation. To enable differentiable optimization, $\Sigma$ is further decomposed into a scaling matrix $S$ and a rotation matrix $R$ : $\Sigma = R S S^T R^T$ .
74
+
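+ The covariance construction and projection above can be illustrated with a short NumPy sketch; the xyzw quaternion convention and the shape of the Jacobian `J` are our assumptions for illustration, not the reference implementation:
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation
+
+ def project_gaussian_cov(quat_xyzw, scale, W, J):
+     """Build Sigma = R S S^T R^T and project it: Sigma' = J W Sigma W^T J^T.
+
+     quat_xyzw: (4,) rotation quaternion defining R.
+     scale:     (3,) per-axis scales on the diagonal of S.
+     W:         (3, 3) rotation part of the viewing transformation.
+     J:         (2, 3) Jacobian of the affine projection approximation.
+     """
+     R = Rotation.from_quat(quat_xyzw).as_matrix()
+     S = np.diag(scale)
+     sigma = R @ S @ S.T @ R.T          # 3D covariance of the Gaussian
+     return J @ W @ sigma @ W.T @ J.T   # 2x2 image-plane covariance
+ ```
+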
75
+ # 3.2. SDF for Improved 3DGS
76
+
77
+ 3DGS [17] typically initializes with sparse point clouds estimated by COLMAP [31], which are often inaccurate or missing in low-texture or minimally overlapping regions. To address this, we propose initializing 3DGS by uniformly sampling points from a mesh surface derived from an SDF representation, ensuring high-quality novel view rendering while preserving global geometry. Below, we detail our proposed method for mesh reconstruction, mesh cleaning, and point cloud sampling. A visual example of the reconstructed meshes and sampled points is shown in Fig. 3.
78
+
79
+ Coarse mesh reconstruction. Given $M$ sparse images $\{\mathcal{I}\}$ and their camera poses $\{\pi\}$ , our objective is to reconstruct a 3D surface for sampling points. As our focus is on robust global geometry rather than highly accurate surfaces, and to ensure efficient mesh reconstruction,
80
+
81
+ ![](images/3428cfe527d2a9f41057c4af43ab41f4c9840de87f526892c615e41d3ccf082c.jpg)
82
+ (a) Reference image
83
+
84
+ ![](images/81bcf7aba56ce80e7c1ca18ad8c5d6d7aee7aa7954d88388b3237dc60bf1fe08.jpg)
85
+ (b) Coarse mesh
86
+
87
+ ![](images/9f735c3e405e2761b225386571feeab7ceaa3298eb1810892bca9b79a74e503d.jpg)
88
+ (c) Coarse mesh w/ normal
89
+
90
+ ![](images/12d82b3a567f559e418c56b5a6388ced39cee9ea44f7c5428b6777fb745271c0.jpg)
91
+ (d) Coarse mesh w/ normal and clean
92
+
93
+ ![](images/d9169facfb9cbac91ae540a6147773f84038626207e2cedbc22dd241b6ff396d.jpg)
94
+ (e) Color points sampling
95
+
96
+ ![](images/9177d658f6f1962ff6008441838d00a44f3495d0d3fa2b6a562aabdc09fe62cb.jpg)
97
+ (f) COLMAP sparse points
98
+ Figure 3. Visualization of our mesh reconstruction, cleaning, and point sampling. (b) Naïve coarse mesh reconstruction following Voxurf [41]. (c) Coarse mesh reconstructed with our proposed normal loss, reducing floaters. (d) Post-processed mesh with both normal loss and our cleaning methods. (e) Our sampled point clouds used for initializing 3DGS. (f) COLMAP-estimated point clouds, typically used for 3DGS initialization.
99
+
100
+ we adopt the coarse-stage surface reconstruction from Voxurf [41]. Specifically, we use a grid-based SDF representation $V^{(\mathrm{sdf})}$ for efficient mesh reconstruction. For each sampled 3D point $\mathbf{x} \in \mathbb{R}^3$ , the grid outputs the corresponding SDF value: $V^{(\mathrm{sdf})}: \mathbb{R}^3 \to \mathbb{R}$ . We use differentiable volume rendering to render image pixels $\hat{C}(r)$ and employ an image reconstruction loss for supervision. The loss function $\mathcal{L}$ is formulated as:
101
+
102
+ $$
103
+ \mathcal{L} = \mathcal{L}_{\text{recon}} + \mathcal{L}_{TV}\left(V^{(\mathrm{sdf})}\right) + \mathcal{L}_{\text{smooth}}\left(\nabla V^{(\mathrm{sdf})}\right), \tag{3}
104
+ $$
105
+
106
+ where the reconstruction loss $\mathcal{L}_{\mathrm{recon}}$ computes the photometric image rendering loss, originating from both the $g_{geo}$ and $g_{feat}$ branches. The total variation term $\mathcal{L}_{TV}$ encourages a continuous and compact geometry, while the smoothness regularization $\mathcal{L}_{\mathrm{smooth}}$ promotes local smoothness of the geometric surface. We refer to Voxurf [41] for the detailed implementation of the loss functions. The coarse reconstruction typically completes in 15 minutes in our experiments.
107
+
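+ On a dense voxel grid, the two regularizers in Eq. (3) can be sketched with finite differences as below; this is a schematic simplification of Voxurf's formulation, with uniform weighting assumed:
+
+ ```python
+ import numpy as np
+
+ def tv_loss(sdf_grid):
+     """Total-variation term on the voxel grid V^(sdf)."""
+     return sum(np.abs(np.diff(sdf_grid, axis=a)).mean() for a in range(3))
+
+ def smooth_loss(sdf_grid):
+     """Penalize variation of the SDF gradient, a proxy for
+     L_smooth(grad V^(sdf)) encouraging locally smooth surfaces."""
+     grads = np.stack(np.gradient(sdf_grid), axis=-1)   # (X, Y, Z, 3)
+     return sum(np.abs(np.diff(grads, axis=a)).mean() for a in range(3))
+ ```
+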
108
+ Due to the limited number of training views, the learned grid often exhibits floating artifacts, as shown in Fig. 3 (b), which leads to incorrect point sampling. To mitigate this, we introduce a normal consistency loss to improve training stability, effectively reducing floaters and smoothing the geometric surface. Our approach leverages the predicted monocular surface normal $\hat{N} (\mathbf{r})$ from the Metric3D model [48] to supervise the volume-rendered normal $\bar{N} (\mathbf{r})$ in the same coordinate system. The formulation is:
109
+
110
+ $$
111
+ \mathcal{L}_{\text{normal}} = \sum \left\| \hat{N}(\mathbf{r}) - \bar{N}(\mathbf{r}) \right\|_{1}. \tag{4}
112
+ $$
113
+
114
+ We integrate this loss with Eqn. (3) during training to effectively remove floaters. Fig. 3 (c) shows a coarse mesh
115
+
116
+ reconstructed with the normal loss, demonstrating improved surface smoothness and reduced artifacts.
117
+
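+ A minimal sketch of Eq. (4), assuming both normal maps are unit-length and expressed in the same coordinate frame; the 0.05 weight follows the implementation details in Sec. 4.1:
+
+ ```python
+ import numpy as np
+
+ def normal_loss(pred_normals, rendered_normals, weight=0.05):
+     """L1 consistency between Metric3D-predicted normals N_hat(r)
+     and volume-rendered normals N_bar(r) over all pixels (Eq. 4)."""
+     diff = np.abs(pred_normals - rendered_normals).sum(axis=-1)
+     return weight * diff.mean()
+ ```
+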
118
+ Mesh cleaning. Even though the proposed normal loss significantly reduces floaters, some still persist, adding noise to the subsequent 3DGS initialization. To mitigate this, we apply a mesh cleaning step that refines the coarse mesh by removing non-main components. Specifically, we first use the Marching Cubes algorithm [38] to extract a triangle mesh $\mathcal{M} = (\mathcal{V},\mathcal{F})$ from the SDF grid $V^{(\mathrm{sdf})}$ . Then we cluster the connected mesh triangles into $\{\mathcal{F}_i\}$ , identify the largest cluster index via $|\mathcal{F}_{i_{\max}}| = \max (|\mathcal{F}_i|)$ , and collect the triangles to remove:
119
+
120
+ $$
121
+ \mathcal{F}_{\text{remove}} = \{f \in \mathcal{F} \mid f \notin \mathcal{F}_{i_{\max}}\}. \tag{5}
122
+ $$
123
+
124
+ Finally, we filter the floaters $\mathcal{F}_{\mathrm{remove}}$ from $\mathcal{M}$ , resulting in $\mathcal{M}_1 = \mathcal{M} \setminus \mathcal{F}_{\mathrm{remove}}$ . Fig. 3 (d) illustrates the refined mesh after applying our cleaning method.
125
+
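+ The cleaning step in Eq. (5) maps directly onto standard mesh-processing APIs. A sketch with Open3D (our library choice here, not necessarily the authors') might look like:
+
+ ```python
+ import numpy as np
+ import open3d as o3d
+
+ def keep_largest_component(mesh: o3d.geometry.TriangleMesh):
+     """Drop every connected triangle cluster except the largest (Eq. 5)."""
+     tri_clusters, cluster_sizes, _ = mesh.cluster_connected_triangles()
+     tri_clusters = np.asarray(tri_clusters)
+     largest = int(np.asarray(cluster_sizes).argmax())        # i_max
+     mesh.remove_triangles_by_mask(tri_clusters != largest)   # F_remove
+     mesh.remove_unreferenced_vertices()
+     return mesh
+ ```
+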
126
+ Sampling surface points for 3DGS. Since the mesh obtained from Marching Cubes includes regions that are invisible from the training views, directly sampling points from the mesh surface can introduce noise into 3DGS. To mitigate this, we propose a depth-based sampling strategy. First, we project the reconstructed mesh onto the training views using their known camera poses to generate depth maps $\{\mathcal{D}\}$ . Since these depth maps originate from a 3D mesh, they maintain multi-view consistency. We then randomly sample points from valid depth regions, ensuring they correspond to visible object surfaces. The sampled pixels $(u,v)$ , along with their depth values $d(u,v)$ , are back-projected to colorized 3D points $\mathbf{P} = \{(x_i,y_i,z_i) \mid i = 1,2,\dots,N\}$ using the following formulation:
127
+
128
+ $$
129
+ \left[x_{i} \;\; y_{i} \;\; z_{i}\right]^{T} = \boldsymbol{\pi}_{k} \mathbf{K}^{-1} \left[d \cdot u \;\; d \cdot v \;\; d\right]^{T}. \tag{6}
130
+ $$
131
+
132
+ This approach ensures that the sampled points are uniformly distributed on the object's surface while remaining visible in the training views, leading to a more stable and accurate 3DGS initialization. As our reconstructed mesh primarily covers foreground regions, we combine our sampled point cloud with COLMAP sparse points when rendering background regions, serving as the initialization for 3DGS. Fig. 3 (e) and (f) illustrate our sampled point clouds and COLMAP-estimated point clouds, respectively.
133
+
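+ The depth-based sampling and back-projection of Eq. (6) can be sketched as follows, where `depth` is a depth map rendered from the mesh, `K` the intrinsics, and `c2w` a 4x4 camera-to-world pose standing in for $\boldsymbol{\pi}_k$ (our notational assumption):
+
+ ```python
+ import numpy as np
+
+ def sample_surface_points(depth, image, K, c2w, n_points=50_000):
+     """Back-project random valid depth pixels to colored 3D points (Eq. 6)."""
+     v, u = np.nonzero(depth > 0)                   # visible-surface pixels
+     idx = np.random.choice(len(u), size=min(n_points, len(u)), replace=False)
+     u, v = u[idx], v[idx]
+     d = depth[v, u]
+     pix = np.stack([u * d, v * d, d], axis=0)      # [d*u, d*v, d]^T
+     cam = np.linalg.inv(K) @ pix                   # camera-space points
+     world = (c2w[:3, :3] @ cam + c2w[:3, 3:4]).T   # apply the pose
+     return world, image[v, u]                      # (N, 3) points and colors
+ ```
+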
134
+ # 3.3. 3DGS for Enhanced SDF
135
+
136
+ We argue that the primary bottleneck for SDF-based mesh reconstruction is insufficient supervision due to limited training views. To address this, we generate additional novel viewpoint images using a 3DGS-based method and combine them with the original sparse views to enhance the training of SDF-based reconstruction.
137
+
138
+ ![](images/65e865b72c212a0c00d0b6705503acca33ae5b7c051a666a248d7b0f51cd2080.jpg)
139
+ (a) Camera position perturbation
140
+ Figure 4. Top-view visualization of pose expansion strategies.
141
+
142
+ ![](images/a0b7eb9f94a07d21f7cb812f5f22d13f3f67e2f67a8f85c864345d64896ce5f3.jpg)
143
+ (b) Camera poses interpolation
144
+
145
+ Rendering novel viewpoint images. We utilize the improved 3DGS, initialized with our proposed mesh-based point sampling method, to render images. Thanks to our robust and dense point initialization, the 3D Gaussian $\mathcal{G}$ can converge after $7k$ iterations in just 5 minutes, yielding $\mathcal{G} = f(\mathbf{P},\{I\},\{\pi\})$ . Given new camera poses $\{\pi_{\mathrm{new}}\}$ , the 3D Gaussian $\mathcal{G}$ can be projected to generate novel-view images as follows:
146
+
147
+ $$
148
+ \{\mathcal{I}_{\text{new}}\} = \operatorname{Splat}\left(\mathcal{G}, \{\pi_{\text{new}}\}\right). \tag{7}
149
+ $$
150
+
151
+ The newly rendered images $\{\mathcal{I}_{\mathrm{new}}\}$ are combined with the input images $\{\mathcal{I}\}$ to train the SDF-based mesh reconstruction. The key challenge lies in selecting new camera viewpoints $\{\pi_{\mathrm{new}}\}$ that best enhance surface reconstruction:
152
+
153
+ $$
154
+ \{\boldsymbol{\pi}_{\text{new}}\} = g\left(\{\boldsymbol{\pi}\}\right), \tag{8}
155
+ $$
156
+
157
+ where $g$ is our pose expansion strategy. To ensure new viewpoints remain consistent with the original pose distribution and avoid excessive deviation that could blur or diminish the foreground, we explore two methods for generating new camera poses. Fig. 4 shows the generated pose positions.
158
+
159
+ Camera position perturbation. To generate new camera positions while preserving proximity to the original distribution, a perturbation $\Delta \mathbf{p}$ is applied to the initial camera positions $\{\pmb {c}\}$ . The new camera centers $\{\pmb {c}_m^{\prime}\}$ are computed:
160
+
161
+ $$
162
+ \boldsymbol{c}_{m}^{\prime} = \boldsymbol{c} + \Delta\mathbf{p}, \tag{9}
163
+ $$
164
+
165
+ where $\Delta \mathbf{p} = (\Delta x, \Delta y, \Delta z)$ represents a controlled offset vector designed to modulate the new viewpoints.
166
+
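+ A minimal sketch of Eq. (9); the Gaussian offset scale `sigma` is our choice for illustration:
+
+ ```python
+ import numpy as np
+
+ def perturb_camera_centers(centers, sigma=0.05, rng=None):
+     """Offset each camera center c by a small random Delta p (Eq. 9)."""
+     if rng is None:
+         rng = np.random.default_rng(0)
+     return centers + rng.normal(scale=sigma, size=centers.shape)
+ ```
+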
167
+ Camera pose interpolation. Our method takes a set of camera rotation matrices $\{\mathbf{R}\}$ and camera positions $\{\boldsymbol{c}\}$ as input. To generate smooth transitions between viewpoints, we employ cubic spline interpolation [11]. This approach interpolates both camera positions and orientations, producing interpolated camera centers $\{c_m'\}$ and rotation matrices
168
+
169
+ $\{\mathbf{R}_m^{\prime}\}$ that ensure visual continuity and positional coherence. By maintaining these properties, the newly generated camera poses facilitate high-quality transitions, making them well-suited for 3D mesh reconstruction. The visualizations of the images generated from new viewpoints can be found in Fig. 2 of the supplementary material.
170
+
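+ Below is a sketch of this strategy with SciPy, using a cubic spline [11] for the positions and spherical linear interpolation for the rotations; pairing the spline with Slerp is our reading of the method rather than a detail stated above:
+
+ ```python
+ import numpy as np
+ from scipy.interpolate import CubicSpline
+ from scipy.spatial.transform import Rotation, Slerp
+
+ def interpolate_poses(rotations, centers, n_new):
+     """Generate n_new smooth in-between camera poses.
+
+     rotations: list of (3, 3) rotation matrices {R}.
+     centers:   (M, 3) camera positions {c}.
+     """
+     t = np.arange(len(centers))
+     t_new = np.linspace(0, len(centers) - 1, n_new)
+     pos_spline = CubicSpline(t, centers, axis=0)         # cubic spline [11]
+     slerp = Slerp(t, Rotation.from_matrix(np.stack(rotations)))
+     return slerp(t_new).as_matrix(), pos_spline(t_new)   # {R'_m}, {c'_m}
+ ```
+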
171
+ Refining surface reconstruction. We reuse the reconstructed coarse mesh and refine it with the original and expanded novel viewpoint images. Following the fine-stage reconstruction of Voxurf [41], we increase the grid resolution and introduce a dual color network and hierarchical geometry features for detailed surface reconstruction.
172
+
173
+ # 3.4. Cyclic Optimization
174
+
175
+ We propose an iterative optimization process, which begins by generating an initial coarse mesh $\mathcal{M}_c^{(0)}$ . Then, in each iteration $n$ , the process follows two steps:
176
+
177
+ 1. Rendering Step: We optimize a 3DGS model for rendering novel view images, which is initialized by sampling points from the current coarse mesh $\mathcal{M}_c^{(n)}$ , represented by:
178
+
179
+ $$
180
+ \mathcal{I}^{(n)} = \mathcal{R}\left(\mathcal{M}_{c}^{(n)}\right) \tag{10}
181
+ $$
182
+
183
+ 2. Meshing Step: We refine the current mesh by finetuning it using both the newly rendered images and the original input images:
184
+
185
+ $$
186
+ \mathcal{M}_{f}^{(n)} = \mathcal{O}\left(\mathcal{M}_{c}^{(n)}, \mathcal{I}^{(n)}\right) \tag{11}
187
+ $$
188
+
189
+ where $\mathcal{O}$ represents the SDF grid optimization. Then, we update the refined mesh:
190
+
191
+ $$
192
+ \mathcal{M}_{c}^{(n + 1)} = \mathcal{M}_{f}^{(n)}. \tag{12}
193
+ $$
194
+
195
+ By iterating this process, our method allows SDF-based reconstruction and 3DGS-based rendering to complement each other, improving both reconstruction accuracy and novel view synthesis. To balance efficiency and accuracy, we typically perform only one iteration.
196
+
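+ In pseudocode, the cycle of Eqs. (10)-(12) reads as below; the three callables stand in for the coarse Voxurf stage, 3DGS fitting plus novel-view rendering, and fine SDF refinement, and are parameters for illustration rather than real functions:
+
+ ```python
+ def cyclic_optimization(images, poses, coarse_fn, render_fn, refine_fn, n_cycles=1):
+     """Alternate the rendering step (Eq. 10) and meshing step (Eqs. 11-12).
+     n_cycles=1 matches the single-cycle setting used in practice."""
+     mesh = coarse_fn(images, poses)                            # M_c^(0)
+     for _ in range(n_cycles):
+         new_imgs, new_poses = render_fn(mesh, images, poses)   # Eq. 10
+         mesh = refine_fn(mesh, images + new_imgs, poses + new_poses)  # Eqs. 11-12
+     return mesh                                                # M_c^(n+1)
+ ```
+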
197
+ # 4. Experiments
198
+
199
+ # 4.1. Experimental Setup
200
+
201
+ Datasets. We conduct a comprehensive evaluation of the proposed method on the MobileBrick [19] and DTU [16] datasets. MobileBrick is a multi-view RGB-D dataset captured on a mobile device, providing precise 3D annotations for detailed 3D object reconstruction. Unlike the DTU dataset, which is captured in a controlled lab environment, MobileBrick represents more challenging, real-world conditions, making it more reflective of everyday scenarios. Following previous methods [19, 36, 41], we use 15 test
202
+
203
+ Table 1. Surface reconstruction and novel view synthesis results on MobileBrick. The results are averaged over all 18 test scenes with an initial input of 10 images per scene. PSNR-F is computed only on foreground regions. The best results are bolded.
204
+
205
+ <table><tr><td></td><td colspan="7">Mesh Reconstruction</td><td colspan="2">Rendering</td><td rowspan="3">Time</td></tr><tr><td></td><td colspan="3">σ = 2.5mm</td><td colspan="3">σ = 5mm</td><td rowspan="2">CD (mm)↓</td><td rowspan="2">PSNR↑</td><td rowspan="2">PSNR-F↑</td></tr><tr><td></td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td><td>Accu.(%)↑</td><td>Recall(%)↑</td><td>F1↑</td></tr><tr><td>Voxurf [41]</td><td>62.89</td><td>62.54</td><td>62.42</td><td>80.93</td><td>80.61</td><td>80.38</td><td>13.3</td><td>14.34</td><td>18.34</td><td>55 mins</td></tr><tr><td>MonoSDF [51]</td><td>41.56</td><td>32.47</td><td>36.22</td><td>57.88</td><td>48.19</td><td>52.21</td><td>37.7</td><td>14.71</td><td>15.42</td><td>6 hrs</td></tr><tr><td>2DGS [15]</td><td>49.83</td><td>45.32</td><td>47.10</td><td>72.65</td><td>64.88</td><td>67.96</td><td>14.8</td><td>17.12</td><td>18.52</td><td>10 mins</td></tr><tr><td>GOF [53]</td><td>50.24</td><td>61.11</td><td>54.96</td><td>74.99</td><td>82.68</td><td>78.16</td><td>11.0</td><td>16.52</td><td>18.36</td><td>50 mins</td></tr><tr><td>3DGS [17]</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>17.19</td><td>19.12</td><td>10 mins</td></tr><tr><td>SparseGS [42]</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>\</td><td>16.93</td><td>18.74</td><td>30 mins</td></tr><tr><td>Ours</td><td>68.36</td><td>69.79</td><td>68.97</td><td>86.79</td><td>86.82</td><td>86.65</td><td>9.7</td><td>17.48</td><td>20.45</td><td>1 hr</td></tr><tr><td>Ours (Two cycles)</td><td>69.61</td><td>68.89</td><td>69.14</td><td>87.79</td><td>85.93</td><td>86.74</td><td>9.9</td><td>17.58</td><td>20.55</td><td>1.6 hr</td></tr></table>
206
+
207
+ Table 2. Surface reconstruction results on DTU with 5 input views. Values indicate Chamfer Distance in millimeters (mm). "-" denotes failure cases where COLMAP could not generate point clouds for 3DGS initialization. GSDF-10 is reported with 10 input images, as it fails in sparser settings. The best results are bolded, while the second-best are underlined.
208
+
209
+ <table><tr><td>Scan</td><td>24</td><td>37</td><td>40</td><td>55</td><td>63</td><td>65</td><td>69</td><td>83</td><td>97</td><td>105</td><td>106</td><td>110</td><td>114</td><td>118</td><td>122</td><td>Mean</td><td>Time</td></tr><tr><td>Voxurf [41]</td><td>2.74</td><td>4.50</td><td>3.39</td><td>1.52</td><td>2.24</td><td>2.00</td><td>2.94</td><td>1.29</td><td>2.49</td><td>1.28</td><td>2.45</td><td>4.69</td><td>0.93</td><td>2.74</td><td>1.29</td><td>2.43</td><td>50 mins</td></tr><tr><td>MonoSDF [51]</td><td>1.30</td><td>3.45</td><td>1.45</td><td>0.61</td><td>1.43</td><td>1.17</td><td>1.07</td><td>1.42</td><td>1.49</td><td>0.79</td><td>3.06</td><td>2.60</td><td>0.60</td><td>2.21</td><td>2.87</td><td>1.70</td><td>6 hrs</td></tr><tr><td>SparseNeuS [21]</td><td>3.57</td><td>3.73</td><td>3.11</td><td>1.50</td><td>2.36</td><td>2.89</td><td>1.91</td><td>2.10</td><td>2.89</td><td>2.01</td><td>2.08</td><td>3.44</td><td>1.21</td><td>2.19</td><td>2.11</td><td>2.43</td><td>Pretrain + 2 hrs ft</td></tr><tr><td>2DGS [15]</td><td>4.26</td><td>4.80</td><td>5.53</td><td>1.50</td><td>3.01</td><td>1.99</td><td>2.66</td><td>3.65</td><td>3.06</td><td>2.54</td><td>2.15</td><td>-</td><td>0.96</td><td>2.17</td><td>1.31</td><td>2.84</td><td>6 mins</td></tr><tr><td>GOF (TSDF) [53]</td><td>7.30</td><td>5.80</td><td>6.03</td><td>2.79</td><td>4.23</td><td>3.41</td><td>3.44</td><td>4.37</td><td>3.75</td><td>2.99</td><td>3.19</td><td>-</td><td>2.64</td><td>3.67</td><td>2.25</td><td>4.03</td><td>50 mins</td></tr><tr><td>GOF [53]</td><td>4.37</td><td>3.68</td><td>3.84</td><td>2.29</td><td>4.40</td><td>3.28</td><td>2.84</td><td>4.64</td><td>3.40</td><td>3.76</td><td>3.56</td><td>-</td><td>3.06</td><td>2.95</td><td>2.91</td><td>3.55</td><td>50 mins</td></tr><tr><td>GSDF-10 [50]</td><td>6.89</td><td>6.82</td><td>7.97</td><td>6.54</td><td>5.22</td><td>1.91</td><td>5.56</td><td>4.38</td><td>7.01</td><td>3.69</td><td>6.33</td><td>6.33</td><td>3.95</td><td>6.30</td><td>2.09</td><td>5.40</td><td>3 hrs</td></tr><tr><td>Ours</td><td>1.55</td><td>2.64</td><td>1.52</td><td>1.40</td><td>1.51</td><td>1.46</td><td>1.23</td><td>1.43</td><td>1.82</td><td>1.19</td><td>1.49</td><td>1.80</td><td>0.54</td><td>1.19</td><td>1.04</td><td>1.45</td><td>1 hr</td></tr></table>
210
+
211
+ scenes from DTU and 18 test scenes from MobileBrick for evaluation. In the MobileBrick dataset, each scene consists of 360-degree multi-view images, from which we sample 10 images with $10\%$ overlap for sparse view reconstruction. In contrast, the DTU dataset, with higher overlap, is sampled with 5 frames per scene. We also present reconstruction results for the low-overlap 3-view setting in the supplementary materials. For fair comparison, 3DGS-based methods are initialized using point clouds from COLMAP [30] with ground-truth poses. The selected images and poses are used for 3D reconstruction, while the remaining images serve as a test set for evaluating novel view rendering.
212
+
213
+ Baselines. We compare our proposed method with both SDF-based and 3DGS-based approaches for surface reconstruction. The SDF-based methods include MonoSDF [51], Voxurf [41], and SparseNeuS [21], which is pre-trained on large-scale data. The 3DGS-based methods include 2DGS [15] and GOF [53]. Additionally, we compare with
214
+
215
+ GSDF [50], which integrates both SDF and 3DGS, similar to our approach, but is designed for dense-view settings. For novel view rendering, we evaluate all these methods along with 3DGS [17] and SparseGS [42].
216
+
217
+ Evaluation metrics. We follow the official evaluation metrics on MobileBrick, reporting Chamfer Distance, precision, recall, and F1 score at two thresholds: $2.5mm$ and $5mm$ . For the DTU dataset, we use Chamfer Distance as the primary metric for surface reconstruction. To evaluate novel view rendering performance, we report PSNR for full images and PSNR-F, which is computed only over foreground regions. In each scene, we train models using sparse input images and test on all remaining views. The final result is averaged over all evaluation images.
218
+
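+ For reference, these point-set metrics can be sketched with SciPy's KD-tree; this is a generic implementation of Chamfer Distance and thresholded precision/recall/F1, not the official MobileBrick evaluation script:
+
+ ```python
+ import numpy as np
+ from scipy.spatial import cKDTree
+
+ def chamfer_and_f1(pred_pts, gt_pts, thresh=2.5):
+     """Symmetric Chamfer Distance plus precision/recall/F1 at a
+     distance threshold (e.g., 2.5 mm or 5 mm), on (N, 3) arrays."""
+     d_pred = cKDTree(gt_pts).query(pred_pts)[0]   # pred -> gt (accuracy)
+     d_gt = cKDTree(pred_pts).query(gt_pts)[0]     # gt -> pred (completeness)
+     chamfer = d_pred.mean() + d_gt.mean()
+     precision = (d_pred < thresh).mean()
+     recall = (d_gt < thresh).mean()
+     f1 = 2 * precision * recall / max(precision + recall, 1e-8)
+     return chamfer, precision, recall, f1
+ ```
+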
219
+ Implementation details. We set the voxel grid resolution to $96^3$ during coarse mesh training, requiring approximately 15 minutes for 10k iterations. The weight of the proposed
220
+
221
+ Table 3. Surface reconstruction results with varying numbers of input views on MobileBrick (porsche) and DTU (scan69). The Baseline represents a pure SDF-based reconstruction without the assistance from 3DGS. $\delta$ indicates the improvement.
222
+
223
+ <table><tr><td rowspan="2">Input</td><td colspan="3">MobileBrick / F1 score</td><td colspan="3">DTU / CD</td></tr><tr><td>Baseline</td><td>Ours</td><td>δ</td><td>Baseline</td><td>Ours</td><td>δ</td></tr><tr><td>5</td><td>33.50</td><td>43.11</td><td>+9.61</td><td>2.940</td><td>1.230</td><td>-1.710</td></tr><tr><td>10</td><td>59.66</td><td>62.37</td><td>+2.71</td><td>1.362</td><td>1.165</td><td>-0.197</td></tr><tr><td>20</td><td>63.18</td><td>63.88</td><td>+0.7</td><td>1.043</td><td>0.965</td><td>-0.078</td></tr></table>
224
+
225
+ normal loss is set to 0.05, while all other parameters follow Voxurf [41]. Next, we train 3DGS [17] for 7k iterations, which takes around 5 minutes, and render 10 new viewpoint images within 30 seconds. After expanding the training images, we increase the voxel grid resolution to $256^{3}$ and train for 20k iterations, taking approximately 40 minutes. Thus, a complete optimization cycle takes roughly 1 hour.
226
+
227
+ # 4.2. Comparisons
228
+
229
+ Results on MobileBrick. Table 1 presents a quantitative comparison of our method against previous approaches. The results show that Voxurf [41], which utilizes an SDF-based representation, outperforms 2DGS [15] and GOF [53] (both 3DGS-based methods) in surface reconstruction metrics, particularly in terms of the F1 score. However, all 3DGS-based methods achieve notably better novel view rendering performance, as evidenced by their higher PSNR values compared to Voxurf. A visual comparison is illustrated in Fig. 5 and Fig. 6. By leveraging the strengths of both SDF and 3DGS representations, our method achieves state-of-the-art performance in surface reconstruction and novel view synthesis. To balance efficiency and performance, we adopt a single-cycle approach in practice.
230
+
231
+ Results on DTU. Table 2 presents surface reconstruction results on the DTU dataset, which is particularly challenging due to the use of only 5 uniformly sampled frames for reconstruction. SparseNeuS [21] is a pre-trained model that requires an additional 2 hours of fine-tuning. COLMAP fails to generate sparse point clouds for scene 110, preventing 3DGS initialization. GSDF [50] struggles in sparse-view settings, so we train it on 10 images. Despite these challenges, our method achieves robust reconstruction and significantly outperforms other approaches.
232
+
233
+ # 4.3. Ablations
234
+
235
+ Efficacy of 3DGS for Improving SDF. Table 3 compares our method with a pure SDF-based reconstruction baseline at different sparsity levels, using up to 20 images per scene. The results on MobileBrick and DTU validate the effectiveness of our 3DGS-assisted SDF approach. More results are
236
+
237
+ Table 4. 3DGS rendering results with different initializations, averaged across all 18 MobileBrick test scenes.
238
+
239
+ <table><tr><td>Method</td><td>Foreground PSNR</td></tr><tr><td>3DGS (COLMAP)</td><td>19.13</td></tr><tr><td>3DGS w/ mesh clean</td><td>19.88</td></tr><tr><td>3DGS w/ normal and mesh clean</td><td>20.45</td></tr></table>
240
+
241
+ Table 5. Ablation study on pose expansion strategies in MobileBrick (aston) with 10 input images.
242
+
243
+ <table><tr><td></td><td>F1↑</td><td>Recall(%)↑</td><td>CD (mm)↓</td></tr><tr><td>Baseline</td><td>55.8</td><td>49.9</td><td>8.7</td></tr><tr><td>Camera position perturbation</td><td>59.9</td><td>57.4</td><td>6.6</td></tr><tr><td>Camera poses interpolation</td><td>60.8</td><td>59.1</td><td>6.4</td></tr></table>
244
+
245
+ provided in the supplementary material.
246
+
247
+ ![](images/98f9192688a6ecee21a676bc469d401813d2180877d44440bae719bfd53bef1f.jpg)
248
+ Figure 7. Reconstruction quality with varying numbers of 3DGS-rendered novel view images from expanded poses, averaged across all 18 MobileBrick test scenes, with an initial input of 10 images.
249
+
250
+ Efficacy of SDF for enhancing 3DGS. Table 4 compares the novel view rendering results for 3DGS using point clouds initialized with different sampling strategies. The results demonstrate that our proposed mesh cleaning and normal supervision notably improve 3DGS performance.
251
+
252
+ Number of newly rendered views. Fig. 7 illustrates the impact of the number of newly rendered images on surface reconstruction. On MobileBrick, rendering 10 novel views significantly improves Chamfer Distance $(26.5\%)$ and F1 $(7.2\%)$ . As the number of novel views increases, accuracy gains gradually diminish. This suggests that while additional renderings refine reconstruction, the majority of benefits are achieved with the first 10 rendered images.
253
+
254
+ Different pose expansion strategies. Table 5 summarizes the reconstruction performance with expansion images
255
+
256
+ ![](images/ac1735dc640facf888705f357ae92374d7c76550ce1403c1e2a1051e629377c3.jpg)
257
+ Figure 5. Qualitative mesh reconstruction comparisons on MobileBrick. See more visual results in supplementary material.
258
+
259
+ ![](images/0362c2caddfb443c8ba7575d01665c0a3a62eb537c5b74ce8804c586df3eed42.jpg)
260
+ Figure 6. Qualitative novel view synthesis comparisons on MobileBrick.
261
+
262
+ from different strategies. We double the number of original input camera poses, generating new viewpoints and rendering additional images accordingly. The two strategies significantly enhance surface reconstruction quality, with camera pose interpolation yielding the greatest improvement.
263
+
264
+ # 5. Conclusion
265
+
266
+ This paper introduces a novel framework for sparse-view reconstruction, where SDF-based and 3DGS-based representations complement each other to enhance both surface reconstruction and novel view rendering. Specifically, our method leverages SDF for modeling global geometry and 3DGS for capturing fine details, achieving significant
267
+
268
+ improvements over state-of-the-art methods on two widely used real-world datasets.
269
+
270
+ Limitation and future work. Although our method can theoretically be generalized to any SDF and novel view rendering approaches, our current implementation is built on Voxurf and 3DGS, which were selected for their efficiency-performance trade-off. As a result, our method is currently limited to object-level scenes and struggles with extremely sparse inputs, such as only two images. In the future, we aim to extend our approach to handle more diverse scenes and further improve its robustness to sparse inputs.
271
+
272
+ # Acknowledgments
273
+
274
+ This work was supported by the National Natural Science Foundation of China (No. 62206244).
275
+
276
+ # References
277
+
278
+ [1] Jiayang Bai, Letian Huang, Jie Guo, Wen Gong, Yuanqi Li, and Yanwen Guo. 360-gs: Layout-guided panoramic gaussian splatting for indoor roaming. In International Conference on 3D Vision 2025. 2
279
+ [2] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Int. Conf. Comput. Vis., pages 5855–5864, 2021. 2
280
+ [3] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5470-5479, 2022. 2
281
+ [4] Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, and Philip Torr. Porf: Pose residual field for accurate neural surface reconstruction. In ICLR, 2024. 2
282
+ [5] Wenjing Bian, Zirui Wang, Kejie Li, Jiawang Bian, and Victor Adrian Prisacariu. Nope-nerf: Optimising neural radiance field with no pose prior. 2023. 2
283
+ [6] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Int. Conf. Comput. Vis., pages 14124-14133, 2021. 1, 2
284
+ [7] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In Eur. Conf. Comput. Vis., pages 333-350. Springer, 2022. 2
285
+ [8] Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, and Guofeng Zhang. Pgsr: Planar-based gaussian splatting for efficient and high-fidelity surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 2024. 2
286
+ [9] Hanlin Chen, Chen Li, and Gim Hee Lee. Neusg: Neural implicit surface reconstruction with 3d gaussian splatting guidance. arXiv preprint arXiv:2312.00846, 2023. 2
287
+ [10] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 1, 2
288
+ [11] C. De Boor. A practical guide to splines. Springer-Verlag, 1978. 5
289
+ [12] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5501-5510, 2022. 2
290
+ [13] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5354-5363, 2024. 2
291
+ [14] Siming He, Zach Osman, and Pratik Chaudhari. From nerfs to gaussian splats, and back. arXiv preprint arXiv:2405.09717, 2024. 2
292
+
293
+ [15] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In SIGGRAPH 2024 Conference Papers. Association for Computing Machinery, 2024. 1, 2, 6, 7
294
+ [16] Rasmus Jensen, Anders Dahl, George Vogiatzis, Engin Tola, and Henrik Aanæs. Large scale multi-view stereopsis evaluation. In IEEE Conf. Comput. Vis. Pattern Recog., 2014. 2, 5
295
+ [17] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 2023. 1, 2, 3, 6, 7
296
+ [18] Jiyeop Kim and Jongwoo Lim. Integrating meshes and 3d gaussians for indoor scene reconstruction with sam mask guidance. arXiv preprint arXiv:2407.16173, 2024. 2
297
+ [19] Kejie Li, Jia-Wang Bian, Robert Castle, Philip HS Torr, and Victor Adrian Prisacariu. Mobilebrick: Building legs for 3d reconstruction on mobile devices. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4892-4901, 2023. 2, 5
298
+ [20] Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H Taylor, Mathias Unberath, Ming-Yu Liu, and Chen-Hsuan Lin. Neuralangelo: High-fidelity neural surface reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8456-8465, 2023. 2
299
+ [21] Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, and Wenping Wang. Sparseneus: Fast generalizable neural surface reconstruction from sparse views. In Eur. Conf. Comput. Vis., pages 210-227. Springer, 2022. 1, 2, 6, 7
300
+ [22] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. In Seminal graphics: pioneering efforts that shaped the field, pages 347-353, 1998. 2
301
+ [23] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20654-20664, 2024. 2
302
+ [24] Xiaoyang Lyu, Yang-Tian Sun, Yi-Hua Huang, Xiuzhe Wu, Ziyi Yang, Yilun Chen, Jiangmiao Pang, and Xiaojuan Qi. 3dgsr: Implicit surface reconstruction with 3d gaussian splatting. ACM Transactions on Graphics (TOG), 43(6):1-12, 2024. 2
303
+ [25] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Eur. Conf. Comput. Vis., 2020. 1, 2
304
+ [26] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):1-15, 2022. 2
305
+ [27] Stanley Osher, Ronald Fedkiw, Stanley Osher, and Ronald Fedkiw. Constructing signed distance functions. Level set methods and dynamic implicit surfaces, pages 63-74, 2003. 2
306
+ [28] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 165-174, 2019. 2
307
+
308
+ [29] Yufan Ren, Fangjinhua Wang, Tong Zhang, Marc Pollefeys, and Sabine Susstrunk. Volrecon: Volume rendering of signed ray distance functions for generalizable multi-view reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 16685-16695, 2023. 1, 2
309
+ [30] Johannes Lutz Schönberger and Jan-Michael Frahm. Structure-from-motion revisited. In IEEE Conf. Comput. Vis. Pattern Recog., 2016. 6
310
+ [31] Noah Snavely, Steven M Seitz, and Richard Szeliski. Photo tourism: exploring photo collections in 3d. In ACM SIGGRAPH, pages 835-846, 2006. 3
311
+ [32] Xiaowei Song, Jv Zheng, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, and Hao Zhao. Sa-gs: Scale-adaptive gaussian splatting for training-free anti-aliasing. arXiv preprint arXiv:2403.19615, 2024. 2
312
+ [33] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5459-5469, 2022. 2
313
+ [34] Hao Sun, Junping Qin, Lei Wang, Kai Yan, Zheng Liu, Xinglong Jia, and Xiaole Shi. 3dgs-hd: Elimination of unrealistic artifacts in 3d gaussian splatting. In 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), pages 696-702. IEEE, 2024. 2
314
+ [35] Matias Turkulainen, Xuqian Ren, Jaroslav Melekhov, Otto Seiskari, Esa Rahtu, and Juho Kannala. Dn-splatter: Depth and normal priors for gaussian splatting and meshing. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2421-2431. IEEE, 2025. 2
315
+ [36] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Adv. Neural Inform. Process. Syst., 2021. 1, 2, 5
316
+ [37] Ruizhe Wang, Chunliang Hua, Tomakayev Shingys, Mengyuan Niu, Qingxin Yang, Lizhong Gao, Yi Zheng, Junyan Yang, and Qiao Wang. Enhancement of 3d gaussian splatting using raw mesh for photorealistic recreation of architectures. arXiv preprint arXiv:2407.15435, 2024. 2
317
+ [38] William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. Computer Graphics, 21(4):163-169, 1987. 4
318
+ [39] Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, and Zexiang Xu. Meshlrm: Large reconstruction model for high-quality meshes. arXiv preprint arXiv:2404.12385, 2024. 1, 2
319
+ [40] Shuang Wu, Youtian Lin, Feihu Zhang, Yifei Zeng, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859-121881, 2024. 1, 2
320
+ [41] Tong Wu, Jiaqi Wang, Xingang Pan, Xudong Xu, Christian Theobalt, Ziwei Liu, and Dahua Lin. Voxurf: Voxel-based efficient and accurate neural surface reconstruction. In Int. Conf. Learn. Represent., 2023. 1, 3, 4, 5, 6, 7
321
+ [42] Haolin Xiong, Sairisheek Muttukuru, Rishi Upadhyay, Pradyumna Chari, and Achuta Kadambi. Sparsegs: Real-time $360^{\circ}$ sparse view synthesis using gaussian splatting. arXiv, 2023. 6
324
+ [43] Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Shenghua Gao, and Ying Shan. Instantmesh: Efficient 3d mesh generation from a single image with sparse-view large reconstruction models. arXiv preprint arXiv:2404.07191, 2024. 1, 2
325
+ [44] Runyi Yang, Zhenxin Zhu, Zhou Jiang, Baijun Ye, Xiaoxue Chen, Yifei Zhang, Yuantao Chen, Jian Zhao, and Hao Zhao. Spectrally pruned gaussian fields with neural compensation. arXiv preprint arXiv:2405.00676, 2024. 2
326
+ [45] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 20331-20341, 2024. 2
327
+ [46] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. Adv. Neural Inform. Process. Syst., 34:4805-4815, 2021. 2
328
+ [47] Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, et al. gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research, 26(34):1-17, 2025. 2
329
+ [48] Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen. Metric3d: Towards zero-shot metric 3d prediction from a single image. In Int. Conf. Comput. Vis., pages 9043-9053, 2023. 4
330
+ [49] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4578-4587, 2021. 1, 2
331
+ [50] Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, and Bo Dai. Gsdf: 3dgs meets sdf for improved neural rendering and reconstruction. Advances in Neural Information Processing Systems, 37:129507-129530, 2024. 2, 6, 7
332
+ [51] Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger. Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction. Adv. Neural Inform. Process. Syst., 2022. 6
333
+ [52] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In IEEE Conf. Comput. Vis. Pattern Recog., pages 19447-19456, 2024. 2
334
+ [53] Zehao Yu, Torsten Sattler, and Andreas Geiger. Gaussian opacity fields: Efficient adaptive surface reconstruction in unbounded scenes. ACM Trans. Graph., 2024. 1, 2, 6, 7
335
+ [54] Baowen Zhang, Chuan Fang, Rakesh Shrestha, Yixun Liang, Xiaoxiao Long, and Ping Tan. Rade-gs: Rasterizing depth in gaussian splatting. arXiv preprint arXiv:2406.01467, 2024. 2
336
+ [55] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a7df9e73a96db4da8a5e4f5637904e7f1bb40f96db85ec47bbe49697fadb2bb
3
+ size 894797
2025/SurfaceSplat_ Connecting Surface Reconstruction and Gaussian Splatting/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_content_list.json ADDED
@@ -0,0 +1,1615 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "SweetTok: Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 117,
8
+ 128,
9
+ 880,
10
+ 174
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zhentao Tan,\\* Ben Xue,\\* Jian Jia, Junhao Wang, Wencai Ye, Shaoyun Shi, Mingjie Sun \nWenjin Wu, Quan Chen†, Peng Jiang \nKuaishou Technology, Beijing, China",
17
+ "bbox": [
18
+ 158,
19
+ 203,
20
+ 836,
21
+ 257
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "{tanzhentao03, xueben, jiajian, wangjunhao05, yewencai, shishaoyun, sunmingjie, wuwenjin, chenquan06, jiangpeng}@kuaishou.com",
28
+ "bbox": [
29
+ 196,
30
+ 258,
31
+ 787,
32
+ 292
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Abstract",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 246,
42
+ 325,
43
+ 326,
44
+ 343
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "This paper presents the Semantic-aWarE spatial-tEmporal Tokenizer (SweetTok), a novel video tokenizer to overcome the limitations in current video tokenization methods for compacted yet effective discretization. Unlike previous approaches that process flattened local visual patches via direct discretization or adaptive query tokenization, SweetTok proposes a decoupling framework, compressing visual inputs through distinct spatial and temporal queries via Decoupled Query AutoEncoder (DQAE). This design allows SweetTok to efficiently compress video token count while achieving superior fidelity by capturing essential information across spatial and temporal dimensions. Furthermore, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression to address the differences in semantic representation between appearance and motion information. SweetTok significantly improves video reconstruction results by $42.8\\%$ w.r.t rFVD on UCF-101 dataset. With a better token compression strategy, it also boosts downstream video generation results by $15.1\\%$ w.r.t gFVD. Additionally, the compressed decoupled tokens are imbued with semantic information, enabling few-shot recognition capabilities powered by LLMs in downstream applications.",
51
+ "bbox": [
52
+ 88,
53
+ 358,
54
+ 483,
55
+ 705
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1. Introduction",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 89,
65
+ 720,
66
+ 220,
67
+ 736
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Visual tokenizers [1-3, 8, 14, 39, 41, 49] are emerging as essential components in the field of modern computer vision models, particularly in the generation [1, 14, 46] and understanding [17, 26, 35, 42, 43] of vision data. These tools convert visual inputs into discrete tokens, capturing essential temporal and spatial features that facilitate advanced analysis by formulating visual-related tasks as a token prediction process.",
74
+ "bbox": [
75
+ 89,
76
+ 746,
77
+ 482,
78
+ 866
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "image",
84
+ "img_path": "images/3105bb54fbc4a424cc206efb753bb7c17b5c6e29d357167ab2e2236a6c07ffb7.jpg",
85
+ "image_caption": [
86
+ "Figure 1. Illustration of our framework. We build a compact visual latent space by reducing token count in a decoupled style and leveraging motion-enhanced semantic text embedding. The encoded tokens can be applied to downstream tasks, such as generation and understanding."
87
+ ],
88
+ "image_footnote": [],
89
+ "bbox": [
90
+ 535,
91
+ 325,
92
+ 885,
93
+ 628
94
+ ],
95
+ "page_idx": 0
96
+ },
97
+ {
98
+ "type": "text",
99
+ "text": "Compression ratio and reconstruction fidelity are vital criteria for evaluating a tokenizer. Recent visual tokenizers, especially video tokenizers [1, 2, 41] typically retain a low compression ratio. This is because visual tokens are usually derived from 2D patches [12] or 3D tubes [2, 14] which preserve location relationships (e.g., each token corresponds to a specific region of input [51]), leading to redundancy in both spatial and temporal dimensions. Most recent work LARP [3] quantizes the flattened video patches through adaptive holistic queries to achieve high compression ratio. However, it is observed that directly flattening",
100
+ "bbox": [
101
+ 511,
102
+ 734,
103
+ 908,
104
+ 902
105
+ ],
106
+ "page_idx": 0
107
+ },
108
+ {
109
+ "type": "header",
110
+ "text": "CVF",
111
+ "bbox": [
112
+ 106,
113
+ 2,
114
+ 181,
115
+ 42
116
+ ],
117
+ "page_idx": 0
118
+ },
119
+ {
120
+ "type": "header",
121
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
122
+ "bbox": [
123
+ 238,
124
+ 0,
125
+ 807,
126
+ 46
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "page_footnote",
132
+ "text": "*Equal contribution",
133
+ "bbox": [
134
+ 107,
135
+ 875,
136
+ 215,
137
+ 887
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "page_footnote",
143
+ "text": "† Corresponding author",
144
+ "bbox": [
145
+ 109,
146
+ 887,
147
+ 233,
148
+ 898
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "page_number",
154
+ "text": "23541",
155
+ "bbox": [
156
+ 478,
157
+ 944,
158
+ 517,
159
+ 955
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "text",
165
+ "text": "video tokens into sequence may lead to difficulty in learning intertwined spatial temporal information resulting in low reconstruction performance. Therefore, a new compression method needs to be proposed, one that takes into account the spatiotemporal properties of video.",
166
+ "bbox": [
167
+ 89,
168
+ 90,
169
+ 480,
170
+ 167
171
+ ],
172
+ "page_idx": 1
173
+ },
174
+ {
175
+ "type": "text",
176
+ "text": "Another issue, meanwhile, is that a higher compression ratio typically results in a greater loss of reconstruction details. To complement visual information during compression, one common strategy is to introduce pretrained language embeddings as the latent codebook [27, 50, 52], leveraging their semantic representation capabilities. However, previous works primarily focus on image modality, overlooking the relationships between text and motion in video domain.",
177
+ "bbox": [
178
+ 89,
179
+ 169,
180
+ 480,
181
+ 305
182
+ ],
183
+ "page_idx": 1
184
+ },
185
+ {
186
+ "type": "text",
187
+ "text": "To address existing limitations, we propose SweetTok - Semantic-aWarE spatial-tEmporal Tokenizer - as illustrated in Figure 1. Considering the heterogeneous redundancy in static images and dynamic frames, we propose the Decoupled Query AutoEncoder (DQAE) to compress spatial and temporal information into separate learnable queries. Different from previous works [3, 25, 51], our findings indicate that coupling the compression of spatiotemporal information increases the difficulty for the decoder to learn the motion information of the same pixel across consecutive frames. Thus, taking the decoupled spatial and temporal queries as inputs, we devise a strategy of spatial decoding followed by temporal decoding to achieve a separate reconstruction of the spatial and temporal dimensions of visual information. Additionally, the decoupled spatiotemporal reconstruction approach naturally allows for finetuning on image data, making our SweetTok flexible to image reconstruction task.",
188
+ "bbox": [
189
+ 91,
190
+ 306,
191
+ 482,
192
+ 579
193
+ ],
194
+ "page_idx": 1
195
+ },
196
+ {
197
+ "type": "text",
198
+ "text": "Furthermore, to integrate the semantic information inherent in pre-traiend language model, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression addressing the differences in semantic representation between spatial and temporal information. Specifically, we design two language-based codebooks based on the part of speech, using nouns and adjectives for spatial static information and verbs and adverbs for temporal motion information. By incorporating language-based codebooks, the learnable compressed queries can also be easily adapted to downstream visual understanding tasks by in-context learning of LLM.",
199
+ "bbox": [
200
+ 89,
201
+ 580,
202
+ 482,
203
+ 762
204
+ ],
205
+ "page_idx": 1
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "Exhaustive experiments demonstrate the effectiveness of SweetTok. Compared with vanilla video tokenizer without token compression (OmniTok[2]), SweetTok improves rFVD by $52.3\\%$ on UCF-101 [33] using only $25\\%$ of the tokens. Compared with vanilla query-based tokenizer (LARP [3]), SweetTok reduces rFVD from 35.15 to 20.46 and gFVD from 99 to 84, on UCF-101. By directly finetuning the decoupled spatial branch on the ImageNet-1k [10], SweetTok also demonstrates a substantial improvement in",
210
+ "bbox": [
211
+ 89,
212
+ 763,
213
+ 482,
214
+ 900
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "rFID, decreasing it from 0.59 to 0.37.",
221
+ "bbox": [
222
+ 513,
223
+ 90,
224
+ 763,
225
+ 104
226
+ ],
227
+ "page_idx": 1
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "In summary, our work makes the following key contributions:",
232
+ "bbox": [
233
+ 511,
234
+ 107,
235
+ 903,
236
+ 136
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "list",
242
+ "sub_type": "text",
243
+ "list_items": [
244
+ "- We introduce SweetTok, a cutting-edge video tokenizer that achieves the state-of-the-art reconstruction fidelity with a high compression ratio via spatial-temporal decoupling and decoupled query autoencoder (DQAE), reaching a \"sweet spot\" between compression and fidelity.",
245
+ "- We propose a motion-enhanced language codebook (MLC) to more effectively capture the action information embedded in the video modality, thereby improving reconstruction quality and supporting downstream video understanding tasks.",
246
+ "- We perform extensive experiments to verify the effectiveness of SweetTok, which exhibits the state-of-the-art performance on video reconstruction, image reconstruction, and class-conditional video generation tasks, leading by a large margin of $42.8\\%$ , $37.2\\%$ , and $15.1\\%$ ."
247
+ ],
248
+ "bbox": [
249
+ 511,
250
+ 140,
251
+ 903,
252
+ 367
253
+ ],
254
+ "page_idx": 1
255
+ },
256
+ {
257
+ "type": "text",
258
+ "text": "2. Background",
259
+ "text_level": 1,
260
+ "bbox": [
261
+ 513,
262
+ 383,
263
+ 640,
264
+ 401
265
+ ],
266
+ "page_idx": 1
267
+ },
268
+ {
269
+ "type": "text",
270
+ "text": "2.1. Visual Tokenizer With Vector Quantization",
271
+ "text_level": 1,
272
+ "bbox": [
273
+ 511,
274
+ 409,
275
+ 880,
276
+ 425
277
+ ],
278
+ "page_idx": 1
279
+ },
280
+ {
281
+ "type": "text",
282
+ "text": "Exploring visual tokenizers and their applications in generative models has led to significant advancements in image/video-related tasks. The general idea is to discretize visual data into tokens, then tasks like visual generation [8, 14, 47, 48] & understanding [6, 12, 17, 18, 26, 35, 42, 43] can be tackled in a sequence prediction style as natural language processing [11, 29, 36]. Our work belongs to the series of Vector Quantized Variational AutoEncoder (VQVAE) [31, 39] tokenizers, which introduce a discrete latent space for continuous VAE [20] encoder-decoder structure. It typically encodes a high-dimensional image into a low-dimensional latent representation, then queries the nearest index from a learnable codebook to quantize the latent vector, and finally decodes back reversely to reconstruct the raw input signal. Since this type of tokenizer acquires reconstruction loss, it can maintain high-level semantic and low-level details of input vision. VQGAN [13] adopted adversarial training loss to improve high-frequency details. ViT-VQGAN [47] upgraded encoder-decoder with visiontransformer (ViT) architecture [12] and further boosted results. TiTok [51] replaced 2D image structure with 1D sequence latent representation, then used a self-attention transformer [40] to compress token number.",
283
+ "bbox": [
284
+ 511,
285
+ 431,
286
+ 903,
287
+ 779
288
+ ],
289
+ "page_idx": 1
290
+ },
291
+ {
292
+ "type": "text",
293
+ "text": "However, the above methods can only process image data. For video modality, TATS [14] used 3D-CNN to encode video patches and adopted sliding windows to deal with long-term relations. CViViT [41] used ViT [12] structure to encode spatial patches and then adopted a causal transformer to model temporal information. OmniTokenizer [2] and MAGVIT [1, 49] adopted similar transformer architecture and introduced image pre-training to improve",
294
+ "bbox": [
295
+ 511,
296
+ 780,
297
+ 906,
298
+ 901
299
+ ],
300
+ "page_idx": 1
301
+ },
302
+ {
303
+ "type": "page_number",
304
+ "text": "23542",
305
+ "bbox": [
306
+ 478,
307
+ 944,
308
+ 519,
309
+ 957
310
+ ],
311
+ "page_idx": 1
312
+ },
313
+ {
314
+ "type": "image",
315
+ "img_path": "images/d87274cbaa95ce624d15f0ba4cda55698a5017642fbbd4b99d1a5f3c217a50d6.jpg",
316
+ "image_caption": [
317
+ "Figure 2. Pipeline overview. (a) Vanilla video tokenizers directly quantize flattened video patches. (b) Vanilla query-based tokenizers compress flattend video patches into adaptive queries. (c) SweetTok proposes decoupled query-based autoencoder (DQAE, §3.2.2). The spatial encoder quantizes the first frame's patch embeddings, while the temporal encoder quantizes residual between consecutive frames. The spatial decoder reconstructs the first frame's patches, replicates them $T$ times, and passes them to the temporal decoder for final information fusion and reconstruction. It also proposes motion-enhanced language codebook (MLC, §3.2.3) to complement reconstructed video information via action-related language semantics."
318
+ ],
319
+ "image_footnote": [],
320
+ "bbox": [
321
+ 96,
322
+ 89,
323
+ 361,
324
+ 388
325
+ ],
326
+ "page_idx": 2
327
+ },
328
+ {
329
+ "type": "image",
330
+ "img_path": "images/27c4e5b2d5a09d4ae384924f22048a982aab73a48766497e7dee269a3cc096de.jpg",
331
+ "image_caption": [],
332
+ "image_footnote": [],
333
+ "bbox": [
334
+ 385,
335
+ 88,
336
+ 903,
337
+ 388
338
+ ],
339
+ "page_idx": 2
340
+ },
341
+ {
342
+ "type": "text",
343
+ "text": "video tokenizer. LARP [3], on the other hand, introduces to compress flattened video patches into adaptive holistic queries with the guidance of a pre-trained auto-regressive model. In this paper, we inherit the popular spatial-temporal decomposition design for video data.",
344
+ "bbox": [
345
+ 88,
346
+ 510,
347
+ 482,
348
+ 585
349
+ ],
350
+ "page_idx": 2
351
+ },
352
+ {
353
+ "type": "text",
354
+ "text": "2.2. Language-based Latent Codebook",
355
+ "text_level": 1,
356
+ "bbox": [
357
+ 89,
358
+ 604,
359
+ 390,
360
+ 619
361
+ ],
362
+ "page_idx": 2
363
+ },
364
+ {
365
+ "type": "text",
366
+ "text": "The codebooks learned by vanilla VQ-VAEs are not interpretable with lexical meanings. Therefore, many works attempt to utilize pretrained language models embedding codebooks to enhance semantics. LQAE [27] replaced the visual codebook with frozen word embeddings from BERT [11]. SPAE [50] quantized image latent space in a pyramid structure to preserve semantic information from low-level to high-level. It also used large language model (LLM) codebook [9] so that the encoded image token can be directly adapted to visual understanding tasks through in-context learning [5] ability of LLM. We follow this evaluation pipeline for few-shot classification in our paper. V2L-Tokenizer [54] utilized CLIP [30] pretrained encoder and injected a learnable projector to align visual-text latent space implicitly. VQCT [52] replaced the projector with graph convolution networks [22] to consider the relationship between vocabularies. Furthermore, De-Diffusion [44] directly encoded image into plain text as latent space inter",
367
+ "bbox": [
368
+ 91,
369
+ 628,
370
+ 483,
371
+ 900
372
+ ],
373
+ "page_idx": 2
374
+ },
375
+ {
376
+ "type": "text",
377
+ "text": "face and decodes back through a text-to-image (T2I) diffusion model [32]. However, these studies primarily focus on the image modality. In this paper, we explore the design of the language codebook specifically for the video modality by splitting the codebook according to the video's spatial-temporal attribute.",
378
+ "bbox": [
379
+ 511,
380
+ 510,
381
+ 906,
382
+ 602
383
+ ],
384
+ "page_idx": 2
385
+ },
386
+ {
387
+ "type": "text",
388
+ "text": "3. Method",
389
+ "text_level": 1,
390
+ "bbox": [
391
+ 511,
392
+ 616,
393
+ 604,
394
+ 632
395
+ ],
396
+ "page_idx": 2
397
+ },
398
+ {
399
+ "type": "text",
400
+ "text": "3.1. Preliminary",
401
+ "text_level": 1,
402
+ "bbox": [
403
+ 511,
404
+ 642,
405
+ 640,
406
+ 657
407
+ ],
408
+ "page_idx": 2
409
+ },
410
+ {
411
+ "type": "text",
412
+ "text": "A typical visual vector-quantization (VQ) model [1, 2, 14, 49] contains three parts: encoder $\\mathcal{E}$ , decoder $\\mathcal{D}$ and latent quantizer $\\mathcal{Q}$ as shown in Figure 2 (a). Take video modality as an example, given a video input $x\\in \\mathbb{R}^{T\\times H\\times W\\times 3}$ , where $T$ represents the temporal length and $H\\times W$ denotes spatial resolution, encoder $\\mathcal{E}(x)$ projects it into latent space $\\mathbb{Z}\\in \\mathbb{R}^{N\\times D}$ , where $D$ is latent dimension and $N$ is token number. A quantizer $\\mathcal{Q}$ is constructed in this latent space $\\mathbb{Z}$ by querying the nearest neighbor in codebook $C\\in \\mathbb{R}^{L_c\\times D}$ , where $L_{c}$ is codebook size. Then $\\mathcal{D}$ decodes latent space back to pixel space and applies self-supervised reconstruction loss:",
413
+ "bbox": [
414
+ 511,
415
+ 664,
416
+ 906,
417
+ 843
418
+ ],
419
+ "page_idx": 2
420
+ },
421
+ {
422
+ "type": "equation",
423
+ "text": "\n$$\n\\mathcal {L} _ {\\text {r e c}} (x, \\mathcal {D} (\\mathcal {Q} (\\mathcal {E} (x)))) \\tag {1}\n$$\n",
424
+ "text_format": "latex",
425
+ "bbox": [
426
+ 632,
427
+ 847,
428
+ 903,
429
+ 864
430
+ ],
431
+ "page_idx": 2
432
+ },
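The encode / nearest-neighbor quantize / decode pipeline summarized by Eq. (1) can be pictured with a minimal PyTorch-style sketch. This is an illustrative outline only: the encoder, decoder, tensor shapes, and the `quantize` helper below are hypothetical stand-ins, not the paper's actual architecture.

```python
import torch

def quantize(z, codebook):
    """Nearest-neighbor lookup: map each latent vector in z (N, D)
    to its closest codebook entry (L_c, D), as in Q(E(x))."""
    d = torch.cdist(z, codebook)      # (N, L_c) pairwise L2 distances
    idx = d.argmin(dim=-1)            # (N,) discrete token ids
    z_q = codebook[idx]               # (N, D) quantized latents
    return z_q, idx

# Toy example with assumed sizes: N=16 latents, D=256 dims, L_c=1024 codes.
z = torch.randn(16, 256)              # stand-in for encoder output E(x)
codebook = torch.randn(1024, 256)     # learnable codebook C
z_q, idx = quantize(z, codebook)
# A decoder D would map z_q back to pixels; the reconstruction loss of
# Eq. (1) would then be, e.g., ((x_hat - x) ** 2).mean().
```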
433
+ {
434
+ "type": "text",
435
+ "text": "Unlike traditional visual VQ models, LARP [3] quantizes flattened video patches into holistic queries $\\mathbf{Q} \\in$",
436
+ "bbox": [
437
+ 511,
438
+ 869,
439
+ 906,
440
+ 902
441
+ ],
442
+ "page_idx": 2
443
+ },
444
+ {
445
+ "type": "page_number",
446
+ "text": "23543",
447
+ "bbox": [
448
+ 478,
449
+ 944,
450
+ 517,
451
+ 955
452
+ ],
453
+ "page_idx": 2
454
+ },
455
+ {
456
+ "type": "text",
457
+ "text": "$\\mathbb{R}^{L\\times D}$ as shown in Figure 2 (b), where $L$ is the adaptive query size. As shown in Equation (2), the encoder $\\mathcal{E}$ processes concatenated flattened video patches $\\mathbf{E} =$ flatten $(x)\\in \\mathbb{R}^{N\\times D}$ and query tokens $\\mathbf{Q}\\in \\mathbb{R}^{L\\times D}$ outputting $\\mathbf{Z}_{\\mathbf{Q}}\\in \\mathbb{R}^{L\\times D}$ for quantization. The decoder $\\mathcal{D}$ reconstructs the video patches $\\tilde{\\mathbf{x}}\\in \\mathbb{R}^{T\\times H\\times W\\times 3}$ from the concatenation of learnable video queries $\\mathbf{E}_{\\mathbf{Q}}\\in \\mathbb{R}^{N\\times D}$ and the query discrete embeddings $\\tilde{\\mathbf{Z}}_{\\mathbf{Q}}$",
458
+ "bbox": [
459
+ 89,
460
+ 89,
461
+ 483,
462
+ 214
463
+ ],
464
+ "page_idx": 3
465
+ },
466
+ {
467
+ "type": "equation",
468
+ "text": "\n$$\n\\mathbf {Z} _ {\\mathbf {E}} | | \\mathbf {Z} _ {\\mathbf {Q}} = \\mathcal {E} (\\mathbf {E} | | \\mathbf {Q}), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q}} = \\mathcal {Q} (\\mathbf {Z} _ {\\mathbf {Q}}), \\tilde {x} = \\mathcal {D} (\\mathbf {E} _ {\\mathbf {Q}} | | \\tilde {\\mathbf {Z}} _ {\\mathbf {Q}}). \\tag {2}\n$$\n",
469
+ "text_format": "latex",
470
+ "bbox": [
471
+ 102,
472
+ 224,
473
+ 482,
474
+ 255
475
+ ],
476
+ "page_idx": 3
477
+ },
478
+ {
479
+ "type": "text",
480
+ "text": "However, directly compressing video from flattened patches is challenging due to intertwined redundant temporal data and low-level spatial content, leading to suboptimal performance. Thus, we designed SweetTok to balance reconstruction performance with a high compression ratio. Details are elaborated in Section 3.2.",
481
+ "bbox": [
482
+ 89,
483
+ 255,
484
+ 483,
485
+ 345
486
+ ],
487
+ "page_idx": 3
488
+ },
489
+ {
490
+ "type": "text",
491
+ "text": "3.2. Decoupled Spatial-Temporal Tokenization",
492
+ "text_level": 1,
493
+ "bbox": [
494
+ 89,
495
+ 354,
496
+ 450,
497
+ 371
498
+ ],
499
+ "page_idx": 3
500
+ },
501
+ {
502
+ "type": "text",
503
+ "text": "As noted in Section 3.1, directly quantizing video data via flattened patches hampers model learning due to intertwined redundant temporal and complex spatial information. Thus, we propose separately quantizing spatial and temporal dimensions before combining them to reconstruct the video, following the divide-and-conquer principle. The decoupling strategy enables high compression while ensuring higher fidelity. The main pipeline is shown in Figure 2 (c).",
504
+ "bbox": [
505
+ 89,
506
+ 376,
507
+ 483,
508
+ 512
509
+ ],
510
+ "page_idx": 3
511
+ },
512
+ {
513
+ "type": "text",
514
+ "text": "3.2.1. Patchify",
515
+ "text_level": 1,
516
+ "bbox": [
517
+ 89,
518
+ 520,
519
+ 194,
520
+ 535
521
+ ],
522
+ "page_idx": 3
523
+ },
524
+ {
525
+ "type": "text",
526
+ "text": "Given a video frame sequence $x \\in \\mathbb{R}^{T \\times H \\times W \\times 3}$ , we select the first frame $x_{1}$ as a reference for spatial information, the remaining $T - 1$ frames $x_{2:T}$ for temporal information, following the strategy in [2]. We apply two patch kernels $\\mathcal{P}_s, \\mathcal{P}_t$ with shapes $p_h \\times p_w$ and $p_t \\times p_h \\times p_w$ to $x_{1}$ and $x_{2:T}$ separately, generating $v_s \\in \\mathbb{R}^{1 \\times \\frac{H}{p_h} \\times \\frac{W}{p_w} \\times D}$ and $v_t \\in \\mathbb{R}^{\\frac{T - 1}{p_t} \\times \\frac{H}{p_h} \\times \\frac{W}{p_w} \\times D}$ shown below:",
527
+ "bbox": [
528
+ 89,
529
+ 537,
530
+ 483,
531
+ 652
532
+ ],
533
+ "page_idx": 3
534
+ },
535
+ {
536
+ "type": "equation",
537
+ "text": "\n$$\nv _ {s} = \\mathcal {P} _ {s} \\left(x _ {1}\\right), v _ {t} = \\mathcal {P} _ {t} \\left(x _ {2: T}\\right) \\tag {3}\n$$\n",
538
+ "text_format": "latex",
539
+ "bbox": [
540
+ 189,
541
+ 664,
542
+ 482,
543
+ 680
544
+ ],
545
+ "page_idx": 3
546
+ },
547
+ {
548
+ "type": "text",
549
+ "text": "$v_{s}$ and $v_{t}$ are inputs for transformer-based autoencoder, where $v_{s}$ contains spatial information, $v_{t}$ contains temporal information. In practice, for a video with 17 frames and a resolution of $256 \\times 256$ , we set $(p_{t}, p_{h}, p_{w})$ to $(4, 8, 8)$ , thus patchify frames into $v_{s}$ with shape $1 \\times 32 \\times 32$ and $v_{t}$ with shape $4 \\times 32 \\times 32$ . We use $t = \\frac{T - 1}{p_t} = 4$ to denote $v_{t}$ 's length.",
550
+ "bbox": [
551
+ 89,
552
+ 691,
553
+ 483,
554
+ 799
555
+ ],
556
+ "page_idx": 3
557
+ },
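A minimal sketch of the dual patchify step in Eq. (3), assuming the stated sizes (a 17-frame 256x256 clip, kernels of 8x8 for the first frame and 4x8x8 for the remaining frames). The convolutional patch embedders below are illustrative assumptions; the paper's actual projection layers may differ.

```python
import torch
import torch.nn as nn

D = 512
# Spatial kernel P_s: 8x8 patches over the first frame only.
patch_s = nn.Conv2d(3, D, kernel_size=(8, 8), stride=(8, 8))
# Temporal kernel P_t: 4x8x8 tubes over the remaining 16 frames.
patch_t = nn.Conv3d(3, D, kernel_size=(4, 8, 8), stride=(4, 8, 8))

x = torch.randn(1, 3, 17, 256, 256)   # (B, C, T, H, W) video clip
v_s = patch_s(x[:, :, 0])             # (1, D, 32, 32): spatial patch grid
v_t = patch_t(x[:, :, 1:])            # (1, D, 4, 32, 32): temporal tube grid
print(v_s.shape, v_t.shape)
```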
558
+ {
559
+ "type": "text",
560
+ "text": "3.2.2.Decoupled Query AutoEncoder (DQAE)",
561
+ "text_level": 1,
562
+ "bbox": [
563
+ 89,
564
+ 805,
565
+ 418,
566
+ 821
567
+ ],
568
+ "page_idx": 3
569
+ },
570
+ {
571
+ "type": "text",
572
+ "text": "To decouple spatial and temporal dimensions, we need to compress video patches separately along each dimension. Inspired by [3, 51] and Q-Former [25], we compress each dimension into an adaptive query tokens via cross-attention interactions. As an innovation, we recursively inject these",
573
+ "bbox": [
574
+ 89,
575
+ 824,
576
+ 483,
577
+ 901
578
+ ],
579
+ "page_idx": 3
580
+ },
581
+ {
582
+ "type": "text",
583
+ "text": "cross-attention query modules into transformer-based autoencoder to transfer information, forming our DQAE module shown in the right gray box of Figure 2 (c). For the rest of our paper, we use $\\mathcal{E}_{DQAE}$ and $\\mathcal{D}_{DQAE}$ as the encoder and decoder of the DQAE module for simplicity.",
584
+ "bbox": [
585
+ 511,
586
+ 90,
587
+ 905,
588
+ 167
589
+ ],
590
+ "page_idx": 3
591
+ },
592
+ {
593
+ "type": "text",
594
+ "text": "Spatial Tokenization. We observe that for most video parts, the first frame holds the most spatial information, so we use $v_{s}$ as the input to $\\mathcal{E}_{DQAE_s}$ for quantization shown below:",
595
+ "bbox": [
596
+ 511,
597
+ 186,
598
+ 905,
599
+ 247
600
+ ],
601
+ "page_idx": 3
602
+ },
603
+ {
604
+ "type": "equation",
605
+ "text": "\n$$\n\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {s}}} = \\mathcal {E} _ {D Q A E _ {s}} (\\mathbf {Q} _ {\\mathbf {s}}, v _ {s}), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {s}}} = \\mathcal {Q} _ {M L C} (\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {s}}}), \\quad (4)\n$$\n",
606
+ "text_format": "latex",
607
+ "bbox": [
608
+ 539,
609
+ 260,
610
+ 906,
611
+ 280
612
+ ],
613
+ "page_idx": 3
614
+ },
615
+ {
616
+ "type": "text",
617
+ "text": "where $\\mathbf{Q}_{\\mathbf{s}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}$ are the learnable spatial query embeddings and $\\mathbf{Z}_{\\mathbf{Q}_{\\mathbf{s}}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}$ are the output embeddings encoding information from the first frame patches $v_{s}$ . $\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{s}}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}$ are the quantized spatial token embeddings and $\\mathcal{Q}_{MLC}$ stands for Motion-enhanced Language Codebook (MLC) quantizer which will be elaborated in the following section. After quantization, we inject our informative spatial token embeddings $\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{s}}}$ into a learnable spatial patch queries $\\mathbf{Q}_{v_s}$ through decoder $\\mathcal{D}_{DQA_E_s}$ as below:",
618
+ "bbox": [
619
+ 511,
620
+ 290,
621
+ 906,
622
+ 439
623
+ ],
624
+ "page_idx": 3
625
+ },
626
+ {
627
+ "type": "equation",
628
+ "text": "\n$$\n\\tilde {v} _ {s} = \\mathcal {D} _ {D Q A E _ {s}} \\left(\\mathbf {Q} _ {v _ {s}}, \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {s}}\\right), \\tag {5}\n$$\n",
629
+ "text_format": "latex",
630
+ "bbox": [
631
+ 616,
632
+ 455,
633
+ 905,
634
+ 474
635
+ ],
636
+ "page_idx": 3
637
+ },
638
+ {
639
+ "type": "text",
640
+ "text": "where $\\tilde{v}_s$ are the reconstructed first frame video patches. The temporal component which will be stated in the following, combined with $\\tilde{v}_s$ is used for the final video reconstruction. We set $L_{\\text{spatial}} = 256$ in real implementation.",
641
+ "bbox": [
642
+ 511,
643
+ 482,
644
+ 905,
645
+ 544
646
+ ],
647
+ "page_idx": 3
648
+ },
649
+ {
650
+ "type": "text",
651
+ "text": "Temporal Tokenization. It is observed that for video data, there is much redundancy along temporal dimension. It motivates us to employ frame-wise residual $\\Delta v_{t} = (\\Delta v_{t}^{1},\\Delta v_{t}^{2},\\dots,\\Delta v_{t}^{k})_{k = \\frac{H}{p_{h}}\\times \\frac{W}{p_{w}}}$ , where $\\Delta v_{t}^{i} = v_{s}^{i} - v_{t}^{i}$ , for tokenization. We use the first frame of the video for frame-wise residual because the spatial tokenization phase reconstruct it. Then, the residual $\\Delta v = (\\Delta v_{1},\\Delta v_{2},\\dots,\\Delta v_{t})_{t = \\frac{T - 1}{p_{t}}}$ is input to $\\mathcal{E}_{DQAE_t}$ for temporal compression, as shown below:",
652
+ "bbox": [
653
+ 511,
654
+ 551,
655
+ 906,
656
+ 696
657
+ ],
658
+ "page_idx": 3
659
+ },
660
+ {
661
+ "type": "equation",
662
+ "text": "\n$$\n\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {t}}} = \\mathcal {E} _ {D Q A E _ {t}} (\\mathbf {Q} _ {\\mathbf {t}}, \\Delta v), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {t}}} = \\mathcal {Q} _ {M L C} (\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {t}}}), \\quad (6)\n$$\n",
663
+ "text_format": "latex",
664
+ "bbox": [
665
+ 535,
666
+ 709,
667
+ 905,
668
+ 729
669
+ ],
670
+ "page_idx": 3
671
+ },
672
+ {
673
+ "type": "text",
674
+ "text": "where $\\mathbf{Q}_{\\mathbf{t}} \\in \\mathbb{R}^{L_{temporal} \\times D}$ are the learnable temporal query embeddings and $\\mathbf{Z}_{\\mathbf{Q}_{\\mathbf{t}}} \\in \\mathbb{R}^{L_{temporal} \\times D}$ are the output embeddings encoding information of the frame-wise residual. $\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{t}}} \\in \\mathbb{R}^{L_{temporal} \\times D}$ are the quantized frame-wise temporal residual token embeddings. In practical implementation, $L_{temporal} = 1024$ . After reconstructing the first frame patches, we recover the entire video patches by combining $\\tilde{v}_s$ with the frame-wise quantized residual $\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{t}}}$ . We tiled $\\tilde{v}_s$ for $t$ times and sent it to $\\mathcal{D}_{DQAE_t}$ shown below:",
675
+ "bbox": [
676
+ 511,
677
+ 738,
678
+ 905,
679
+ 876
680
+ ],
681
+ "page_idx": 3
682
+ },
683
+ {
684
+ "type": "equation",
685
+ "text": "\n$$\n\\tilde {v} = \\mathcal {D} _ {D Q A E _ {t}} \\left(\\left[ \\tilde {v} _ {s} \\right| | \\dots | | \\tilde {v} _ {s} \\right], \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {t}}}), \\tag {7}\n$$\n",
686
+ "text_format": "latex",
687
+ "bbox": [
688
+ 594,
689
+ 883,
690
+ 905,
691
+ 902
692
+ ],
693
+ "page_idx": 3
694
+ },
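The frame-wise residual and the tiling of the reconstructed first-frame patches before temporal decoding (Eqs. 6-7) can be pictured roughly as below. Tensor layouts and variable names here are assumptions for illustration, not the exact implementation.

```python
import torch

D, hw, t = 512, 32 * 32, 4            # latent dim, patches per frame, temporal length

v_s = torch.randn(hw, D)              # first-frame patch embeddings
v_t = torch.randn(t, hw, D)           # embeddings of the remaining frame tubes

# Frame-wise residual against the first frame (input to the temporal encoder).
delta_v = v_s.unsqueeze(0) - v_t      # (t, hw, D)

# After spatial decoding, the reconstructed first-frame patches are tiled
# t times, i.e. [v_s_hat || ... || v_s_hat], and fused with the quantized
# temporal residual tokens by the temporal decoder.
v_s_hat = torch.randn(hw, D)          # stand-in for the spatial decoder output
tiled = v_s_hat.unsqueeze(0).expand(t, hw, D)
```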
695
+ {
696
+ "type": "page_number",
697
+ "text": "23544",
698
+ "bbox": [
699
+ 478,
700
+ 944,
701
+ 519,
702
+ 955
703
+ ],
704
+ "page_idx": 3
705
+ },
706
+ {
707
+ "type": "text",
708
+ "text": "where $\\tilde{v}$ is the reconstructed video patches. The vectors $\\tilde{v}$ reside in the latent space, necessitating the use of a \"pixel decoder\" $\\mathcal{D}_{pixel}$ to reconstruct the video data, as illustrated below:",
709
+ "bbox": [
710
+ 89,
711
+ 90,
712
+ 483,
713
+ 151
714
+ ],
715
+ "page_idx": 4
716
+ },
717
+ {
718
+ "type": "equation",
719
+ "text": "\n$$\n\\tilde {x} = \\mathcal {D} _ {\\text {p i x e l}} (\\tilde {v}). \\tag {8}\n$$\n",
720
+ "text_format": "latex",
721
+ "bbox": [
722
+ 233,
723
+ 167,
724
+ 482,
725
+ 185
726
+ ],
727
+ "page_idx": 4
728
+ },
729
+ {
730
+ "type": "text",
731
+ "text": "The whole DQAE is supervised by the reconstruction loss $\\mathcal{L}_{rec}$ containing a $L_{2}$ loss, a LPIPS perception loss: $\\mathcal{L}_{Lpips}$ , a quantizer loss: $\\mathcal{L}_{vq}$ and a GAN loss: $\\mathcal{L}_g$ following the principle [13].",
732
+ "bbox": [
733
+ 89,
734
+ 191,
735
+ 483,
736
+ 253
737
+ ],
738
+ "page_idx": 4
739
+ },
740
+ {
741
+ "type": "text",
742
+ "text": "3.2.3. Motion-enhanced Language Codebook (MLC)",
743
+ "text_level": 1,
744
+ "bbox": [
745
+ 89,
746
+ 261,
747
+ 460,
748
+ 277
749
+ ],
750
+ "page_idx": 4
751
+ },
752
+ {
753
+ "type": "text",
754
+ "text": "To mitigate information loss during compression, we introduce an language codebook (LC) quantizer to enhance semantic richness. The Previous works [27, 50] have shown that text representations can enhance image VQ-VAEs, as the text provides additional semantic information from pretrained language models. However, previous works mainly focus on the relationship between static image appearance and text semantics [52]. Our experiment in Table 5 shows is insufficient for video data.",
755
+ "bbox": [
756
+ 89,
757
+ 279,
758
+ 483,
759
+ 415
760
+ ],
761
+ "page_idx": 4
762
+ },
763
+ {
764
+ "type": "text",
765
+ "text": "To address this, we propose a Motion-enhanced Language Codebook (MLC), where the video motion information is enhanced via action-related vocabularies. Specifically, we split the dictionary into four subsets: nouns, adjectives, verbs, and adverbs. Intuitively, static and appearance information is typically embedded in nouns and adjectives, while motion information is generally embedded in verbs and adverbs. Therefore, we choose nouns and adjectives for spatial query tokens $\\mathbf{Q}_{\\mathrm{s}}$ , and verbs and adverbs for temporal query tokens $\\mathbf{Q}_{\\mathrm{t}}$ . Figure 3 also shows that the encoded latent words by SweetTok capture semantic meanings related to both visual appearance and motion.",
766
+ "bbox": [
767
+ 89,
768
+ 416,
769
+ 483,
770
+ 598
771
+ ],
772
+ "page_idx": 4
773
+ },
774
+ {
775
+ "type": "text",
776
+ "text": "As for details, we first extract candidate vocabularies of the whole dataset from video captions. Afterward, we extract CLIP [30] text embedding of these vocabularies to fill in the columns of our codebook $C \\in \\mathbb{R}^{L \\times D}$ . We utilize a graph convolution network $\\mathcal{F}$ to project CLIP embeddings [30] into the visual latent space. Graph edges are constructed when a pair of \"spatial-spatial\", \"spatial-temporal\" or \"temporal-temporal\" words co-occur within a 5-token window in the current video caption.",
777
+ "bbox": [
778
+ 89,
779
+ 598,
780
+ 483,
781
+ 733
782
+ ],
783
+ "page_idx": 4
784
+ },
785
+ {
786
+ "type": "text",
787
+ "text": "Given two encoded continuous latent vectors: $z_{s} \\in \\mathbf{Q}_{\\mathbf{s}}$ and $z_{t} \\in \\mathbf{Q}_{\\mathbf{t}}$ , $z_{s}$ is passed through spatial quantization codebook, and $z_{t}$ is passed through temporal quantization codebook. The quantized $\\hat{z}_{s}$ and $\\hat{z}_{t}$ are obtained by nearest neighbor searching:",
788
+ "bbox": [
789
+ 89,
790
+ 734,
791
+ 483,
792
+ 810
793
+ ],
794
+ "page_idx": 4
795
+ },
796
+ {
797
+ "type": "equation",
798
+ "text": "\n$$\nz _ {s}, z _ {t} = \\mathcal {E} (x), \\tag {9}\n$$\n",
799
+ "text_format": "latex",
800
+ "bbox": [
801
+ 236,
802
+ 823,
803
+ 480,
804
+ 839
805
+ ],
806
+ "page_idx": 4
807
+ },
808
+ {
809
+ "type": "equation",
810
+ "text": "\n$$\n\\hat {z} _ {s} = \\mathcal {F} \\left(c _ {i}\\right), i = \\underset {c _ {i} \\in C _ {\\text {n o u n}} \\cup C _ {\\text {a d j}}} {\\arg \\min } \\left| \\left| z _ {s} - \\mathcal {F} \\left(c _ {i}\\right) \\right| \\right|, \\tag {10}\n$$\n",
811
+ "text_format": "latex",
812
+ "bbox": [
813
+ 120,
814
+ 840,
815
+ 482,
816
+ 867
817
+ ],
818
+ "page_idx": 4
819
+ },
820
+ {
821
+ "type": "equation",
822
+ "text": "\n$$\n\\hat {z} _ {t} = \\mathcal {F} \\left(c _ {i}\\right), i = \\underset {c _ {i} \\in C _ {\\text {v e r b}} \\cup C _ {\\text {a d v}}} {\\arg \\min } \\left| \\left| z _ {t} - \\mathcal {F} \\left(c _ {i}\\right) \\right| \\right|. \\tag {11}\n$$\n",
823
+ "text_format": "latex",
824
+ "bbox": [
825
+ 122,
826
+ 871,
827
+ 482,
828
+ 898
829
+ ],
830
+ "page_idx": 4
831
+ },
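Eqs. (10)-(11) amount to two independent nearest-neighbor searches over part-of-speech subsets of the projected word embeddings. The sketch below assumes the GCN-projected embeddings F(c_i) are already available as plain tensors; codebook sizes follow the counts reported later in the paper, everything else is a hypothetical placeholder.

```python
import torch

def nearest_word(z, word_embed):
    """Return the projected word embedding closest to each latent vector."""
    d = torch.cdist(z, word_embed)    # (N, V) distances to all vocabulary entries
    idx = d.argmin(dim=-1)
    return word_embed[idx], idx

# Hypothetical projected CLIP embeddings F(c_i), split by part of speech.
spatial_codebook = torch.randn(10481, 256)    # nouns + adjectives
temporal_codebook = torch.randn(11139, 256)   # verbs + adverbs

z_s = torch.randn(256, 256)                   # spatial query latents
z_t = torch.randn(1024, 256)                  # temporal query latents
z_s_hat, _ = nearest_word(z_s, spatial_codebook)    # Eq. (10)
z_t_hat, _ = nearest_word(z_t, temporal_codebook)   # Eq. (11)
```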
832
+ {
833
+ "type": "image",
834
+ "img_path": "images/73a772bdbacade7159b1c5638265974f191dc195de177d7490aac38f8cdedcb8.jpg",
835
+ "image_caption": [],
836
+ "image_footnote": [],
837
+ "bbox": [
838
+ 519,
839
+ 90,
840
+ 895,
841
+ 172
842
+ ],
843
+ "page_idx": 4
844
+ },
845
+ {
846
+ "type": "image",
847
+ "img_path": "images/44feb6a04c3e4e031dcb9d76067d89004dc282c8df9f6fdb1f01371bb2cdc29d.jpg",
848
+ "image_caption": [
849
+ "Figure 3. The semantics of spatial-temporal \"words\". The attention weights of the last encoder's cross-attention layer are visualized via heatmap, showing the visual regions corresponding to the related latent words."
850
+ ],
851
+ "image_footnote": [],
852
+ "bbox": [
853
+ 516,
854
+ 180,
855
+ 602,
856
+ 256
857
+ ],
858
+ "page_idx": 4
859
+ },
860
+ {
861
+ "type": "image",
862
+ "img_path": "images/3c3a3fc36e02007b4b3ad9080f331425545dafa7229cee7c92af75b25b6858c2.jpg",
863
+ "image_caption": [],
864
+ "image_footnote": [],
865
+ "bbox": [
866
+ 604,
867
+ 180,
868
+ 689,
869
+ 256
870
+ ],
871
+ "page_idx": 4
872
+ },
873
+ {
874
+ "type": "image",
875
+ "img_path": "images/280e11ef9c69e5c827376813469dd56f7781e66cb7d077530bdef40c2de45ee2.jpg",
876
+ "image_caption": [],
877
+ "image_footnote": [],
878
+ "bbox": [
879
+ 691,
880
+ 183,
881
+ 898,
882
+ 255
883
+ ],
884
+ "page_idx": 4
885
+ },
886
+ {
887
+ "type": "text",
888
+ "text": "Finally, the gradient is passed to the encoder via vector-quantization commitment loss proposed in [39], a common method to approximate differentiability $(sg[\\cdot ]$ stands for stop-gradient operator):",
889
+ "bbox": [
890
+ 511,
891
+ 339,
892
+ 906,
893
+ 401
894
+ ],
895
+ "page_idx": 4
896
+ },
897
+ {
898
+ "type": "equation",
899
+ "text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {v q} = \\left\\| s g \\left(z _ {s}\\right) - \\mathcal {Q} \\left(z _ {s}\\right) \\right\\| ^ {2} + \\left\\| z _ {s} - s g \\left[ \\mathcal {Q} \\left(z _ {s}\\right) \\right] \\right\\| ^ {2} \\tag {12} \\\\ + \\left\\| s g \\left[ z _ {t} \\right] - \\mathcal {Q} \\left(z _ {t}\\right) \\right\\| ^ {2} + \\left\\| z _ {t} - s g \\left[ \\mathcal {Q} \\left(z _ {t}\\right) \\right] \\right\\| ^ {2} \\\\ \\end{array}\n$$\n",
900
+ "text_format": "latex",
901
+ "bbox": [
902
+ 532,
903
+ 412,
904
+ 903,
905
+ 450
906
+ ],
907
+ "page_idx": 4
908
+ },
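A small sketch of the commitment loss in Eq. (12): `detach()` plays the role of the stop-gradient operator sg[.], applied once to the encoder outputs (codebook term) and once to the quantized vectors (commitment term). The mean reduction is an assumption; the exact weighting may differ in the actual implementation.

```python
import torch

def vq_commitment_loss(z_s, zq_s, z_t, zq_t):
    """Two-sided VQ loss of Eq. (12) over spatial and temporal latents."""
    loss = ((z_s.detach() - zq_s) ** 2).mean() + ((z_s - zq_s.detach()) ** 2).mean()
    loss += ((z_t.detach() - zq_t) ** 2).mean() + ((z_t - zq_t.detach()) ** 2).mean()
    return loss
```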
909
+ {
910
+ "type": "text",
911
+ "text": "4. Experiments",
912
+ "text_level": 1,
913
+ "bbox": [
914
+ 511,
915
+ 476,
916
+ 645,
917
+ 493
918
+ ],
919
+ "page_idx": 4
920
+ },
921
+ {
922
+ "type": "text",
923
+ "text": "4.1. Experiments Settings",
924
+ "text_level": 1,
925
+ "bbox": [
926
+ 511,
927
+ 501,
928
+ 714,
929
+ 518
930
+ ],
931
+ "page_idx": 4
932
+ },
933
+ {
934
+ "type": "text",
935
+ "text": "Dataset. We evaluate the tokenization performance of SweetTok on video datasets, including UCF-101 [33], and Kinetics-600 [7, 19]. Following [2], all video frames are resized to $256 \\times 256$ resolution for experiments. Note that some of the previous works use a resolution of $128 \\times 128$ , which cannot be directly compared to our work due to the difference in task difficulty. However, we still include these results in the table and highlight them in gray. Moreover, we fine-tune SweetTok's spatial component on ImageNet [10] to obtain a strong image tokenizer. The semantic capabilities of SweetTok are tested through few-shot image classification on Real-Name Open-Ended miniImageNet [37] and few-shot video action recognition on UCF-101, as described in [53].",
936
+ "bbox": [
937
+ 511,
938
+ 523,
939
+ 906,
940
+ 736
941
+ ],
942
+ "page_idx": 4
943
+ },
944
+ {
945
+ "type": "text",
946
+ "text": "Evaluation Metrics. For video reconstruction experiments, we evaluate using the Reconstruction Frechet Video Distance (rFVD) [38]. For video generation, we use the Generation Frechet Video Distance (gFVD) metric. For image reconstruction, we categorize recent methods by the number of compressed tokens, with each group assessed using the Frechet Inception Distance (FID) [15].",
947
+ "bbox": [
948
+ 511,
949
+ 744,
950
+ 905,
951
+ 851
952
+ ],
953
+ "page_idx": 4
954
+ },
955
+ {
956
+ "type": "text",
957
+ "text": "Implementation Details. SweetTok adopts a spatial-temporal architecture consisting of 8 spatial layers and 4",
958
+ "bbox": [
959
+ 511,
960
+ 869,
961
+ 906,
962
+ 901
963
+ ],
964
+ "page_idx": 4
965
+ },
966
+ {
967
+ "type": "page_number",
968
+ "text": "23545",
969
+ "bbox": [
970
+ 478,
971
+ 944,
972
+ 517,
973
+ 955
974
+ ],
975
+ "page_idx": 4
976
+ },
977
+ {
978
+ "type": "image",
979
+ "img_path": "images/8918b3dd3532316d5daa9086b344bcd000b0f4cbb31f3071c39548d9c4a5fa80.jpg",
980
+ "image_caption": [
981
+ "OmniTok [2]"
982
+ ],
983
+ "image_footnote": [],
984
+ "bbox": [
985
+ 91,
986
+ 88,
987
+ 362,
988
+ 364
989
+ ],
990
+ "page_idx": 5
991
+ },
992
+ {
993
+ "type": "image",
994
+ "img_path": "images/c7be81241c2d9e3d8422b70e96858af38c9ac6838ff8ad41a6e6eb221e67a0cb.jpg",
995
+ "image_caption": [
996
+ "LARP [3]"
997
+ ],
998
+ "image_footnote": [],
999
+ "bbox": [
1000
+ 364,
1001
+ 89,
1002
+ 633,
1003
+ 364
1004
+ ],
1005
+ "page_idx": 5
1006
+ },
1007
+ {
1008
+ "type": "image",
1009
+ "img_path": "images/f9070406ffc0389b95dfa0228a8221866b1fba75ae87ae7eefcfa0211a1e0cd0.jpg",
1010
+ "image_caption": [
1011
+ "SweetTok (ours)",
1012
+ "Figure 4. Video reconstruction result on UCF-101 dataset. We also visualize the reconstruction and GT error map, where brighter areas indicate larger errors."
1013
+ ],
1014
+ "image_footnote": [],
1015
+ "bbox": [
1016
+ 635,
1017
+ 90,
1018
+ 903,
1019
+ 364
1020
+ ],
1021
+ "page_idx": 5
1022
+ },
1023
+ {
1024
+ "type": "table",
1025
+ "img_path": "images/ec315eebb0594fca491409df717eb18f70dbc374cdeeb7d0076f29000191ff14.jpg",
1026
+ "table_caption": [],
1027
+ "table_footnote": [],
1028
+ "table_body": "<table><tr><td rowspan=\"2\">Tokenizer</td><td rowspan=\"2\">#Tokens</td><td rowspan=\"2\">#Params Tokenizer</td><td colspan=\"2\">rFVD ↓</td></tr><tr><td>UCF-101</td><td>K-600</td></tr><tr><td>MAGVIT-V2 [1]</td><td>1280</td><td>307M</td><td>8.6</td><td>-</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>20</td><td>13</td></tr><tr><td>MaskGIT [8]</td><td>4352</td><td>227M</td><td>240</td><td>202</td></tr><tr><td>VQGAN [13]</td><td>4352</td><td>227M</td><td>299</td><td>270</td></tr><tr><td>TATS [14]</td><td>4096</td><td>32M</td><td>162</td><td>-</td></tr><tr><td>MAGVIT [49]</td><td>4096</td><td>158M</td><td>58</td><td>-</td></tr><tr><td>OmniTok [2]</td><td>5120</td><td>82.2M</td><td>42</td><td>26</td></tr><tr><td>LARP-B [3]</td><td>1024</td><td>143M</td><td>64</td><td>35</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>35</td><td>23</td></tr><tr><td>SweetTok*</td><td>5120</td><td>128M</td><td>11</td><td>8</td></tr><tr><td>SweetTok</td><td>1280</td><td>128M</td><td>20</td><td>25</td></tr></table>",
1029
+ "bbox": [
1030
+ 99,
1031
+ 422,
1032
+ 470,
1033
+ 614
1034
+ ],
1035
+ "page_idx": 5
1036
+ },
1037
+ {
1038
+ "type": "text",
1039
+ "text": "temporal layers, with both the encoder and decoder configured to a hidden dimension of 512. The latent space dimension is set to 256. For the LLM codebook quantizer, we exclude words with a frequency below 5, resulting in a selection of 5,078 nouns, 5,403 adjectives, 9,267 verbs, and 1,872 adverbs. This forms a spatial codebook of size 10,481 and a temporal codebook of size 11,139. The model is trained with a batch size of 8 for 1000K iterations. All training is performed on NVIDIA A100 GPUs. Adam [21] is employed for optimization $(\\beta_{1} = 0.9$ and $\\beta_{2} = 0.99)$ . During each stage, we use a cosine learning rate scheduler with a max learning rate of 1e-4 and a min learning rate of 1e-5, warmed up by 10K iterations.",
1040
+ "bbox": [
1041
+ 89,
1042
+ 704,
1043
+ 483,
1044
+ 900
1045
+ ],
1046
+ "page_idx": 5
1047
+ },
1048
+ {
1049
+ "type": "table",
1050
+ "img_path": "images/05e76646dd714201ebb62aa84f51be33605d95a55a9ad3237f4e05767a2d7a45.jpg",
1051
+ "table_caption": [
1052
+ "Table 1. Video reconstruction FVD on the UCF-101 and K-600 dataset, using a frame resolution $256 \\times 256$ . “*” denotes training SweetTok without token compression. Lines in “gray” indicate results at a resolution of $128 \\times 128$ ."
1053
+ ],
1054
+ "table_footnote": [],
1055
+ "table_body": "<table><tr><td>Tokenizer</td><td>Type</td><td>#Tokens</td><td>#Params Generator</td><td>gFVD ↓</td></tr><tr><td>MAGVIT [49]</td><td>AR</td><td>1024</td><td>306M</td><td>265</td></tr><tr><td>MAGVIT-V2 [1]</td><td>AR</td><td>1280</td><td>307M</td><td>109</td></tr><tr><td>MAGVIT [49]</td><td>MLM</td><td>1024</td><td>306M</td><td>76</td></tr><tr><td>MAGVIT-V2 [1]</td><td>MLM</td><td>1280</td><td>307M</td><td>58</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>57</td></tr><tr><td>CogVideo [16]</td><td>AR</td><td>6800</td><td>9.4B</td><td>626</td></tr><tr><td>TATS [14]</td><td>AR</td><td>4096</td><td>321M</td><td>332</td></tr><tr><td>Video-LaVIT [17]</td><td>AR</td><td>512</td><td>7B</td><td>280</td></tr><tr><td>OmniTok [2]</td><td>AR</td><td>5120</td><td>650M</td><td>191</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>99</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>650M</td><td>84</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>1.9B</td><td>65</td></tr></table>",
1056
+ "bbox": [
1057
+ 535,
1058
+ 422,
1059
+ 880,
1060
+ 616
1061
+ ],
1062
+ "page_idx": 5
1063
+ },
1064
+ {
1065
+ "type": "text",
1066
+ "text": "Table 2. Class-conditional video generation results on UCF-101. Each video is composed of 17 frames with a resolution of 256 × 256. \"AR\" and \"MLM\" represents autoregressive and masked-language-modeling generator. Lines in \"gray\" indicate results at a resolution of $128 \\times 128$ .",
1067
+ "bbox": [
1068
+ 511,
1069
+ 619,
1070
+ 906,
1071
+ 688
1072
+ ],
1073
+ "page_idx": 5
1074
+ },
1075
+ {
1076
+ "type": "text",
1077
+ "text": "4.2. Video Reconstruction & Generation",
1078
+ "text_level": 1,
1079
+ "bbox": [
1080
+ 511,
1081
+ 710,
1082
+ 826,
1083
+ 724
1084
+ ],
1085
+ "page_idx": 5
1086
+ },
1087
+ {
1088
+ "type": "text",
1089
+ "text": "We first evaluate the tokenization capability of SweetTok on the UCF-101 and K-600 video datasets. As shown in Table 1, SweetTok uses 1,280 tokens (256 spatial tokens and 1,024 temporal tokens), which is four times fewer than OmniTok's 5,120 tokens and comparable with LARP's 1024 tokens. Despite its high compression ratio, our method achieves a $52.3\\%$ improvement on UCF-101 and competitive performance on K-600 compared with OmniTok. With a similar token count, SweetTok, despite its smaller model size compared to LARP-L, delivers a $42.8\\%$ improvement on UCF-101 and comparable results on K-600. Our Sweet",
1090
+ "bbox": [
1091
+ 511,
1092
+ 734,
1093
+ 906,
1094
+ 900
1095
+ ],
1096
+ "page_idx": 5
1097
+ },
1098
+ {
1099
+ "type": "page_number",
1100
+ "text": "23546",
1101
+ "bbox": [
1102
+ 478,
1103
+ 944,
1104
+ 519,
1105
+ 955
1106
+ ],
1107
+ "page_idx": 5
1108
+ },
1109
+ {
1110
+ "type": "image",
1111
+ "img_path": "images/18542e9d83b35a62d8fd87d4d0f0ec0fd97b0dfcc1c1425648464b7ef48549c2.jpg",
1112
+ "image_caption": [
1113
+ "Figure 5. Class-conditional video generation result on UCF-101 dataset. The action class label of each row is: \"PlayingTabla\", \"TennisS-wing\", and \"HorseRiding\"."
1114
+ ],
1115
+ "image_footnote": [],
1116
+ "bbox": [
1117
+ 93,
1118
+ 88,
1119
+ 362,
1120
+ 311
1121
+ ],
1122
+ "page_idx": 6
1123
+ },
1124
+ {
1125
+ "type": "image",
1126
+ "img_path": "images/f0221d5adffbd44e8de2678aedadfaf0b31b5ef7f229628f76a2431947166e4b.jpg",
1127
+ "image_caption": [],
1128
+ "image_footnote": [],
1129
+ "bbox": [
1130
+ 364,
1131
+ 88,
1132
+ 633,
1133
+ 311
1134
+ ],
1135
+ "page_idx": 6
1136
+ },
1137
+ {
1138
+ "type": "image",
1139
+ "img_path": "images/d51f7ce5924627f17b63d794c39c6e6619f0b48c45f82c5e848ff9a79bb54ec6.jpg",
1140
+ "image_caption": [],
1141
+ "image_footnote": [],
1142
+ "bbox": [
1143
+ 635,
1144
+ 88,
1145
+ 905,
1146
+ 311
1147
+ ],
1148
+ "page_idx": 6
1149
+ },
1150
+ {
1151
+ "type": "table",
1152
+ "img_path": "images/7ce9d5606976ab0e5e33b6acc78cb152f76552076f52d36ea330cf49537e42c9.jpg",
1153
+ "table_caption": [],
1154
+ "table_footnote": [],
1155
+ "table_body": "<table><tr><td>Tokenizer</td><td>#Tokens</td><td>Codebook Size</td><td>rFID ↓</td></tr><tr><td>VQGAN [13]</td><td>256</td><td>1024</td><td>7.94</td></tr><tr><td>RQ-VAE [24]</td><td>256</td><td>16384</td><td>3.20</td></tr><tr><td>MaskGIT[49]</td><td>256</td><td>1024</td><td>2.28</td></tr><tr><td>LlamaGen-16 [34]</td><td>256</td><td>16384</td><td>2.19</td></tr><tr><td>TiTok [51]</td><td>256</td><td>4096</td><td>1.71</td></tr><tr><td>TokenFlow [28]</td><td>256</td><td>4096</td><td>1.03</td></tr><tr><td>SweetTok</td><td>256</td><td>10481</td><td>0.73</td></tr><tr><td>ViT-VQGAN [47]</td><td>1024</td><td>8192</td><td>1.28</td></tr><tr><td>OmniTok [2]</td><td>1024</td><td>8192</td><td>1.11</td></tr><tr><td>OmniTok◇ [2]</td><td>1024</td><td>8192</td><td>0.69</td></tr><tr><td>LlamaGen-8 [34]</td><td>1024</td><td>16384</td><td>0.59</td></tr><tr><td>SweetTok*</td><td>1024</td><td>10481</td><td>0.37</td></tr></table>",
1156
+ "bbox": [
1157
+ 102,
1158
+ 354,
1159
+ 467,
1160
+ 565
1161
+ ],
1162
+ "page_idx": 6
1163
+ },
1164
+ {
1165
+ "type": "text",
1166
+ "text": "Table 3. Image reconstruction FID on the ImageNet dataset, using a resolution of $256 \\times 256$ . “◇” denotes continuous latent space without quantization. “*” denotes training SweetTok without token compression.",
1167
+ "bbox": [
1168
+ 89,
1169
+ 566,
1170
+ 482,
1171
+ 625
1172
+ ],
1173
+ "page_idx": 6
1174
+ },
1175
+ {
1176
+ "type": "text",
1177
+ "text": "Tok demonstrates significant performance gains, achieving $68.8\\%$ and $28.5\\%$ improvements in rFVD on UCF-101 and K-600 datasets, respectively, compared to LARP-B with similar model size. Notably, if we increase the token number to 5,120, SweetTok* significantly outperforms all baselines, achieving an rFVD of 10.74 on UCF-101 and 7.51 on K-600.",
1178
+ "bbox": [
1179
+ 89,
1180
+ 643,
1181
+ 482,
1182
+ 747
1183
+ ],
1184
+ "page_idx": 6
1185
+ },
1186
+ {
1187
+ "type": "text",
1188
+ "text": "The generative capability of SweetTok is evaluated on UCF-101 in a class-conditional generation task. Decoupled tokens extracted by SweetTok are concatenated to form training sequences for VideoGPT [45], following the same generation protocol as OmniTok. As shown in Table 2, SweetTok achieves a significant performance improvement, with a gFVD score of 84, $56\\%$ lower than OmniTok's 191. This improvement is attributed to SweetTok's effective token compression, which substantially reduces the training complexity for downstream autoregressive models. With",
1189
+ "bbox": [
1190
+ 88,
1191
+ 750,
1192
+ 482,
1193
+ 901
1194
+ ],
1195
+ "page_idx": 6
1196
+ },
1197
+ {
1198
+ "type": "text",
1199
+ "text": "equivalent token counts, SweetTok demonstrates superior performance, achieving a $15.1\\%$ improvement in gFVD (84 vs. LARP's 99) at comparable generator sizes. Furthermore, it exhibits scaling law characteristics, with gFVD improving from 84 to 65 as the model scales up to 1.9B parameters.",
1200
+ "bbox": [
1201
+ 511,
1202
+ 358,
1203
+ 906,
1204
+ 446
1205
+ ],
1206
+ "page_idx": 6
1207
+ },
1208
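The sequence-formation step described in the generation experiments above is easy to sketch. Below is a minimal, hypothetical illustration of how class-conditioned training sequences could be assembled from SweetTok's decoupled token ids for an autoregressive generator such as VideoGPT [45]; the function name, shapes, and the class-token offset scheme are assumptions, not the authors' released code.

```python
# Hypothetical sketch: concatenate decoupled SweetTok indices into one
# autoregressive training sequence (class token, then 256 spatial and
# 1,024 temporal codebook ids). Offsetting the class id past the codebook
# is one common convention; the paper does not specify this detail.
import torch

def build_ar_sequence(spatial_ids, temporal_ids, class_id, codebook_size):
    # spatial_ids: (B, 256), temporal_ids: (B, 1024) integer codebook indices
    cls = torch.full((spatial_ids.size(0), 1), codebook_size + class_id)
    return torch.cat([cls, spatial_ids, temporal_ids], dim=1)  # (B, 1281)

spatial = torch.randint(0, 10481, (2, 256))
temporal = torch.randint(0, 10481, (2, 1024))
seq = build_ar_sequence(spatial, temporal, class_id=7, codebook_size=10481)
print(seq.shape)  # torch.Size([2, 1281])
```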
+ {
1209
+ "type": "text",
1210
+ "text": "The visualization results are presented in Figure 5. To ensure a fair comparison, we select generated videos with similar appearances, as the generation process inherently involves randomness. The results demonstrate significantly improved detail, such as clearer human facial features and finer table textures. Additionally, SweetTok effectively preserves temporal consistency, even under large motion scenarios.",
1211
+ "bbox": [
1212
+ 511,
1213
+ 449,
1214
+ 908,
1215
+ 570
1216
+ ],
1217
+ "page_idx": 6
1218
+ },
1219
+ {
1220
+ "type": "text",
1221
+ "text": "4.3. Image Reconstruction",
1222
+ "text_level": 1,
1223
+ "bbox": [
1224
+ 511,
1225
+ 577,
1226
+ 720,
1227
+ 593
1228
+ ],
1229
+ "page_idx": 6
1230
+ },
1231
+ {
1232
+ "type": "text",
1233
+ "text": "To demonstrate the flexibility of our decoupled query design, we show that by directly fine-tuning the spatial branch $DQAE_{s}$ , SweetTok also achieves advanced performance for image reconstruction on ImageNet. As shown in Table 3, we compare SweetTok with recent methods under various token compression settings. With 256 spatial tokens, SweetTok outperforms TiTok [51] by $27.8\\%$ , reducing rFID from 1.01 to 0.73. When using 1,024 spatial tokens, SweetTok* achieves a significant improvement over both VQ-based and non-VQ-based methods (marked $\\diamond$ ), achieving an rFID of 0.37, which surpasses LlamaGen-8 [34] by $37.3\\%$ . Visualization results are shown in Figure 6, SweetTok maintains better global appearance and local details.",
1234
+ "bbox": [
1235
+ 511,
1236
+ 599,
1237
+ 906,
1238
+ 796
1239
+ ],
1240
+ "page_idx": 6
1241
+ },
1242
+ {
1243
+ "type": "text",
1244
+ "text": "4.4. Ablation Studies",
1245
+ "text_level": 1,
1246
+ "bbox": [
1247
+ 511,
1248
+ 801,
1249
+ 679,
1250
+ 816
1251
+ ],
1252
+ "page_idx": 6
1253
+ },
1254
+ {
1255
+ "type": "text",
1256
+ "text": "Decoupled Query AutoEncoder. We demonstrate the effectiveness of our spatial-temporal decoupling design for token compression. As shown in Table 4, naively downsampling video tokens by 1D linear-interpolation from 5,120 patches into 1,280 tokens results in poor performance of a",
1257
+ "bbox": [
1258
+ 511,
1259
+ 824,
1260
+ 906,
1261
+ 902
1262
+ ],
1263
+ "page_idx": 6
1264
+ },
1265
+ {
1266
+ "type": "page_number",
1267
+ "text": "23547",
1268
+ "bbox": [
1269
+ 478,
1270
+ 944,
1271
+ 519,
1272
+ 957
1273
+ ],
1274
+ "page_idx": 6
1275
+ },
1276
+ {
1277
+ "type": "image",
1278
+ "img_path": "images/1d94ac1515a489aaf898772d0f0cefbff39ecc9a0bdcddbe5088291d3793310b.jpg",
1279
+ "image_caption": [
1280
+ "GT",
1281
+ "Figure 6. Visualization of image reconstruction results."
1282
+ ],
1283
+ "image_footnote": [],
1284
+ "bbox": [
1285
+ 93,
1286
+ 88,
1287
+ 187,
1288
+ 378
1289
+ ],
1290
+ "page_idx": 7
1291
+ },
1292
+ {
1293
+ "type": "image",
1294
+ "img_path": "images/1b434c63c04a778058ac507ce92da498853d4ff6eb5a14287ff4b82c0b88e410.jpg",
1295
+ "image_caption": [
1296
+ "TiTok [51]"
1297
+ ],
1298
+ "image_footnote": [],
1299
+ "bbox": [
1300
+ 191,
1301
+ 88,
1302
+ 287,
1303
+ 378
1304
+ ],
1305
+ "page_idx": 7
1306
+ },
1307
+ {
1308
+ "type": "image",
1309
+ "img_path": "images/7e46e214711fd5ca9bb200003c9e5c3fad4cfd38944111c3633b782ce7793e45.jpg",
1310
+ "image_caption": [
1311
+ "OmniTok [2]"
1312
+ ],
1313
+ "image_footnote": [],
1314
+ "bbox": [
1315
+ 289,
1316
+ 88,
1317
+ 383,
1318
+ 378
1319
+ ],
1320
+ "page_idx": 7
1321
+ },
1322
+ {
1323
+ "type": "image",
1324
+ "img_path": "images/9c2d75630cd7624d78b82fb901bdfc8e7d847afef406cf8fc81833b448a3656a.jpg",
1325
+ "image_caption": [
1326
+ "SweetTok"
1327
+ ],
1328
+ "image_footnote": [],
1329
+ "bbox": [
1330
+ 385,
1331
+ 88,
1332
+ 480,
1333
+ 378
1334
+ ],
1335
+ "page_idx": 7
1336
+ },
1337
+ {
1338
+ "type": "table",
1339
+ "img_path": "images/7bacfc4dbd6884218f73630d703cef036f7f0ab6b2844d580387b1e8472a0eb4.jpg",
1340
+ "table_caption": [],
1341
+ "table_footnote": [],
1342
+ "table_body": "<table><tr><td>Compression Method</td><td>#Tokens</td><td>rFVD ↓</td></tr><tr><td>Vanilla Downsample</td><td>1280</td><td>227.65</td></tr><tr><td>Vanilla Query-based (LARP [3])</td><td>1024</td><td>35.15</td></tr><tr><td>Decoupled Query-based (DQAE)</td><td>1280</td><td>20.46</td></tr></table>",
1343
+ "bbox": [
1344
+ 112,
1345
+ 419,
1346
+ 459,
1347
+ 488
1348
+ ],
1349
+ "page_idx": 7
1350
+ },
1351
+ {
1352
+ "type": "text",
1353
+ "text": "rFVD of 227.65. Training SweetTok without decoupling (similarly to vanilla query-based method in LARP [3]) results in sub-optimal result, obtaining a rFVD of 35.15. Decoupling query (DQAE) achieves best result of 20.46 rFVD. There are two factors: (1) the flattening operation discards substantial consecutive temporal information, and (2) without decoupling, the model struggles to learn efficiently from the intertwined temporal and spatial information.",
1354
+ "bbox": [
1355
+ 89,
1356
+ 542,
1357
+ 483,
1358
+ 664
1359
+ ],
1360
+ "page_idx": 7
1361
+ },
1362
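For concreteness, here is a hedged sketch of the "Vanilla Downsample" baseline from Table 4 as we read it: flatten 5,120 patch embeddings into a 1D sequence and linearly interpolate it down to 1,280 tokens. The embedding width is an illustrative assumption.

```python
# Naive 1D linear-interpolation downsampling of a flattened token sequence,
# the weak baseline discussed above (not the authors' code).
import torch
import torch.nn.functional as F

patches = torch.randn(1, 5120, 512)          # (batch, tokens, dim)
x = patches.transpose(1, 2)                  # (1, 512, 5120): interpolate over tokens
down = F.interpolate(x, size=1280, mode="linear", align_corners=False)
tokens = down.transpose(1, 2)                # (1, 1280, 512)
print(tokens.shape)
```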
+ {
1363
+ "type": "text",
1364
+ "text": "Motion-enhanced Language Codebook. We evaluate the effects of the motion-enhanced language codebooks (MLC) on UCF-101 datasets. As illustrated in Table 5, vanilla language codebook design enhances rFVD performance from 29.45 to 24.80, leading to a sub-optimal result. Notably, our motion-enhanced temporal language codebook significantly benefits video reconstruction tasks, further reducing the rFVD score from 24.80 to 20.46. This underscores the importance of our unique design for video modalities. Additionally, we compare different types of language-based embeddings, such as using Qwen-2.5B [4] embeddings in place of CLIP [30] embeddings. The experiments indicate that naively increasing the complexity of pre-trained language model is not cost-effective for SweetTok.",
1365
+ "bbox": [
1366
+ 89,
1367
+ 674,
1368
+ 483,
1369
+ 898
1370
+ ],
1371
+ "page_idx": 7
1372
+ },
1373
+ {
1374
+ "type": "table",
1375
+ "img_path": "images/10603e1f9a27acbeddd9c144bca25005194b55b668583b29f72b064bb004e77f.jpg",
1376
+ "table_caption": [
1377
+ "Table 4. Ablation study of different token count compression method for video tokenizers."
1378
+ ],
1379
+ "table_footnote": [],
1380
+ "table_body": "<table><tr><td>Methods</td><td>rFVD ↓</td></tr><tr><td>Baseline (w/o LC)</td><td>29.45</td></tr><tr><td>+ LC</td><td>24.80</td></tr><tr><td>+ MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ CLIP [30]-based MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ Qwen [4]-based MLC</td><td>20.12</td></tr></table>",
1381
+ "bbox": [
1382
+ 552,
1383
+ 88,
1384
+ 864,
1385
+ 193
1386
+ ],
1387
+ "page_idx": 7
1388
+ },
1389
+ {
1390
+ "type": "table",
1391
+ "img_path": "images/a05e3684e37b3869cbad65319359a15b98b1d465c742f331e0c2e722f2d7cce7.jpg",
1392
+ "table_caption": [
1393
+ "Table 5. Ablation study of different codebooks, including vanilla language codebook (LC), motion-enhanced language codebook (MLC) and more advanced pre-trained language codebook."
1394
+ ],
1395
+ "table_footnote": [],
1396
+ "table_body": "<table><tr><td>Methods</td><td colspan=\"4\">ImageNet</td><td>UCF-101</td></tr><tr><td>K-way-N-shot</td><td>2-1</td><td>2-3</td><td>2-5</td><td>Avg</td><td>5-5</td></tr><tr><td>SPAE [50]</td><td>84.8</td><td>92.5</td><td>92.6</td><td>89.9</td><td>-</td></tr><tr><td>V2L [54]</td><td>76.3</td><td>91.2</td><td>95.3</td><td>87.6</td><td>-</td></tr><tr><td>ARN[53]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>83.1</td></tr><tr><td>HF-AR [23]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>86.4</td></tr><tr><td>SweetTok</td><td>86.8</td><td>90.5</td><td>95.2</td><td>90.8</td><td>90.1</td></tr></table>",
1397
+ "bbox": [
1398
+ 527,
1399
+ 241,
1400
+ 888,
1401
+ 359
1402
+ ],
1403
+ "page_idx": 7
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "text": "Table 6. Few-shot visual classification accuracy $(\\uparrow)$ , evaluated on both image and video modality.",
1408
+ "bbox": [
1409
+ 511,
1410
+ 364,
1411
+ 906,
1412
+ 392
1413
+ ],
1414
+ "page_idx": 7
1415
+ },
1416
+ {
1417
+ "type": "text",
1418
+ "text": "4.5. Visual Semantic Comprehension",
1419
+ "text_level": 1,
1420
+ "bbox": [
1421
+ 511,
1422
+ 407,
1423
+ 799,
1424
+ 422
1425
+ ],
1426
+ "page_idx": 7
1427
+ },
1428
+ {
1429
+ "type": "text",
1430
+ "text": "Few-Shot Visual Classification. We conducted experiments on few-shot image classification and video action recognition tasks. SweetTok extracted visual tokens and transformed them into natural language words via our language-based codebook. Subsequently, CLIP computed the similarity between the visual inputs and text embeddings. Top 21 tokens with the highest similarity were selected to form a prompt for prediction using the Qwen LLM. For the image classification task, we adhered to the V2L protocol [54], comparing SweetTok against SPAE [50] and V2L. In the video action recognition task, we used ARN [53] and HF-AR [23] as baselines. The results in Table 6 indicate that SweetTok achieved an accuracy of $90.8\\%$ on the miniImageNet dataset, surpassing SPAE $89.9\\%$ and V2L $87.6\\%$ . On the UCF-101 dataset, SweetTok attain an average accuracy of $90.1\\%$ , outperforming ARN $83.1\\%$ and HF-AR $86.4\\%$ . These findings demonstrate SweetTok's semantic ability in both image and video tasks. More results are in the supplementary.",
1431
+ "bbox": [
1432
+ 511,
1433
+ 429,
1434
+ 906,
1435
+ 717
1436
+ ],
1437
+ "page_idx": 7
1438
+ },
1439
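A rough sketch of the few-shot protocol just described, with dummy tensors standing in for CLIP features and the language-based codebook; the placeholder vocabulary and prompt template are assumptions, and the real pipeline scores with CLIP and predicts with the Qwen LLM.

```python
# Illustrative only: score codebook word embeddings against a pooled CLIP
# visual feature, keep the 21 most similar words, and build an LLM prompt.
import torch

visual_feat = torch.randn(512)                    # stand-in for a CLIP visual feature
word_embeds = torch.randn(10481, 512)             # stand-in for codebook text embeddings
vocab = [f"word_{i}" for i in range(10481)]       # hypothetical lexicon

sims = torch.nn.functional.cosine_similarity(word_embeds, visual_feat[None], dim=1)
top = sims.topk(21).indices
prompt = "This clip shows: " + ", ".join(vocab[i] for i in top)
print(prompt[:80])
```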
+ {
1440
+ "type": "text",
1441
+ "text": "5. Conclusions",
1442
+ "text_level": 1,
1443
+ "bbox": [
1444
+ 511,
1445
+ 722,
1446
+ 640,
1447
+ 738
1448
+ ],
1449
+ "page_idx": 7
1450
+ },
1451
+ {
1452
+ "type": "text",
1453
+ "text": "We present SweetTok, an efficient video tokenization framework that compresses spatial and temporal information through the decoupled query autoencoder. Combined with motion-enhanced language codebook, SweetTok reduces token count for video data more effectively, achieving higher reconstruction fidelity compared to previous state-of-the-arts. Our approach offers a compact representation of video data, making it well-suited for downstream tasks such as video generation, understanding, marking a significant step in efficient video tokenization.",
1454
+ "bbox": [
1455
+ 511,
1456
+ 748,
1457
+ 906,
1458
+ 898
1459
+ ],
1460
+ "page_idx": 7
1461
+ },
1462
+ {
1463
+ "type": "page_number",
1464
+ "text": "23548",
1465
+ "bbox": [
1466
+ 478,
1467
+ 944,
1468
+ 517,
1469
+ 955
1470
+ ],
1471
+ "page_idx": 7
1472
+ },
1473
+ {
1474
+ "type": "text",
1475
+ "text": "References",
1476
+ "text_level": 1,
1477
+ "bbox": [
1478
+ 91,
1479
+ 89,
1480
+ 187,
1481
+ 104
1482
+ ],
1483
+ "page_idx": 8
1484
+ },
1485
+ {
1486
+ "type": "list",
1487
+ "sub_type": "ref_text",
1488
+ "list_items": [
1489
+ "[1] Language model beats diffusion-tokenizer is key to visual generation. ICLR, 2023. 1, 2, 3, 6",
1490
+ "[2] Omnitokensizer: A joint image-video tokenizer for visual generation. NeurIPS, 2024. 1, 2, 3, 4, 5, 6, 7, 8",
1491
+ "[3] Larp: Tokenizing videos with a learned autoregressive generative prior. *ICLR Oral*, 2025. 1, 2, 3, 4, 6, 7, 8",
1492
+ "[4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.8",
1493
+ "[5] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 3",
1494
+ "[6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213-229. Springer, 2020. 2",
1495
+ "[7] Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about kinetics-600. arXiv preprint arXiv:1808.01340, 2018. 5",
1496
+ "[8] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In CVPR, pages 11315-11325, 2022. 1, 2, 6",
1497
+ "[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. JMLR, 24(240):1-113, 2023. 3",
1498
+ "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009. 2, 5",
1499
+ "[11] Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2018. 2, 3",
1500
+ "[12] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2020. 1, 2",
1501
+ "[13] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In CVPR, 2021. 2, 5, 6, 7",
1502
+ "[14] Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, and Devi Parikh. Long video generation with time-agnostic vqgan and time-sensitive transformer. In ECCV, pages 102-118. Springer, 2022. 1, 2, 3, 6",
1503
+ "[15] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. NeurIPS, 30, 2017. 5",
1504
+ "[16] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 6",
1505
+ "[17] Yang Jin, Zhicheng Sun, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, and Yadong Mu. Video-lavit: Unified videolanguage pre-training with decoupled visual-motional tokenization. In ICML, pages 22185-22209, 2024. 1, 2, 6"
1506
+ ],
1507
+ "bbox": [
1508
+ 93,
1509
+ 114,
1510
+ 482,
1511
+ 900
1512
+ ],
1513
+ "page_idx": 8
1514
+ },
1515
+ {
1516
+ "type": "list",
1517
+ "sub_type": "ref_text",
1518
+ "list_items": [
1519
+ "[18] Yang Jin, Kun Xu, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Yadong Mu, et al. Unified language-vision pretraining in llm with dynamic discrete visual tokenization. In ICLR, 2024. 2",
1520
+ "[19] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 5",
1521
+ "[20] Diederik P Kingma. Auto-encoding variational bayes. ICLR, 2013. 2",
1522
+ "[21] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6",
1523
+ "[22] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *ICLR*, 2016. 3",
1524
+ "[23] Neeraj Kumar and Siddhansh Narang. Few shot activity recognition using variational inference. arXiv preprint arXiv:2108.08990, 2021. 8",
1525
+ "[24] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In CVPR, pages 11523-11532, 2022. 7",
1526
+ "[25] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, pages 19730–19742, 2023. 2, 4",
1527
+ "[26] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. EMNLP, 2023. 1, 2",
1528
+ "[27] Hao Liu, Wilson Yan, and Pieter Abbeel. Language quantized autoencoders: Towards unsupervised text-image alignment. NeurIPS, 36, 2023. 2, 3, 5",
1529
+ "[28] Liao Qu, Huichao Zhang, Yiheng Liu, Xu Wang, Yi Jiang, Yiming Gao, Hu Ye, Daniel K Du, Zehuan Yuan, and Xinglong Wu. Tokenflow: Unified image tokenizer for multimodal understanding and generation. arXiv preprint arXiv:2412.03069, 2024. 7",
1530
+ "[29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 2",
1531
+ "[30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 3, 5, 8",
1532
+ "[31] Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. NeurIPS, 32, 2019. 2",
1533
+ "[32] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 3",
1534
+ "[33] K Soomro. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 2, 5",
1535
+ "[34] Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model"
1536
+ ],
1537
+ "bbox": [
1538
+ 516,
1539
+ 92,
1540
+ 903,
1541
+ 900
1542
+ ],
1543
+ "page_idx": 8
1544
+ },
1545
+ {
1546
+ "type": "page_number",
1547
+ "text": "23549",
1548
+ "bbox": [
1549
+ 478,
1550
+ 944,
1551
+ 517,
1552
+ 955
1553
+ ],
1554
+ "page_idx": 8
1555
+ },
1556
+ {
1557
+ "type": "list",
1558
+ "sub_type": "ref_text",
1559
+ "list_items": [
1560
+ "beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024. 7",
1561
+ "[35] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. ICLR, 2024. 1, 2",
1562
+ "[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 2",
1563
+ "[37] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. NeurIPS, 34:200-212, 2021. 5",
1564
+ "[38] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 5",
1565
+ "[39] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. NeurIPS, 30, 2017. 1, 2, 5",
1566
+ "[40] A Vaswani. Attention is all you need. NeurIPS, 2017. 2",
1567
+ "[41] Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual descriptions. In ICLR, 2023. 1, 2",
1568
+ "[42] Junke Wang, Dongdong Chen, Chong Luo, Bo He, Lu Yuan, Zuxuan Wu, and Yu-Gang Jiang. Omnivid: A generative framework for universal video understanding. In CVPR, pages 18209-18220, 2024. 1, 2",
1569
+ "[43] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. Internvideo2: Scaling video foundation models for multimodal video understanding. ECCV, 2024. 1, 2",
1570
+ "[44] Chen Wei, Chenxi Liu, Siyuan Qiao, Zhishuai Zhang, Alan Yuille, and Jiahui Yu. De-diffusion makes text a strong cross-modal interface. In CVPR, pages 13492-13503, 2024. 3",
1571
+ "[45] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 7",
1572
+ "[46] Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, and Seunghoon Hong. Towards end-to-end generative modeling of long videos with memory-efficient bidirectional transformers. In CVPR, pages 22888-22897, 2023. 1",
1573
+ "[47] Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. ICLR, 2022. 2, 7",
1574
+ "[48] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. ICLR, 2024. 2",
1575
+ "[49] Lijun Yu, Yong Cheng, Kihyuk Sohn, José Lezama, Han Zhang, Huiwen Chang, Alexander G Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, and Lu Jiang. MAGVIT:"
1576
+ ],
1577
+ "bbox": [
1578
+ 91,
1579
+ 90,
1580
+ 482,
1581
+ 900
1582
+ ],
1583
+ "page_idx": 9
1584
+ },
1585
+ {
1586
+ "type": "list",
1587
+ "sub_type": "ref_text",
1588
+ "list_items": [
1589
+ "Masked generative video transformer. In CVPR, 2023. 1, 2, 3, 6, 7",
1590
+ "[50] Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, et al. Spae: Semantic pyramid autoencoder for multimodal generation with frozen llms. NeurIPS, 36, 2023. 2, 3, 5, 8",
1591
+ "[51] Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, and Liang-Chieh Chen. An image is worth 32 tokens for reconstruction and generation. NeurIPS, 2024. 1, 2, 4, 7, 8",
1592
+ "[52] Baoquan Zhang, Huaibin Wang, Chuyao Luo, Xutao Li, Guotao Liang, Yunming Ye, Xiaochen Qi, and Yao He. Codebook transfer with part-of-speech for vector-quantized image modeling. In CVPR, pages 7757-7766, 2024. 2, 3, 5",
1593
+ "[53] Hongguang Zhang, Li Zhang, Xiaojuan Qi, Hongdong Li, Philip HS Torr, and Piotr Koniusz. Few-shot action recognition with permutation-invariant attention. In ECCV, pages 525-542. Springer, 2020. 5, 8",
1594
+ "[54] Lei Zhu, Fangyun Wei, and Yanye Lu. Beyond text: Frozen large language models in visual signal comprehension. In CVPR, pages 27047-27057, 2024. 3, 8"
1595
+ ],
1596
+ "bbox": [
1597
+ 516,
1598
+ 92,
1599
+ 903,
1600
+ 402
1601
+ ],
1602
+ "page_idx": 9
1603
+ },
1604
+ {
1605
+ "type": "page_number",
1606
+ "text": "23550",
1607
+ "bbox": [
1608
+ 478,
1609
+ 944,
1610
+ 519,
1611
+ 955
1612
+ ],
1613
+ "page_idx": 9
1614
+ }
1615
+ ]
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_model.json ADDED
@@ -0,0 +1,2288 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "header",
5
+ "bbox": [
6
+ 0.107,
7
+ 0.003,
8
+ 0.182,
9
+ 0.043
10
+ ],
11
+ "angle": 0,
12
+ "content": "CVF"
13
+ },
14
+ {
15
+ "type": "header",
16
+ "bbox": [
17
+ 0.239,
18
+ 0.001,
19
+ 0.808,
20
+ 0.047
21
+ ],
22
+ "angle": 0,
23
+ "content": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore."
24
+ },
25
+ {
26
+ "type": "title",
27
+ "bbox": [
28
+ 0.118,
29
+ 0.13,
30
+ 0.882,
31
+ 0.175
32
+ ],
33
+ "angle": 0,
34
+ "content": "SweetTok: Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.16,
40
+ 0.204,
41
+ 0.837,
42
+ 0.258
43
+ ],
44
+ "angle": 0,
45
+ "content": "Zhentao Tan,\\* Ben Xue,\\* Jian Jia, Junhao Wang, Wencai Ye, Shaoyun Shi, Mingjie Sun \nWenjin Wu, Quan Chen†, Peng Jiang \nKuaishou Technology, Beijing, China"
46
+ },
47
+ {
48
+ "type": "text",
49
+ "bbox": [
50
+ 0.197,
51
+ 0.26,
52
+ 0.788,
53
+ 0.293
54
+ ],
55
+ "angle": 0,
56
+ "content": "{tanzhentao03, xueben, jiajian, wangjunhao05, yewencai, shishaoyun, sunmingjie, wuwenjin, chenquan06, jiangpeng}@kuaishou.com"
57
+ },
58
+ {
59
+ "type": "title",
60
+ "bbox": [
61
+ 0.248,
62
+ 0.327,
63
+ 0.327,
64
+ 0.344
65
+ ],
66
+ "angle": 0,
67
+ "content": "Abstract"
68
+ },
69
+ {
70
+ "type": "text",
71
+ "bbox": [
72
+ 0.089,
73
+ 0.359,
74
+ 0.485,
75
+ 0.707
76
+ ],
77
+ "angle": 0,
78
+ "content": "This paper presents the Semantic-aWarE spatial-tEmporal Tokenizer (SweetTok), a novel video tokenizer to overcome the limitations in current video tokenization methods for compacted yet effective discretization. Unlike previous approaches that process flattened local visual patches via direct discretization or adaptive query tokenization, SweetTok proposes a decoupling framework, compressing visual inputs through distinct spatial and temporal queries via Decoupled Query AutoEncoder (DQAE). This design allows SweetTok to efficiently compress video token count while achieving superior fidelity by capturing essential information across spatial and temporal dimensions. Furthermore, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression to address the differences in semantic representation between appearance and motion information. SweetTok significantly improves video reconstruction results by \\(42.8\\%\\) w.r.t rFVD on UCF-101 dataset. With a better token compression strategy, it also boosts downstream video generation results by \\(15.1\\%\\) w.r.t gFVD. Additionally, the compressed decoupled tokens are imbued with semantic information, enabling few-shot recognition capabilities powered by LLMs in downstream applications."
79
+ },
80
+ {
81
+ "type": "title",
82
+ "bbox": [
83
+ 0.091,
84
+ 0.722,
85
+ 0.222,
86
+ 0.737
87
+ ],
88
+ "angle": 0,
89
+ "content": "1. Introduction"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.09,
95
+ 0.747,
96
+ 0.483,
97
+ 0.867
98
+ ],
99
+ "angle": 0,
100
+ "content": "Visual tokenizers [1-3, 8, 14, 39, 41, 49] are emerging as essential components in the field of modern computer vision models, particularly in the generation [1, 14, 46] and understanding [17, 26, 35, 42, 43] of vision data. These tools convert visual inputs into discrete tokens, capturing essential temporal and spatial features that facilitate advanced analysis by formulating visual-related tasks as a token prediction process."
101
+ },
102
+ {
103
+ "type": "image",
104
+ "bbox": [
105
+ 0.536,
106
+ 0.326,
107
+ 0.887,
108
+ 0.63
109
+ ],
110
+ "angle": 0,
111
+ "content": null
112
+ },
113
+ {
114
+ "type": "image_caption",
115
+ "bbox": [
116
+ 0.512,
117
+ 0.642,
118
+ 0.908,
119
+ 0.713
120
+ ],
121
+ "angle": 0,
122
+ "content": "Figure 1. Illustration of our framework. We build a compact visual latent space by reducing token count in a decoupled style and leveraging motion-enhanced semantic text embedding. The encoded tokens can be applied to downstream tasks, such as generation and understanding."
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.512,
128
+ 0.735,
129
+ 0.909,
130
+ 0.903
131
+ ],
132
+ "angle": 0,
133
+ "content": "Compression ratio and reconstruction fidelity are vital criteria for evaluating a tokenizer. Recent visual tokenizers, especially video tokenizers [1, 2, 41] typically retain a low compression ratio. This is because visual tokens are usually derived from 2D patches [12] or 3D tubes [2, 14] which preserve location relationships (e.g., each token corresponds to a specific region of input [51]), leading to redundancy in both spatial and temporal dimensions. Most recent work LARP [3] quantizes the flattened video patches through adaptive holistic queries to achieve high compression ratio. However, it is observed that directly flattening"
134
+ },
135
+ {
136
+ "type": "page_footnote",
137
+ "bbox": [
138
+ 0.109,
139
+ 0.875,
140
+ 0.217,
141
+ 0.888
142
+ ],
143
+ "angle": 0,
144
+ "content": "*Equal contribution"
145
+ },
146
+ {
147
+ "type": "page_footnote",
148
+ "bbox": [
149
+ 0.111,
150
+ 0.888,
151
+ 0.234,
152
+ 0.9
153
+ ],
154
+ "angle": 0,
155
+ "content": "† Corresponding author"
156
+ },
157
+ {
158
+ "type": "list",
159
+ "bbox": [
160
+ 0.109,
161
+ 0.875,
162
+ 0.234,
163
+ 0.9
164
+ ],
165
+ "angle": 0,
166
+ "content": null
167
+ },
168
+ {
169
+ "type": "page_number",
170
+ "bbox": [
171
+ 0.479,
172
+ 0.945,
173
+ 0.518,
174
+ 0.957
175
+ ],
176
+ "angle": 0,
177
+ "content": "23541"
178
+ }
179
+ ],
180
+ [
181
+ {
182
+ "type": "text",
183
+ "bbox": [
184
+ 0.09,
185
+ 0.092,
186
+ 0.482,
187
+ 0.168
188
+ ],
189
+ "angle": 0,
190
+ "content": "video tokens into sequence may lead to difficulty in learning intertwined spatial temporal information resulting in low reconstruction performance. Therefore, a new compression method needs to be proposed, one that takes into account the spatiotemporal properties of video."
191
+ },
192
+ {
193
+ "type": "text",
194
+ "bbox": [
195
+ 0.09,
196
+ 0.17,
197
+ 0.482,
198
+ 0.306
199
+ ],
200
+ "angle": 0,
201
+ "content": "Another issue, meanwhile, is that a higher compression ratio typically results in a greater loss of reconstruction details. To complement visual information during compression, one common strategy is to introduce pretrained language embeddings as the latent codebook [27, 50, 52], leveraging their semantic representation capabilities. However, previous works primarily focus on image modality, overlooking the relationships between text and motion in video domain."
202
+ },
203
+ {
204
+ "type": "text",
205
+ "bbox": [
206
+ 0.093,
207
+ 0.308,
208
+ 0.483,
209
+ 0.58
210
+ ],
211
+ "angle": 0,
212
+ "content": "To address existing limitations, we propose SweetTok - Semantic-aWarE spatial-tEmporal Tokenizer - as illustrated in Figure 1. Considering the heterogeneous redundancy in static images and dynamic frames, we propose the Decoupled Query AutoEncoder (DQAE) to compress spatial and temporal information into separate learnable queries. Different from previous works [3, 25, 51], our findings indicate that coupling the compression of spatiotemporal information increases the difficulty for the decoder to learn the motion information of the same pixel across consecutive frames. Thus, taking the decoupled spatial and temporal queries as inputs, we devise a strategy of spatial decoding followed by temporal decoding to achieve a separate reconstruction of the spatial and temporal dimensions of visual information. Additionally, the decoupled spatiotemporal reconstruction approach naturally allows for finetuning on image data, making our SweetTok flexible to image reconstruction task."
213
+ },
214
+ {
215
+ "type": "text",
216
+ "bbox": [
217
+ 0.09,
218
+ 0.582,
219
+ 0.483,
220
+ 0.763
221
+ ],
222
+ "angle": 0,
223
+ "content": "Furthermore, to integrate the semantic information inherent in pre-traiend language model, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression addressing the differences in semantic representation between spatial and temporal information. Specifically, we design two language-based codebooks based on the part of speech, using nouns and adjectives for spatial static information and verbs and adverbs for temporal motion information. By incorporating language-based codebooks, the learnable compressed queries can also be easily adapted to downstream visual understanding tasks by in-context learning of LLM."
224
+ },
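To make the part-of-speech split concrete, here is a minimal sketch assuming a tiny hand-labeled word list; the toy vocabulary and seeded random vectors stand in for the CLIP text embeddings from which the spatial (nouns/adjectives) and temporal (verbs/adverbs) codebooks would be built.

```python
# Toy construction of two part-of-speech-split codebooks; random vectors
# are placeholders for pretrained language (e.g., CLIP) text embeddings.
import torch

spatial_words = ["dog", "red", "table", "round"]       # nouns / adjectives
temporal_words = ["run", "quickly", "jump", "slowly"]  # verbs / adverbs

def embed(words, dim=512):
    g = torch.Generator().manual_seed(0)
    return torch.randn(len(words), dim, generator=g)   # placeholder embeddings

C_spatial, C_temporal = embed(spatial_words), embed(temporal_words)
print(C_spatial.shape, C_temporal.shape)  # (4, 512) each
```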
225
+ {
226
+ "type": "text",
227
+ "bbox": [
228
+ 0.09,
229
+ 0.765,
230
+ 0.483,
231
+ 0.901
232
+ ],
233
+ "angle": 0,
234
+ "content": "Exhaustive experiments demonstrate the effectiveness of SweetTok. Compared with vanilla video tokenizer without token compression (OmniTok[2]), SweetTok improves rFVD by \\(52.3\\%\\) on UCF-101 [33] using only \\(25\\%\\) of the tokens. Compared with vanilla query-based tokenizer (LARP [3]), SweetTok reduces rFVD from 35.15 to 20.46 and gFVD from 99 to 84, on UCF-101. By directly finetuning the decoupled spatial branch on the ImageNet-1k [10], SweetTok also demonstrates a substantial improvement in"
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.514,
240
+ 0.092,
241
+ 0.764,
242
+ 0.106
243
+ ],
244
+ "angle": 0,
245
+ "content": "rFID, decreasing it from 0.59 to 0.37."
246
+ },
247
+ {
248
+ "type": "text",
249
+ "bbox": [
250
+ 0.513,
251
+ 0.108,
252
+ 0.905,
253
+ 0.137
254
+ ],
255
+ "angle": 0,
256
+ "content": "In summary, our work makes the following key contributions:"
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.514,
262
+ 0.141,
263
+ 0.905,
264
+ 0.216
265
+ ],
266
+ "angle": 0,
267
+ "content": "- We introduce SweetTok, a cutting-edge video tokenizer that achieves the state-of-the-art reconstruction fidelity with a high compression ratio via spatial-temporal decoupling and decoupled query autoencoder (DQAE), reaching a \"sweet spot\" between compression and fidelity."
268
+ },
269
+ {
270
+ "type": "text",
271
+ "bbox": [
272
+ 0.513,
273
+ 0.217,
274
+ 0.905,
275
+ 0.291
276
+ ],
277
+ "angle": 0,
278
+ "content": "- We propose a motion-enhanced language codebook (MLC) to more effectively capture the action information embedded in the video modality, thereby improving reconstruction quality and supporting downstream video understanding tasks."
279
+ },
280
+ {
281
+ "type": "text",
282
+ "bbox": [
283
+ 0.514,
284
+ 0.292,
285
+ 0.905,
286
+ 0.368
287
+ ],
288
+ "angle": 0,
289
+ "content": "- We perform extensive experiments to verify the effectiveness of SweetTok, which exhibits the state-of-the-art performance on video reconstruction, image reconstruction, and class-conditional video generation tasks, leading by a large margin of \\(42.8\\%\\), \\(37.2\\%\\), and \\(15.1\\%\\)."
290
+ },
291
+ {
292
+ "type": "list",
293
+ "bbox": [
294
+ 0.513,
295
+ 0.141,
296
+ 0.905,
297
+ 0.368
298
+ ],
299
+ "angle": 0,
300
+ "content": null
301
+ },
302
+ {
303
+ "type": "title",
304
+ "bbox": [
305
+ 0.514,
306
+ 0.385,
307
+ 0.642,
308
+ 0.402
309
+ ],
310
+ "angle": 0,
311
+ "content": "2. Background"
312
+ },
313
+ {
314
+ "type": "title",
315
+ "bbox": [
316
+ 0.513,
317
+ 0.41,
318
+ 0.882,
319
+ 0.426
320
+ ],
321
+ "angle": 0,
322
+ "content": "2.1. Visual Tokenizer With Vector Quantization"
323
+ },
324
+ {
325
+ "type": "text",
326
+ "bbox": [
327
+ 0.513,
328
+ 0.433,
329
+ 0.905,
330
+ 0.78
331
+ ],
332
+ "angle": 0,
333
+ "content": "Exploring visual tokenizers and their applications in generative models has led to significant advancements in image/video-related tasks. The general idea is to discretize visual data into tokens, then tasks like visual generation [8, 14, 47, 48] & understanding [6, 12, 17, 18, 26, 35, 42, 43] can be tackled in a sequence prediction style as natural language processing [11, 29, 36]. Our work belongs to the series of Vector Quantized Variational AutoEncoder (VQVAE) [31, 39] tokenizers, which introduce a discrete latent space for continuous VAE [20] encoder-decoder structure. It typically encodes a high-dimensional image into a low-dimensional latent representation, then queries the nearest index from a learnable codebook to quantize the latent vector, and finally decodes back reversely to reconstruct the raw input signal. Since this type of tokenizer acquires reconstruction loss, it can maintain high-level semantic and low-level details of input vision. VQGAN [13] adopted adversarial training loss to improve high-frequency details. ViT-VQGAN [47] upgraded encoder-decoder with visiontransformer (ViT) architecture [12] and further boosted results. TiTok [51] replaced 2D image structure with 1D sequence latent representation, then used a self-attention transformer [40] to compress token number."
334
+ },
335
+ {
336
+ "type": "text",
337
+ "bbox": [
338
+ 0.512,
339
+ 0.781,
340
+ 0.907,
341
+ 0.902
342
+ ],
343
+ "angle": 0,
344
+ "content": "However, the above methods can only process image data. For video modality, TATS [14] used 3D-CNN to encode video patches and adopted sliding windows to deal with long-term relations. CViViT [41] used ViT [12] structure to encode spatial patches and then adopted a causal transformer to model temporal information. OmniTokenizer [2] and MAGVIT [1, 49] adopted similar transformer architecture and introduced image pre-training to improve"
345
+ },
346
+ {
347
+ "type": "page_number",
348
+ "bbox": [
349
+ 0.479,
350
+ 0.945,
351
+ 0.521,
352
+ 0.958
353
+ ],
354
+ "angle": 0,
355
+ "content": "23542"
356
+ }
357
+ ],
358
+ [
359
+ {
360
+ "type": "image",
361
+ "bbox": [
362
+ 0.097,
363
+ 0.09,
364
+ 0.362,
365
+ 0.39
366
+ ],
367
+ "angle": 0,
368
+ "content": null
369
+ },
370
+ {
371
+ "type": "image",
372
+ "bbox": [
373
+ 0.387,
374
+ 0.089,
375
+ 0.905,
376
+ 0.39
377
+ ],
378
+ "angle": 0,
379
+ "content": null
380
+ },
381
+ {
382
+ "type": "image_caption",
383
+ "bbox": [
384
+ 0.089,
385
+ 0.4,
386
+ 0.908,
387
+ 0.485
388
+ ],
389
+ "angle": 0,
390
+ "content": "Figure 2. Pipeline overview. (a) Vanilla video tokenizers directly quantize flattened video patches. (b) Vanilla query-based tokenizers compress flattend video patches into adaptive queries. (c) SweetTok proposes decoupled query-based autoencoder (DQAE, §3.2.2). The spatial encoder quantizes the first frame's patch embeddings, while the temporal encoder quantizes residual between consecutive frames. The spatial decoder reconstructs the first frame's patches, replicates them \\( T \\) times, and passes them to the temporal decoder for final information fusion and reconstruction. It also proposes motion-enhanced language codebook (MLC, §3.2.3) to complement reconstructed video information via action-related language semantics."
391
+ },
392
+ {
393
+ "type": "text",
394
+ "bbox": [
395
+ 0.089,
396
+ 0.511,
397
+ 0.483,
398
+ 0.587
399
+ ],
400
+ "angle": 0,
401
+ "content": "video tokenizer. LARP [3], on the other hand, introduces to compress flattened video patches into adaptive holistic queries with the guidance of a pre-trained auto-regressive model. In this paper, we inherit the popular spatial-temporal decomposition design for video data."
402
+ },
403
+ {
404
+ "type": "title",
405
+ "bbox": [
406
+ 0.09,
407
+ 0.605,
408
+ 0.391,
409
+ 0.62
410
+ ],
411
+ "angle": 0,
412
+ "content": "2.2. Language-based Latent Codebook"
413
+ },
414
+ {
415
+ "type": "text",
416
+ "bbox": [
417
+ 0.093,
418
+ 0.629,
419
+ 0.485,
420
+ 0.901
421
+ ],
422
+ "angle": 0,
423
+ "content": "The codebooks learned by vanilla VQ-VAEs are not interpretable with lexical meanings. Therefore, many works attempt to utilize pretrained language models embedding codebooks to enhance semantics. LQAE [27] replaced the visual codebook with frozen word embeddings from BERT [11]. SPAE [50] quantized image latent space in a pyramid structure to preserve semantic information from low-level to high-level. It also used large language model (LLM) codebook [9] so that the encoded image token can be directly adapted to visual understanding tasks through in-context learning [5] ability of LLM. We follow this evaluation pipeline for few-shot classification in our paper. V2L-Tokenizer [54] utilized CLIP [30] pretrained encoder and injected a learnable projector to align visual-text latent space implicitly. VQCT [52] replaced the projector with graph convolution networks [22] to consider the relationship between vocabularies. Furthermore, De-Diffusion [44] directly encoded image into plain text as latent space inter"
424
+ },
425
+ {
426
+ "type": "text",
427
+ "bbox": [
428
+ 0.512,
429
+ 0.511,
430
+ 0.907,
431
+ 0.603
432
+ ],
433
+ "angle": 0,
434
+ "content": "face and decodes back through a text-to-image (T2I) diffusion model [32]. However, these studies primarily focus on the image modality. In this paper, we explore the design of the language codebook specifically for the video modality by splitting the codebook according to the video's spatial-temporal attribute."
435
+ },
436
+ {
437
+ "type": "title",
438
+ "bbox": [
439
+ 0.513,
440
+ 0.617,
441
+ 0.605,
442
+ 0.633
443
+ ],
444
+ "angle": 0,
445
+ "content": "3. Method"
446
+ },
447
+ {
448
+ "type": "title",
449
+ "bbox": [
450
+ 0.513,
451
+ 0.643,
452
+ 0.642,
453
+ 0.659
454
+ ],
455
+ "angle": 0,
456
+ "content": "3.1. Preliminary"
457
+ },
458
+ {
459
+ "type": "text",
460
+ "bbox": [
461
+ 0.512,
462
+ 0.665,
463
+ 0.907,
464
+ 0.844
465
+ ],
466
+ "angle": 0,
467
+ "content": "A typical visual vector-quantization (VQ) model [1, 2, 14, 49] contains three parts: encoder \\(\\mathcal{E}\\), decoder \\(\\mathcal{D}\\) and latent quantizer \\(\\mathcal{Q}\\) as shown in Figure 2 (a). Take video modality as an example, given a video input \\(x\\in \\mathbb{R}^{T\\times H\\times W\\times 3}\\), where \\(T\\) represents the temporal length and \\(H\\times W\\) denotes spatial resolution, encoder \\(\\mathcal{E}(x)\\) projects it into latent space \\(\\mathbb{Z}\\in \\mathbb{R}^{N\\times D}\\), where \\(D\\) is latent dimension and \\(N\\) is token number. A quantizer \\(\\mathcal{Q}\\) is constructed in this latent space \\(\\mathbb{Z}\\) by querying the nearest neighbor in codebook \\(C\\in \\mathbb{R}^{L_c\\times D}\\), where \\(L_{c}\\) is codebook size. Then \\(\\mathcal{D}\\) decodes latent space back to pixel space and applies self-supervised reconstruction loss:"
468
+ },
469
+ {
470
+ "type": "equation",
471
+ "bbox": [
472
+ 0.633,
473
+ 0.848,
474
+ 0.905,
475
+ 0.865
476
+ ],
477
+ "angle": 0,
478
+ "content": "\\[\n\\mathcal {L} _ {\\text {r e c}} (x, \\mathcal {D} (\\mathcal {Q} (\\mathcal {E} (x)))) \\tag {1}\n\\]"
479
+ },
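A minimal sketch of the nearest-neighbor quantizer \(\mathcal{Q}\) described above, using the text's shapes (N latent vectors of dimension D matched against a codebook of size L_c); this is illustrative, not the authors' implementation.

```python
# Nearest-neighbor vector quantization: map each latent row of z to its
# closest codebook entry and return the quantized vectors plus indices.
import torch

def quantize(z, codebook):
    # z: (N, D), codebook: (L_c, D)
    dists = torch.cdist(z, codebook)      # (N, L_c) pairwise L2 distances
    idx = dists.argmin(dim=1)             # index of the nearest code per token
    return codebook[idx], idx

z = torch.randn(1280, 8)                  # toy latent tokens
codebook = torch.randn(10481, 8)          # toy codebook
z_q, codes = quantize(z, codebook)
print(z_q.shape, codes.shape)             # (1280, 8), (1280,)
```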
480
+ {
481
+ "type": "text",
482
+ "bbox": [
483
+ 0.512,
484
+ 0.871,
485
+ 0.907,
486
+ 0.903
487
+ ],
488
+ "angle": 0,
489
+ "content": "Unlike traditional visual VQ models, LARP [3] quantizes flattened video patches into holistic queries \\(\\mathbf{Q} \\in\\)"
490
+ },
491
+ {
492
+ "type": "page_number",
493
+ "bbox": [
494
+ 0.479,
495
+ 0.945,
496
+ 0.518,
497
+ 0.957
498
+ ],
499
+ "angle": 0,
500
+ "content": "23543"
501
+ }
502
+ ],
503
+ [
504
+ {
505
+ "type": "text",
506
+ "bbox": [
507
+ 0.09,
508
+ 0.09,
509
+ 0.485,
510
+ 0.215
511
+ ],
512
+ "angle": 0,
513
+ "content": "\\(\\mathbb{R}^{L\\times D}\\) as shown in Figure 2 (b), where \\(L\\) is the adaptive query size. As shown in Equation (2), the encoder \\(\\mathcal{E}\\) processes concatenated flattened video patches \\(\\mathbf{E} =\\) flatten \\((x)\\in \\mathbb{R}^{N\\times D}\\) and query tokens \\(\\mathbf{Q}\\in \\mathbb{R}^{L\\times D}\\) outputting \\(\\mathbf{Z}_{\\mathbf{Q}}\\in \\mathbb{R}^{L\\times D}\\) for quantization. The decoder \\(\\mathcal{D}\\) reconstructs the video patches \\(\\tilde{\\mathbf{x}}\\in \\mathbb{R}^{T\\times H\\times W\\times 3}\\) from the concatenation of learnable video queries \\(\\mathbf{E}_{\\mathbf{Q}}\\in \\mathbb{R}^{N\\times D}\\) and the query discrete embeddings \\(\\tilde{\\mathbf{Z}}_{\\mathbf{Q}}\\)"
514
+ },
515
+ {
516
+ "type": "equation",
517
+ "bbox": [
518
+ 0.104,
519
+ 0.225,
520
+ 0.483,
521
+ 0.256
522
+ ],
523
+ "angle": 0,
524
+ "content": "\\[\n\\mathbf {Z} _ {\\mathbf {E}} | | \\mathbf {Z} _ {\\mathbf {Q}} = \\mathcal {E} (\\mathbf {E} | | \\mathbf {Q}), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q}} = \\mathcal {Q} (\\mathbf {Z} _ {\\mathbf {Q}}), \\tilde {x} = \\mathcal {D} (\\mathbf {E} _ {\\mathbf {Q}} | | \\tilde {\\mathbf {Z}} _ {\\mathbf {Q}}). \\tag {2}\n\\]"
525
+ },
526
+ {
527
+ "type": "text",
528
+ "bbox": [
529
+ 0.09,
530
+ 0.256,
531
+ 0.484,
532
+ 0.346
533
+ ],
534
+ "angle": 0,
535
+ "content": "However, directly compressing video from flattened patches is challenging due to intertwined redundant temporal data and low-level spatial content, leading to suboptimal performance. Thus, we designed SweetTok to balance reconstruction performance with a high compression ratio. Details are elaborated in Section 3.2."
536
+ },
537
+ {
538
+ "type": "title",
539
+ "bbox": [
540
+ 0.091,
541
+ 0.355,
542
+ 0.451,
543
+ 0.372
544
+ ],
545
+ "angle": 0,
546
+ "content": "3.2. Decoupled Spatial-Temporal Tokenization"
547
+ },
548
+ {
549
+ "type": "text",
550
+ "bbox": [
551
+ 0.09,
552
+ 0.377,
553
+ 0.484,
554
+ 0.513
555
+ ],
556
+ "angle": 0,
557
+ "content": "As noted in Section 3.1, directly quantizing video data via flattened patches hampers model learning due to intertwined redundant temporal and complex spatial information. Thus, we propose separately quantizing spatial and temporal dimensions before combining them to reconstruct the video, following the divide-and-conquer principle. The decoupling strategy enables high compression while ensuring higher fidelity. The main pipeline is shown in Figure 2 (c)."
558
+ },
559
+ {
560
+ "type": "title",
561
+ "bbox": [
562
+ 0.091,
563
+ 0.521,
564
+ 0.196,
565
+ 0.536
566
+ ],
567
+ "angle": 0,
568
+ "content": "3.2.1. Patchify"
569
+ },
570
+ {
571
+ "type": "text",
572
+ "bbox": [
573
+ 0.09,
574
+ 0.539,
575
+ 0.484,
576
+ 0.653
577
+ ],
578
+ "angle": 0,
579
+ "content": "Given a video frame sequence \\(x \\in \\mathbb{R}^{T \\times H \\times W \\times 3}\\), we select the first frame \\(x_{1}\\) as a reference for spatial information, the remaining \\(T - 1\\) frames \\(x_{2:T}\\) for temporal information, following the strategy in [2]. We apply two patch kernels \\(\\mathcal{P}_s, \\mathcal{P}_t\\) with shapes \\(p_h \\times p_w\\) and \\(p_t \\times p_h \\times p_w\\) to \\(x_{1}\\) and \\(x_{2:T}\\) separately, generating \\(v_s \\in \\mathbb{R}^{1 \\times \\frac{H}{p_h} \\times \\frac{W}{p_w} \\times D}\\) and \\(v_t \\in \\mathbb{R}^{\\frac{T - 1}{p_t} \\times \\frac{H}{p_h} \\times \\frac{W}{p_w} \\times D}\\) shown below:"
580
+ },
581
+ {
582
+ "type": "equation",
583
+ "bbox": [
584
+ 0.191,
585
+ 0.665,
586
+ 0.483,
587
+ 0.681
588
+ ],
589
+ "angle": 0,
590
+ "content": "\\[\nv _ {s} = \\mathcal {P} _ {s} \\left(x _ {1}\\right), v _ {t} = \\mathcal {P} _ {t} \\left(x _ {2: T}\\right) \\tag {3}\n\\]"
591
+ },
592
+ {
593
+ "type": "text",
594
+ "bbox": [
595
+ 0.09,
596
+ 0.693,
597
+ 0.484,
598
+ 0.8
599
+ ],
600
+ "angle": 0,
601
+ "content": "\\(v_{s}\\) and \\(v_{t}\\) are inputs for transformer-based autoencoder, where \\(v_{s}\\) contains spatial information, \\(v_{t}\\) contains temporal information. In practice, for a video with 17 frames and a resolution of \\(256 \\times 256\\), we set \\((p_{t}, p_{h}, p_{w})\\) to \\((4, 8, 8)\\), thus patchify frames into \\(v_{s}\\) with shape \\(1 \\times 32 \\times 32\\) and \\(v_{t}\\) with shape \\(4 \\times 32 \\times 32\\). We use \\(t = \\frac{T - 1}{p_t} = 4\\) to denote \\(v_{t}\\)'s length."
602
+ },
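Under the stated settings (a 17-frame clip at 256 × 256 with (p_t, p_h, p_w) = (4, 8, 8)), the patchify step can be sketched with strided convolutions standing in for \(\mathcal{P}_s\) and \(\mathcal{P}_t\); the kernel implementation and embedding width are assumptions.

```python
# Patchify sketch: spatial kernel on the first frame, tube kernel on the
# remaining 16 frames, giving 1x32x32 and 4x32x32 grids of embeddings.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 17, 256, 256)                 # (B, C, T, H, W)
x1, rest = x[:, :, :1], x[:, :, 1:]                 # first frame / remaining frames

P_s = nn.Conv2d(3, 512, kernel_size=8, stride=8)
P_t = nn.Conv3d(3, 512, kernel_size=(4, 8, 8), stride=(4, 8, 8))

v_s = P_s(x1.squeeze(2))                            # (1, 512, 32, 32)
v_t = P_t(rest)                                     # (1, 512, 4, 32, 32)
print(v_s.shape, v_t.shape)
```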
603
+ {
604
+ "type": "title",
605
+ "bbox": [
606
+ 0.091,
607
+ 0.806,
608
+ 0.419,
609
+ 0.822
610
+ ],
611
+ "angle": 0,
612
+ "content": "3.2.2.Decoupled Query AutoEncoder (DQAE)"
613
+ },
614
+ {
615
+ "type": "text",
616
+ "bbox": [
617
+ 0.09,
618
+ 0.825,
619
+ 0.484,
620
+ 0.902
621
+ ],
622
+ "angle": 0,
623
+ "content": "To decouple spatial and temporal dimensions, we need to compress video patches separately along each dimension. Inspired by [3, 51] and Q-Former [25], we compress each dimension into an adaptive query tokens via cross-attention interactions. As an innovation, we recursively inject these"
624
+ },
625
+ {
626
+ "type": "text",
627
+ "bbox": [
628
+ 0.512,
629
+ 0.092,
630
+ 0.906,
631
+ 0.168
632
+ ],
633
+ "angle": 0,
634
+ "content": "cross-attention query modules into transformer-based autoencoder to transfer information, forming our DQAE module shown in the right gray box of Figure 2 (c). For the rest of our paper, we use \\(\\mathcal{E}_{DQAE}\\) and \\(\\mathcal{D}_{DQAE}\\) as the encoder and decoder of the DQAE module for simplicity."
635
+ },
636
+ {
637
+ "type": "text",
638
+ "bbox": [
639
+ 0.512,
640
+ 0.187,
641
+ 0.906,
642
+ 0.248
643
+ ],
644
+ "angle": 0,
645
+ "content": "Spatial Tokenization. We observe that for most video parts, the first frame holds the most spatial information, so we use \\( v_{s} \\) as the input to \\( \\mathcal{E}_{DQAE_s} \\) for quantization shown below:"
646
+ },
647
+ {
648
+ "type": "equation",
649
+ "bbox": [
650
+ 0.54,
651
+ 0.261,
652
+ 0.907,
653
+ 0.281
654
+ ],
655
+ "angle": 0,
656
+ "content": "\\[\n\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {s}}} = \\mathcal {E} _ {D Q A E _ {s}} (\\mathbf {Q} _ {\\mathbf {s}}, v _ {s}), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {s}}} = \\mathcal {Q} _ {M L C} (\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {s}}}), \\quad (4)\n\\]"
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.512,
662
+ 0.291,
663
+ 0.907,
664
+ 0.44
665
+ ],
666
+ "angle": 0,
667
+ "content": "where \\(\\mathbf{Q}_{\\mathbf{s}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}\\) are the learnable spatial query embeddings and \\(\\mathbf{Z}_{\\mathbf{Q}_{\\mathbf{s}}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}\\) are the output embeddings encoding information from the first frame patches \\(v_{s}\\). \\(\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{s}}} \\in \\mathbb{R}^{L_{\\text{spatial}} \\times D}\\) are the quantized spatial token embeddings and \\(\\mathcal{Q}_{MLC}\\) stands for Motion-enhanced Language Codebook (MLC) quantizer which will be elaborated in the following section. After quantization, we inject our informative spatial token embeddings \\(\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{s}}}\\) into a learnable spatial patch queries \\(\\mathbf{Q}_{v_s}\\) through decoder \\(\\mathcal{D}_{DQA_E_s}\\) as below:"
668
+ },
669
+ {
670
+ "type": "equation",
671
+ "bbox": [
672
+ 0.617,
673
+ 0.457,
674
+ 0.906,
675
+ 0.476
676
+ ],
677
+ "angle": 0,
678
+ "content": "\\[\n\\tilde {v} _ {s} = \\mathcal {D} _ {D Q A E _ {s}} \\left(\\mathbf {Q} _ {v _ {s}}, \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {s}}\\right), \\tag {5}\n\\]"
679
+ },
680
+ {
681
+ "type": "text",
682
+ "bbox": [
683
+ 0.512,
684
+ 0.483,
685
+ 0.906,
686
+ 0.545
687
+ ],
688
+ "angle": 0,
689
+ "content": "where \\(\\tilde{v}_s\\) are the reconstructed first frame video patches. The temporal component which will be stated in the following, combined with \\(\\tilde{v}_s\\) is used for the final video reconstruction. We set \\(L_{\\text{spatial}} = 256\\) in real implementation."
690
+ },
691
+ {
692
+ "type": "text",
693
+ "bbox": [
694
+ 0.513,
695
+ 0.553,
696
+ 0.907,
697
+ 0.698
698
+ ],
699
+ "angle": 0,
700
+ "content": "Temporal Tokenization. It is observed that for video data, there is much redundancy along temporal dimension. It motivates us to employ frame-wise residual \\(\\Delta v_{t} = (\\Delta v_{t}^{1},\\Delta v_{t}^{2},\\dots,\\Delta v_{t}^{k})_{k = \\frac{H}{p_{h}}\\times \\frac{W}{p_{w}}}\\), where \\(\\Delta v_{t}^{i} = v_{s}^{i} - v_{t}^{i}\\), for tokenization. We use the first frame of the video for frame-wise residual because the spatial tokenization phase reconstruct it. Then, the residual \\(\\Delta v = (\\Delta v_{1},\\Delta v_{2},\\dots,\\Delta v_{t})_{t = \\frac{T - 1}{p_{t}}}\\) is input to \\(\\mathcal{E}_{DQAE_t}\\) for temporal compression, as shown below:"
701
+ },
702
+ {
703
+ "type": "equation",
704
+ "bbox": [
705
+ 0.537,
706
+ 0.71,
707
+ 0.906,
708
+ 0.73
709
+ ],
710
+ "angle": 0,
711
+ "content": "\\[\n\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {t}}} = \\mathcal {E} _ {D Q A E _ {t}} (\\mathbf {Q} _ {\\mathbf {t}}, \\Delta v), \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {t}}} = \\mathcal {Q} _ {M L C} (\\mathbf {Z} _ {\\mathbf {Q} _ {\\mathbf {t}}}), \\quad (6)\n\\]"
712
+ },
713
+ {
714
+ "type": "text",
715
+ "bbox": [
716
+ 0.512,
717
+ 0.739,
718
+ 0.906,
719
+ 0.877
720
+ ],
721
+ "angle": 0,
722
+ "content": "where \\(\\mathbf{Q}_{\\mathbf{t}} \\in \\mathbb{R}^{L_{temporal} \\times D}\\) are the learnable temporal query embeddings and \\(\\mathbf{Z}_{\\mathbf{Q}_{\\mathbf{t}}} \\in \\mathbb{R}^{L_{temporal} \\times D}\\) are the output embeddings encoding information of the frame-wise residual. \\(\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{t}}} \\in \\mathbb{R}^{L_{temporal} \\times D}\\) are the quantized frame-wise temporal residual token embeddings. In practical implementation, \\(L_{temporal} = 1024\\). After reconstructing the first frame patches, we recover the entire video patches by combining \\(\\tilde{v}_s\\) with the frame-wise quantized residual \\(\\tilde{\\mathbf{Z}}_{\\mathbf{Q}_{\\mathbf{t}}}\\). We tiled \\(\\tilde{v}_s\\) for \\(t\\) times and sent it to \\(\\mathcal{D}_{DQAE_t}\\) shown below:"
723
+ },
724
+ {
725
+ "type": "equation",
726
+ "bbox": [
727
+ 0.595,
728
+ 0.884,
729
+ 0.906,
730
+ 0.903
731
+ ],
732
+ "angle": 0,
733
+ "content": "\\[\n\\tilde {v} = \\mathcal {D} _ {D Q A E _ {t}} \\left(\\left[ \\tilde {v} _ {s} \\right| | \\dots | | \\tilde {v} _ {s} \\right], \\tilde {\\mathbf {Z}} _ {\\mathbf {Q} _ {\\mathbf {t}}}), \\tag {7}\n\\]"
734
+ },
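The residual construction and the tiling in Eqs. (6)-(7) can be sketched as follows, assuming patch embeddings of shape (B, T, K, D); the helper names are ours, not the paper's:

```python
import torch

def frame_wise_residual(v):
    """Delta v_t^i = v_s^i - v_t^i against the first frame, the input to E_{DQAE_t}."""
    v_s = v[:, :1]                        # first-frame patches, shape (B, 1, K, D)
    return v_s - v[:, 1:]                 # residuals for the remaining frames

def tile_first_frame(v_s_rec, t):
    """Repeat reconstructed first-frame patches t times, [v~_s || ... || v~_s] of Eq. (7)."""
    return v_s_rec.unsqueeze(1).expand(-1, t, -1, -1).contiguous()
```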
735
+ {
736
+ "type": "page_number",
737
+ "bbox": [
738
+ 0.479,
739
+ 0.945,
740
+ 0.521,
741
+ 0.957
742
+ ],
743
+ "angle": 0,
744
+ "content": "23544"
745
+ }
746
+ ],
747
+ [
748
+ {
749
+ "type": "text",
750
+ "bbox": [
751
+ 0.09,
752
+ 0.092,
753
+ 0.485,
754
+ 0.152
755
+ ],
756
+ "angle": 0,
757
+ "content": "where \\(\\tilde{v}\\) is the reconstructed video patches. The vectors \\(\\tilde{v}\\) reside in the latent space, necessitating the use of a \"pixel decoder\" \\(\\mathcal{D}_{pixel}\\) to reconstruct the video data, as illustrated below:"
758
+ },
759
+ {
760
+ "type": "equation",
761
+ "bbox": [
762
+ 0.235,
763
+ 0.168,
764
+ 0.483,
765
+ 0.186
766
+ ],
767
+ "angle": 0,
768
+ "content": "\\[\n\\tilde {x} = \\mathcal {D} _ {\\text {p i x e l}} (\\tilde {v}). \\tag {8}\n\\]"
769
+ },
770
+ {
771
+ "type": "text",
772
+ "bbox": [
773
+ 0.09,
774
+ 0.193,
775
+ 0.484,
776
+ 0.255
777
+ ],
778
+ "angle": 0,
779
+ "content": "The whole DQAE is supervised by the reconstruction loss \\(\\mathcal{L}_{rec}\\) containing a \\(L_{2}\\) loss, a LPIPS perception loss: \\(\\mathcal{L}_{Lpips}\\), a quantizer loss: \\(\\mathcal{L}_{vq}\\) and a GAN loss: \\(\\mathcal{L}_g\\) following the principle [13]."
780
+ },
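A hedged sketch of how these terms might be combined in training code; the weighting coefficients and the `lpips_fn`/`disc_logits` inputs are assumptions for illustration, not values from the paper:

```python
import torch

def dqae_total_loss(x, x_rec, l_vq, lpips_fn, disc_logits,
                    w_lpips=1.0, w_vq=1.0, w_gan=0.1):
    """L_rec = L2 + LPIPS + VQ + GAN terms (weights here are illustrative)."""
    l2 = torch.mean((x_rec - x) ** 2)
    l_lpips = lpips_fn(x_rec, x).mean()       # perceptual distance term
    l_gan = -disc_logits.mean()               # generator side of the GAN loss
    return l2 + w_lpips * l_lpips + w_vq * l_vq + w_gan * l_gan
```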
781
+ {
782
+ "type": "title",
783
+ "bbox": [
784
+ 0.091,
785
+ 0.262,
786
+ 0.462,
787
+ 0.278
788
+ ],
789
+ "angle": 0,
790
+ "content": "3.2.3. Motion-enhanced Language Codebook (MLC)"
791
+ },
792
+ {
793
+ "type": "text",
794
+ "bbox": [
795
+ 0.09,
796
+ 0.28,
797
+ 0.484,
798
+ 0.416
799
+ ],
800
+ "angle": 0,
801
+ "content": "To mitigate information loss during compression, we introduce an language codebook (LC) quantizer to enhance semantic richness. The Previous works [27, 50] have shown that text representations can enhance image VQ-VAEs, as the text provides additional semantic information from pretrained language models. However, previous works mainly focus on the relationship between static image appearance and text semantics [52]. Our experiment in Table 5 shows is insufficient for video data."
802
+ },
803
+ {
804
+ "type": "text",
805
+ "bbox": [
806
+ 0.09,
807
+ 0.417,
808
+ 0.484,
809
+ 0.599
810
+ ],
811
+ "angle": 0,
812
+ "content": "To address this, we propose a Motion-enhanced Language Codebook (MLC), where the video motion information is enhanced via action-related vocabularies. Specifically, we split the dictionary into four subsets: nouns, adjectives, verbs, and adverbs. Intuitively, static and appearance information is typically embedded in nouns and adjectives, while motion information is generally embedded in verbs and adverbs. Therefore, we choose nouns and adjectives for spatial query tokens \\(\\mathbf{Q}_{\\mathrm{s}}\\), and verbs and adverbs for temporal query tokens \\(\\mathbf{Q}_{\\mathrm{t}}\\). Figure 3 also shows that the encoded latent words by SweetTok capture semantic meanings related to both visual appearance and motion."
813
+ },
814
+ {
815
+ "type": "text",
816
+ "bbox": [
817
+ 0.09,
818
+ 0.599,
819
+ 0.484,
820
+ 0.734
821
+ ],
822
+ "angle": 0,
823
+ "content": "As for details, we first extract candidate vocabularies of the whole dataset from video captions. Afterward, we extract CLIP [30] text embedding of these vocabularies to fill in the columns of our codebook \\(C \\in \\mathbb{R}^{L \\times D}\\). We utilize a graph convolution network \\(\\mathcal{F}\\) to project CLIP embeddings [30] into the visual latent space. Graph edges are constructed when a pair of \"spatial-spatial\", \"spatial-temporal\" or \"temporal-temporal\" words co-occur within a 5-token window in the current video caption."
824
+ },
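The co-occurrence rule can be sketched as below; this is our reading of the 5-token-window construction, with illustrative names:

```python
def build_edges(caption_tokens, vocab_to_id, window=5):
    """Add an edge between two codebook words that co-occur within a
    `window`-token span of a caption (feeds the GCN adjacency)."""
    hits = [(i, vocab_to_id[w]) for i, w in enumerate(caption_tokens)
            if w in vocab_to_id]
    edges = set()
    for a in range(len(hits)):
        for b in range(a + 1, len(hits)):
            if hits[b][0] - hits[a][0] < window:
                edges.add((hits[a][1], hits[b][1]))
    return edges
```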
825
+ {
826
+ "type": "text",
827
+ "bbox": [
828
+ 0.09,
829
+ 0.735,
830
+ 0.484,
831
+ 0.811
832
+ ],
833
+ "angle": 0,
834
+ "content": "Given two encoded continuous latent vectors: \\( z_{s} \\in \\mathbf{Q}_{\\mathbf{s}} \\) and \\( z_{t} \\in \\mathbf{Q}_{\\mathbf{t}} \\), \\( z_{s} \\) is passed through spatial quantization codebook, and \\( z_{t} \\) is passed through temporal quantization codebook. The quantized \\( \\hat{z}_{s} \\) and \\( \\hat{z}_{t} \\) are obtained by nearest neighbor searching:"
835
+ },
836
+ {
837
+ "type": "equation",
838
+ "bbox": [
839
+ 0.238,
840
+ 0.824,
841
+ 0.482,
842
+ 0.84
843
+ ],
844
+ "angle": 0,
845
+ "content": "\\[\nz _ {s}, z _ {t} = \\mathcal {E} (x), \\tag {9}\n\\]"
846
+ },
847
+ {
848
+ "type": "equation",
849
+ "bbox": [
850
+ 0.122,
851
+ 0.842,
852
+ 0.483,
853
+ 0.868
854
+ ],
855
+ "angle": 0,
856
+ "content": "\\[\n\\hat {z} _ {s} = \\mathcal {F} \\left(c _ {i}\\right), i = \\underset {c _ {i} \\in C _ {\\text {n o u n}} \\cup C _ {\\text {a d j}}} {\\arg \\min } \\left| \\left| z _ {s} - \\mathcal {F} \\left(c _ {i}\\right) \\right| \\right|, \\tag {10}\n\\]"
857
+ },
858
+ {
859
+ "type": "equation",
860
+ "bbox": [
861
+ 0.124,
862
+ 0.872,
863
+ 0.483,
864
+ 0.899
865
+ ],
866
+ "angle": 0,
867
+ "content": "\\[\n\\hat {z} _ {t} = \\mathcal {F} \\left(c _ {i}\\right), i = \\underset {c _ {i} \\in C _ {\\text {v e r b}} \\cup C _ {\\text {a d v}}} {\\arg \\min } \\left| \\left| z _ {t} - \\mathcal {F} \\left(c _ {i}\\right) \\right| \\right|. \\tag {11}\n\\]"
868
+ },
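Eqs. (10)-(11) are a standard nearest-neighbor lookup over the projected codebook. A hedged PyTorch sketch follows; the straight-through trick shown here reflects common VQ-VAE practice [39] rather than a stated detail of the paper, whose gradient path is given by Eq. (12) below:

```python
import torch

def mlc_quantize(z, projected_codebook):
    """z: (N, D) encoder outputs; projected_codebook: (|C|, D) rows F(c_i)
    from the relevant POS subset (noun/adj for z_s, verb/adv for z_t)."""
    dists = torch.cdist(z, projected_codebook)        # (N, |C|) pairwise distances
    idx = dists.argmin(dim=-1)
    z_hat = projected_codebook[idx]
    # straight-through estimator: forward uses z_hat, gradient flows back to z
    return z + (z_hat - z).detach(), idx
```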
869
+ {
870
+ "type": "image",
871
+ "bbox": [
872
+ 0.52,
873
+ 0.092,
874
+ 0.897,
875
+ 0.173
876
+ ],
877
+ "angle": 0,
878
+ "content": null
879
+ },
880
+ {
881
+ "type": "image",
882
+ "bbox": [
883
+ 0.517,
884
+ 0.181,
885
+ 0.603,
886
+ 0.257
887
+ ],
888
+ "angle": 0,
889
+ "content": null
890
+ },
891
+ {
892
+ "type": "image",
893
+ "bbox": [
894
+ 0.605,
895
+ 0.181,
896
+ 0.69,
897
+ 0.257
898
+ ],
899
+ "angle": 0,
900
+ "content": null
901
+ },
902
+ {
903
+ "type": "image",
904
+ "bbox": [
905
+ 0.692,
906
+ 0.184,
907
+ 0.899,
908
+ 0.256
909
+ ],
910
+ "angle": 0,
911
+ "content": null
912
+ },
913
+ {
914
+ "type": "image_caption",
915
+ "bbox": [
916
+ 0.512,
917
+ 0.268,
918
+ 0.907,
919
+ 0.325
920
+ ],
921
+ "angle": 0,
922
+ "content": "Figure 3. The semantics of spatial-temporal \"words\". The attention weights of the last encoder's cross-attention layer are visualized via heatmap, showing the visual regions corresponding to the related latent words."
923
+ },
924
+ {
925
+ "type": "text",
926
+ "bbox": [
927
+ 0.512,
928
+ 0.34,
929
+ 0.907,
930
+ 0.402
931
+ ],
932
+ "angle": 0,
933
+ "content": "Finally, the gradient is passed to the encoder via vector-quantization commitment loss proposed in [39], a common method to approximate differentiability \\((sg[\\cdot ]\\) stands for stop-gradient operator):"
934
+ },
935
+ {
936
+ "type": "equation",
937
+ "bbox": [
938
+ 0.534,
939
+ 0.413,
940
+ 0.905,
941
+ 0.451
942
+ ],
943
+ "angle": 0,
944
+ "content": "\\[\n\\begin{array}{l} \\mathcal {L} _ {v q} = \\left\\| s g \\left(z _ {s}\\right) - \\mathcal {Q} \\left(z _ {s}\\right) \\right\\| ^ {2} + \\left\\| z _ {s} - s g \\left[ \\mathcal {Q} \\left(z _ {s}\\right) \\right] \\right\\| ^ {2} \\tag {12} \\\\ + \\left\\| s g \\left[ z _ {t} \\right] - \\mathcal {Q} \\left(z _ {t}\\right) \\right\\| ^ {2} + \\left\\| z _ {t} - s g \\left[ \\mathcal {Q} \\left(z _ {t}\\right) \\right] \\right\\| ^ {2} \\\\ \\end{array}\n\\]"
945
+ },
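Eq. (12) in code form, a minimal sketch using `detach()` as the stop-gradient; inputs are torch tensors:

```python
def vq_commitment_loss(z_s, zq_s, z_t, zq_t):
    """Codebook + commitment terms of Eq. (12); sg[.] is implemented as detach()."""
    def pair(z, zq):
        return ((z.detach() - zq) ** 2).mean() + ((z - zq.detach()) ** 2).mean()
    return pair(z_s, zq_s) + pair(z_t, zq_t)
```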
946
+ {
947
+ "type": "title",
948
+ "bbox": [
949
+ 0.513,
950
+ 0.477,
951
+ 0.646,
952
+ 0.494
953
+ ],
954
+ "angle": 0,
955
+ "content": "4. Experiments"
956
+ },
957
+ {
958
+ "type": "title",
959
+ "bbox": [
960
+ 0.513,
961
+ 0.502,
962
+ 0.715,
963
+ 0.519
964
+ ],
965
+ "angle": 0,
966
+ "content": "4.1. Experiments Settings"
967
+ },
968
+ {
969
+ "type": "text",
970
+ "bbox": [
971
+ 0.512,
972
+ 0.524,
973
+ 0.907,
974
+ 0.737
975
+ ],
976
+ "angle": 0,
977
+ "content": "Dataset. We evaluate the tokenization performance of SweetTok on video datasets, including UCF-101 [33], and Kinetics-600 [7, 19]. Following [2], all video frames are resized to \\(256 \\times 256\\) resolution for experiments. Note that some of the previous works use a resolution of \\(128 \\times 128\\), which cannot be directly compared to our work due to the difference in task difficulty. However, we still include these results in the table and highlight them in gray. Moreover, we fine-tune SweetTok's spatial component on ImageNet [10] to obtain a strong image tokenizer. The semantic capabilities of SweetTok are tested through few-shot image classification on Real-Name Open-Ended miniImageNet [37] and few-shot video action recognition on UCF-101, as described in [53]."
978
+ },
979
+ {
980
+ "type": "text",
981
+ "bbox": [
982
+ 0.512,
983
+ 0.745,
984
+ 0.906,
985
+ 0.852
986
+ ],
987
+ "angle": 0,
988
+ "content": "Evaluation Metrics. For video reconstruction experiments, we evaluate using the Reconstruction Frechet Video Distance (rFVD) [38]. For video generation, we use the Generation Frechet Video Distance (gFVD) metric. For image reconstruction, we categorize recent methods by the number of compressed tokens, with each group assessed using the Frechet Inception Distance (FID) [15]."
989
+ },
990
+ {
991
+ "type": "text",
992
+ "bbox": [
993
+ 0.512,
994
+ 0.871,
995
+ 0.907,
996
+ 0.902
997
+ ],
998
+ "angle": 0,
999
+ "content": "Implementation Details. SweetTok adopts a spatial-temporal architecture consisting of 8 spatial layers and 4"
1000
+ },
1001
+ {
1002
+ "type": "page_number",
1003
+ "bbox": [
1004
+ 0.479,
1005
+ 0.945,
1006
+ 0.519,
1007
+ 0.957
1008
+ ],
1009
+ "angle": 0,
1010
+ "content": "23545"
1011
+ }
1012
+ ],
1013
+ [
1014
+ {
1015
+ "type": "image",
1016
+ "bbox": [
1017
+ 0.093,
1018
+ 0.089,
1019
+ 0.363,
1020
+ 0.366
1021
+ ],
1022
+ "angle": 0,
1023
+ "content": null
1024
+ },
1025
+ {
1026
+ "type": "image_caption",
1027
+ "bbox": [
1028
+ 0.188,
1029
+ 0.367,
1030
+ 0.267,
1031
+ 0.379
1032
+ ],
1033
+ "angle": 0,
1034
+ "content": "OmniTok [2]"
1035
+ },
1036
+ {
1037
+ "type": "image",
1038
+ "bbox": [
1039
+ 0.365,
1040
+ 0.09,
1041
+ 0.634,
1042
+ 0.366
1043
+ ],
1044
+ "angle": 0,
1045
+ "content": null
1046
+ },
1047
+ {
1048
+ "type": "image_caption",
1049
+ "bbox": [
1050
+ 0.469,
1051
+ 0.367,
1052
+ 0.529,
1053
+ 0.38
1054
+ ],
1055
+ "angle": 0,
1056
+ "content": "LARP [3]"
1057
+ },
1058
+ {
1059
+ "type": "image",
1060
+ "bbox": [
1061
+ 0.636,
1062
+ 0.091,
1063
+ 0.905,
1064
+ 0.366
1065
+ ],
1066
+ "angle": 0,
1067
+ "content": null
1068
+ },
1069
+ {
1070
+ "type": "image_caption",
1071
+ "bbox": [
1072
+ 0.721,
1073
+ 0.367,
1074
+ 0.82,
1075
+ 0.379
1076
+ ],
1077
+ "angle": 0,
1078
+ "content": "SweetTok (ours)"
1079
+ },
1080
+ {
1081
+ "type": "image_caption",
1082
+ "bbox": [
1083
+ 0.09,
1084
+ 0.383,
1085
+ 0.907,
1086
+ 0.41
1087
+ ],
1088
+ "angle": 0,
1089
+ "content": "Figure 4. Video reconstruction result on UCF-101 dataset. We also visualize the reconstruction and GT error map, where brighter areas indicate larger errors."
1090
+ },
1091
+ {
1092
+ "type": "table",
1093
+ "bbox": [
1094
+ 0.101,
1095
+ 0.424,
1096
+ 0.472,
1097
+ 0.616
1098
+ ],
1099
+ "angle": 0,
1100
+ "content": "<table><tr><td rowspan=\"2\">Tokenizer</td><td rowspan=\"2\">#Tokens</td><td rowspan=\"2\">#Params Tokenizer</td><td colspan=\"2\">rFVD ↓</td></tr><tr><td>UCF-101</td><td>K-600</td></tr><tr><td>MAGVIT-V2 [1]</td><td>1280</td><td>307M</td><td>8.6</td><td>-</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>20</td><td>13</td></tr><tr><td>MaskGIT [8]</td><td>4352</td><td>227M</td><td>240</td><td>202</td></tr><tr><td>VQGAN [13]</td><td>4352</td><td>227M</td><td>299</td><td>270</td></tr><tr><td>TATS [14]</td><td>4096</td><td>32M</td><td>162</td><td>-</td></tr><tr><td>MAGVIT [49]</td><td>4096</td><td>158M</td><td>58</td><td>-</td></tr><tr><td>OmniTok [2]</td><td>5120</td><td>82.2M</td><td>42</td><td>26</td></tr><tr><td>LARP-B [3]</td><td>1024</td><td>143M</td><td>64</td><td>35</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>35</td><td>23</td></tr><tr><td>SweetTok*</td><td>5120</td><td>128M</td><td>11</td><td>8</td></tr><tr><td>SweetTok</td><td>1280</td><td>128M</td><td>20</td><td>25</td></tr></table>"
1101
+ },
1102
+ {
1103
+ "type": "table_caption",
1104
+ "bbox": [
1105
+ 0.09,
1106
+ 0.619,
1107
+ 0.483,
1108
+ 0.675
1109
+ ],
1110
+ "angle": 0,
1111
+ "content": "Table 1. Video reconstruction FVD on the UCF-101 and K-600 dataset, using a frame resolution \\(256 \\times 256\\). “*” denotes training SweetTok without token compression. Lines in “gray” indicate results at a resolution of \\(128 \\times 128\\)."
1112
+ },
1113
+ {
1114
+ "type": "text",
1115
+ "bbox": [
1116
+ 0.09,
1117
+ 0.705,
1118
+ 0.484,
1119
+ 0.901
1120
+ ],
1121
+ "angle": 0,
1122
+ "content": "temporal layers, with both the encoder and decoder configured to a hidden dimension of 512. The latent space dimension is set to 256. For the LLM codebook quantizer, we exclude words with a frequency below 5, resulting in a selection of 5,078 nouns, 5,403 adjectives, 9,267 verbs, and 1,872 adverbs. This forms a spatial codebook of size 10,481 and a temporal codebook of size 11,139. The model is trained with a batch size of 8 for 1000K iterations. All training is performed on NVIDIA A100 GPUs. Adam [21] is employed for optimization \\((\\beta_{1} = 0.9\\) and \\(\\beta_{2} = 0.99)\\). During each stage, we use a cosine learning rate scheduler with a max learning rate of 1e-4 and a min learning rate of 1e-5, warmed up by 10K iterations."
1123
+ },
1124
+ {
1125
+ "type": "table",
1126
+ "bbox": [
1127
+ 0.536,
1128
+ 0.424,
1129
+ 0.882,
1130
+ 0.617
1131
+ ],
1132
+ "angle": 0,
1133
+ "content": "<table><tr><td>Tokenizer</td><td>Type</td><td>#Tokens</td><td>#Params Generator</td><td>gFVD ↓</td></tr><tr><td>MAGVIT [49]</td><td>AR</td><td>1024</td><td>306M</td><td>265</td></tr><tr><td>MAGVIT-V2 [1]</td><td>AR</td><td>1280</td><td>307M</td><td>109</td></tr><tr><td>MAGVIT [49]</td><td>MLM</td><td>1024</td><td>306M</td><td>76</td></tr><tr><td>MAGVIT-V2 [1]</td><td>MLM</td><td>1280</td><td>307M</td><td>58</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>57</td></tr><tr><td>CogVideo [16]</td><td>AR</td><td>6800</td><td>9.4B</td><td>626</td></tr><tr><td>TATS [14]</td><td>AR</td><td>4096</td><td>321M</td><td>332</td></tr><tr><td>Video-LaVIT [17]</td><td>AR</td><td>512</td><td>7B</td><td>280</td></tr><tr><td>OmniTok [2]</td><td>AR</td><td>5120</td><td>650M</td><td>191</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>99</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>650M</td><td>84</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>1.9B</td><td>65</td></tr></table>"
1134
+ },
1135
+ {
1136
+ "type": "table_caption",
1137
+ "bbox": [
1138
+ 0.513,
1139
+ 0.62,
1140
+ 0.907,
1141
+ 0.689
1142
+ ],
1143
+ "angle": 0,
1144
+ "content": "Table 2. Class-conditional video generation results on UCF-101. Each video is composed of 17 frames with a resolution of 256 × 256. \"AR\" and \"MLM\" represents autoregressive and masked-language-modeling generator. Lines in \"gray\" indicate results at a resolution of \\(128 \\times 128\\)."
1145
+ },
1146
+ {
1147
+ "type": "title",
1148
+ "bbox": [
1149
+ 0.513,
1150
+ 0.712,
1151
+ 0.827,
1152
+ 0.726
1153
+ ],
1154
+ "angle": 0,
1155
+ "content": "4.2. Video Reconstruction & Generation"
1156
+ },
1157
+ {
1158
+ "type": "text",
1159
+ "bbox": [
1160
+ 0.512,
1161
+ 0.735,
1162
+ 0.907,
1163
+ 0.901
1164
+ ],
1165
+ "angle": 0,
1166
+ "content": "We first evaluate the tokenization capability of SweetTok on the UCF-101 and K-600 video datasets. As shown in Table 1, SweetTok uses 1,280 tokens (256 spatial tokens and 1,024 temporal tokens), which is four times fewer than OmniTok's 5,120 tokens and comparable with LARP's 1024 tokens. Despite its high compression ratio, our method achieves a \\(52.3\\%\\) improvement on UCF-101 and competitive performance on K-600 compared with OmniTok. With a similar token count, SweetTok, despite its smaller model size compared to LARP-L, delivers a \\(42.8\\%\\) improvement on UCF-101 and comparable results on K-600. Our Sweet"
1167
+ },
1168
+ {
1169
+ "type": "page_number",
1170
+ "bbox": [
1171
+ 0.479,
1172
+ 0.945,
1173
+ 0.521,
1174
+ 0.957
1175
+ ],
1176
+ "angle": 0,
1177
+ "content": "23546"
1178
+ }
1179
+ ],
1180
+ [
1181
+ {
1182
+ "type": "image",
1183
+ "bbox": [
1184
+ 0.094,
1185
+ 0.089,
1186
+ 0.363,
1187
+ 0.312
1188
+ ],
1189
+ "angle": 0,
1190
+ "content": null
1191
+ },
1192
+ {
1193
+ "type": "image",
1194
+ "bbox": [
1195
+ 0.366,
1196
+ 0.089,
1197
+ 0.634,
1198
+ 0.312
1199
+ ],
1200
+ "angle": 0,
1201
+ "content": null
1202
+ },
1203
+ {
1204
+ "type": "image",
1205
+ "bbox": [
1206
+ 0.636,
1207
+ 0.089,
1208
+ 0.906,
1209
+ 0.312
1210
+ ],
1211
+ "angle": 0,
1212
+ "content": null
1213
+ },
1214
+ {
1215
+ "type": "image_caption",
1216
+ "bbox": [
1217
+ 0.09,
1218
+ 0.315,
1219
+ 0.908,
1220
+ 0.345
1221
+ ],
1222
+ "angle": 0,
1223
+ "content": "Figure 5. Class-conditional video generation result on UCF-101 dataset. The action class label of each row is: \"PlayingTabla\", \"TennisS-wing\", and \"HorseRiding\"."
1224
+ },
1225
+ {
1226
+ "type": "table",
1227
+ "bbox": [
1228
+ 0.104,
1229
+ 0.356,
1230
+ 0.468,
1231
+ 0.566
1232
+ ],
1233
+ "angle": 0,
1234
+ "content": "<table><tr><td>Tokenizer</td><td>#Tokens</td><td>Codebook Size</td><td>rFID ↓</td></tr><tr><td>VQGAN [13]</td><td>256</td><td>1024</td><td>7.94</td></tr><tr><td>RQ-VAE [24]</td><td>256</td><td>16384</td><td>3.20</td></tr><tr><td>MaskGIT[49]</td><td>256</td><td>1024</td><td>2.28</td></tr><tr><td>LlamaGen-16 [34]</td><td>256</td><td>16384</td><td>2.19</td></tr><tr><td>TiTok [51]</td><td>256</td><td>4096</td><td>1.71</td></tr><tr><td>TokenFlow [28]</td><td>256</td><td>4096</td><td>1.03</td></tr><tr><td>SweetTok</td><td>256</td><td>10481</td><td>0.73</td></tr><tr><td>ViT-VQGAN [47]</td><td>1024</td><td>8192</td><td>1.28</td></tr><tr><td>OmniTok [2]</td><td>1024</td><td>8192</td><td>1.11</td></tr><tr><td>OmniTok◇ [2]</td><td>1024</td><td>8192</td><td>0.69</td></tr><tr><td>LlamaGen-8 [34]</td><td>1024</td><td>16384</td><td>0.59</td></tr><tr><td>SweetTok*</td><td>1024</td><td>10481</td><td>0.37</td></tr></table>"
1235
+ },
1236
+ {
1237
+ "type": "table_caption",
1238
+ "bbox": [
1239
+ 0.09,
1240
+ 0.568,
1241
+ 0.483,
1242
+ 0.625
1243
+ ],
1244
+ "angle": 0,
1245
+ "content": "Table 3. Image reconstruction FID on the ImageNet dataset, using a resolution of \\(256 \\times 256\\). “◇” denotes continuous latent space without quantization. “*” denotes training SweetTok without token compression."
1246
+ },
1247
+ {
1248
+ "type": "text",
1249
+ "bbox": [
1250
+ 0.09,
1251
+ 0.644,
1252
+ 0.483,
1253
+ 0.748
1254
+ ],
1255
+ "angle": 0,
1256
+ "content": "Tok demonstrates significant performance gains, achieving \\(68.8\\%\\) and \\(28.5\\%\\) improvements in rFVD on UCF-101 and K-600 datasets, respectively, compared to LARP-B with similar model size. Notably, if we increase the token number to 5,120, SweetTok* significantly outperforms all baselines, achieving an rFVD of 10.74 on UCF-101 and 7.51 on K-600."
1257
+ },
1258
+ {
1259
+ "type": "text",
1260
+ "bbox": [
1261
+ 0.089,
1262
+ 0.75,
1263
+ 0.483,
1264
+ 0.902
1265
+ ],
1266
+ "angle": 0,
1267
+ "content": "The generative capability of SweetTok is evaluated on UCF-101 in a class-conditional generation task. Decoupled tokens extracted by SweetTok are concatenated to form training sequences for VideoGPT [45], following the same generation protocol as OmniTok. As shown in Table 2, SweetTok achieves a significant performance improvement, with a gFVD score of 84, \\(56\\%\\) lower than OmniTok's 191. This improvement is attributed to SweetTok's effective token compression, which substantially reduces the training complexity for downstream autoregressive models. With"
1268
+ },
1269
+ {
1270
+ "type": "text",
1271
+ "bbox": [
1272
+ 0.512,
1273
+ 0.359,
1274
+ 0.907,
1275
+ 0.448
1276
+ ],
1277
+ "angle": 0,
1278
+ "content": "equivalent token counts, SweetTok demonstrates superior performance, achieving a \\(15.1\\%\\) improvement in gFVD (84 vs. LARP's 99) at comparable generator sizes. Furthermore, it exhibits scaling law characteristics, with gFVD improving from 84 to 65 as the model scales up to 1.9B parameters."
1279
+ },
1280
+ {
1281
+ "type": "text",
1282
+ "bbox": [
1283
+ 0.512,
1284
+ 0.45,
1285
+ 0.909,
1286
+ 0.571
1287
+ ],
1288
+ "angle": 0,
1289
+ "content": "The visualization results are presented in Figure 5. To ensure a fair comparison, we select generated videos with similar appearances, as the generation process inherently involves randomness. The results demonstrate significantly improved detail, such as clearer human facial features and finer table textures. Additionally, SweetTok effectively preserves temporal consistency, even under large motion scenarios."
1290
+ },
1291
+ {
1292
+ "type": "title",
1293
+ "bbox": [
1294
+ 0.513,
1295
+ 0.578,
1296
+ 0.722,
1297
+ 0.594
1298
+ ],
1299
+ "angle": 0,
1300
+ "content": "4.3. Image Reconstruction"
1301
+ },
1302
+ {
1303
+ "type": "text",
1304
+ "bbox": [
1305
+ 0.512,
1306
+ 0.6,
1307
+ 0.907,
1308
+ 0.797
1309
+ ],
1310
+ "angle": 0,
1311
+ "content": "To demonstrate the flexibility of our decoupled query design, we show that by directly fine-tuning the spatial branch \\(DQAE_{s}\\), SweetTok also achieves advanced performance for image reconstruction on ImageNet. As shown in Table 3, we compare SweetTok with recent methods under various token compression settings. With 256 spatial tokens, SweetTok outperforms TiTok [51] by \\(27.8\\%\\), reducing rFID from 1.01 to 0.73. When using 1,024 spatial tokens, SweetTok* achieves a significant improvement over both VQ-based and non-VQ-based methods (marked \\(\\diamond\\)), achieving an rFID of 0.37, which surpasses LlamaGen-8 [34] by \\(37.3\\%\\). Visualization results are shown in Figure 6, SweetTok maintains better global appearance and local details."
1312
+ },
1313
+ {
1314
+ "type": "title",
1315
+ "bbox": [
1316
+ 0.513,
1317
+ 0.803,
1318
+ 0.681,
1319
+ 0.818
1320
+ ],
1321
+ "angle": 0,
1322
+ "content": "4.4. Ablation Studies"
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "bbox": [
1327
+ 0.512,
1328
+ 0.825,
1329
+ 0.907,
1330
+ 0.903
1331
+ ],
1332
+ "angle": 0,
1333
+ "content": "Decoupled Query AutoEncoder. We demonstrate the effectiveness of our spatial-temporal decoupling design for token compression. As shown in Table 4, naively downsampling video tokens by 1D linear-interpolation from 5,120 patches into 1,280 tokens results in poor performance of a"
1334
+ },
1335
+ {
1336
+ "type": "page_number",
1337
+ "bbox": [
1338
+ 0.479,
1339
+ 0.945,
1340
+ 0.521,
1341
+ 0.958
1342
+ ],
1343
+ "angle": 0,
1344
+ "content": "23547"
1345
+ }
1346
+ ],
1347
+ [
1348
+ {
1349
+ "type": "image",
1350
+ "bbox": [
1351
+ 0.094,
1352
+ 0.089,
1353
+ 0.189,
1354
+ 0.38
1355
+ ],
1356
+ "angle": 0,
1357
+ "content": null
1358
+ },
1359
+ {
1360
+ "type": "image_caption",
1361
+ "bbox": [
1362
+ 0.131,
1363
+ 0.381,
1364
+ 0.152,
1365
+ 0.391
1366
+ ],
1367
+ "angle": 0,
1368
+ "content": "GT"
1369
+ },
1370
+ {
1371
+ "type": "image",
1372
+ "bbox": [
1373
+ 0.192,
1374
+ 0.089,
1375
+ 0.288,
1376
+ 0.38
1377
+ ],
1378
+ "angle": 0,
1379
+ "content": null
1380
+ },
1381
+ {
1382
+ "type": "image_caption",
1383
+ "bbox": [
1384
+ 0.206,
1385
+ 0.381,
1386
+ 0.271,
1387
+ 0.393
1388
+ ],
1389
+ "angle": 0,
1390
+ "content": "TiTok [51]"
1391
+ },
1392
+ {
1393
+ "type": "image",
1394
+ "bbox": [
1395
+ 0.29,
1396
+ 0.089,
1397
+ 0.385,
1398
+ 0.38
1399
+ ],
1400
+ "angle": 0,
1401
+ "content": null
1402
+ },
1403
+ {
1404
+ "type": "image_caption",
1405
+ "bbox": [
1406
+ 0.298,
1407
+ 0.381,
1408
+ 0.376,
1409
+ 0.393
1410
+ ],
1411
+ "angle": 0,
1412
+ "content": "OmniTok [2]"
1413
+ },
1414
+ {
1415
+ "type": "image",
1416
+ "bbox": [
1417
+ 0.387,
1418
+ 0.089,
1419
+ 0.482,
1420
+ 0.38
1421
+ ],
1422
+ "angle": 0,
1423
+ "content": null
1424
+ },
1425
+ {
1426
+ "type": "image_caption",
1427
+ "bbox": [
1428
+ 0.405,
1429
+ 0.381,
1430
+ 0.464,
1431
+ 0.393
1432
+ ],
1433
+ "angle": 0,
1434
+ "content": "SweetTok"
1435
+ },
1436
+ {
1437
+ "type": "image_caption",
1438
+ "bbox": [
1439
+ 0.122,
1440
+ 0.397,
1441
+ 0.451,
1442
+ 0.411
1443
+ ],
1444
+ "angle": 0,
1445
+ "content": "Figure 6. Visualization of image reconstruction results."
1446
+ },
1447
+ {
1448
+ "type": "table",
1449
+ "bbox": [
1450
+ 0.113,
1451
+ 0.42,
1452
+ 0.46,
1453
+ 0.489
1454
+ ],
1455
+ "angle": 0,
1456
+ "content": "<table><tr><td>Compression Method</td><td>#Tokens</td><td>rFVD ↓</td></tr><tr><td>Vanilla Downsample</td><td>1280</td><td>227.65</td></tr><tr><td>Vanilla Query-based (LARP [3])</td><td>1024</td><td>35.15</td></tr><tr><td>Decoupled Query-based (DQAE)</td><td>1280</td><td>20.46</td></tr></table>"
1457
+ },
1458
+ {
1459
+ "type": "table_caption",
1460
+ "bbox": [
1461
+ 0.09,
1462
+ 0.493,
1463
+ 0.483,
1464
+ 0.521
1465
+ ],
1466
+ "angle": 0,
1467
+ "content": "Table 4. Ablation study of different token count compression method for video tokenizers."
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "bbox": [
1472
+ 0.09,
1473
+ 0.543,
1474
+ 0.484,
1475
+ 0.665
1476
+ ],
1477
+ "angle": 0,
1478
+ "content": "rFVD of 227.65. Training SweetTok without decoupling (similarly to vanilla query-based method in LARP [3]) results in sub-optimal result, obtaining a rFVD of 35.15. Decoupling query (DQAE) achieves best result of 20.46 rFVD. There are two factors: (1) the flattening operation discards substantial consecutive temporal information, and (2) without decoupling, the model struggles to learn efficiently from the intertwined temporal and spatial information."
1479
+ },
1480
+ {
1481
+ "type": "text",
1482
+ "bbox": [
1483
+ 0.09,
1484
+ 0.675,
1485
+ 0.484,
1486
+ 0.9
1487
+ ],
1488
+ "angle": 0,
1489
+ "content": "Motion-enhanced Language Codebook. We evaluate the effects of the motion-enhanced language codebooks (MLC) on UCF-101 datasets. As illustrated in Table 5, vanilla language codebook design enhances rFVD performance from 29.45 to 24.80, leading to a sub-optimal result. Notably, our motion-enhanced temporal language codebook significantly benefits video reconstruction tasks, further reducing the rFVD score from 24.80 to 20.46. This underscores the importance of our unique design for video modalities. Additionally, we compare different types of language-based embeddings, such as using Qwen-2.5B [4] embeddings in place of CLIP [30] embeddings. The experiments indicate that naively increasing the complexity of pre-trained language model is not cost-effective for SweetTok."
1490
+ },
1491
+ {
1492
+ "type": "table",
1493
+ "bbox": [
1494
+ 0.553,
1495
+ 0.089,
1496
+ 0.865,
1497
+ 0.194
1498
+ ],
1499
+ "angle": 0,
1500
+ "content": "<table><tr><td>Methods</td><td>rFVD ↓</td></tr><tr><td>Baseline (w/o LC)</td><td>29.45</td></tr><tr><td>+ LC</td><td>24.80</td></tr><tr><td>+ MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ CLIP [30]-based MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ Qwen [4]-based MLC</td><td>20.12</td></tr></table>"
1501
+ },
1502
+ {
1503
+ "type": "table_caption",
1504
+ "bbox": [
1505
+ 0.513,
1506
+ 0.197,
1507
+ 0.907,
1508
+ 0.24
1509
+ ],
1510
+ "angle": 0,
1511
+ "content": "Table 5. Ablation study of different codebooks, including vanilla language codebook (LC), motion-enhanced language codebook (MLC) and more advanced pre-trained language codebook."
1512
+ },
1513
+ {
1514
+ "type": "table",
1515
+ "bbox": [
1516
+ 0.529,
1517
+ 0.242,
1518
+ 0.889,
1519
+ 0.361
1520
+ ],
1521
+ "angle": 0,
1522
+ "content": "<table><tr><td>Methods</td><td colspan=\"4\">ImageNet</td><td>UCF-101</td></tr><tr><td>K-way-N-shot</td><td>2-1</td><td>2-3</td><td>2-5</td><td>Avg</td><td>5-5</td></tr><tr><td>SPAE [50]</td><td>84.8</td><td>92.5</td><td>92.6</td><td>89.9</td><td>-</td></tr><tr><td>V2L [54]</td><td>76.3</td><td>91.2</td><td>95.3</td><td>87.6</td><td>-</td></tr><tr><td>ARN[53]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>83.1</td></tr><tr><td>HF-AR [23]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>86.4</td></tr><tr><td>SweetTok</td><td>86.8</td><td>90.5</td><td>95.2</td><td>90.8</td><td>90.1</td></tr></table>"
1523
+ },
1524
+ {
1525
+ "type": "table_caption",
1526
+ "bbox": [
1527
+ 0.513,
1528
+ 0.365,
1529
+ 0.907,
1530
+ 0.393
1531
+ ],
1532
+ "angle": 0,
1533
+ "content": "Table 6. Few-shot visual classification accuracy \\((\\uparrow)\\), evaluated on both image and video modality."
1534
+ },
1535
+ {
1536
+ "type": "title",
1537
+ "bbox": [
1538
+ 0.513,
1539
+ 0.408,
1540
+ 0.8,
1541
+ 0.424
1542
+ ],
1543
+ "angle": 0,
1544
+ "content": "4.5. Visual Semantic Comprehension"
1545
+ },
1546
+ {
1547
+ "type": "text",
1548
+ "bbox": [
1549
+ 0.512,
1550
+ 0.43,
1551
+ 0.907,
1552
+ 0.718
1553
+ ],
1554
+ "angle": 0,
1555
+ "content": "Few-Shot Visual Classification. We conducted experiments on few-shot image classification and video action recognition tasks. SweetTok extracted visual tokens and transformed them into natural language words via our language-based codebook. Subsequently, CLIP computed the similarity between the visual inputs and text embeddings. Top 21 tokens with the highest similarity were selected to form a prompt for prediction using the Qwen LLM. For the image classification task, we adhered to the V2L protocol [54], comparing SweetTok against SPAE [50] and V2L. In the video action recognition task, we used ARN [53] and HF-AR [23] as baselines. The results in Table 6 indicate that SweetTok achieved an accuracy of \\(90.8\\%\\) on the miniImageNet dataset, surpassing SPAE \\(89.9\\%\\) and V2L \\(87.6\\%\\). On the UCF-101 dataset, SweetTok attain an average accuracy of \\(90.1\\%\\), outperforming ARN \\(83.1\\%\\) and HF-AR \\(86.4\\%\\). These findings demonstrate SweetTok's semantic ability in both image and video tasks. More results are in the supplementary."
1556
+ },
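A sketch of the prompt construction described above; the template string and helper names are illustrative assumptions, not the exact prompt used in the paper:

```python
def build_fewshot_prompt(token_words, clip_scores, k=21):
    """Keep the k latent words most similar to the visual input (per CLIP)
    and pack them into an in-context prompt for the LLM."""
    ranked = sorted(zip(token_words, clip_scores),
                    key=lambda p: p[1], reverse=True)
    kept = [w for w, _ in ranked[:k]]
    return "Visual tokens: " + ", ".join(kept) + ". Predict the class label:"
```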
1557
+ {
1558
+ "type": "title",
1559
+ "bbox": [
1560
+ 0.513,
1561
+ 0.723,
1562
+ 0.641,
1563
+ 0.739
1564
+ ],
1565
+ "angle": 0,
1566
+ "content": "5. Conclusions"
1567
+ },
1568
+ {
1569
+ "type": "text",
1570
+ "bbox": [
1571
+ 0.512,
1572
+ 0.749,
1573
+ 0.907,
1574
+ 0.9
1575
+ ],
1576
+ "angle": 0,
1577
+ "content": "We present SweetTok, an efficient video tokenization framework that compresses spatial and temporal information through the decoupled query autoencoder. Combined with motion-enhanced language codebook, SweetTok reduces token count for video data more effectively, achieving higher reconstruction fidelity compared to previous state-of-the-arts. Our approach offers a compact representation of video data, making it well-suited for downstream tasks such as video generation, understanding, marking a significant step in efficient video tokenization."
1578
+ },
1579
+ {
1580
+ "type": "page_number",
1581
+ "bbox": [
1582
+ 0.479,
1583
+ 0.945,
1584
+ 0.519,
1585
+ 0.957
1586
+ ],
1587
+ "angle": 0,
1588
+ "content": "23548"
1589
+ }
1590
+ ],
1591
+ [
1592
+ {
1593
+ "type": "title",
1594
+ "bbox": [
1595
+ 0.093,
1596
+ 0.09,
1597
+ 0.188,
1598
+ 0.106
1599
+ ],
1600
+ "angle": 0,
1601
+ "content": "References"
1602
+ },
1603
+ {
1604
+ "type": "ref_text",
1605
+ "bbox": [
1606
+ 0.101,
1607
+ 0.116,
1608
+ 0.482,
1609
+ 0.143
1610
+ ],
1611
+ "angle": 0,
1612
+ "content": "[1] Language model beats diffusion-tokenizer is key to visual generation. ICLR, 2023. 1, 2, 3, 6"
1613
+ },
1614
+ {
1615
+ "type": "ref_text",
1616
+ "bbox": [
1617
+ 0.101,
1618
+ 0.145,
1619
+ 0.483,
1620
+ 0.172
1621
+ ],
1622
+ "angle": 0,
1623
+ "content": "[2] Omnitokensizer: A joint image-video tokenizer for visual generation. NeurIPS, 2024. 1, 2, 3, 4, 5, 6, 7, 8"
1624
+ },
1625
+ {
1626
+ "type": "ref_text",
1627
+ "bbox": [
1628
+ 0.102,
1629
+ 0.174,
1630
+ 0.482,
1631
+ 0.201
1632
+ ],
1633
+ "angle": 0,
1634
+ "content": "[3] Larp: Tokenizing videos with a learned autoregressive generative prior. *ICLR Oral*, 2025. 1, 2, 3, 4, 6, 7, 8"
1635
+ },
1636
+ {
1637
+ "type": "ref_text",
1638
+ "bbox": [
1639
+ 0.102,
1640
+ 0.203,
1641
+ 0.483,
1642
+ 0.258
1643
+ ],
1644
+ "angle": 0,
1645
+ "content": "[4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.8"
1646
+ },
1647
+ {
1648
+ "type": "ref_text",
1649
+ "bbox": [
1650
+ 0.102,
1651
+ 0.26,
1652
+ 0.482,
1653
+ 0.288
1654
+ ],
1655
+ "angle": 0,
1656
+ "content": "[5] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 3"
1657
+ },
1658
+ {
1659
+ "type": "ref_text",
1660
+ "bbox": [
1661
+ 0.102,
1662
+ 0.29,
1663
+ 0.483,
1664
+ 0.344
1665
+ ],
1666
+ "angle": 0,
1667
+ "content": "[6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213-229. Springer, 2020. 2"
1668
+ },
1669
+ {
1670
+ "type": "ref_text",
1671
+ "bbox": [
1672
+ 0.102,
1673
+ 0.346,
1674
+ 0.483,
1675
+ 0.387
1676
+ ],
1677
+ "angle": 0,
1678
+ "content": "[7] Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about kinetics-600. arXiv preprint arXiv:1808.01340, 2018. 5"
1679
+ },
1680
+ {
1681
+ "type": "ref_text",
1682
+ "bbox": [
1683
+ 0.102,
1684
+ 0.389,
1685
+ 0.482,
1686
+ 0.431
1687
+ ],
1688
+ "angle": 0,
1689
+ "content": "[8] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In CVPR, pages 11315-11325, 2022. 1, 2, 6"
1690
+ },
1691
+ {
1692
+ "type": "ref_text",
1693
+ "bbox": [
1694
+ 0.102,
1695
+ 0.433,
1696
+ 0.482,
1697
+ 0.501
1698
+ ],
1699
+ "angle": 0,
1700
+ "content": "[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. JMLR, 24(240):1-113, 2023. 3"
1701
+ },
1702
+ {
1703
+ "type": "ref_text",
1704
+ "bbox": [
1705
+ 0.094,
1706
+ 0.503,
1707
+ 0.482,
1708
+ 0.544
1709
+ ],
1710
+ "angle": 0,
1711
+ "content": "[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009. 2, 5"
1712
+ },
1713
+ {
1714
+ "type": "ref_text",
1715
+ "bbox": [
1716
+ 0.094,
1717
+ 0.546,
1718
+ 0.482,
1719
+ 0.574
1720
+ ],
1721
+ "angle": 0,
1722
+ "content": "[11] Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2018. 2, 3"
1723
+ },
1724
+ {
1725
+ "type": "ref_text",
1726
+ "bbox": [
1727
+ 0.094,
1728
+ 0.576,
1729
+ 0.482,
1730
+ 0.603
1731
+ ],
1732
+ "angle": 0,
1733
+ "content": "[12] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2020. 1, 2"
1734
+ },
1735
+ {
1736
+ "type": "ref_text",
1737
+ "bbox": [
1738
+ 0.094,
1739
+ 0.605,
1740
+ 0.482,
1741
+ 0.645
1742
+ ],
1743
+ "angle": 0,
1744
+ "content": "[13] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In CVPR, 2021. 2, 5, 6, 7"
1745
+ },
1746
+ {
1747
+ "type": "ref_text",
1748
+ "bbox": [
1749
+ 0.094,
1750
+ 0.647,
1751
+ 0.482,
1752
+ 0.716
1753
+ ],
1754
+ "angle": 0,
1755
+ "content": "[14] Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, and Devi Parikh. Long video generation with time-agnostic vqgan and time-sensitive transformer. In ECCV, pages 102-118. Springer, 2022. 1, 2, 3, 6"
1756
+ },
1757
+ {
1758
+ "type": "ref_text",
1759
+ "bbox": [
1760
+ 0.094,
1761
+ 0.718,
1762
+ 0.482,
1763
+ 0.773
1764
+ ],
1765
+ "angle": 0,
1766
+ "content": "[15] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. NeurIPS, 30, 2017. 5"
1767
+ },
1768
+ {
1769
+ "type": "ref_text",
1770
+ "bbox": [
1771
+ 0.094,
1772
+ 0.775,
1773
+ 0.482,
1774
+ 0.829
1775
+ ],
1776
+ "angle": 0,
1777
+ "content": "[16] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 6"
1778
+ },
1779
+ {
1780
+ "type": "ref_text",
1781
+ "bbox": [
1782
+ 0.094,
1783
+ 0.832,
1784
+ 0.482,
1785
+ 0.901
1786
+ ],
1787
+ "angle": 0,
1788
+ "content": "[17] Yang Jin, Zhicheng Sun, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, and Yadong Mu. Video-lavit: Unified videolanguage pre-training with decoupled visual-motional tokenization. In ICML, pages 22185-22209, 2024. 1, 2, 6"
1789
+ },
1790
+ {
1791
+ "type": "list",
1792
+ "bbox": [
1793
+ 0.094,
1794
+ 0.116,
1795
+ 0.483,
1796
+ 0.901
1797
+ ],
1798
+ "angle": 0,
1799
+ "content": null
1800
+ },
1801
+ {
1802
+ "type": "ref_text",
1803
+ "bbox": [
1804
+ 0.517,
1805
+ 0.093,
1806
+ 0.905,
1807
+ 0.147
1808
+ ],
1809
+ "angle": 0,
1810
+ "content": "[18] Yang Jin, Kun Xu, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Yadong Mu, et al. Unified language-vision pretraining in llm with dynamic discrete visual tokenization. In ICLR, 2024. 2"
1811
+ },
1812
+ {
1813
+ "type": "ref_text",
1814
+ "bbox": [
1815
+ 0.518,
1816
+ 0.15,
1817
+ 0.905,
1818
+ 0.218
1819
+ ],
1820
+ "angle": 0,
1821
+ "content": "[19] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 5"
1822
+ },
1823
+ {
1824
+ "type": "ref_text",
1825
+ "bbox": [
1826
+ 0.517,
1827
+ 0.221,
1828
+ 0.904,
1829
+ 0.246
1830
+ ],
1831
+ "angle": 0,
1832
+ "content": "[20] Diederik P Kingma. Auto-encoding variational bayes. ICLR, 2013. 2"
1833
+ },
1834
+ {
1835
+ "type": "ref_text",
1836
+ "bbox": [
1837
+ 0.517,
1838
+ 0.249,
1839
+ 0.904,
1840
+ 0.275
1841
+ ],
1842
+ "angle": 0,
1843
+ "content": "[21] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6"
1844
+ },
1845
+ {
1846
+ "type": "ref_text",
1847
+ "bbox": [
1848
+ 0.517,
1849
+ 0.278,
1850
+ 0.904,
1851
+ 0.305
1852
+ ],
1853
+ "angle": 0,
1854
+ "content": "[22] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *ICLR*, 2016. 3"
1855
+ },
1856
+ {
1857
+ "type": "ref_text",
1858
+ "bbox": [
1859
+ 0.517,
1860
+ 0.308,
1861
+ 0.905,
1862
+ 0.347
1863
+ ],
1864
+ "angle": 0,
1865
+ "content": "[23] Neeraj Kumar and Siddhansh Narang. Few shot activity recognition using variational inference. arXiv preprint arXiv:2108.08990, 2021. 8"
1866
+ },
1867
+ {
1868
+ "type": "ref_text",
1869
+ "bbox": [
1870
+ 0.517,
1871
+ 0.35,
1872
+ 0.905,
1873
+ 0.403
1874
+ ],
1875
+ "angle": 0,
1876
+ "content": "[24] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In CVPR, pages 11523-11532, 2022. 7"
1877
+ },
1878
+ {
1879
+ "type": "ref_text",
1880
+ "bbox": [
1881
+ 0.517,
1882
+ 0.406,
1883
+ 0.905,
1884
+ 0.461
1885
+ ],
1886
+ "angle": 0,
1887
+ "content": "[25] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, pages 19730–19742, 2023. 2, 4"
1888
+ },
1889
+ {
1890
+ "type": "ref_text",
1891
+ "bbox": [
1892
+ 0.517,
1893
+ 0.463,
1894
+ 0.905,
1895
+ 0.503
1896
+ ],
1897
+ "angle": 0,
1898
+ "content": "[26] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. EMNLP, 2023. 1, 2"
1899
+ },
1900
+ {
1901
+ "type": "ref_text",
1902
+ "bbox": [
1903
+ 0.517,
1904
+ 0.505,
1905
+ 0.905,
1906
+ 0.545
1907
+ ],
1908
+ "angle": 0,
1909
+ "content": "[27] Hao Liu, Wilson Yan, and Pieter Abbeel. Language quantized autoencoders: Towards unsupervised text-image alignment. NeurIPS, 36, 2023. 2, 3, 5"
1910
+ },
1911
+ {
1912
+ "type": "ref_text",
1913
+ "bbox": [
1914
+ 0.518,
1915
+ 0.548,
1916
+ 0.905,
1917
+ 0.616
1918
+ ],
1919
+ "angle": 0,
1920
+ "content": "[28] Liao Qu, Huichao Zhang, Yiheng Liu, Xu Wang, Yi Jiang, Yiming Gao, Hu Ye, Daniel K Du, Zehuan Yuan, and Xinglong Wu. Tokenflow: Unified image tokenizer for multimodal understanding and generation. arXiv preprint arXiv:2412.03069, 2024. 7"
1921
+ },
1922
+ {
1923
+ "type": "ref_text",
1924
+ "bbox": [
1925
+ 0.517,
1926
+ 0.618,
1927
+ 0.905,
1928
+ 0.659
1929
+ ],
1930
+ "angle": 0,
1931
+ "content": "[29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 2"
1932
+ },
1933
+ {
1934
+ "type": "ref_text",
1935
+ "bbox": [
1936
+ 0.517,
1937
+ 0.661,
1938
+ 0.905,
1939
+ 0.73
1940
+ ],
1941
+ "angle": 0,
1942
+ "content": "[30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 3, 5, 8"
1943
+ },
1944
+ {
1945
+ "type": "ref_text",
1946
+ "bbox": [
1947
+ 0.517,
1948
+ 0.732,
1949
+ 0.905,
1950
+ 0.772
1951
+ ],
1952
+ "angle": 0,
1953
+ "content": "[31] Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. NeurIPS, 32, 2019. 2"
1954
+ },
1955
+ {
1956
+ "type": "ref_text",
1957
+ "bbox": [
1958
+ 0.518,
1959
+ 0.775,
1960
+ 0.905,
1961
+ 0.828
1962
+ ],
1963
+ "angle": 0,
1964
+ "content": "[32] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 3"
1965
+ },
1966
+ {
1967
+ "type": "ref_text",
1968
+ "bbox": [
1969
+ 0.517,
1970
+ 0.831,
1971
+ 0.905,
1972
+ 0.87
1973
+ ],
1974
+ "angle": 0,
1975
+ "content": "[33] K Soomro. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 2, 5"
1976
+ },
1977
+ {
1978
+ "type": "ref_text",
1979
+ "bbox": [
1980
+ 0.517,
1981
+ 0.873,
1982
+ 0.905,
1983
+ 0.901
1984
+ ],
1985
+ "angle": 0,
1986
+ "content": "[34] Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model"
1987
+ },
1988
+ {
1989
+ "type": "list",
1990
+ "bbox": [
1991
+ 0.517,
1992
+ 0.093,
1993
+ 0.905,
1994
+ 0.901
1995
+ ],
1996
+ "angle": 0,
1997
+ "content": null
1998
+ },
1999
+ {
2000
+ "type": "page_number",
2001
+ "bbox": [
2002
+ 0.48,
2003
+ 0.945,
2004
+ 0.519,
2005
+ 0.957
2006
+ ],
2007
+ "angle": 0,
2008
+ "content": "23549"
2009
+ }
2010
+ ],
2011
+ [
2012
+ {
2013
+ "type": "ref_text",
2014
+ "bbox": [
2015
+ 0.125,
2016
+ 0.092,
2017
+ 0.482,
2018
+ 0.12
2019
+ ],
2020
+ "angle": 0,
2021
+ "content": "beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024. 7"
2022
+ },
2023
+ {
2024
+ "type": "ref_text",
2025
+ "bbox": [
2026
+ 0.093,
2027
+ 0.122,
2028
+ 0.483,
2029
+ 0.177
2030
+ ],
2031
+ "angle": 0,
2032
+ "content": "[35] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. ICLR, 2024. 1, 2"
2033
+ },
2034
+ {
2035
+ "type": "ref_text",
2036
+ "bbox": [
2037
+ 0.093,
2038
+ 0.179,
2039
+ 0.482,
2040
+ 0.247
2041
+ ],
2042
+ "angle": 0,
2043
+ "content": "[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 2"
2044
+ },
2045
+ {
2046
+ "type": "ref_text",
2047
+ "bbox": [
2048
+ 0.093,
2049
+ 0.249,
2050
+ 0.482,
2051
+ 0.303
2052
+ ],
2053
+ "angle": 0,
2054
+ "content": "[37] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. NeurIPS, 34:200-212, 2021. 5"
2055
+ },
2056
+ {
2057
+ "type": "ref_text",
2058
+ "bbox": [
2059
+ 0.093,
2060
+ 0.305,
2061
+ 0.482,
2062
+ 0.36
2063
+ ],
2064
+ "angle": 0,
2065
+ "content": "[38] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 5"
2066
+ },
2067
+ {
2068
+ "type": "ref_text",
2069
+ "bbox": [
2070
+ 0.093,
2071
+ 0.362,
2072
+ 0.482,
2073
+ 0.39
2074
+ ],
2075
+ "angle": 0,
2076
+ "content": "[39] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. NeurIPS, 30, 2017. 1, 2, 5"
2077
+ },
2078
+ {
2079
+ "type": "ref_text",
2080
+ "bbox": [
2081
+ 0.093,
2082
+ 0.392,
2083
+ 0.461,
2084
+ 0.405
2085
+ ],
2086
+ "angle": 0,
2087
+ "content": "[40] A Vaswani. Attention is all you need. NeurIPS, 2017. 2"
2088
+ },
2089
+ {
2090
+ "type": "ref_text",
2091
+ "bbox": [
2092
+ 0.093,
2093
+ 0.407,
2094
+ 0.482,
2095
+ 0.475
2096
+ ],
2097
+ "angle": 0,
2098
+ "content": "[41] Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual descriptions. In ICLR, 2023. 1, 2"
2099
+ },
2100
+ {
2101
+ "type": "ref_text",
2102
+ "bbox": [
2103
+ 0.093,
2104
+ 0.477,
2105
+ 0.482,
2106
+ 0.532
2107
+ ],
2108
+ "angle": 0,
2109
+ "content": "[42] Junke Wang, Dongdong Chen, Chong Luo, Bo He, Lu Yuan, Zuxuan Wu, and Yu-Gang Jiang. Omnivid: A generative framework for universal video understanding. In CVPR, pages 18209-18220, 2024. 1, 2"
2110
+ },
2111
+ {
2112
+ "type": "ref_text",
2113
+ "bbox": [
2114
+ 0.093,
2115
+ 0.534,
2116
+ 0.482,
2117
+ 0.588
2118
+ ],
2119
+ "angle": 0,
2120
+ "content": "[43] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. Internvideo2: Scaling video foundation models for multimodal video understanding. ECCV, 2024. 1, 2"
2121
+ },
2122
+ {
2123
+ "type": "ref_text",
2124
+ "bbox": [
2125
+ 0.093,
2126
+ 0.591,
2127
+ 0.482,
2128
+ 0.631
2129
+ ],
2130
+ "angle": 0,
2131
+ "content": "[44] Chen Wei, Chenxi Liu, Siyuan Qiao, Zhishuai Zhang, Alan Yuille, and Jiahui Yu. De-diffusion makes text a strong cross-modal interface. In CVPR, pages 13492-13503, 2024. 3"
2132
+ },
2133
+ {
2134
+ "type": "ref_text",
2135
+ "bbox": [
2136
+ 0.093,
2137
+ 0.633,
2138
+ 0.482,
2139
+ 0.674
2140
+ ],
2141
+ "angle": 0,
2142
+ "content": "[45] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 7"
2143
+ },
2144
+ {
2145
+ "type": "ref_text",
2146
+ "bbox": [
2147
+ 0.093,
2148
+ 0.676,
2149
+ 0.482,
2150
+ 0.731
2151
+ ],
2152
+ "angle": 0,
2153
+ "content": "[46] Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, and Seunghoon Hong. Towards end-to-end generative modeling of long videos with memory-efficient bidirectional transformers. In CVPR, pages 22888-22897, 2023. 1"
2154
+ },
2155
+ {
2156
+ "type": "ref_text",
2157
+ "bbox": [
2158
+ 0.093,
2159
+ 0.733,
2160
+ 0.482,
2161
+ 0.787
2162
+ ],
2163
+ "angle": 0,
2164
+ "content": "[47] Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. ICLR, 2022. 2, 7"
2165
+ },
2166
+ {
2167
+ "type": "ref_text",
2168
+ "bbox": [
2169
+ 0.093,
2170
+ 0.789,
2171
+ 0.482,
2172
+ 0.856
2173
+ ],
2174
+ "angle": 0,
2175
+ "content": "[48] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. ICLR, 2024. 2"
2176
+ },
2177
+ {
2178
+ "type": "ref_text",
2179
+ "bbox": [
2180
+ 0.093,
2181
+ 0.859,
2182
+ 0.482,
2183
+ 0.901
2184
+ ],
2185
+ "angle": 0,
2186
+ "content": "[49] Lijun Yu, Yong Cheng, Kihyuk Sohn, José Lezama, Han Zhang, Huiwen Chang, Alexander G Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, and Lu Jiang. MAGVIT:"
2187
+ },
2188
+ {
2189
+ "type": "list",
2190
+ "bbox": [
2191
+ 0.093,
2192
+ 0.092,
2193
+ 0.483,
2194
+ 0.901
2195
+ ],
2196
+ "angle": 0,
2197
+ "content": null
2198
+ },
2199
+ {
2200
+ "type": "ref_text",
2201
+ "bbox": [
2202
+ 0.549,
2203
+ 0.093,
2204
+ 0.905,
2205
+ 0.119
2206
+ ],
2207
+ "angle": 0,
2208
+ "content": "Masked generative video transformer. In CVPR, 2023. 1, 2, 3, 6, 7"
2209
+ },
2210
+ {
2211
+ "type": "ref_text",
2212
+ "bbox": [
2213
+ 0.517,
2214
+ 0.122,
2215
+ 0.905,
2216
+ 0.189
2217
+ ],
2218
+ "angle": 0,
2219
+ "content": "[50] Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, et al. Spae: Semantic pyramid autoencoder for multimodal generation with frozen llms. NeurIPS, 36, 2023. 2, 3, 5, 8"
2220
+ },
2221
+ {
2222
+ "type": "ref_text",
2223
+ "bbox": [
2224
+ 0.517,
2225
+ 0.192,
2226
+ 0.905,
2227
+ 0.246
2228
+ ],
2229
+ "angle": 0,
2230
+ "content": "[51] Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, and Liang-Chieh Chen. An image is worth 32 tokens for reconstruction and generation. NeurIPS, 2024. 1, 2, 4, 7, 8"
2231
+ },
2232
+ {
2233
+ "type": "ref_text",
2234
+ "bbox": [
2235
+ 0.517,
2236
+ 0.249,
2237
+ 0.905,
2238
+ 0.304
2239
+ ],
2240
+ "angle": 0,
2241
+ "content": "[52] Baoquan Zhang, Huaibin Wang, Chuyao Luo, Xutao Li, Guotao Liang, Yunming Ye, Xiaochen Qi, and Yao He. Codebook transfer with part-of-speech for vector-quantized image modeling. In CVPR, pages 7757-7766, 2024. 2, 3, 5"
2242
+ },
2243
+ {
2244
+ "type": "ref_text",
2245
+ "bbox": [
2246
+ 0.517,
2247
+ 0.305,
2248
+ 0.905,
2249
+ 0.36
2250
+ ],
2251
+ "angle": 0,
2252
+ "content": "[53] Hongguang Zhang, Li Zhang, Xiaojuan Qi, Hongdong Li, Philip HS Torr, and Piotr Koniusz. Few-shot action recognition with permutation-invariant attention. In ECCV, pages 525-542. Springer, 2020. 5, 8"
2253
+ },
2254
+ {
2255
+ "type": "ref_text",
2256
+ "bbox": [
2257
+ 0.517,
2258
+ 0.362,
2259
+ 0.905,
2260
+ 0.404
2261
+ ],
2262
+ "angle": 0,
2263
+ "content": "[54] Lei Zhu, Fangyun Wei, and Yanye Lu. Beyond text: Frozen large language models in visual signal comprehension. In CVPR, pages 27047-27057, 2024. 3, 8"
2264
+ },
2265
+ {
2266
+ "type": "list",
2267
+ "bbox": [
2268
+ 0.517,
2269
+ 0.093,
2270
+ 0.905,
2271
+ 0.404
2272
+ ],
2273
+ "angle": 0,
2274
+ "content": null
2275
+ },
2276
+ {
2277
+ "type": "page_number",
2278
+ "bbox": [
2279
+ 0.48,
2280
+ 0.945,
2281
+ 0.52,
2282
+ 0.957
2283
+ ],
2284
+ "angle": 0,
2285
+ "content": "23550"
2286
+ }
2287
+ ]
2288
+ ]
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/394730b6-2240-4ddc-9556-9a00295dec81_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c9b6a7d64e59e3c52b839ac6818cbb40c0238e5b0909adb3b24ef6a84885ea00
3
+ size 8522085
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/full.md ADDED
@@ -0,0 +1,330 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # SweetTok: Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization
2
+
3
+ Zhentao Tan,\* Ben Xue,\* Jian Jia, Junhao Wang, Wencai Ye, Shaoyun Shi, Mingjie Sun
4
+ Wenjin Wu, Quan Chen†, Peng Jiang
5
+ Kuaishou Technology, Beijing, China
6
+
7
+ {tanzhentao03, xueben, jiajian, wangjunhao05, yewencai, shishaoyun, sunmingjie, wuwenjin, chenquan06, jiangpeng}@kuaishou.com
8
+
9
+ # Abstract
10
+
11
+ This paper presents the Semantic-aWarE spatial-tEmporal Tokenizer (SweetTok), a novel video tokenizer designed to overcome the limitations of current video tokenization methods and achieve compact yet effective discretization. Unlike previous approaches that process flattened local visual patches via direct discretization or adaptive query tokenization, SweetTok proposes a decoupling framework, compressing visual inputs through distinct spatial and temporal queries via the Decoupled Query AutoEncoder (DQAE). This design allows SweetTok to efficiently compress the video token count while achieving superior fidelity by capturing essential information across spatial and temporal dimensions. Furthermore, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression to address the differences in semantic representation between appearance and motion information. SweetTok significantly improves video reconstruction results by $42.8\%$ w.r.t. rFVD on the UCF-101 dataset. With a better token compression strategy, it also boosts downstream video generation results by $15.1\%$ w.r.t. gFVD. Additionally, the compressed decoupled tokens are imbued with semantic information, enabling few-shot recognition capabilities powered by LLMs in downstream applications.
12
+
13
+ # 1. Introduction
14
+
15
+ Visual tokenizers [1-3, 8, 14, 39, 41, 49] are emerging as essential components in the field of modern computer vision models, particularly in the generation [1, 14, 46] and understanding [17, 26, 35, 42, 43] of vision data. These tools convert visual inputs into discrete tokens, capturing essential temporal and spatial features that facilitate advanced analysis by formulating visual-related tasks as a token prediction process.
16
+
17
+ ![](images/3105bb54fbc4a424cc206efb753bb7c17b5c6e29d357167ab2e2236a6c07ffb7.jpg)
18
+ Figure 1. Illustration of our framework. We build a compact visual latent space by reducing token count in a decoupled style and leveraging motion-enhanced semantic text embedding. The encoded tokens can be applied to downstream tasks, such as generation and understanding.
19
+
20
+ Compression ratio and reconstruction fidelity are vital criteria for evaluating a tokenizer. Recent visual tokenizers, especially video tokenizers [1, 2, 41], typically retain a low compression ratio. This is because visual tokens are usually derived from 2D patches [12] or 3D tubes [2, 14] that preserve location relationships (e.g., each token corresponds to a specific region of the input [51]), leading to redundancy in both spatial and temporal dimensions. The most recent work, LARP [3], quantizes the flattened video patches through adaptive holistic queries to achieve a high compression ratio. However, it is observed that directly flattening
21
+
22
+ Therefore, a new compression method is needed, one that takes into account the spatiotemporal properties of video.
23
+
24
+ Meanwhile, another issue is that a higher compression ratio typically results in a greater loss of reconstruction details. To complement visual information during compression, one common strategy is to introduce pretrained language embeddings as the latent codebook [27, 50, 52], leveraging their semantic representation capabilities. However, previous works primarily focus on the image modality, overlooking the relationships between text and motion in the video domain.
25
+
26
+ To address these limitations, we propose SweetTok (Semantic-aWarE spatial-tEmporal Tokenizer), as illustrated in Figure 1. Considering the heterogeneous redundancy in static images and dynamic frames, we propose the Decoupled Query AutoEncoder (DQAE) to compress spatial and temporal information into separate learnable queries. Different from previous works [3, 25, 51], our findings indicate that coupling the compression of spatiotemporal information makes it harder for the decoder to learn the motion of the same pixel across consecutive frames. Thus, taking the decoupled spatial and temporal queries as inputs, we devise a strategy of spatial decoding followed by temporal decoding to reconstruct the spatial and temporal dimensions of the visual information separately. Additionally, the decoupled spatiotemporal reconstruction approach naturally allows for finetuning on image data, making SweetTok flexible for the image reconstruction task.
27
+
28
+ Furthermore, to integrate the semantic information inherent in pre-trained language models, we design a Motion-enhanced Language Codebook (MLC) tailored for spatial and temporal compression, addressing the differences in semantic representation between spatial and temporal information. Specifically, we design two language-based codebooks based on part of speech, using nouns and adjectives for spatial static information and verbs and adverbs for temporal motion information. By incorporating language-based codebooks, the learnable compressed queries can also be easily adapted to downstream visual understanding tasks via the in-context learning ability of LLMs.
29
+
30
+ Extensive experiments demonstrate the effectiveness of SweetTok. Compared with a vanilla video tokenizer without token compression (OmniTok [2]), SweetTok improves rFVD by $52.3\%$ on UCF-101 [33] using only $25\%$ of the tokens. Compared with a vanilla query-based tokenizer (LARP [3]), SweetTok reduces rFVD from 35.15 to 20.46 and gFVD from 99 to 84 on UCF-101. By directly finetuning the decoupled spatial branch on ImageNet-1k [10], SweetTok also demonstrates a substantial improvement in rFID, decreasing it from 0.59 to 0.37.
31
+
32
+
33
+
34
+ In summary, our work makes the following key contributions:
35
+
36
+ - We introduce SweetTok, a cutting-edge video tokenizer that achieves state-of-the-art reconstruction fidelity at a high compression ratio via spatial-temporal decoupling with the decoupled query autoencoder (DQAE), reaching a "sweet spot" between compression and fidelity.
37
+ - We propose a motion-enhanced language codebook (MLC) to more effectively capture the action information embedded in the video modality, thereby improving reconstruction quality and supporting downstream video understanding tasks.
38
+ - We perform extensive experiments to verify the effectiveness of SweetTok, which exhibits state-of-the-art performance on video reconstruction, image reconstruction, and class-conditional video generation, leading by large margins of $42.8\%$, $37.2\%$, and $15.1\%$, respectively.
39
+
40
+ # 2. Background
41
+
42
+ # 2.1. Visual Tokenizer With Vector Quantization
43
+
44
+ Exploring visual tokenizers and their applications in generative models has led to significant advancements in image/video-related tasks. The general idea is to discretize visual data into tokens, so that tasks like visual generation [8, 14, 47, 48] & understanding [6, 12, 17, 18, 26, 35, 42, 43] can be tackled in a sequence-prediction style, as in natural language processing [11, 29, 36]. Our work belongs to the family of Vector Quantized Variational AutoEncoder (VQ-VAE) [31, 39] tokenizers, which introduce a discrete latent space for the continuous VAE [20] encoder-decoder structure. Such a tokenizer typically encodes a high-dimensional image into a low-dimensional latent representation, queries the nearest index from a learnable codebook to quantize the latent vector, and finally decodes back to reconstruct the raw input signal. Since this type of tokenizer is trained with a reconstruction loss, it can preserve both high-level semantics and low-level details of the visual input. VQGAN [13] adopted an adversarial training loss to improve high-frequency details. ViT-VQGAN [47] upgraded the encoder-decoder with the vision transformer (ViT) architecture [12] and further boosted results. TiTok [51] replaced the 2D image structure with a 1D sequence latent representation, then used a self-attention transformer [40] to compress the token count.
45
+
46
+ However, the above methods can only process image data. For the video modality, TATS [14] used a 3D-CNN to encode video patches and adopted sliding windows to deal with long-term relations. CViViT [41] used the ViT [12] structure to encode spatial patches and then adopted a causal transformer to model temporal information. OmniTokenizer [2] and MAGVIT [1, 49] adopted similar transformer architectures and introduced image pre-training to improve video tokenizers.
47
+
48
+ ![](images/d87274cbaa95ce624d15f0ba4cda55698a5017642fbbd4b99d1a5f3c217a50d6.jpg)
49
+ Figure 2. Pipeline overview. (a) Vanilla video tokenizers directly quantize flattened video patches. (b) Vanilla query-based tokenizers compress flattened video patches into adaptive queries. (c) SweetTok proposes the decoupled query autoencoder (DQAE, §3.2.2). The spatial encoder quantizes the first frame's patch embeddings, while the temporal encoder quantizes the residual between consecutive frames. The spatial decoder reconstructs the first frame's patches, replicates them $T$ times, and passes them to the temporal decoder for final information fusion and reconstruction. SweetTok also proposes the motion-enhanced language codebook (MLC, §3.2.3) to complement the reconstructed video information via action-related language semantics.
50
+
51
+ ![](images/27c4e5b2d5a09d4ae384924f22048a982aab73a48766497e7dee269a3cc096de.jpg)
52
+
53
+ LARP [3], on the other hand, proposes to compress flattened video patches into adaptive holistic queries with the guidance of a pre-trained auto-regressive model. In this paper, we inherit the popular spatial-temporal decomposition design for video data.
54
+
55
+ # 2.2. Language-based Latent Codebook
56
+
57
+ The codebooks learned by vanilla VQ-VAEs are not interpretable with lexical meanings. Therefore, many works attempt to utilize pretrained language model embeddings as codebooks to enhance semantics. LQAE [27] replaced the visual codebook with frozen word embeddings from BERT [11]. SPAE [50] quantized the image latent space in a pyramid structure to preserve semantic information from low level to high level. It also used a large language model (LLM) codebook [9] so that the encoded image tokens can be directly adapted to visual understanding tasks through the in-context learning [5] ability of LLMs. We follow this evaluation pipeline for few-shot classification in our paper. V2L-Tokenizer [54] utilized the CLIP [30] pretrained encoder and injected a learnable projector to align the visual-text latent space implicitly. VQCT [52] replaced the projector with graph convolutional networks [22] to consider the relationships between vocabularies. Furthermore, De-Diffusion [44] directly encoded images into plain text as a latent-space interface and decoded them back through a text-to-image (T2I) diffusion model [32].
58
+
59
+ However, these studies primarily focus on the image modality. In this paper, we explore the design of the language codebook specifically for the video modality by splitting the codebook according to the video's spatial-temporal attributes.
60
+
61
+ # 3. Method
62
+
63
+ # 3.1. Preliminary
64
+
65
+ A typical visual vector-quantization (VQ) model [1, 2, 14, 49] contains three parts: an encoder $\mathcal{E}$, a decoder $\mathcal{D}$, and a latent quantizer $\mathcal{Q}$, as shown in Figure 2 (a). Taking the video modality as an example, given a video input $x\in \mathbb{R}^{T\times H\times W\times 3}$, where $T$ represents the temporal length and $H\times W$ denotes the spatial resolution, the encoder $\mathcal{E}(x)$ projects it into a latent space $\mathbb{Z}\in \mathbb{R}^{N\times D}$, where $D$ is the latent dimension and $N$ is the token number. A quantizer $\mathcal{Q}$ is constructed in this latent space $\mathbb{Z}$ by querying the nearest neighbor in a codebook $C\in \mathbb{R}^{L_c\times D}$, where $L_{c}$ is the codebook size. Then $\mathcal{D}$ decodes the latent space back to pixel space, and a self-supervised reconstruction loss is applied:
66
+
67
+ $$
68
+ \mathcal{L}_{\text{rec}}(x, \mathcal{D}(\mathcal{Q}(\mathcal{E}(x)))) \tag{1}
69
+ $$
70
+
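+
+ To make the quantization step concrete, the following is a minimal sketch of the nearest-neighbor codebook lookup with a straight-through estimator, as commonly used in VQ-VAE-style tokenizers; the tensor names and sizes are illustrative rather than the exact implementation.
+
+ ```python
+ import torch
+
+ def quantize(z, codebook):
+     """Nearest-neighbor vector quantization (VQ-VAE style).
+
+     z:        (N, D) continuous latents from the encoder E
+     codebook: (L_c, D) learnable codebook C
+     Returns the quantized latents (N, D) and the selected indices (N,).
+     """
+     dists = torch.cdist(z, codebook)   # pairwise L2 distances, (N, L_c)
+     idx = dists.argmin(dim=-1)         # nearest code per latent
+     z_q = codebook[idx]                # (N, D)
+     # Straight-through estimator: gradients flow to the encoder as if
+     # quantization were the identity map.
+     z_q = z + (z_q - z).detach()
+     return z_q, idx
+
+ # Toy usage: N = 1280 tokens, D = 256 latent dims, L_c = 1024 codes.
+ z = torch.randn(1280, 256, requires_grad=True)
+ codebook = torch.randn(1024, 256)
+ z_q, idx = quantize(z, codebook)
+ ```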
71
+ Unlike traditional visual VQ models, LARP [3] quantizes flattened video patches into holistic queries $\mathbf{Q} \in \mathbb{R}^{L\times D}$, as shown in Figure 2 (b), where $L$ is the adaptive query size.
72
+
73
+ As shown in Equation (2), the encoder $\mathcal{E}$ processes the concatenation of flattened video patches $\mathbf{E} = \mathrm{flatten}(x)\in \mathbb{R}^{N\times D}$ and query tokens $\mathbf{Q}\in \mathbb{R}^{L\times D}$, outputting $\mathbf{Z}_{\mathbf{Q}}\in \mathbb{R}^{L\times D}$ for quantization. The decoder $\mathcal{D}$ reconstructs the video patches $\tilde{\mathbf{x}}\in \mathbb{R}^{T\times H\times W\times 3}$ from the concatenation of learnable video queries $\mathbf{E}_{\mathbf{Q}}\in \mathbb{R}^{N\times D}$ and the quantized query embeddings $\tilde{\mathbf{Z}}_{\mathbf{Q}}$:
74
+
75
+ $$
76
+ \mathbf{Z}_{\mathbf{E}} \| \mathbf{Z}_{\mathbf{Q}} = \mathcal{E}(\mathbf{E} \| \mathbf{Q}), \quad \tilde{\mathbf{Z}}_{\mathbf{Q}} = \mathcal{Q}(\mathbf{Z}_{\mathbf{Q}}), \quad \tilde{x} = \mathcal{D}(\mathbf{E}_{\mathbf{Q}} \| \tilde{\mathbf{Z}}_{\mathbf{Q}}). \tag{2}
77
+ $$
78
+
79
+ However, directly compressing video from flattened patches is challenging due to the intertwined redundant temporal data and low-level spatial content, leading to suboptimal performance. Thus, we design SweetTok to balance reconstruction performance with a high compression ratio. Details are elaborated in Section 3.2.
80
+
81
+ # 3.2. Decoupled Spatial-Temporal Tokenization
82
+
83
+ As noted in Section 3.1, directly quantizing video data via flattened patches hampers model learning due to intertwined redundant temporal and complex spatial information. Thus, we propose separately quantizing spatial and temporal dimensions before combining them to reconstruct the video, following the divide-and-conquer principle. The decoupling strategy enables high compression while ensuring higher fidelity. The main pipeline is shown in Figure 2 (c).
84
+
85
+ # 3.2.1. Patchify
86
+
87
+ Given a video frame sequence $x \in \mathbb{R}^{T \times H \times W \times 3}$, we select the first frame $x_{1}$ as a reference for spatial information and the remaining $T - 1$ frames $x_{2:T}$ for temporal information, following the strategy in [2]. We apply two patch kernels $\mathcal{P}_s, \mathcal{P}_t$ with shapes $p_h \times p_w$ and $p_t \times p_h \times p_w$ to $x_{1}$ and $x_{2:T}$ separately, generating $v_s \in \mathbb{R}^{1 \times \frac{H}{p_h} \times \frac{W}{p_w} \times D}$ and $v_t \in \mathbb{R}^{\frac{T - 1}{p_t} \times \frac{H}{p_h} \times \frac{W}{p_w} \times D}$ as shown below:
88
+
89
+ $$
90
+ v_{s} = \mathcal{P}_{s}(x_{1}), \quad v_{t} = \mathcal{P}_{t}(x_{2:T}) \tag{3}
91
+ $$
92
+
93
+ $v_{s}$ and $v_{t}$ are the inputs to the transformer-based autoencoder, where $v_{s}$ contains spatial information and $v_{t}$ contains temporal information. In practice, for a video with 17 frames and a resolution of $256 \times 256$, we set $(p_{t}, p_{h}, p_{w})$ to $(4, 8, 8)$, thus patchifying the frames into $v_{s}$ with shape $1 \times 32 \times 32$ and $v_{t}$ with shape $4 \times 32 \times 32$. We use $t = \frac{T - 1}{p_t} = 4$ to denote $v_{t}$'s length.
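+
+ As an illustration, this patchify step can be realized with strided convolutions; the sketch below is one plausible implementation under the shapes above (the exact operator is not specified in the paper).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ T, H, W, D = 17, 256, 256, 512
+ p_t, p_h, p_w = 4, 8, 8
+
+ # Spatial kernel P_s: 8x8 patches over the first frame.
+ P_s = nn.Conv2d(3, D, kernel_size=(p_h, p_w), stride=(p_h, p_w))
+ # Temporal kernel P_t: 4x8x8 tubes over the remaining T-1 frames.
+ P_t = nn.Conv3d(3, D, kernel_size=(p_t, p_h, p_w), stride=(p_t, p_h, p_w))
+
+ x = torch.randn(1, T, H, W, 3)            # video, channels-last
+ x1 = x[:, 0].permute(0, 3, 1, 2)          # first frame, (1, 3, H, W)
+ x_rest = x[:, 1:].permute(0, 4, 1, 2, 3)  # remaining frames, (1, 3, T-1, H, W)
+
+ v_s = P_s(x1)      # (1, D, 32, 32):    1 x 32 x 32 spatial patches
+ v_t = P_t(x_rest)  # (1, D, 4, 32, 32): 4 x 32 x 32 temporal tubes
+ ```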
94
+
95
+ # 3.2.2. Decoupled Query AutoEncoder (DQAE)
96
+
97
+ To decouple the spatial and temporal dimensions, we need to compress video patches separately along each dimension. Inspired by [3, 51] and Q-Former [25], we compress each dimension into adaptive query tokens via cross-attention interactions. As an innovation, we recursively inject these cross-attention query modules into the transformer-based autoencoder to transfer information, forming our DQAE module shown in the right gray box of Figure 2 (c).
98
+
99
+ For the rest of the paper, we use $\mathcal{E}_{DQAE}$ and $\mathcal{D}_{DQAE}$ to denote the encoder and decoder of the DQAE module for simplicity.
100
+
101
+ Spatial Tokenization. We observe that for most parts of a video, the first frame holds the majority of the spatial information, so we use $v_{s}$ as the input to $\mathcal{E}_{DQAE_s}$ for quantization, as shown below:
102
+
103
+ $$
104
+ \mathbf{Z}_{\mathbf{Q}_{\mathbf{s}}} = \mathcal{E}_{DQAE_s}(\mathbf{Q}_{\mathbf{s}}, v_{s}), \quad \tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{s}}} = \mathcal{Q}_{MLC}(\mathbf{Z}_{\mathbf{Q}_{\mathbf{s}}}), \tag{4}
105
+ $$
106
+
107
+ where $\mathbf{Q}_{\mathbf{s}} \in \mathbb{R}^{L_{\text{spatial}} \times D}$ are the learnable spatial query embeddings and $\mathbf{Z}_{\mathbf{Q}_{\mathbf{s}}} \in \mathbb{R}^{L_{\text{spatial}} \times D}$ are the output embeddings encoding information from the first-frame patches $v_{s}$. $\tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{s}}} \in \mathbb{R}^{L_{\text{spatial}} \times D}$ are the quantized spatial token embeddings, and $\mathcal{Q}_{MLC}$ stands for the Motion-enhanced Language Codebook (MLC) quantizer, which will be elaborated in the following section. After quantization, we inject the informative spatial token embeddings $\tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{s}}}$ into learnable spatial patch queries $\mathbf{Q}_{v_s}$ through the decoder $\mathcal{D}_{DQAE_s}$ as below:
108
+
109
+ $$
110
+ \tilde{v}_{s} = \mathcal{D}_{DQAE_s}(\mathbf{Q}_{v_{s}}, \tilde{\mathbf{Z}}_{\mathbf{Q}_{s}}), \tag{5}
111
+ $$
112
+
113
+ where $\tilde{v}_s$ are the reconstructed first-frame video patches. The temporal component, described next, is combined with $\tilde{v}_s$ for the final video reconstruction. We set $L_{\text{spatial}} = 256$ in our implementation.
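+
+ For intuition, the spatial tokenization can be sketched as learnable queries cross-attending to the first-frame patch embeddings, in the spirit of Q-Former [25]; the block below omits the recursive injection into the backbone and the surrounding transformer layers, and all names are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ D, L_spatial, N_patches = 512, 256, 32 * 32
+
+ class QueryCompressor(nn.Module):
+     """Compress patch embeddings into a fixed set of learnable queries."""
+     def __init__(self, num_queries, dim):
+         super().__init__()
+         self.queries = nn.Parameter(torch.randn(num_queries, dim))
+         self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
+
+     def forward(self, patches):                  # patches: (B, N, D)
+         B = patches.shape[0]
+         q = self.queries.unsqueeze(0).expand(B, -1, -1)
+         # Queries attend to the patch embeddings (cross-attention).
+         z_q, _ = self.attn(q, patches, patches)  # (B, L, D)
+         return z_q
+
+ enc_s = QueryCompressor(L_spatial, D)
+ v_s = torch.randn(2, N_patches, D)  # first-frame patch embeddings
+ Z_Qs = enc_s(v_s)                   # (2, 256, D), then quantized by Q_MLC
+ ```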
114
+
115
+ Temporal Tokenization. It is observed that video data contains much redundancy along the temporal dimension. This motivates us to employ the frame-wise residual $\Delta v_{t} = (\Delta v_{t}^{1},\Delta v_{t}^{2},\dots,\Delta v_{t}^{k})_{k = \frac{H}{p_{h}}\times \frac{W}{p_{w}}}$, where $\Delta v_{t}^{i} = v_{s}^{i} - v_{t}^{i}$, for tokenization. We compute the residual against the first frame because the spatial tokenization phase reconstructs it. Then, the residual $\Delta v = (\Delta v_{1},\Delta v_{2},\dots,\Delta v_{t})_{t = \frac{T - 1}{p_{t}}}$ is input to $\mathcal{E}_{DQAE_t}$ for temporal compression, as shown below:
116
+
117
+ $$
118
+ \mathbf{Z}_{\mathbf{Q}_{\mathbf{t}}} = \mathcal{E}_{DQAE_t}(\mathbf{Q}_{\mathbf{t}}, \Delta v), \quad \tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{t}}} = \mathcal{Q}_{MLC}(\mathbf{Z}_{\mathbf{Q}_{\mathbf{t}}}), \tag{6}
119
+ $$
120
+
121
+ where $\mathbf{Q}_{\mathbf{t}} \in \mathbb{R}^{L_{temporal} \times D}$ are the learnable temporal query embeddings and $\mathbf{Z}_{\mathbf{Q}_{\mathbf{t}}} \in \mathbb{R}^{L_{temporal} \times D}$ are the output embeddings encoding information of the frame-wise residual. $\tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{t}}} \in \mathbb{R}^{L_{temporal} \times D}$ are the quantized frame-wise temporal residual token embeddings. In practice, $L_{temporal} = 1024$. After reconstructing the first-frame patches, we recover the entire video patches by combining $\tilde{v}_s$ with the frame-wise quantized residual $\tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{t}}}$: we tile $\tilde{v}_s$ $t$ times and send it to $\mathcal{D}_{DQAE_t}$, as shown below:
122
+
123
+ $$
124
+ \tilde{v} = \mathcal{D}_{DQAE_t}\left(\left[\tilde{v}_{s} \| \dots \| \tilde{v}_{s}\right], \tilde{\mathbf{Z}}_{\mathbf{Q}_{\mathbf{t}}}\right), \tag{7}
125
+ $$
126
+
127
+ where $\tilde{v}$ denotes the reconstructed video patches. These vectors reside in the latent space, necessitating a "pixel decoder" $\mathcal{D}_{pixel}$ to reconstruct the video data, as illustrated below:
128
+
129
+ $$
130
+ \tilde{x} = \mathcal{D}_{\text{pixel}}(\tilde{v}). \tag{8}
131
+ $$
132
+
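+
+ Putting Eqs. (6)-(8) together, the temporal path can be sketched as below. The helper callables (`enc_t`, `quantize_mlc`, `dec_t`, `dec_pixel`) are hypothetical stand-ins for the modules described above, so this is only a schematic of the data flow.
+
+ ```python
+ import torch
+
+ def temporal_path(v_s, v_t, enc_t, quantize_mlc, dec_t, dec_pixel):
+     """v_s: (B, 1, N, D) reconstructed first-frame patches;
+     v_t: (B, t, N, D) temporal tube embeddings."""
+     delta_v = v_s - v_t                # frame-wise residual, broadcasts over t
+     Z_Qt = enc_t(delta_v)              # compress into temporal queries
+     Z_Qt_q, _ = quantize_mlc(Z_Qt)     # verb/adverb codebook lookup
+     t = v_t.shape[1]
+     v_s_tiled = v_s.expand(-1, t, -1, -1)  # tile the first frame t times (Eq. 7)
+     v_tilde = dec_t(v_s_tiled, Z_Qt_q)     # fuse the motion residual back in
+     return dec_pixel(v_tilde)              # latent patches -> pixels (Eq. 8)
+ ```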
133
+ The whole DQAE is supervised by the reconstruction loss $\mathcal{L}_{rec}$, comprising an $L_{2}$ loss, an LPIPS perceptual loss $\mathcal{L}_{Lpips}$, a quantizer loss $\mathcal{L}_{vq}$, and a GAN loss $\mathcal{L}_g$, following [13].
134
+
135
+ # 3.2.3. Motion-enhanced Language Codebook (MLC)
136
+
137
+ To mitigate information loss during compression, we introduce a language codebook (LC) quantizer to enhance semantic richness. Previous works [27, 50] have shown that text representations can enhance image VQ-VAEs, as text provides additional semantic information from pretrained language models. However, these works mainly focus on the relationship between static image appearance and text semantics [52]; our experiment in Table 5 shows this is insufficient for video data.
138
+
139
+ To address this, we propose a Motion-enhanced Language Codebook (MLC), where the video motion information is enhanced via action-related vocabularies. Specifically, we split the dictionary into four subsets: nouns, adjectives, verbs, and adverbs. Intuitively, static and appearance information is typically embedded in nouns and adjectives, while motion information is generally embedded in verbs and adverbs. Therefore, we choose nouns and adjectives for spatial query tokens $\mathbf{Q}_{\mathrm{s}}$ , and verbs and adverbs for temporal query tokens $\mathbf{Q}_{\mathrm{t}}$ . Figure 3 also shows that the encoded latent words by SweetTok capture semantic meanings related to both visual appearance and motion.
140
+
141
+ In detail, we first extract candidate vocabularies for the whole dataset from video captions. Afterward, we extract the CLIP [30] text embeddings of these vocabularies to fill the entries of our codebook $C \in \mathbb{R}^{L \times D}$. We utilize a graph convolution network $\mathcal{F}$ to project the CLIP embeddings [30] into the visual latent space. Graph edges are constructed when a pair of "spatial-spatial", "spatial-temporal" or "temporal-temporal" words co-occur within a 5-token window in the current video caption.
142
+
143
+ Given two encoded continuous latent vectors $z_{s} \in \mathbf{Q}_{\mathbf{s}}$ and $z_{t} \in \mathbf{Q}_{\mathbf{t}}$, $z_{s}$ is passed through the spatial quantization codebook and $z_{t}$ through the temporal quantization codebook. The quantized $\hat{z}_{s}$ and $\hat{z}_{t}$ are obtained by nearest-neighbor search:
144
+
145
+ $$
146
+ z_{s}, z_{t} = \mathcal{E}(x), \tag{9}
147
+ $$
148
+
149
+ $$
150
+ \hat{z}_{s} = \mathcal{F}(c_{i}), \quad i = \underset{c_{i} \in C_{\text{noun}} \cup C_{\text{adj}}}{\arg\min} \left\| z_{s} - \mathcal{F}(c_{i}) \right\|, \tag{10}
151
+ $$
152
+
153
+ $$
154
+ \hat{z}_{t} = \mathcal{F}(c_{i}), \quad i = \underset{c_{i} \in C_{\text{verb}} \cup C_{\text{adv}}}{\arg\min} \left\| z_{t} - \mathcal{F}(c_{i}) \right\|. \tag{11}
155
+ $$
156
+
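+
+ A minimal sketch of this part-of-speech-split lookup (Eqs. 10-11), assuming the projected codebooks $\mathcal{F}(C)$ have been precomputed; the codebook sizes match Section 4.1 and the names are illustrative.
+
+ ```python
+ import torch
+
+ def mlc_quantize(z, codebook_proj):
+     """Nearest-neighbor lookup in a projected language codebook.
+
+     z:             (N, D) continuous query embeddings
+     codebook_proj: (L, D) CLIP word embeddings after the GCN projection F
+     """
+     idx = torch.cdist(z, codebook_proj).argmin(dim=-1)
+     z_hat = codebook_proj[idx]
+     return z + (z_hat - z).detach(), idx   # straight-through gradient
+
+ # Spatial queries use the noun+adjective codebook,
+ # temporal queries use the verb+adverb codebook.
+ C_spatial = torch.randn(10481, 256)   # |nouns| + |adjectives|
+ C_temporal = torch.randn(11139, 256)  # |verbs| + |adverbs|
+ z_s, z_t = torch.randn(256, 256), torch.randn(1024, 256)
+ z_s_hat, _ = mlc_quantize(z_s, C_spatial)
+ z_t_hat, _ = mlc_quantize(z_t, C_temporal)
+ ```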
157
+ ![](images/73a772bdbacade7159b1c5638265974f191dc195de177d7490aac38f8cdedcb8.jpg)
158
+
159
+ ![](images/44feb6a04c3e4e031dcb9d76067d89004dc282c8df9f6fdb1f01371bb2cdc29d.jpg)
160
+ Figure 3. The semantics of spatial-temporal "words". The attention weights of the last encoder's cross-attention layer are visualized via heatmap, showing the visual regions corresponding to the related latent words.
161
+
162
+ ![](images/3c3a3fc36e02007b4b3ad9080f331425545dafa7229cee7c92af75b25b6858c2.jpg)
163
+
164
+ ![](images/280e11ef9c69e5c827376813469dd56f7781e66cb7d077530bdef40c2de45ee2.jpg)
165
+
166
+ Finally, the gradient is passed to the encoder via the vector-quantization commitment loss proposed in [39], a common method to approximate differentiability ($sg[\cdot]$ denotes the stop-gradient operator):
167
+
168
+ $$
169
+ \begin{aligned} \mathcal{L}_{vq} = &\left\| sg\left[z_{s}\right] - \mathcal{Q}\left(z_{s}\right) \right\|^{2} + \left\| z_{s} - sg\left[\mathcal{Q}\left(z_{s}\right)\right] \right\|^{2} \\ &+ \left\| sg\left[z_{t}\right] - \mathcal{Q}\left(z_{t}\right) \right\|^{2} + \left\| z_{t} - sg\left[\mathcal{Q}\left(z_{t}\right)\right] \right\|^{2} \end{aligned} \tag{12}
170
+ $$
171
+
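+
+ In code, Eq. (12) amounts to symmetric codebook and commitment terms with stop-gradients implemented via `.detach()`; the sketch below uses a mean rather than a sum over elements and omits any commitment weighting, which is an implementation choice not fixed by the equation.
+
+ ```python
+ import torch.nn.functional as F
+
+ def vq_loss(z_s, z_s_q, z_t, z_t_q):
+     """z_*: encoder outputs; z_*_q: their quantized counterparts."""
+     return (F.mse_loss(z_s.detach(), z_s_q) + F.mse_loss(z_s, z_s_q.detach())
+             + F.mse_loss(z_t.detach(), z_t_q) + F.mse_loss(z_t, z_t_q.detach()))
+ ```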
172
+ # 4. Experiments
173
+
174
+ # 4.1. Experimental Settings
175
+
176
+ Dataset. We evaluate the tokenization performance of SweetTok on video datasets, including UCF-101 [33] and Kinetics-600 [7, 19]. Following [2], all video frames are resized to $256 \times 256$ resolution for experiments. Note that some previous works use a resolution of $128 \times 128$, which cannot be directly compared to ours due to the difference in task difficulty; we still include these results in the tables, highlighted in gray. Moreover, we fine-tune SweetTok's spatial component on ImageNet [10] to obtain a strong image tokenizer. The semantic capabilities of SweetTok are tested through few-shot image classification on Real-Name Open-Ended miniImageNet [37] and few-shot video action recognition on UCF-101, as described in [53].
177
+
178
+ Evaluation Metrics. For video reconstruction experiments, we evaluate using the Reconstruction Frechet Video Distance (rFVD) [38]. For video generation, we use the Generation Frechet Video Distance (gFVD) metric. For image reconstruction, we categorize recent methods by the number of compressed tokens, with each group assessed using the Frechet Inception Distance (FID) [15].
179
+
180
+ Implementation Details. SweetTok adopts a spatial-temporal architecture consisting of 8 spatial layers and 4 temporal layers.
181
+
182
+ ![](images/8918b3dd3532316d5daa9086b344bcd000b0f4cbb31f3071c39548d9c4a5fa80.jpg)
183
+ OmniTok [2]
184
+
185
+ ![](images/c7be81241c2d9e3d8422b70e96858af38c9ac6838ff8ad41a6e6eb221e67a0cb.jpg)
186
+ LARP [3]
187
+
188
+ ![](images/f9070406ffc0389b95dfa0228a8221866b1fba75ae87ae7eefcfa0211a1e0cd0.jpg)
189
+ SweetTok (ours)
190
+ Figure 4. Video reconstruction result on UCF-101 dataset. We also visualize the reconstruction and GT error map, where brighter areas indicate larger errors.
191
+
192
+ <table><tr><td rowspan="2">Tokenizer</td><td rowspan="2">#Tokens</td><td rowspan="2">#Params Tokenizer</td><td colspan="2">rFVD ↓</td></tr><tr><td>UCF-101</td><td>K-600</td></tr><tr><td>MAGVIT-V2 [1]</td><td>1280</td><td>307M</td><td>8.6</td><td>-</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>20</td><td>13</td></tr><tr><td>MaskGIT [8]</td><td>4352</td><td>227M</td><td>240</td><td>202</td></tr><tr><td>VQGAN [13]</td><td>4352</td><td>227M</td><td>299</td><td>270</td></tr><tr><td>TATS [14]</td><td>4096</td><td>32M</td><td>162</td><td>-</td></tr><tr><td>MAGVIT [49]</td><td>4096</td><td>158M</td><td>58</td><td>-</td></tr><tr><td>OmniTok [2]</td><td>5120</td><td>82.2M</td><td>42</td><td>26</td></tr><tr><td>LARP-B [3]</td><td>1024</td><td>143M</td><td>64</td><td>35</td></tr><tr><td>LARP-L [3]</td><td>1024</td><td>193M</td><td>35</td><td>23</td></tr><tr><td>SweetTok*</td><td>5120</td><td>128M</td><td>11</td><td>8</td></tr><tr><td>SweetTok</td><td>1280</td><td>128M</td><td>20</td><td>25</td></tr></table>
193
+
194
+ Both the encoder and decoder are configured with a hidden dimension of 512, and the latent space dimension is set to 256. For the LLM codebook quantizer, we exclude words with a frequency below 5, resulting in a selection of 5,078 nouns, 5,403 adjectives, 9,267 verbs, and 1,872 adverbs. This forms a spatial codebook of size 10,481 and a temporal codebook of size 11,139. The model is trained with a batch size of 8 for 1000K iterations. All training is performed on NVIDIA A100 GPUs. Adam [21] is employed for optimization ($\beta_{1} = 0.9$ and $\beta_{2} = 0.99$). During each stage, we use a cosine learning rate scheduler with a maximum learning rate of 1e-4 and a minimum learning rate of 1e-5, warmed up over 10K iterations.
195
+
196
+ Table 1. Video reconstruction FVD on the UCF-101 and K-600 datasets, using a frame resolution of $256 \times 256$. "*" denotes training SweetTok without token compression. Lines in "gray" indicate results at a resolution of $128 \times 128$.
197
+
198
+ <table><tr><td>Tokenizer</td><td>Type</td><td>#Tokens</td><td>#Params Generator</td><td>gFVD ↓</td></tr><tr><td>MAGVIT [49]</td><td>AR</td><td>1024</td><td>306M</td><td>265</td></tr><tr><td>MAGVIT-V2 [1]</td><td>AR</td><td>1280</td><td>307M</td><td>109</td></tr><tr><td>MAGVIT [49]</td><td>MLM</td><td>1024</td><td>306M</td><td>76</td></tr><tr><td>MAGVIT-V2 [1]</td><td>MLM</td><td>1280</td><td>307M</td><td>58</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>57</td></tr><tr><td>CogVideo [16]</td><td>AR</td><td>6800</td><td>9.4B</td><td>626</td></tr><tr><td>TATS [14]</td><td>AR</td><td>4096</td><td>321M</td><td>332</td></tr><tr><td>Video-LaVIT [17]</td><td>AR</td><td>512</td><td>7B</td><td>280</td></tr><tr><td>OmniTok [2]</td><td>AR</td><td>5120</td><td>650M</td><td>191</td></tr><tr><td>LARP-L [3]</td><td>AR</td><td>1024</td><td>632M</td><td>99</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>650M</td><td>84</td></tr><tr><td>SweetTok</td><td>AR</td><td>1280</td><td>1.9B</td><td>65</td></tr></table>
199
+
200
+ Table 2. Class-conditional video generation results on UCF-101. Each video is composed of 17 frames with a resolution of 256 × 256. "AR" and "MLM" represent autoregressive and masked-language-modeling generators, respectively. Lines in "gray" indicate results at a resolution of $128 \times 128$.
201
+
202
+ # 4.2. Video Reconstruction & Generation
203
+
204
+ We first evaluate the tokenization capability of SweetTok on the UCF-101 and K-600 video datasets. As shown in Table 1, SweetTok uses 1,280 tokens (256 spatial tokens and 1,024 temporal tokens), four times fewer than OmniTok's 5,120 tokens and comparable with LARP's 1,024 tokens. Despite its high compression ratio, our method achieves a $52.3\%$ improvement on UCF-101 and competitive performance on K-600 compared with OmniTok. With a similar token count, SweetTok, despite its smaller model size compared to LARP-L, delivers a $42.8\%$ improvement on UCF-101 and comparable results on K-600.
205
+
206
+ ![](images/18542e9d83b35a62d8fd87d4d0f0ec0fd97b0dfcc1c1425648464b7ef48549c2.jpg)
207
+ Figure 5. Class-conditional video generation results on the UCF-101 dataset. The action class label of each row is: "PlayingTabla", "TennisSwing", and "HorseRiding".
208
+
209
+ ![](images/f0221d5adffbd44e8de2678aedadfaf0b31b5ef7f229628f76a2431947166e4b.jpg)
210
+
211
+ ![](images/d51f7ce5924627f17b63d794c39c6e6619f0b48c45f82c5e848ff9a79bb54ec6.jpg)
212
+
213
+ <table><tr><td>Tokenizer</td><td>#Tokens</td><td>Codebook Size</td><td>rFID ↓</td></tr><tr><td>VQGAN [13]</td><td>256</td><td>1024</td><td>7.94</td></tr><tr><td>RQ-VAE [24]</td><td>256</td><td>16384</td><td>3.20</td></tr><tr><td>MaskGIT [8]</td><td>256</td><td>1024</td><td>2.28</td></tr><tr><td>LlamaGen-16 [34]</td><td>256</td><td>16384</td><td>2.19</td></tr><tr><td>TiTok [51]</td><td>256</td><td>4096</td><td>1.71</td></tr><tr><td>TokenFlow [28]</td><td>256</td><td>4096</td><td>1.03</td></tr><tr><td>SweetTok</td><td>256</td><td>10481</td><td>0.73</td></tr><tr><td>ViT-VQGAN [47]</td><td>1024</td><td>8192</td><td>1.28</td></tr><tr><td>OmniTok [2]</td><td>1024</td><td>8192</td><td>1.11</td></tr><tr><td>OmniTok◇ [2]</td><td>1024</td><td>8192</td><td>0.69</td></tr><tr><td>LlamaGen-8 [34]</td><td>1024</td><td>16384</td><td>0.59</td></tr><tr><td>SweetTok*</td><td>1024</td><td>10481</td><td>0.37</td></tr></table>
214
+
215
+ Table 3. Image reconstruction FID on the ImageNet dataset, using a resolution of $256 \times 256$ . “◇” denotes continuous latent space without quantization. “*” denotes training SweetTok without token compression.
216
+
217
+ Tok demonstrates significant performance gains, achieving $68.8\%$ and $28.5\%$ improvements in rFVD on UCF-101 and K-600 datasets, respectively, compared to LARP-B with similar model size. Notably, if we increase the token number to 5,120, SweetTok* significantly outperforms all baselines, achieving an rFVD of 10.74 on UCF-101 and 7.51 on K-600.
218
+
219
+ The generative capability of SweetTok is evaluated on UCF-101 in a class-conditional generation task. Decoupled tokens extracted by SweetTok are concatenated to form training sequences for VideoGPT [45], following the same generation protocol as OmniTok. As shown in Table 2, SweetTok achieves a significant performance improvement, with a gFVD score of 84, $56\%$ lower than OmniTok's 191. This improvement is attributed to SweetTok's effective token compression, which substantially reduces the training complexity for downstream autoregressive models.
220
+
221
+ With equivalent token counts, SweetTok demonstrates superior performance, achieving a $15.1\%$ improvement in gFVD (84 vs. LARP's 99) at comparable generator sizes. Furthermore, it exhibits scaling-law characteristics, with gFVD improving from 84 to 65 as the model scales up to 1.9B parameters.
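+
+ For intuition, forming the autoregressive training sequence from the decoupled tokens can look like the following; the ordering and special tokens are our assumption, since the exact protocol is inherited from OmniTok and not detailed here.
+
+ ```python
+ import torch
+
+ def build_sequence(class_id, spatial_ids, temporal_ids):
+     """Concatenate a class token, 256 spatial ids, and 1,024 temporal ids
+     into one 1,281-token sequence for a GPT-style generator."""
+     cls = torch.tensor([class_id])
+     return torch.cat([cls, spatial_ids, temporal_ids])  # (1 + 256 + 1024,)
+ ```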
222
+
223
+ The visualization results are presented in Figure 5. To ensure a fair comparison, we select generated videos with similar appearances, as the generation process inherently involves randomness. The results demonstrate significantly improved detail, such as clearer human facial features and finer table textures. Additionally, SweetTok effectively preserves temporal consistency, even under large motion scenarios.
224
+
225
+ # 4.3. Image Reconstruction
226
+
227
+ To demonstrate the flexibility of our decoupled query design, we show that by directly fine-tuning the spatial branch $DQAE_{s}$, SweetTok also achieves strong performance for image reconstruction on ImageNet. As shown in Table 3, we compare SweetTok with recent methods under various token compression settings. With 256 spatial tokens, SweetTok outperforms TiTok [51] by $27.8\%$, reducing rFID from 1.01 to 0.73. When using 1,024 spatial tokens, SweetTok* achieves a significant improvement over both VQ-based and non-VQ-based methods (marked $\diamond$), achieving an rFID of 0.37, which surpasses LlamaGen-8 [34] by $37.3\%$. Visualization results are shown in Figure 6; SweetTok maintains better global appearance and local details.
228
+
229
+ # 4.4. Ablation Studies
230
+
231
+ Decoupled Query AutoEncoder. We demonstrate the effectiveness of our spatial-temporal decoupling design for token compression. As shown in Table 4, naively downsampling video tokens by 1D linear interpolation from 5,120 patches into 1,280 tokens results in poor performance, with an rFVD of 227.65.
232
+
233
+ ![](images/1d94ac1515a489aaf898772d0f0cefbff39ecc9a0bdcddbe5088291d3793310b.jpg)
234
+ GT
235
+ Figure 6. Visualization of image reconstruction results.
236
+
237
+ ![](images/1b434c63c04a778058ac507ce92da498853d4ff6eb5a14287ff4b82c0b88e410.jpg)
238
+ TiTok [51]
239
+
240
+ ![](images/7e46e214711fd5ca9bb200003c9e5c3fad4cfd38944111c3633b782ce7793e45.jpg)
241
+ OmniTok [2]
242
+
243
+ ![](images/9c2d75630cd7624d78b82fb901bdfc8e7d847afef406cf8fc81833b448a3656a.jpg)
244
+ SweetTok
245
+
246
+ <table><tr><td>Compression Method</td><td>#Tokens</td><td>rFVD ↓</td></tr><tr><td>Vanilla Downsample</td><td>1280</td><td>227.65</td></tr><tr><td>Vanilla Query-based (LARP [3])</td><td>1024</td><td>35.15</td></tr><tr><td>Decoupled Query-based (DQAE)</td><td>1280</td><td>20.46</td></tr></table>
247
+
248
+ Training SweetTok without decoupling (similar to the vanilla query-based method in LARP [3]) yields a sub-optimal rFVD of 35.15, while the decoupled query design (DQAE) achieves the best result of 20.46 rFVD. Two factors explain this: (1) the flattening operation discards substantial consecutive temporal information, and (2) without decoupling, the model struggles to learn efficiently from the intertwined temporal and spatial information.
249
+
250
+ Motion-enhanced Language Codebook. We evaluate the effect of the motion-enhanced language codebook (MLC) on the UCF-101 dataset. As illustrated in Table 5, the vanilla language codebook design improves rFVD from 29.45 to 24.80, which is still sub-optimal. Notably, our motion-enhanced temporal language codebook significantly benefits video reconstruction, further reducing the rFVD score from 24.80 to 20.46. This underscores the importance of our design for video modalities. Additionally, we compare different types of language-based embeddings, such as using Qwen-2.5B [4] embeddings in place of CLIP [30] embeddings. The experiments indicate that naively increasing the complexity of the pre-trained language model is not cost-effective for SweetTok.
251
+
252
+ Table 4. Ablation study of different token-count compression methods for video tokenizers.
253
+
254
+ <table><tr><td>Methods</td><td>rFVD ↓</td></tr><tr><td>Baseline (w/o LC)</td><td>29.45</td></tr><tr><td>+ LC</td><td>24.80</td></tr><tr><td>+ MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ CLIP [30]-based MLC (SweetTok)</td><td>20.46</td></tr><tr><td>+ Qwen [4]-based MLC</td><td>20.12</td></tr></table>
255
+
256
+ Table 5. Ablation study of different codebooks, including the vanilla language codebook (LC), the motion-enhanced language codebook (MLC), and a more advanced pre-trained language codebook.
257
+
258
+ <table><tr><td>Methods</td><td colspan="4">ImageNet</td><td>UCF-101</td></tr><tr><td>K-way-N-shot</td><td>2-1</td><td>2-3</td><td>2-5</td><td>Avg</td><td>5-5</td></tr><tr><td>SPAE [50]</td><td>84.8</td><td>92.5</td><td>92.6</td><td>89.9</td><td>-</td></tr><tr><td>V2L [54]</td><td>76.3</td><td>91.2</td><td>95.3</td><td>87.6</td><td>-</td></tr><tr><td>ARN[53]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>83.1</td></tr><tr><td>HF-AR [23]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>86.4</td></tr><tr><td>SweetTok</td><td>86.8</td><td>90.5</td><td>95.2</td><td>90.8</td><td>90.1</td></tr></table>
259
+
260
+ Table 6. Few-shot visual classification accuracy $(\uparrow)$ , evaluated on both image and video modality.
261
+
262
+ # 4.5. Visual Semantic Comprehension
263
+
264
+ Few-Shot Visual Classification. We conducted experiments on few-shot image classification and video action recognition tasks. SweetTok extracted visual tokens and transformed them into natural language words via our language-based codebook. Subsequently, CLIP computed the similarity between the visual inputs and the text embeddings, and the top 21 tokens with the highest similarity were selected to form a prompt for prediction using the Qwen LLM. For the image classification task, we adhered to the V2L protocol [54], comparing SweetTok against SPAE [50] and V2L. For the video action recognition task, we used ARN [53] and HF-AR [23] as baselines. The results in Table 6 indicate that SweetTok achieved an average accuracy of $90.8\%$ on the miniImageNet dataset, surpassing SPAE ($89.9\%$) and V2L ($87.6\%$). On the UCF-101 dataset, SweetTok attains an accuracy of $90.1\%$, outperforming ARN ($83.1\%$) and HF-AR ($86.4\%$). These findings demonstrate SweetTok's semantic ability in both image and video tasks. More results are in the supplementary material.
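+
+ A sketch of this evaluation pipeline is shown below; `tokens_to_words`, `image_features`, and `text_features` are hypothetical helpers, and the prompt template is simplified relative to the V2L protocol [54].
+
+ ```python
+ def build_prompt(video, tokenizer, clip_model, k=21):
+     # Decode the discrete tokens into their codebook words.
+     words = tokenizer.tokens_to_words(tokenizer.encode(video))
+     # Rank the words by CLIP similarity to the visual input.
+     sim = clip_model.image_features(video) @ clip_model.text_features(words).T
+     top = [words[i] for i in sim.mean(0).topk(k).indices]
+     return "This video can be described by: " + ", ".join(top)
+ ```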
265
+
266
+ # 5. Conclusions
267
+
268
+ We present SweetTok, an efficient video tokenization framework that compresses spatial and temporal information through the decoupled query autoencoder. Combined with the motion-enhanced language codebook, SweetTok reduces the token count for video data more effectively, achieving higher reconstruction fidelity than previous state-of-the-art methods. Our approach offers a compact representation of video data, making it well-suited for downstream tasks such as video generation and understanding, and marking a significant step toward efficient video tokenization.
269
+
270
+ # References
271
+
272
+ [1] Language model beats diffusion-tokenizer is key to visual generation. ICLR, 2023. 1, 2, 3, 6
273
+ [2] OmniTokenizer: A joint image-video tokenizer for visual generation. NeurIPS, 2024. 1, 2, 3, 4, 5, 6, 7, 8
274
+ [3] Larp: Tokenizing videos with a learned autoregressive generative prior. *ICLR Oral*, 2025. 1, 2, 3, 4, 6, 7, 8
275
+ [4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023. 8
276
+ [5] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 3
277
+ [6] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213-229. Springer, 2020. 2
278
+ [7] Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, and Andrew Zisserman. A short note about kinetics-600. arXiv preprint arXiv:1808.01340, 2018. 5
279
+ [8] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In CVPR, pages 11315-11325, 2022. 1, 2, 6
280
+ [9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. JMLR, 24(240):1-113, 2023. 3
281
+ [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009. 2, 5
282
+ [11] Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2018. 2, 3
283
+ [12] Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2020. 1, 2
284
+ [13] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In CVPR, 2021. 2, 5, 6, 7
285
+ [14] Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, and Devi Parikh. Long video generation with time-agnostic vqgan and time-sensitive transformer. In ECCV, pages 102-118. Springer, 2022. 1, 2, 3, 6
286
+ [15] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. NeurIPS, 30, 2017. 5
287
+ [16] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022. 6
288
+ [17] Yang Jin, Zhicheng Sun, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, and Yadong Mu. Video-lavit: Unified videolanguage pre-training with decoupled visual-motional tokenization. In ICML, pages 22185-22209, 2024. 1, 2, 6
289
+
290
+ [18] Yang Jin, Kun Xu, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Yadong Mu, et al. Unified language-vision pretraining in llm with dynamic discrete visual tokenization. In ICLR, 2024. 2
291
+ [19] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 5
292
+ [20] Diederik P Kingma. Auto-encoding variational bayes. ICLR, 2013. 2
293
+ [21] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
294
+ [22] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *ICLR*, 2016. 3
295
+ [23] Neeraj Kumar and Siddhansh Narang. Few shot activity recognition using variational inference. arXiv preprint arXiv:2108.08990, 2021. 8
296
+ [24] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. In CVPR, pages 11523-11532, 2022. 7
297
+ [25] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, pages 19730–19742, 2023. 2, 4
298
+ [26] Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. EMNLP, 2023. 1, 2
299
+ [27] Hao Liu, Wilson Yan, and Pieter Abbeel. Language quantized autoencoders: Towards unsupervised text-image alignment. NeurIPS, 36, 2023. 2, 3, 5
300
+ [28] Liao Qu, Huichao Zhang, Yiheng Liu, Xu Wang, Yi Jiang, Yiming Gao, Hu Ye, Daniel K Du, Zehuan Yuan, and Xinglong Wu. Tokenflow: Unified image tokenizer for multimodal understanding and generation. arXiv preprint arXiv:2412.03069, 2024. 7
301
+ [29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 2
302
+ [30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. 3, 5, 8
303
+ [31] Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. NeurIPS, 32, 2019. 2
304
+ [32] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684-10695, 2022. 3
305
+ [33] K Soomro. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 2, 5
306
+ [34] Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan. Autoregressive model
307
+
308
+ beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024. 7
309
+ [35] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. ICLR, 2024. 1, 2
310
+ [36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 2
311
+ [37] Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. NeurIPS, 34:200-212, 2021. 5
312
+ [38] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. arXiv preprint arXiv:1812.01717, 2018. 5
313
+ [39] Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. NeurIPS, 30, 2017. 1, 2, 5
314
+ [40] A Vaswani. Attention is all you need. NeurIPS, 2017. 2
315
+ [41] Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual descriptions. In ICLR, 2023. 1, 2
316
+ [42] Junke Wang, Dongdong Chen, Chong Luo, Bo He, Lu Yuan, Zuxuan Wu, and Yu-Gang Jiang. Omnivid: A generative framework for universal video understanding. In CVPR, pages 18209-18220, 2024. 1, 2
317
+ [43] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. Internvideo2: Scaling video foundation models for multimodal video understanding. ECCV, 2024. 1, 2
318
+ [44] Chen Wei, Chenxi Liu, Siyuan Qiao, Zhishuai Zhang, Alan Yuille, and Jiahui Yu. De-diffusion makes text a strong cross-modal interface. In CVPR, pages 13492-13503, 2024. 3
319
+ [45] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. Videogpt: Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157, 2021. 7
320
+ [46] Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, and Seunghoon Hong. Towards end-to-end generative modeling of long videos with memory-efficient bidirectional transformers. In CVPR, pages 22888-22897, 2023. 1
321
+ [47] Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. ICLR, 2022. 2, 7
322
+ [48] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. ICLR, 2024. 2
323
+ [49] Lijun Yu, Yong Cheng, Kihyuk Sohn, José Lezama, Han Zhang, Huiwen Chang, Alexander G Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, and Lu Jiang. MAGVIT:
324
+
325
+ Masked generative video transformer. In CVPR, 2023. 1, 2, 3, 6, 7
326
+ [50] Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, et al. Spae: Semantic pyramid autoencoder for multimodal generation with frozen llms. NeurIPS, 36, 2023. 2, 3, 5, 8
327
+ [51] Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, and Liang-Chieh Chen. An image is worth 32 tokens for reconstruction and generation. NeurIPS, 2024. 1, 2, 4, 7, 8
328
+ [52] Baoquan Zhang, Huaibin Wang, Chuyao Luo, Xutao Li, Guotao Liang, Yunming Ye, Xiaochen Qi, and Yao He. Codebook transfer with part-of-speech for vector-quantized image modeling. In CVPR, pages 7757-7766, 2024. 2, 3, 5
329
+ [53] Hongguang Zhang, Li Zhang, Xiaojuan Qi, Hongdong Li, Philip HS Torr, and Piotr Koniusz. Few-shot action recognition with permutation-invariant attention. In ECCV, pages 525-542. Springer, 2020. 5, 8
330
+ [54] Lei Zhu, Fangyun Wei, and Yanye Lu. Beyond text: Frozen large language models in visual signal comprehension. In CVPR, pages 27047-27057, 2024. 3, 8
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bba7b382e4d25336404d1b0ff61b6a4fa43c57c8ce601650f60187f5993c0d9b
3
+ size 813276
2025/SweetTok_ Semantic-Aware Spatial-Temporal Tokenizer for Compact Video Discretization/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_content_list.json ADDED
@@ -0,0 +1,1515 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "Switch-a-View: View Selection Learned from Unlabeled In-the-wild Videos",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 120,
8
+ 130,
9
+ 875,
10
+ 151
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Sagnik Majumder $^{1}$ Tushar Nagarajan $^{1}$ Ziad Al-Halah $^{2}$ Kristen Grauman $^{1}$ $^{1}$ University of Texas at Austin $^{2}$ University of Utah",
17
+ "bbox": [
18
+ 179,
19
+ 179,
20
+ 816,
21
+ 219
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "image",
27
+ "img_path": "images/c8d8a120cc3841d245058c09b3a3b2c27a1a260051c9ada3885ab9532ac0a810.jpg",
28
+ "image_caption": [
29
+ "Training: Learn human view choices from large-scale in-the-wild videos"
30
+ ],
31
+ "image_footnote": [],
32
+ "bbox": [
33
+ 94,
34
+ 252,
35
+ 529,
36
+ 416
37
+ ],
38
+ "page_idx": 0
39
+ },
40
+ {
41
+ "type": "image",
42
+ "img_path": "images/9c4b2ca15bf2954f908beed6455b5a985764c00e2ed9467e0c3d950a1a437a28.jpg",
43
+ "image_caption": [
44
+ "Inference: Select the best view sequence for a multi-view video",
45
+ "Figure 1. Given a multi-view narrated how-to video, can we select the sequence of camera viewpoints that best show the activity—automating the camerawork that is today done with manual editing? While direct supervision for this task is impractical, our SWITCH-A-VIEW approach shows how to learn typical viewpoint choice patterns from large-scale unlabeled in-the-wild instructional videos (left), then translate those patterns to novel multi-view videos (right), yielding an informative how-to that hops between the most useful ego/exo viewpoints."
46
+ ],
47
+ "image_footnote": [],
48
+ "bbox": [
49
+ 535,
50
+ 257,
51
+ 903,
52
+ 410
53
+ ],
54
+ "page_idx": 0
55
+ },
56
+ {
57
+ "type": "text",
58
+ "text": "Abstract",
59
+ "text_level": 1,
60
+ "bbox": [
61
+ 246,
62
+ 510,
63
+ 326,
64
+ 526
65
+ ],
66
+ "page_idx": 0
67
+ },
68
+ {
69
+ "type": "text",
70
+ "text": "We introduce SWITCH-A-VIEW, a model that learns to automatically select the viewpoint to display at each timepoint when creating a how-to video. The key insight of our approach is how to train such a model from unlabeled—but human-edited—video samples. We pose a pretext task that pseudo-labels segments in the training videos for their primary viewpoint (egocentric or exocentric), and then discovers the patterns between the visual and spoken content in a how-to video on the one hand and its view-switch moments on the other hand. Armed with this predictor, our model can be applied to new multi-view videos to orchestrate which viewpoint should be displayed when. We demonstrate our idea on a variety of real-world videos from HowTo100M and Ego-Exo4D, and rigorously validate its advantages.",
71
+ "bbox": [
72
+ 88,
73
+ 542,
74
+ 485,
75
+ 755
76
+ ],
77
+ "page_idx": 0
78
+ },
79
+ {
80
+ "type": "text",
81
+ "text": "1. Introduction",
82
+ "text_level": 1,
83
+ "bbox": [
84
+ 89,
85
+ 782,
86
+ 220,
87
+ 799
88
+ ],
89
+ "page_idx": 0
90
+ },
91
+ {
92
+ "type": "text",
93
+ "text": "Video is an amazing medium for communication, and today's widely used Internet platforms make it easy to create and share content broadly. Instructional or \"how-to\" video is particularly compelling in this setting: YouTube, TikTok, and similar sites have democratized the ability to share our talents with others, by both showing and telling how to",
94
+ "bbox": [
95
+ 88,
96
+ 809,
97
+ 483,
98
+ 901
99
+ ],
100
+ "page_idx": 0
101
+ },
102
+ {
103
+ "type": "text",
104
+ "text": "perform some special skill. From how to plant a garden, how to make yogurt, how to fold origami, or how to give a dog a haircut, there is no shortage of how-to nuggets produced and consumed by users of many ages and backgrounds.",
105
+ "bbox": [
106
+ 511,
107
+ 512,
108
+ 906,
109
+ 573
110
+ ],
111
+ "page_idx": 0
112
+ },
113
+ {
114
+ "type": "text",
115
+ "text": "Creating an effective how-to video, however, is not trivial. From potentially hours of footage from multiple cameras capturing all aspects of the instructional activity, a creator needs to edit down to the essential steps of their demonstration and decide on the camera viewpoint (view) for each temporal segment that best reveals what they want to show. For example, when showing how to cut the dog's hair, the instructor might first appear standing beside the dog—the camera more distant—then the camera may zoom close up to her using scissors and describing how to trim near the ear, then zoom back out while she shows progress across the dog's body. How-to videos often exhibit this sequential mix of \"exocentric\" and \"egocentric-like\" viewpoints to effectively recap the procedure with clear visuals.",
116
+ "bbox": [
117
+ 509,
118
+ 578,
119
+ 908,
120
+ 790
121
+ ],
122
+ "page_idx": 0
123
+ },
124
+ {
125
+ "type": "text",
126
+ "text": "The status quo is to either orchestrate camerework live while filming, or do post-recording editing among the multiple available cameras—both of which are labor intensive. Work in automatic cinematography [7, 15, 16, 22, 49, 56], though inspiring, relies on heuristics or domain-specific models that are not equipped to address automatic editing of video demonstrations. How could we train an \"AI how-to",
127
+ "bbox": [
128
+ 509,
129
+ 795,
130
+ 910,
131
+ 900
132
+ ],
133
+ "page_idx": 0
134
+ },
135
+ {
136
+ "type": "header",
137
+ "text": "CVF",
138
+ "bbox": [
139
+ 106,
140
+ 2,
141
+ 181,
142
+ 42
143
+ ],
144
+ "page_idx": 0
145
+ },
146
+ {
147
+ "type": "header",
148
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
149
+ "bbox": [
150
+ 238,
151
+ 0,
152
+ 807,
153
+ 46
154
+ ],
155
+ "page_idx": 0
156
+ },
157
+ {
158
+ "type": "page_number",
159
+ "text": "11969",
160
+ "bbox": [
161
+ 478,
162
+ 944,
163
+ 519,
164
+ 955
165
+ ],
166
+ "page_idx": 0
167
+ },
168
+ {
169
+ "type": "text",
170
+ "text": "cameraman\", which, given a stream of two or more simultaneous camera views, could hop between them intelligently?",
171
+ "bbox": [
172
+ 89,
173
+ 90,
174
+ 485,
175
+ 119
176
+ ],
177
+ "page_idx": 1
178
+ },
179
+ {
180
+ "type": "text",
181
+ "text": "Supervising this learning task presents a problem. There are vast amounts of positive examples of well-edited how-to videos, but those edited results hide the \"negatives\"—the viewpoints that were not chosen for inclusion in the final video at any given time point. Those are left on the cutting room floor. This makes it unclear how to translate the editing patterns in in-the-wild edited video to new data.",
182
+ "bbox": [
183
+ 88,
184
+ 122,
185
+ 480,
186
+ 227
187
+ ],
188
+ "page_idx": 1
189
+ },
190
+ {
191
+ "type": "text",
192
+ "text": "To tackle this learning challenge, we design a pretext task for learning human view preferences from varying-view instructional videos on the Web. Varying-view means that the source training videos display an arbitrary number of view switches over the course of the video (e.g., from ego to exo and back as in our example above), and contain only one viewpoint at any time. We introduce a model called SWITCH-A-VIEW that learns from such data; it uses past frames in concert with the how-to narrations spoken by the demonstrator, which are widely available in instructional videos, to learn a binary classifier indicating whether the viewpoint is going to switch or not at the current time step. Then, we deploy this pretext-trained model in multi-view, narrated video settings with limited best view labels, and decide how to orchestrate the view selection of such videos over time. In this way, our approach captures the view-switch patterns from widely diverse unlabeled in-the-wild videos, then translates those trends to automatically direct the camerawork in new instances. See Fig. 1.",
193
+ "bbox": [
194
+ 91,
195
+ 229,
196
+ 485,
197
+ 516
198
+ ],
199
+ "page_idx": 1
200
+ },
201
+ {
202
+ "type": "text",
203
+ "text": "We train and evaluate our approach on HowTo100M [34], an extensive repository of real-world how-to videos, and further show generalization to multi-view Ego-Exo4D [18] videos. Our findings confirm that human judges exhibit substantial agreement on what constitutes a \"best view\" in a how-to video, establishing that it is possible to rigorously evaluate this task. Furthermore, our results show SWITCH-A-VIEW outperforms the state-of-the-art in multi-view video view selection [32] as well as proprietary VLMs like Gemini 2.5 Pro and GPT-4o [12, 23] and other baselines.",
204
+ "bbox": [
205
+ 89,
206
+ 518,
207
+ 483,
208
+ 667
209
+ ],
210
+ "page_idx": 1
211
+ },
212
+ {
213
+ "type": "text",
214
+ "text": "2. Related work",
215
+ "text_level": 1,
216
+ "bbox": [
217
+ 89,
218
+ 686,
219
+ 228,
220
+ 700
221
+ ],
222
+ "page_idx": 1
223
+ },
224
+ {
225
+ "type": "text",
226
+ "text": "Automatic cinematography. In automatic cinematography, systems automate the process of creating an effective video presentation given a video scene, such as controlling camera movements, angles, and transitions. Prior work targets classroom environments [16, 21, 56], group activities [2], or (pseudo-)panoramic recordings [6, 7, 9, 15, 49, 50, 55]. Different from all of the above, we tackle view selection in multi-view instructional scenarios. Moreover, we seek a lighter-weight supervision solution: whereas prior work uses supervised discriminative methods requiring large-scale best view labels [7, 22, 49] or bootstraps view selector training using large-scale multi-view videos annotated with view-agnostic narrations [32], we aim to learn view selection",
227
+ "bbox": [
228
+ 89,
229
+ 704,
230
+ 485,
231
+ 900
232
+ ],
233
+ "page_idx": 1
234
+ },
235
+ {
236
+ "type": "text",
237
+ "text": "from readily available in-the-wild unlabeled instructional videos. Furthermore, our model is multimodal, integrating both the video content as well as its transcribed speech.",
238
+ "bbox": [
239
+ 511,
240
+ 90,
241
+ 906,
242
+ 137
243
+ ],
244
+ "page_idx": 1
245
+ },
246
+ {
247
+ "type": "text",
248
+ "text": "View selection in active perception. More distant from our problem, work in active perception and robotics considers how agents can intelligently select their visual input stream. This includes next-best-view selection, where an embodied agent learns to actively place a camera for recognition [1, 8, 13, 24, 40] or segmentation [44, 45]. Whereas the objective in such work is to spend less time or compute for an agent to see sufficient content, our goal is instead to choose the sequence of informative camera views for human consumption, from among the available viewpoints.",
249
+ "bbox": [
250
+ 511,
251
+ 146,
252
+ 908,
253
+ 297
254
+ ],
255
+ "page_idx": 1
256
+ },
257
+ {
258
+ "type": "text",
259
+ "text": "Weak supervision from Web data. Large-scale instructional data from the Web has been shown to provide weak supervision for understanding instructional activities, by aligning frames [33] and narrations [29, 33] with their step descriptions from instructional Web articles (e.g., Wiki-How), or through modeling the temporal order and interdependence of steps [3, 58, 59]. Unlike any of these methods, we tackle a distinct problem of weakly supervised view-switch detection in instructional videos, with the end goal of using the detector for view selection.",
260
+ "bbox": [
261
+ 511,
262
+ 306,
263
+ 910,
264
+ 458
265
+ ],
266
+ "page_idx": 1
267
+ },
268
+ {
269
+ "type": "text",
270
+ "text": "Video summarization. Temporal video summarization [4, 20, 35, 37, 42] entails creating a short but informative summary of a long video by subsampling keyframes or clips from it. While early methods are largely unsupervised [25, 30, 38, 47], more recent works derive supervision from manual labels [17, 19, 27, 41, 48, 57]. Limited work explores summarization in the context of multiple input videos [10, 14, 37, 43]. Video summarization and viewpoint selection are two entirely distinct tasks. Video summarization aims to downsample the video in time to the essential parts, whereas our task essentially requires downsampling the video in space to isolate the most informative viewpoint.",
271
+ "bbox": [
272
+ 511,
273
+ 468,
274
+ 910,
275
+ 650
276
+ ],
277
+ "page_idx": 1
278
+ },
279
+ {
280
+ "type": "text",
281
+ "text": "3. Approach",
282
+ "text_level": 1,
283
+ "bbox": [
284
+ 511,
285
+ 662,
286
+ 624,
287
+ 680
288
+ ],
289
+ "page_idx": 1
290
+ },
291
+ {
292
+ "type": "text",
293
+ "text": "Our goal is to train a model to predict the \"best view sequence\" for multi-camera instructional videos — the sequence of camera viewpoints (views) that a human would most likely select to demonstrate an instructional activity (e.g., a close-up view of ingredients in a cooking video, moving to a wide-shot view when the chef speaks and gestures). To tackle this, we train a model for the proxy task of detecting \"view switches\" in varying-view instructional videos, which we then bootstrap to form a view selection model.",
294
+ "bbox": [
295
+ 511,
296
+ 688,
297
+ 908,
298
+ 824
299
+ ],
300
+ "page_idx": 1
301
+ },
302
+ {
303
+ "type": "text",
304
+ "text": "First, we formally define our pretext task (Sec. 3.1). Next, we describe how to source pseudo-labels for our pretext task by automatically classifying views in varying-view videos (Sec. 3.2). We then describe our method and how to train it to predict view-switches (Sec. 3.3). Finally, we describe",
305
+ "bbox": [
306
+ 511,
307
+ 825,
308
+ 908,
309
+ 900
310
+ ],
311
+ "page_idx": 1
312
+ },
313
+ {
314
+ "type": "page_number",
315
+ "text": "11970",
316
+ "bbox": [
317
+ 478,
318
+ 944,
319
+ 519,
320
+ 955
321
+ ],
322
+ "page_idx": 1
323
+ },
324
+ {
325
+ "type": "image",
326
+ "img_path": "images/3b2aa182b53c02cc2334c9ded3b6ffeca75de2ec72434b7fc9dcb4fc8a08e3bc.jpg",
327
+ "image_caption": [
328
+ "Figure 2. Given varying-view instructional videos—videos composed of a sequence of views chosen by human(s) to accurately show the instructional activity at all times—our goal is to train a view-switch detector $D$ that can predict if the view should switch or not, at any time in a new video. Our hypothesis is that such a detector, when trained on large-scale and in-the-wild videos, can capture human view preferences and facilitate learning best view selection in multi-view settings with limited labels. However, such in-the-wild videos lack view labels. To train nevertheless, we propose an approach comprising (a) a view pseudo-labeler (left) that given a varying-view instructional video $I$ , automatically classifies views in it and generates a pseudo-label set $\\tilde{V}^I$ , and (b) a view-switch detector $D$ (right) that given the pseudo-labels $\\tilde{V}^I$ and any time $t$ in $I$ , learns to predict the next view. The prediction is conditioned on the past frames, past narrations, and the next narration, where narrations are naturally occurring spoken content from the how-to demonstrator."
329
+ ],
330
+ "image_footnote": [],
331
+ "bbox": [
332
+ 94,
333
+ 87,
334
+ 906,
335
+ 255
336
+ ],
337
+ "page_idx": 2
338
+ },
339
+ {
340
+ "type": "text",
341
+ "text": "how our view-switch detector can bootstrap learning a view selection model (Sec. 3.4 and 3.5) with limited labels.",
342
+ "bbox": [
343
+ 89,
344
+ 388,
345
+ 480,
346
+ 417
347
+ ],
348
+ "page_idx": 2
349
+ },
350
+ {
351
+ "type": "text",
352
+ "text": "3.1. View-switch detection as a pretext task",
353
+ "text_level": 1,
354
+ "bbox": [
355
+ 89,
356
+ 429,
357
+ 423,
358
+ 445
359
+ ],
360
+ "page_idx": 2
361
+ },
362
+ {
363
+ "type": "text",
364
+ "text": "We introduce our pretext task: view-switch detection in varying-view instructional videos. Consider a varying-view instructional video $I$ , where the view changes back and forth over time between a close-up / egocentric-like (ego) view, and a wide shot / exocentric-like (exo) view. $^{1}$ This results in a sequence of varying views $V$ . The instructional video also contains a sequence of narrations $N$ , where each narration $N_{i}$ has a start and end time, $(b_{i}, e_{i})$ , and provides commentary transcribed to text. These narrations are free-form spoken language from the demonstrator, which capture their actions (\"hammer the nail in there\") as well as side comments (\"sometimes I use my sander instead\", \"thanks for watching!\").",
365
+ "bbox": [
366
+ 88,
367
+ 450,
368
+ 482,
369
+ 648
370
+ ],
371
+ "page_idx": 2
372
+ },
373
+ {
374
+ "type": "text",
375
+ "text": "We formulate the view-switch detection task as a two-class view prediction problem, where at any time $t$ in the video, the model must detect if the view should be of type ego or exo, to best showcase the activity over the next $\\Delta$ seconds. More specifically, we require a model $D$ that predicts the human-preferred view $V_{(t,t + \\Delta]}$ given the past video, narrations and views, as well as the next narration. Formally,",
376
+ "bbox": [
377
+ 89,
378
+ 648,
379
+ 485,
380
+ 753
381
+ ],
382
+ "page_idx": 2
383
+ },
384
+ {
385
+ "type": "equation",
386
+ "text": "\n$$\nD (F _ {[: t ]}, N _ {[: t ]}, V _ {[: t ]}, N _ {(t, t + \\Delta ]} ^ {\\prime}) = V _ {(t, t + \\Delta ]},\n$$\n",
387
+ "text_format": "latex",
388
+ "bbox": [
389
+ 148,
390
+ 763,
391
+ 423,
392
+ 782
393
+ ],
394
+ "page_idx": 2
395
+ },
396
+ {
397
+ "type": "text",
398
+ "text": "where $F_{[:t]}$ is the past frames, $N_{[:t]}$ is the past narrations and $V_{[:t]}$ is the past views. $N_{(t,t + \\Delta ]}^{\\prime}$ is the next narration, if it overlaps with the prediction interval, and an empty string otherwise. Importantly, this formulation provides a path from the next-view prediction task to the view-switch",
399
+ "bbox": [
400
+ 89,
401
+ 787,
402
+ 483,
403
+ 864
404
+ ],
405
+ "page_idx": 2
406
+ },
407
+ {
408
+ "type": "text",
409
+ "text": "task: since the most recent past view is observed, estimating the desired next view—and comparing it with the latest past view—is equivalent to predicting whether the view switched.",
410
+ "bbox": [
411
+ 511,
412
+ 388,
413
+ 906,
414
+ 431
415
+ ],
416
+ "page_idx": 2
417
+ },
418
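+ {
+ "type": "text",
+ "text": "As a concrete illustration of this formulation, a minimal sketch of the implied prediction interface follows (not the paper's implementation; the tensor shapes, names, and the detector callable are assumptions):\n\n```python\nfrom dataclasses import dataclass\nfrom typing import List\n\nimport torch\n\n@dataclass\nclass SwitchInput:\n    past_frames: torch.Tensor    # F_[:t], shape (T, 3, H, W)\n    past_narrations: List[str]   # N_[:t], transcribed spoken commentary\n    past_views: torch.Tensor     # V_[:t], 0 = ego, 1 = exo, shape (T,)\n    next_narration: str          # N'_(t, t+Delta], empty if none overlaps\n\ndef detect_switch(detector, x: SwitchInput) -> bool:\n    # Predict the view for (t, t+Delta] and compare it with the latest\n    # observed past view: a mismatch means the view switches.\n    next_view = detector(x)  # 0 = ego, 1 = exo\n    return next_view != int(x.past_views[-1])\n```",
+ "page_idx": 2
+ },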
+ {
419
+ "type": "text",
420
+ "text": "While past narrations provide high-level cues about past activity steps, past frames offer more fine-grained information about the steps and how they were viewed. They together form the past context that can help anticipate the next view. The next narration is essential to disambiguate between various potential actions that the demonstrator may do next, and the language directly hints at the appropriate views (e.g., the person says \"next, let's take a closer look at ...\" suggesting an ego view). Thus, combining these inputs will offer valuable cues to our detector.2",
421
+ "bbox": [
422
+ 511,
423
+ 433,
424
+ 908,
425
+ 583
426
+ ],
427
+ "page_idx": 2
428
+ },
429
+ {
430
+ "type": "text",
431
+ "text": "Critically, we aim to train this detector on large-scale, in-the-wild instructional videos [34, 51]. We show that training for this pretext task can enable view selection models for multi-camera settings, with limited supervision. In short, representations developed to detect when to \"switch view\" can be repurposed with minimal modification to select the \"best view\" to switch to, since they contain rich knowledge of human-selected view-switch patterns in a large variety of in-the-wild scenarios. Next, we show how to source pseudolabels to train such models.",
432
+ "bbox": [
433
+ 511,
434
+ 584,
435
+ 910,
436
+ 734
437
+ ],
438
+ "page_idx": 2
439
+ },
440
+ {
441
+ "type": "text",
442
+ "text": "3.2. Sourcing \"view-switch\" pseudo-labels",
443
+ "text_level": 1,
444
+ "bbox": [
445
+ 511,
446
+ 742,
447
+ 839,
448
+ 758
449
+ ],
450
+ "page_idx": 2
451
+ },
452
+ {
453
+ "type": "text",
454
+ "text": "Instructional videos [34, 51] are an ideal source of varying-view data, however they do not come paired with information about what camera viewpoint is chosen for each segment. We therefore design a strategy to automatically identify and pseudo-label their underlying view sequences. We do this in two stages (Fig. 2 left).",
455
+ "bbox": [
456
+ 511,
457
+ 763,
458
+ 908,
459
+ 856
460
+ ],
461
+ "page_idx": 2
462
+ },
463
+ {
464
+ "type": "page_footnote",
465
+ "text": "2Note that next-step narrations are also available at inference time, when we have multi-view video content and a full narration track, and we aim to perform view selection.",
466
+ "bbox": [
467
+ 511,
468
+ 862,
469
+ 906,
470
+ 900
471
+ ],
472
+ "page_idx": 2
473
+ },
474
+ {
475
+ "type": "page_footnote",
476
+ "text": "<sup>1</sup>We adopt this ego/exo view taxonomy given their importance and prevalence in instructional video datasets [18, 34, 51].",
477
+ "bbox": [
478
+ 89,
479
+ 875,
480
+ 482,
481
+ 901
482
+ ],
483
+ "page_idx": 2
484
+ },
485
+ {
486
+ "type": "page_number",
487
+ "text": "11971",
488
+ "bbox": [
489
+ 478,
490
+ 944,
491
+ 517,
492
+ 955
493
+ ],
494
+ "page_idx": 2
495
+ },
496
+ {
497
+ "type": "text",
498
+ "text": "First, given a video $I$ , we use an off-the-shelf scene detector (PySceneDetect [5]) to compute scene boundaries. Using this, we split the video into a sequence of contiguous shots. Next, we classify each frame in the video using a pre-trained ego vs. exo view classifier, and then aggregate the class predictions into a shot-level pseudo-label. Specifically, given a shot from $I$ , we first split it into a sequence of fixed-length clips. Next, we feed each clip to the view classifier that produces the probability that the clip is from an ego vs. exo view. We then compute the pseudo-label for the whole shot by averaging the view probabilities across all its clips. We repeat these steps for all shots, and assign each frame in $I$ the same pseudo-label as the shot it lies in, to finally obtain a pseudo-label set $\\tilde{V}^I$ . Combining the classifier with the scene detector reduces the overall noise in the pseudo-labels due to classification failures at scene boundaries. We use a learned model [28] for ego-exo view classification, trained on the Charades-Ego [46] dataset. See Supp. for details.",
499
+ "bbox": [
500
+ 89,
501
+ 90,
502
+ 485,
503
+ 363
504
+ ],
505
+ "page_idx": 3
506
+ },
507
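+ {
+ "type": "text",
+ "text": "To make the two-stage pipeline concrete, a minimal sketch follows (PySceneDetect's detect/ContentDetector API is the off-the-shelf detector cited above; load_clips and view_clf are illustrative stand-ins for the clip loader and the learned ego/exo classifier):\n\n```python\nimport numpy as np\nfrom scenedetect import detect, ContentDetector\n\ndef pseudo_label_video(video_path, load_clips, view_clf):\n    # Stage 1: split the video into contiguous shots at scene boundaries.\n    shots = detect(video_path, ContentDetector())  # list of (start, end) timecodes\n    labels = []\n    for start, end in shots:\n        b, e = start.get_seconds(), end.get_seconds()\n        # Stage 2: classify fixed-length clips in the shot, then average the\n        # per-clip ego probabilities into a single shot-level pseudo-label.\n        p_ego = float(np.mean([view_clf(c) for c in load_clips(video_path, b, e)]))\n        # Every frame in the shot inherits the shot's pseudo-label.\n        labels.append((b, e, 'ego' if p_ego > 0.5 else 'exo'))\n    return labels  # pseudo-label set as (start_s, end_s, view) spans\n```",
+ "page_idx": 3
+ },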
+ {
508
+ "type": "text",
509
+ "text": "3.3. View-switch detector design",
510
+ "text_level": 1,
511
+ "bbox": [
512
+ 89,
513
+ 373,
514
+ 341,
515
+ 388
516
+ ],
517
+ "page_idx": 3
518
+ },
519
+ {
520
+ "type": "text",
521
+ "text": "Given a video $I$ and any time $t$ in it, our view-switch detector $D$ must successfully predict the view for the future time interval $(t, t + \\Delta]$ . It must do so using the frames, narrations and views from the past, and also the next narration, if it overlaps with the prediction interval (c.f. Sec. 3.1). See Fig. 2 right. In the following, we provide details on how our method extracts features from each input and then aggregates them for making a view prediction.",
522
+ "bbox": [
523
+ 89,
524
+ 395,
525
+ 483,
526
+ 517
527
+ ],
528
+ "page_idx": 3
529
+ },
530
+ {
531
+ "type": "text",
532
+ "text": "Frame encoding. We begin by using a frame encoder $\\mathcal{E}^F$ to embed the past frames $F_{[:t]}$ and produce a visual feature sequence $f$ , where each frame $F_i$ has a feature $f_i$ . We further enhance each feature $f_i$ by using a viewpoint encoder $\\mathcal{E}^V$ to embed the corresponding view $V_i^F$ into a view feature and adding it to $f_i$ . We also encode frame $F_i$ 's temporal position relative to the start time of the most recent narration using a temporal encoder $\\mathcal{E}^T$ and add the encoding to the enhanced frame feature. Producing a feature per frame and augmenting it with view and temporal information helps us create a fine-grained, and view- and temporally-aware representation predictive of the next view.",
533
+ "bbox": [
534
+ 89,
535
+ 526,
536
+ 483,
537
+ 708
538
+ ],
539
+ "page_idx": 3
540
+ },
541
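+ {
+ "type": "text",
+ "text": "A minimal sketch of this view- and temporally-aware frame encoding (the embedding dimension, bucket count, and module names are assumptions; the backbone stands in for $\\mathcal{E}^F$):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass FrameEncoder(nn.Module):\n    def __init__(self, backbone, dim=768, n_views=2, max_rel_t=64):\n        super().__init__()\n        self.backbone = backbone                      # E^F, e.g. a frozen image encoder\n        self.view_emb = nn.Embedding(n_views, dim)    # E^V: ego/exo view embedding\n        self.time_emb = nn.Embedding(max_rel_t, dim)  # E^T: relative-time embedding\n\n    def forward(self, frames, views, rel_t):\n        # frames: (T, 3, H, W); views: (T,) in {0, 1}; rel_t: (T,) position of\n        # each frame relative to the start time of the most recent narration.\n        f = self.backbone(frames)  # (T, dim) per-frame features\n        f = f + self.view_emb(views)  # make the features view-aware\n        f = f + self.time_emb(rel_t.clamp(max=self.time_emb.num_embeddings - 1))\n        return f\n```",
+ "page_idx": 3
+ },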
+ {
542
+ "type": "text",
543
+ "text": "Narration encoding. Next, we encode each past narration from $N_{[:t]}$ , and the next narration $N_{(t,t + \\Delta ]}^{\\prime}$ by using an LLM encoder. This generates a text feature sequence $n$ for the past narrations and a single text feature $n^{\\prime}$ for the next narration.",
544
+ "bbox": [
545
+ 89,
546
+ 719,
547
+ 483,
548
+ 779
549
+ ],
550
+ "page_idx": 3
551
+ },
552
+ {
553
+ "type": "text",
554
+ "text": "Similar to our encoding of past frames, we also make the features for past narrations view-aware. To do so, we first produce a per-view count of the frames that lie in the interval of each past narration $N_{i}$ . We then estimate the dominant viewpoint for the narration—called narration view, henceforth—by setting it to the most frequent view per the per-view frame count. Next, we use our view encoder $\\mathcal{E}^{V}$ to embed the narration view into a view feature. Finally, we",
555
+ "bbox": [
556
+ 89,
557
+ 780,
558
+ 483,
559
+ 900
560
+ ],
561
+ "page_idx": 3
562
+ },
563
+ {
564
+ "type": "text",
565
+ "text": "update the narration feature $n_i$ by adding it with the view feature.",
566
+ "bbox": [
567
+ 511,
568
+ 90,
569
+ 906,
570
+ 119
571
+ ],
572
+ "page_idx": 3
573
+ },
574
+ {
575
+ "type": "text",
576
+ "text": "Moreover, for both past and next narrations, we provide their temporal information to our model so that it can infer the alignment between the frames and the narrations, and use it to improve its cross-modal reasoning. To this end, we first normalize the start and end time pair for each past narration $N_{i}$ and next narration $N^{\\prime}$ , to be relative to the start time of the first past narration. We then compute the mean time of each pair. These means convey the temporal locations of the narrations relative to each other. Next, we encode each relative mean with the temporal encoder $\\mathcal{E}^T$ and obtain a temporal feature. Finally, we update the narration features, $n$ and $n^{\\prime}$ , by adding them with their temporal features.",
577
+ "bbox": [
578
+ 511,
579
+ 121,
580
+ 908,
581
+ 303
582
+ ],
583
+ "page_idx": 3
584
+ },
585
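+ {
+ "type": "text",
+ "text": "The narration-view and relative-time computations above reduce to a few lines; a minimal sketch (function and variable names are illustrative):\n\n```python\nimport torch\n\ndef narration_view(frame_views, frame_times, b_i, e_i):\n    # Dominant viewpoint of narration N_i: the most frequent view among\n    # the frames that lie inside its interval (b_i, e_i).\n    mask = (frame_times >= b_i) & (frame_times <= e_i)\n    counts = torch.bincount(frame_views[mask], minlength=2)  # per-view frame count\n    return int(counts.argmax())  # 0 = ego, 1 = exo\n\ndef relative_mean_times(spans):\n    # spans: (b_i, e_i) pairs for the past narrations followed by the next\n    # narration. Normalize to the first past narration's start time, then take\n    # the mean of each pair to locate the narrations relative to each other.\n    t0 = spans[0][0]\n    return [((b - t0) + (e - t0)) / 2.0 for b, e in spans]\n```",
+ "page_idx": 3
+ },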
+ {
586
+ "type": "text",
587
+ "text": "Feature aggregation and view classification. To aggregate the visual and narration features, we first add modality features to the frame features $f$ , and narration features, $n$ and $n'$ , respectively. These are modality-specific learnable embeddings that help distinguish between the visual and text modalities, and successfully do cross-modal reasoning.",
588
+ "bbox": [
589
+ 511,
590
+ 324,
591
+ 906,
592
+ 414
593
+ ],
594
+ "page_idx": 3
595
+ },
596
+ {
597
+ "type": "text",
598
+ "text": "We also introduce a [CLS] token in our model, and embed it with an encoder $\\mathcal{E}^{\\mathrm{CLS}}$ to produce a feature $c$ , so that the output of our feature aggregator, which corresponds to the [CLS] token, can be used to estimate the next view. Next, we feed the frame features $f$ , the past narration features $n$ , the next narration feature $n'$ , and the [CLS]-token feature $c$ into a feature aggregator $\\mathcal{A}$ . $\\mathcal{A}$ comprises a transformer [53] encoder that performs self-attention on all features and extracts multi-modal cues that are predictive of the next view. Finally, we take the output feature of $\\mathcal{A}$ , which corresponds to the [CLS] token, and pass it to a view classification head $\\mathcal{H}$ to get an estimate $\\hat{V}_{(t,t + \\Delta ]}$ of the next view $V_{(t,t, + \\Delta ]}$ . Formally,",
599
+ "bbox": [
600
+ 511,
601
+ 415,
602
+ 908,
603
+ 599
604
+ ],
605
+ "page_idx": 3
606
+ },
607
+ {
608
+ "type": "equation",
609
+ "text": "\n$$\n\\hat {V} _ {(t, t + \\Delta ]} = \\mathcal {H} (\\mathcal {A} (f, n, n ^ {\\prime}, c) [ j _ {\\mathrm {C L S}} ]), \\tag {1}\n$$\n",
610
+ "text_format": "latex",
611
+ "bbox": [
612
+ 589,
613
+ 611,
614
+ 906,
615
+ 630
616
+ ],
617
+ "page_idx": 3
618
+ },
619
+ {
620
+ "type": "text",
621
+ "text": "where $j_{\\mathrm{CLS}}$ is the feature index for the [CLS] token.",
622
+ "bbox": [
623
+ 511,
624
+ 637,
625
+ 854,
626
+ 652
627
+ ],
628
+ "page_idx": 3
629
+ },
630
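+ {
+ "type": "text",
+ "text": "Putting the aggregator and Eq. (1) together, a minimal sketch (the 8-layer aggregator and 2-layer head follow Sec. 5; the head count and other details are assumptions):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass ViewSwitchDetector(nn.Module):\n    def __init__(self, dim=768, n_layers=8, n_heads=8):\n        super().__init__()\n        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)\n        self.agg = nn.TransformerEncoder(layer, n_layers)  # aggregator A\n        self.modality = nn.Embedding(2, dim)  # learnable modality features\n        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # [CLS] feature c\n        self.head = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),\n                                  nn.Linear(dim, 2))  # classification head H\n\n    def forward(self, f, n, n_next):\n        # f: (B, Tf, dim) frame features; n: (B, Tn, dim) past-narration\n        # features; n_next: (B, 1, dim) next-narration feature.\n        f = f + self.modality.weight[0]  # visual modality feature\n        text = torch.cat([n, n_next], dim=1) + self.modality.weight[1]\n        tokens = torch.cat([self.cls.expand(f.size(0), -1, -1), f, text], dim=1)\n        out = self.agg(tokens)  # self-attention over all features\n        return self.head(out[:, 0])  # Eq. (1): read off the [CLS] position\n```",
+ "page_idx": 3
+ },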
+ {
631
+ "type": "text",
632
+ "text": "3.4. Repurposing switch detection for view selection",
633
+ "text_level": 1,
634
+ "bbox": [
635
+ 511,
636
+ 662,
637
+ 906,
638
+ 679
639
+ ],
640
+ "page_idx": 3
641
+ },
642
+ {
643
+ "type": "text",
644
+ "text": "Recall that in view selection, given a multi-view instructional video $I$ and any time $t$ in it, the goal is to predict the view that is preferred by humans for showing the activity in an interval $[t, t + \\Delta]$ . We introduce a view selector $S$ for tackling this task. $S$ is a modification of our view-switch detector $D$ , such that $S$ additionally has access to the frames from the simultaneously captured ego and exo views during the prediction interval $[t, t + \\Delta]$ .",
645
+ "bbox": [
646
+ 511,
647
+ 685,
648
+ 906,
649
+ 806
650
+ ],
651
+ "page_idx": 3
652
+ },
653
+ {
654
+ "type": "text",
655
+ "text": "To this end, we first use our frame encoder $\\mathcal{E}^F$ to embed the ego frames $F_{[t,t + \\Delta ]}^{G}$ and exo frames $F_{[t,t + \\Delta ]}^{X}$ into visual features $f^{G}$ and $f^{X}$ , respectively. Next, we append $f^{G}$ and $f^{X}$ to the input sequence of our feature aggregator $\\mathcal{A}$ . Finally, we treat $\\mathcal{A}$ 's output feature for its [CLS] token input, as a representation of the best view for $[t,t + \\Delta ]$ , and feed it",
656
+ "bbox": [
657
+ 511,
658
+ 806,
659
+ 908,
660
+ 901
661
+ ],
662
+ "page_idx": 3
663
+ },
664
+ {
665
+ "type": "page_number",
666
+ "text": "11972",
667
+ "bbox": [
668
+ 478,
669
+ 944,
670
+ 519,
671
+ 955
672
+ ],
673
+ "page_idx": 3
674
+ },
675
+ {
676
+ "type": "text",
677
+ "text": "to the detector's view classification head $\\mathcal{H}$ to get an estimate $\\ddot{V}_{[t,t + \\Delta ]}$ of the best view $V_{[t,t + \\Delta ]}$ .",
678
+ "bbox": [
679
+ 89,
680
+ 90,
681
+ 482,
682
+ 125
683
+ ],
684
+ "page_idx": 4
685
+ },
686
+ {
687
+ "type": "text",
688
+ "text": "To learn view selection we initialize $S$ with our detector's parameters, trained on the view-switch detection task, and finetune it using a small set of samples labeled for view selection. This design enables us to effectively use the knowledge from pretraining and learn view selection with limited labels. Next, we provide details for training and finetuning.",
689
+ "bbox": [
690
+ 89,
691
+ 123,
692
+ 483,
693
+ 214
694
+ ],
695
+ "page_idx": 4
696
+ },
697
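+ {
+ "type": "text",
+ "text": "A sketch of how $S$ extends the pretrained detector (a minimal illustration; frame_enc stands in for the shared frame encoder $\\mathcal{E}^F$ and the names are assumptions):\n\n```python\nimport copy\nimport torch\n\ndef make_selector(detector):\n    # Initialize S with D's parameters pretrained on view-switch detection,\n    # then finetune on the small set of best-view labels.\n    return copy.deepcopy(detector)\n\ndef select_view(selector, frame_enc, f, n, n_next, ego_frames, exo_frames):\n    # Encode the simultaneous candidate views for [t, t+Delta] with the same\n    # frame encoder and append them to the aggregator's input sequence.\n    f_ego = frame_enc(ego_frames).unsqueeze(0)  # F^G -> f^G\n    f_exo = frame_enc(exo_frames).unsqueeze(0)  # F^X -> f^X\n    f_all = torch.cat([f, f_ego, f_exo], dim=1)\n    logits = selector(f_all, n, n_next)  # reuses the [CLS] head H\n    return 'ego' if int(logits.argmax(-1)) == 0 else 'exo'\n```",
+ "page_idx": 4
+ },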
+ {
698
+ "type": "text",
699
+ "text": "3.5. Model training objective",
700
+ "text_level": 1,
701
+ "bbox": [
702
+ 89,
703
+ 227,
704
+ 316,
705
+ 242
706
+ ],
707
+ "page_idx": 4
708
+ },
709
+ {
710
+ "type": "text",
711
+ "text": "We train our view-switch detector $D$ with a view classification loss $\\mathcal{L}^D$ . We set $\\mathcal{L}^D$ to",
712
+ "bbox": [
713
+ 89,
714
+ 250,
715
+ 483,
716
+ 280
717
+ ],
718
+ "page_idx": 4
719
+ },
720
+ {
721
+ "type": "equation",
722
+ "text": "\n$$\n\\mathcal {L} ^ {D} = \\mathcal {L} _ {\\mathrm {C E}} \\left(\\hat {V} _ {(t, t + \\Delta ]}, \\tilde {V} _ {(t, t + \\Delta ]}\\right), \\tag {2}\n$$\n",
723
+ "text_format": "latex",
724
+ "bbox": [
725
+ 181,
726
+ 292,
727
+ 483,
728
+ 313
729
+ ],
730
+ "page_idx": 4
731
+ },
732
+ {
733
+ "type": "text",
734
+ "text": "where $\\hat{V}_{(t,t + \\Delta ]}$ is our estimated view (c.f. Sec. 3.3) and $\\tilde{V}_{(t,t + \\Delta ]}$ is the pseudo-label from our view pseudo-labeler (c.f. Sec. 3.2).",
735
+ "bbox": [
736
+ 89,
737
+ 328,
738
+ 483,
739
+ 376
740
+ ],
741
+ "page_idx": 4
742
+ },
743
+ {
744
+ "type": "text",
745
+ "text": "To train our view selector $S$ , we obtain a small training set of best view labels, $B$ , such that $B = \\{V_{[t_1,t_1 + \\Delta ]},\\dots ,V_{[t_W,t_W + \\Delta ]}\\}$ , and $W$ is the label count in $B$ . For each best view label $V_{[t_w,t_w + \\Delta ]}\\in B$ , and the corresponding view estimate $\\ddot{V}_{[t_w,t_w + \\Delta ]}$ , per our view selector $S$ (c.f. Sec. 3.4), we set our view selection loss $\\mathcal{L}^S$ to a cross-entropy loss, such that",
746
+ "bbox": [
747
+ 89,
748
+ 378,
749
+ 483,
750
+ 486
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "equation",
756
+ "text": "\n$$\n\\mathcal {L} ^ {S} = \\mathcal {L} _ {\\mathrm {C E}} \\left(\\ddot {V} _ {\\left(t _ {w}, t _ {w} + \\Delta \\right]}, V _ {\\left(t _ {w}, t _ {w} + \\Delta \\right]}\\right). \\tag {3}\n$$\n",
757
+ "text_format": "latex",
758
+ "bbox": [
759
+ 163,
760
+ 500,
761
+ 483,
762
+ 518
763
+ ],
764
+ "page_idx": 4
765
+ },
766
+ {
767
+ "type": "text",
768
+ "text": "Once trained, our framework can accurately choose the preferred view in novel multi-view videos.",
769
+ "bbox": [
770
+ 89,
771
+ 532,
772
+ 482,
773
+ 563
774
+ ],
775
+ "page_idx": 4
776
+ },
777
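+ {
+ "type": "text",
+ "text": "Both objectives are plain cross-entropy; a minimal sketch of the corresponding training steps (batch keys and the optimizer wiring are illustrative):\n\n```python\nimport torch.nn.functional as F\n\ndef detector_step(detector, batch, opt):\n    # Eq. (2): cross-entropy between the predicted next view and the\n    # pseudo-label from the view pseudo-labeler (Sec. 3.2).\n    logits = detector(batch['f'], batch['n'], batch['n_next'])\n    loss = F.cross_entropy(logits, batch['pseudo_view'])\n    opt.zero_grad()\n    loss.backward()\n    opt.step()\n    return float(loss)\n\ndef selector_step(selector, batch, opt):\n    # Eq. (3): after initializing S from D, finetune against the W human\n    # best-view labels in B.\n    logits = selector(batch['f_all'], batch['n'], batch['n_next'])\n    loss = F.cross_entropy(logits, batch['best_view'])\n    opt.zero_grad()\n    loss.backward()\n    opt.step()\n    return float(loss)\n```",
+ "page_idx": 4
+ },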
+ {
778
+ "type": "text",
779
+ "text": "4. Datasets and annotations",
780
+ "text_level": 1,
781
+ "bbox": [
782
+ 89,
783
+ 580,
784
+ 326,
785
+ 595
786
+ ],
787
+ "page_idx": 4
788
+ },
789
+ {
790
+ "type": "text",
791
+ "text": "Datasets. We use two datasets in our experiments. HT100M [34] is a large-scale dataset of narrated, in-the-wild instructional videos. These videos are view-varying in nature, and the views can be broadly categorized as ego or exo. This, along with the diversity and realism of HT100M, makes it ideal for our view-switch detection task. Ego-Exo4D [18] contains multi-view videos, where each video is captured with five time-synced cameras—one is an ego camera worn by a human performing an instructional activity, and the other four are stationary exo cameras placed around the scene. Moreover, the narrate-and-act (N&A) subset of Ego-Exo4D has videos of humans narrating and performing an activity, where the narrations are free-form and match in style with HT100M, making it compatible with our task of view selection with limited labels.",
792
+ "bbox": [
793
+ 89,
794
+ 603,
795
+ 483,
796
+ 829
797
+ ],
798
+ "page_idx": 4
799
+ },
800
+ {
801
+ "type": "text",
802
+ "text": "Training data. To train the view-switch detector, we use 3,416 hours of HT100M videos spanning a diverse set of activities (cooking, DIY, household, etc.) and pseudo-label shots from these videos (c.f. Sec. 3.2). See Supp. for details.",
803
+ "bbox": [
804
+ 89,
805
+ 845,
806
+ 483,
807
+ 906
808
+ ],
809
+ "page_idx": 4
810
+ },
811
+ {
812
+ "type": "text",
813
+ "text": "Evaluation data. For evaluation, we use both HT100M and Ego-Exo4D [18], where the view-switch detection evaluation on Ego-Exo4D is zero-shot. While the training sets are automatically generated and pseudo-labeled, we ensure a gold-standard test set free of noise by manually annotating videos for our tasks. To this end, we recruit trained annotators to manually annotate the view types for HT100M and the human-preferred views for Ego-Exo4D, as follows.",
814
+ "bbox": [
815
+ 511,
816
+ 90,
817
+ 906,
818
+ 210
819
+ ],
820
+ "page_idx": 4
821
+ },
822
+ {
823
+ "type": "text",
824
+ "text": "For HT100M, we identify 975 hours of videos that do not overlap with our train videos above. We segment 4,487 fixed-length clips, each with length set to the prediction interval $\\Delta$ (c.f. Sec. 3.1). Next, we ask trained annotators to label these clips as either ego or exo. See Supp. for full annotation instructions and more details.",
825
+ "bbox": [
826
+ 511,
827
+ 212,
828
+ 906,
829
+ 301
830
+ ],
831
+ "page_idx": 4
832
+ },
833
+ {
834
+ "type": "text",
835
+ "text": "For Ego-Exo4D, we create a test set containing 2.7 hours of N&A videos spanning six activity categories (cooking, bike repair, rock climbing, dancing, soccer, basketball). For each video, we use its \"best-exo-view\" annotation from Ego-Exo4D to generate an ego-exo view pair comprising the single ego and the best exo view. As before, we create $\\Delta$ length clips from each view. We then couple the pair with its closest atomic activity description (time-stamped manual descriptions of the camera wearer's activity [18]) and ask our annotators to label the view between the two that best demonstrates the activity described in the narration (see Supp. Fig. 3). Importantly, this means that annotators specifically select the \"best\" view as the one that most clearly illustrates the current actions of the camera wearer, consistent with our how-to video view selection goal.",
836
+ "bbox": [
837
+ 511,
838
+ 303,
839
+ 908,
840
+ 530
841
+ ],
842
+ "page_idx": 4
843
+ },
844
+ {
845
+ "type": "text",
846
+ "text": "Annotator agreement on best view. To ensure annotation quality for both datasets, in addition to providing detailed annotation guidelines and concrete examples (available in Supp.), we require annotators to take qualifiers with stringent passing criteria and we solicit 9 annotators' responses for each instance. We accept an annotation only if the inter-annotator agreement is at least $78\\%$ , meaning at least 7 out of 9 annotators agree. This resulted in a Cohen's kappa coefficient [11] of 0.65 for HT100M and 0.70 for Ego-Exo4D—both of which constitute \"substantial\" agreement [26]. This solid agreement assures the quality of our test set; despite there being some room for subjectivity in deciding the best view for a how-to, this data shows human judges are indeed able to substantially agree.",
847
+ "bbox": [
848
+ 511,
849
+ 541,
850
+ 908,
851
+ 752
852
+ ],
853
+ "page_idx": 4
854
+ },
855
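+ {
+ "type": "text",
+ "text": "The acceptance rule is simple to state in code (a minimal sketch; returning the majority label as ground truth is an assumption):\n\n```python\ndef accept_instance(annotations, min_agree=7):\n    # Keep an instance only if at least 7 of the 9 annotators agree (>= 78%).\n    n_ego, n_exo = annotations.count('ego'), annotations.count('exo')\n    accepted = max(n_ego, n_exo) >= min_agree\n    majority = 'ego' if n_ego >= n_exo else 'exo'\n    return accepted, majority\n```",
+ "page_idx": 4
+ },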
+ {
856
+ "type": "text",
857
+ "text": "This results in a final total of 3,151 and 5,049 test instances (fixed-length clip-narration pairs from above), sampled from 3,677 HT100M and 33 Ego-Exo4D test videos, respectively. In Supp. we filter with even higher agreement thresholds, yielding even more selective (but smaller) test sets; trends for our method vs. baselines remain consistent.",
858
+ "bbox": [
859
+ 511,
860
+ 753,
861
+ 908,
862
+ 844
863
+ ],
864
+ "page_idx": 4
865
+ },
866
+ {
867
+ "type": "text",
868
+ "text": "Data for view selection with limited labels. We train and evaluate our view selector on a small dataset comprising Ego-Exo4D [18] videos. For our training data, we follow",
869
+ "bbox": [
870
+ 511,
871
+ 854,
872
+ 906,
873
+ 900
874
+ ],
875
+ "page_idx": 4
876
+ },
877
+ {
878
+ "type": "page_number",
879
+ "text": "11973",
880
+ "bbox": [
881
+ 480,
882
+ 944,
883
+ 517,
884
+ 955
885
+ ],
886
+ "page_idx": 4
887
+ },
888
+ {
889
+ "type": "table",
890
+ "img_path": "images/733fcd34dde33cd346f23cbb782ee90fe8eea859eb74da913e341acdfa63d2b6.jpg",
891
+ "table_caption": [],
892
+ "table_footnote": [],
893
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"3\">HowTo100M [34]</td><td colspan=\"3\">Ego-Exo4D [18]</td></tr><tr><td>Accuracy</td><td>AUC</td><td>AP</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>All-ego/exo</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>Random</td><td>52.0</td><td>52.0</td><td>51.0</td><td>49.3</td><td>49.3</td><td>49.7</td></tr><tr><td>Last-frame</td><td>42.3</td><td>42.3</td><td>53.4</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>First-person pronoun detector</td><td>47.8</td><td>47.8</td><td>46.4</td><td>50.3</td><td>50.3</td><td>50.1</td></tr><tr><td>Retrieval [54]-F</td><td>53.4</td><td>53.4</td><td>53.2</td><td>52.6</td><td>52.6</td><td>53.6</td></tr><tr><td>Retrieval [54]-N</td><td>52.1</td><td>52.1</td><td>51.8</td><td>52.0</td><td>52.0</td><td>50.6</td></tr><tr><td>Retrieval [54]-N&#x27;</td><td>52.6</td><td>52.6</td><td>52.9</td><td>52.1</td><td>52.1</td><td>52.6</td></tr><tr><td>SWITCH-A-VIEW (Ours)</td><td>59.4</td><td>63.8</td><td>60.5</td><td>51.2</td><td>56.4</td><td>55.4</td></tr></table>",
894
+ "bbox": [
895
+ 93,
896
+ 88,
897
+ 480,
898
+ 209
899
+ ],
900
+ "page_idx": 5
901
+ },
902
+ {
903
+ "type": "text",
904
+ "text": "our annotation protocol for evaluating view-switch detection on Ego-Exo4D, and collect view annotations for a total of 3.5 hours of training videos. This results in a total of 6,634 train instances. For evaluation, we use our test set from view-switch detection. This reuse is possible since a label indicates both the type (ego/exo) of the desired next view for view-switch detection as well as the desired current view for view selection. Train and test videos for this task are disjoint. See Supp. for details.",
905
+ "bbox": [
906
+ 89,
907
+ 258,
908
+ 485,
909
+ 396
910
+ ],
911
+ "page_idx": 5
912
+ },
913
+ {
914
+ "type": "text",
915
+ "text": "5. Experiments",
916
+ "text_level": 1,
917
+ "bbox": [
918
+ 89,
919
+ 407,
920
+ 222,
921
+ 424
922
+ ],
923
+ "page_idx": 5
924
+ },
925
+ {
926
+ "type": "text",
927
+ "text": "Implementation. We set the durations of past frames to 8 seconds—corresponding to 0.23 and 2.31 switch(es) per second for HT100M and Ego-Exo4D, respectively—and past narrations to 32 seconds, and the prediction interval to $\\Delta = 2$ seconds. We set the sample count for view selection to $W = 5000$ . We evaluate view-switch detection on HowTo100M [34] by obtaining the views for the past frames (c.f. Sec. 3.3) from our pseudo-labeler. For Ego-Exo4D, we adopt a teacher-forcing setup and evaluate both tasks by using the ground-truth annotations for past frames and views. We implement our view-switch detector $D$ and view selector $S$ using the DINOv2 [36] encoder for our frame encoder $\\mathcal{E}^F$ , the Llama 2 [52] encoder for our narration encoder $\\mathcal{E}^N$ , a 8-layer transformer encoder [53] for our feature aggregator $\\mathcal{A}$ , a 2-layer MLP for the view classification head $\\mathcal{H}$ , and learnable embedding layers for our view encoder $\\mathcal{E}^V$ and temporal encoder $\\mathcal{E}^T$ .",
928
+ "bbox": [
929
+ 89,
930
+ 431,
931
+ 485,
932
+ 690
933
+ ],
934
+ "page_idx": 5
935
+ },
936
+ {
937
+ "type": "text",
938
+ "text": "Baselines. We provide strong baselines comprising SOTA models and representations, as well as relevant heuristics. For view-switch detection, we compare against",
939
+ "bbox": [
940
+ 89,
941
+ 696,
942
+ 483,
943
+ 743
944
+ ],
945
+ "page_idx": 5
946
+ },
947
+ {
948
+ "type": "text",
949
+ "text": "- InternVideo2 retrieval [54]: a set of baselines that given the most recent past frame (Retrieval [54]- $F$ ), most recent past narration (Retrieval [54]- $N$ ), or next narration (Retrieval [54]- $N'$ ), first encodes [54] them into fine-grained features that capture multi-frame temporal contexts, then uses feature similarity to retrieve a nearest neighbor of the same input type from the train set, and finally outputs the next view for $F$ or $N$ , or the corresponding view for $N'$ , as its prediction.",
950
+ "bbox": [
951
+ 89,
952
+ 743,
953
+ 485,
954
+ 878
955
+ ],
956
+ "page_idx": 5
957
+ },
958
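+ {
+ "type": "text",
+ "text": "The retrieval baselines' prediction rule, as a minimal sketch over precomputed features (variable names are assumptions):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef retrieval_predict(query_feat, train_feats, train_views):\n    # Encode the query input (F, N, or N'), retrieve its nearest train-set\n    # neighbor by cosine similarity, and output that neighbor's view: the\n    # next view for F or N, or the corresponding view for N'.\n    sims = F.cosine_similarity(query_feat.unsqueeze(0), train_feats, dim=-1)\n    return train_views[int(sims.argmax())]\n```",
+ "page_idx": 5
+ },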
+ {
959
+ "type": "table",
960
+ "img_path": "images/f836af3cd2ff191788a66ada71646917491ce5a5fc73e74a4db515cde2a779ab.jpg",
961
+ "table_caption": [
962
+ "Table 1. View-switch detection results. Evaluation on EgoExo4D [18] is zero-shot. All values are in %, and higher is better."
963
+ ],
964
+ "table_footnote": [],
965
+ "table_body": "<table><tr><td>Model</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>Human performance (Upper bound)</td><td>82.3</td><td>83.5</td><td>81.7</td></tr><tr><td>All-ego/exo</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>Random</td><td>49.3</td><td>49.3</td><td>49.7</td></tr><tr><td>Last-frame</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>First-person pronoun detector</td><td>50.3</td><td>50.3</td><td>50.1</td></tr><tr><td>Retrieval [54]-F</td><td>52.3</td><td>52.3</td><td>53.6</td></tr><tr><td>Retrieval [54]-N</td><td>51.9</td><td>51.9</td><td>51.0</td></tr><tr><td>Retrieval [54]-N&#x27;</td><td>52.4</td><td>52.4</td><td>52.4</td></tr><tr><td>View-narration [54] Similarity</td><td>52.5</td><td>52.4</td><td>53.9</td></tr><tr><td>Finetuned X-CLIP [7]</td><td></td><td></td><td></td></tr><tr><td>Random negative sampling</td><td>52.1</td><td>52.0</td><td>53.1</td></tr><tr><td>Text-conditioned negative sampling</td><td>52.8</td><td>52.7</td><td>53.6</td></tr><tr><td>Proprietary VLMs</td><td></td><td></td><td></td></tr><tr><td>Gemini 2.5 Pro</td><td>51.2</td><td>51.2</td><td>51.0</td></tr><tr><td>GPT-4o</td><td>53.3</td><td>53.3</td><td>52.3</td></tr><tr><td>LangView [32]</td><td></td><td></td><td></td></tr><tr><td>-smallData</td><td>52.1</td><td>52.6</td><td>53.2</td></tr><tr><td>-bigData (privileged)</td><td>53.3</td><td>54.8</td><td>54.5</td></tr><tr><td>Ours w/o pretraining</td><td>50.1</td><td>51.6</td><td>51.3</td></tr><tr><td>SWITCH-A-VIEW (Ours)</td><td>54.0</td><td>57.3</td><td>56.0</td></tr></table>",
966
+ "bbox": [
967
+ 514,
968
+ 88,
969
+ 910,
970
+ 409
971
+ ],
972
+ "page_idx": 5
973
+ },
974
+ {
986
+ "type": "list",
987
+ "sub_type": "text",
988
+ "list_items": [
989
+ "- All-ego, All-exo, Random, Last-frame: these are heuristics that use the ego view (All-ego), the exo view (All-exo), a randomly chosen (Random) view, or the view of the most recent past frame (Last-frame), as their prediction.",
990
+ "- First-person pronoun detector: a heuristic that predicts exo when it detects first-person pronouns like \"I\", \"We\", \"My\" or \"Our\" in the next narration, as human editors often use a wide shot that reveals their face or full body, when using such pronouns."
991
+ ],
992
+ "bbox": [
993
+ 513,
994
+ 460,
995
+ 906,
996
+ 595
997
+ ],
998
+ "page_idx": 5
999
+ },
1000
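+ {
+ "type": "text",
+ "text": "A minimal sketch of the pronoun heuristic referenced above (the regex and names are illustrative):\n\n```python\nimport re\n\nFIRST_PERSON = re.compile(r'\\b(i|we|my|our)\\b', re.IGNORECASE)\n\ndef pronoun_heuristic(next_narration):\n    # Predict exo when the next narration uses first-person pronouns, since\n    # editors often cut to a wide shot of themselves for such lines.\n    return 'exo' if FIRST_PERSON.search(next_narration) else 'ego'\n```",
+ "page_idx": 5
+ },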
+ {
1001
+ "type": "text",
1002
+ "text": "For view selection with limited labels, in addition to the baselines listed above, we compare against the following:",
1003
+ "bbox": [
1004
+ 511,
1005
+ 604,
1006
+ 905,
1007
+ 633
1008
+ ],
1009
+ "page_idx": 5
1010
+ },
1011
+ {
1012
+ "type": "list",
1013
+ "sub_type": "text",
1014
+ "list_items": [
1015
+ "- LangView [32]: a SOTA view selector that uses multiview videos and human-annotated narrations for weakly supervised pretraining. We finetune this model with our Ego-Exo4D labels (Sec. 4). We evaluate two versions of this baseline: LangView-bigData and LangView-smallData, which use large-scale Ego-Exo4D [32] videos, and our same small subset (Sec. 4), respectively, for pretraining. Note that the bigData variant enjoys access to $98 \\times$ more training samples than our method, an advantage for the baseline.",
1016
+ "- View-narration [54] Similarity (VN-Sim): separately computes the cosine similarity between the InternVideo2 features [54] for each view and the next narration, and picks the view most similar to the narration.",
1017
+ "- Finetuned X-CLIP [31]: a finetuned CLIP [39]-style model that aligns the frames from the target view and"
1018
+ ],
1019
+ "bbox": [
1020
+ 513,
1021
+ 635,
1022
+ 908,
1023
+ 876
1024
+ ],
1025
+ "page_idx": 5
1026
+ },
1027
+ {
1028
+ "type": "page_footnote",
1029
+ "text": "3See Supp. for parallel evaluation with CLIP [39]-style encoders, which",
1030
+ "bbox": [
1031
+ 107,
1032
+ 886,
1033
+ 482,
1034
+ 901
1035
+ ],
1036
+ "page_idx": 5
1037
+ },
1038
+ {
1039
+ "type": "page_footnote",
1040
+ "text": "generally underperformed InternVideo2 [54] encoders.",
1041
+ "bbox": [
1042
+ 513,
1043
+ 886,
1044
+ 803,
1045
+ 900
1046
+ ],
1047
+ "page_idx": 5
1048
+ },
1049
+ {
1050
+ "type": "page_number",
1051
+ "text": "11974",
1052
+ "bbox": [
1053
+ 480,
1054
+ 944,
1055
+ 517,
1056
+ 955
1057
+ ],
1058
+ "page_idx": 5
1059
+ },
1060
+ {
1061
+ "type": "text",
1062
+ "text": "the future narration. We explore two negative sampling strategies when finetuning: random and text-conditioned.",
1063
+ "bbox": [
1064
+ 102,
1065
+ 90,
1066
+ 482,
1067
+ 119
1068
+ ],
1069
+ "page_idx": 6
1070
+ },
1071
+ {
1072
+ "type": "text",
1073
+ "text": "- Proprietary VLMs: we feed Gemini 2.5 Pro [12] and GPT-4o [23] all our view selection inputs and task them with choosing the best view by providing a text prompt similar to our guidelines for collecting human annotations (see Sec. 5 and Supp.).",
1074
+ "bbox": [
1075
+ 91,
1076
+ 121,
1077
+ 482,
1078
+ 196
1079
+ ],
1080
+ "page_idx": 6
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "text": "LangView evaluates how our model fares against SOTA view selection, while the retrieval, view-narration and finetuned CLIP [39]-style baselines analyze whether SOTA video-language embeddings, whether frozen or finetuned, are sufficient for this task. The heuristics verify the challenging nature of the tasks. The proprietary VLMs evaluate if employing large-scale generalist models is enough.",
1085
+ "bbox": [
1086
+ 89,
1087
+ 196,
1088
+ 485,
1089
+ 303
1090
+ ],
1091
+ "page_idx": 6
1092
+ },
1093
+ {
1094
+ "type": "text",
1095
+ "text": "Evaluation metrics. We consider three metrics: 1) Accuracy, which directly measures the agreement between our predictions and labels; 2) AUC, the area under the ROC curve; and 3) AP, the average precision (AP) of the precision vs. recall curve. We use AUC and AP to account for the possible class imbalance in our collected annotations. Moreover, for each metric, we separately compute its value for the same-view and view-switch instances in our test sets, and report the mean. This lets us account for differences in the same-view and view-switch frequency, and obtain unbiased performance measures.",
1096
+ "bbox": [
1097
+ 89,
1098
+ 314,
1099
+ 485,
1100
+ 481
1101
+ ],
1102
+ "page_idx": 6
1103
+ },
1104
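+ {
+ "type": "text",
+ "text": "The per-case averaging can be sketched as follows (illustrative; shown for accuracy, with AUC and AP handled analogously):\n\n```python\nimport numpy as np\n\ndef balanced_accuracy(y_true, y_pred, is_switch):\n    # Compute accuracy separately on the same-view and the view-switch test\n    # instances, then report the mean, so the imbalance between the two\n    # cases does not bias the score.\n    def acc(mask):\n        return float(np.mean(y_true[mask] == y_pred[mask]))\n    return 0.5 * (acc(~is_switch) + acc(is_switch))\n```",
+ "page_idx": 6
+ },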
+ {
1105
+ "type": "text",
1106
+ "text": "View-switch detection. In Table 1, we report our view-switch detection results. The heuristics generally perform the worst on both datasets, underlining the challenging nature of the task. The Retrieval [54] baselines improve over them, indicating that our model inputs do provide cues about the view type. Among the Retrieval baselines, retrieving using the most recent past frame performs the best, showing that the past frames offer fine-grained task-relevant information beyond the narration words. Moreover, retrieving with the next narration is better than retrieving with the most recent past narration, revealing that the next narration carries more pertinent details about the desired view. This is likely because the next narration is better aligned with the time interval for which the view is being predicted.",
1107
+ "bbox": [
1108
+ 89,
1109
+ 494,
1110
+ 483,
1111
+ 705
1112
+ ],
1113
+ "page_idx": 6
1114
+ },
1115
+ {
1116
+ "type": "text",
1117
+ "text": "Our method outperforms all baselines on both datasets, with the AUC margin over the best baseline, Retrieval [54]- $F$ , being as high as $10.4\\%$ on HowTo100M (HT100M) [34] and $3.8\\%$ on Ego-Exo4D [18]. Our improvement over the Retrieval baselines show that computing feature [54]-level similarities are not enough for this task. Instead, learning it by leveraging complementary cues from both narrations and frames is critical. Moreover, our zero-shot results on Ego-Exo4D speak to our model's efficacy vis-a-vis learning human view patterns from large-scale and in-the-wild videos, which generalize to different scenarios, without any training.",
1118
+ "bbox": [
1119
+ 89,
1120
+ 707,
1121
+ 485,
1122
+ 875
1123
+ ],
1124
+ "page_idx": 6
1125
+ },
1126
+ {
1127
+ "type": "text",
1128
+ "text": "${}^{4}$ Same-view instance count $= {1.6}\\mathrm{x}$ view-switch instance count for HT100M and ${3.9}\\mathrm{x}$ for Ego-Exo4D",
1129
+ "bbox": [
1130
+ 89,
1131
+ 875,
1132
+ 483,
1133
+ 901
1134
+ ],
1135
+ "page_idx": 6
1136
+ },
1137
+ {
1138
+ "type": "table",
1139
+ "img_path": "images/8ec45d7b75bfe06ee350063824647ba5ae576cccc0ac2e6d8c05a598fd02fc1d.jpg",
1140
+ "table_caption": [],
1141
+ "table_footnote": [],
1142
+ "table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"3\">HowTo100M [34]</td><td colspan=\"3\">Ego-Exo4D [18]</td></tr><tr><td>Accuracy</td><td>AUC</td><td>AP</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>N-only</td><td>53.5</td><td>54.4</td><td>52.3</td><td>50.0</td><td>48.7</td><td>49.0</td></tr><tr><td>N&#x27;-only</td><td>55.4</td><td>57.8</td><td>56.2</td><td>49.8</td><td>49.8</td><td>50.0</td></tr><tr><td>F-only</td><td>53.3</td><td>54.5</td><td>54.7</td><td>51.0</td><td>53.4</td><td>53.2</td></tr><tr><td>(F, N&#x27;)-only</td><td>55.5</td><td>60.1</td><td>58.1</td><td>52.1</td><td>54.2</td><td>52.6</td></tr><tr><td>(N, N&#x27;)-only</td><td>57.5</td><td>59.3</td><td>56.6</td><td>50.0</td><td>53.0</td><td>52.6</td></tr><tr><td>(F, N)-only</td><td>56.0</td><td>60.9</td><td>57.4</td><td>51.8</td><td>54.9</td><td>54.2</td></tr><tr><td>Ours</td><td>59.4</td><td>63.8</td><td>60.5</td><td>51.2</td><td>56.4</td><td>55.4</td></tr></table>",
1143
+ "bbox": [
1144
+ 517,
1145
+ 88,
1146
+ 903,
1147
+ 224
1148
+ ],
1149
+ "page_idx": 6
1150
+ },
1151
+ {
1152
+ "type": "text",
1153
+ "text": "Table 3. Ablation study for view-switch detection. All values are in $\\%$ and higher is better. Significance $p\\leq 0.05$",
1154
+ "bbox": [
1155
+ 511,
1156
+ 234,
1157
+ 906,
1158
+ 263
1159
+ ],
1160
+ "page_idx": 6
1161
+ },
1162
+ {
1163
+ "type": "text",
1164
+ "text": "View selection. Table 2 shows our results on view selection with limited labels. For the heuristics and Retrieval [54] baselines, we observe the same performance trends as view-switch detection. The View-narration [54] Similarity (VN-Sim) baseline marginally improves over these methods, indicating the frames from candidate views when combined with the corresponding narration $(N^{\\prime})$ provide direct cues about the preferred view. LangView [32]'s results benefit from its language-guided training, generally outperforming VN-Sim.",
1165
+ "bbox": [
1166
+ 511,
1167
+ 277,
1168
+ 906,
1169
+ 414
1170
+ ],
1171
+ "page_idx": 6
1172
+ },
1173
+ {
1174
+ "type": "text",
1175
+ "text": "Our method significantly improves over all baselines, with the AUC margin over the best baseline, LangView [32]-bigData, being $2.5\\%$ . Our gains over VN-Sim and Finetuned X-CLIP [31] underscore that using feature similarity to match the activity described in the next narration with a candidate view does not suffice, and instead a model like ours, which can leverage multi-modal cues from the combination of both past and candidate frames, and past and next narrations, is valuable for this task. Our improvement over the proprietary VLMs—despite their much larger size and training data—shows that task-specific experts are necessary to tackle our challenging task. Training our model from scratch with only the small set of best view labels (\"ours w/o pretraining\") is significantly weaker, showing that our view-switch pretraining idea is doing the heavy lifting.",
1176
+ "bbox": [
1177
+ 511,
1178
+ 415,
1179
+ 908,
1180
+ 642
1181
+ ],
1182
+ "page_idx": 6
1183
+ },
1184
+ {
1185
+ "type": "text",
1186
+ "text": "Our gains over the SOTA LangView [32] show that learning view selection from language is less effective than that from large-scale human-edited videos, even when the videos and language are available at scale (bigData). Moreover, the insights of LangView and this work are complementary. We find if we fine-tune SWITCH-A-VIEW with LangView's narration-based pseudo-labels, in addition to our labels (Sec. 4), we achieve further gains. See Supp. for details.",
1187
+ "bbox": [
1188
+ 511,
1189
+ 643,
1190
+ 910,
1191
+ 763
1192
+ ],
1193
+ "page_idx": 6
1194
+ },
1195
+ {
1196
+ "type": "text",
1197
+ "text": "Ablations. Table 3 shows our ablation results for view-switch detection. Dropping any one input to our model degrades performance, indicating that each input plays a role. Dropping two inputs hurt the performance even more, showing that more inputs are better in any combination, suggesting our model design extracts complementary cues from them in all configurations. Moreover, using past frames instead of narrations improves performance, re-affirming",
1198
+ "bbox": [
1199
+ 511,
1200
+ 779,
1201
+ 911,
1202
+ 902
1203
+ ],
1204
+ "page_idx": 6
1205
+ },
1206
+ {
1207
+ "type": "page_number",
1208
+ "text": "11975",
1209
+ "bbox": [
1210
+ 478,
1211
+ 944,
1212
+ 519,
1213
+ 957
1214
+ ],
1215
+ "page_idx": 6
1216
+ },
1217
+ {
1218
+ "type": "image",
1219
+ "img_path": "images/df1ea77d2faf66082b7f706a5f2968a6d35e3112ee2cba30bc1c81ff1cb1e718.jpg",
1220
+ "image_caption": [
1221
+ "Figure 3. Left: successful view-switch detections by our model on same-view (top) and view-switch cases (bottom). Our model correctly detects view switches by anticipating the next step using past frames (same-view sample 1, and view-switch sample 2) or leveraging the content of the next narration (same-view sample 2, and view-switch sample 1 and 2). Right: successful view selections by our model on same-view (top) and view-switch cases (bottom). For view selection as well, our model can predict the desired next view by relying on the next narration (same-view sample 1, and view-switch sample 1 and 2), or anticipate it using the past narrations (same-view sample 1 and 2), or the past frames (same-view sample 1). These examples show that all three inputs play a role in our model predictions."
1222
+ ],
1223
+ "image_footnote": [],
1224
+ "bbox": [
1225
+ 135,
1226
+ 88,
1227
+ 867,
1228
+ 489
1229
+ ],
1230
+ "page_idx": 7
1231
+ },
1232
+ {
1233
+ "type": "text",
1234
+ "text": "that vision provides fine-grained features necessary for high performance. Finally, using $N'$ instead of $N$ improves performance in some cases, showing the next narration's role.",
1235
+ "bbox": [
1236
+ 89,
1237
+ 604,
1238
+ 485,
1239
+ 651
1240
+ ],
1241
+ "page_idx": 7
1242
+ },
1243
+ {
1244
+ "type": "text",
1245
+ "text": "See Supp. for more analysis, including the effect of the past frame and narration durations, and sample count on model performance, and its scenario-level breakdown.",
1246
+ "bbox": [
1247
+ 89,
1248
+ 651,
1249
+ 483,
1250
+ 696
1251
+ ],
1252
+ "page_idx": 7
1253
+ },
1254
+ {
1255
+ "type": "text",
1256
+ "text": "Qualitative examples. Fig. 3-left shows our model's successful view-switch detections on both same-view (top) and view-switch cases (bottom); see caption for details. We also notice some common failure modes with our model. For view-switch detection, our model sometimes fails when there is no next narration overlapping with the prediction interval, and neither the past frames nor narrations are predictive of the next view. In another failure type, the past views are wrongly categorized by our pseudo-labeler for HowTo100M [34] or by professional annotators for Ego-Exo4D [18]. This leads to our model getting confused and predicting the wrong next view. For view selection, in addition to these failures, our model can fail when both views",
1257
+ "bbox": [
1258
+ 89,
1259
+ 704,
1260
+ 485,
1261
+ 900
1262
+ ],
1263
+ "page_idx": 7
1264
+ },
1265
+ {
1266
+ "type": "text",
1267
+ "text": "look equally good. See Supp. for video examples.",
1268
+ "bbox": [
1269
+ 511,
1270
+ 604,
1271
+ 843,
1272
+ 621
1273
+ ],
1274
+ "page_idx": 7
1275
+ },
1276
+ {
1277
+ "type": "text",
1278
+ "text": "6. Conclusion and future work",
1279
+ "text_level": 1,
1280
+ "bbox": [
1281
+ 511,
1282
+ 633,
1283
+ 774,
1284
+ 648
1285
+ ],
1286
+ "page_idx": 7
1287
+ },
1288
+ {
1289
+ "type": "text",
1290
+ "text": "We introduced an approach for learning to select views from instructional video by bootstrapping human-edited (but unlabeled) in-the-wild content. Results show the method's efficacy and set the benchmark for this new task.",
1291
+ "bbox": [
1292
+ 511,
1293
+ 659,
1294
+ 906,
1295
+ 717
1296
+ ],
1297
+ "page_idx": 7
1298
+ },
1299
+ {
1300
+ "type": "text",
1301
+ "text": "A potential limitation of our model is its clip-level predictions, which can lead to rapid switches between viewpoints over time. While hard cuts are in fact necessary at times to maximize informativeness, the trade-off between view information and perceived viewing ease is interesting future work. Other challenges uncovered by our work are the distribution gap between between edited in-the-wild and multi-view videos and the complexity of learning view selection from limited labels. In addition, we plan to generalize to continuous view selection, potentially by integrating ideas from new view synthesis, and we will explore modeling user attention for personalized view selection.",
1302
+ "bbox": [
1303
+ 511,
1304
+ 719,
1305
+ 908,
1306
+ 900
1307
+ ],
1308
+ "page_idx": 7
1309
+ },
1310
+ {
1311
+ "type": "page_number",
1312
+ "text": "11976",
1313
+ "bbox": [
1314
+ 480,
1315
+ 944,
1316
+ 519,
1317
+ 955
1318
+ ],
1319
+ "page_idx": 7
1320
+ },
1321
+ {
1322
+ "type": "text",
1323
+ "text": "Acknowledgements",
1324
+ "text_level": 1,
1325
+ "bbox": [
1326
+ 91,
1327
+ 90,
1328
+ 258,
1329
+ 107
1330
+ ],
1331
+ "page_idx": 8
1332
+ },
1333
+ {
1334
+ "type": "text",
1335
+ "text": "UT Austin is supported in part by the UT Austin IFML NSF AI Institute and the UT Austin MLL Center for Generative AI. We would also like to thank Zihui Xue for suggesting the PySceneDetect scene detector and helpful discussions regarding the view classifier during the early stages of designing our view pseudo-labeler.",
1336
+ "bbox": [
1337
+ 89,
1338
+ 114,
1339
+ 485,
1340
+ 205
1341
+ ],
1342
+ "page_idx": 8
1343
+ },
1344
+ {
1345
+ "type": "text",
1346
+ "text": "References",
1347
+ "text_level": 1,
1348
+ "bbox": [
1349
+ 91,
1350
+ 218,
1351
+ 187,
1352
+ 233
1353
+ ],
1354
+ "page_idx": 8
1355
+ },
1356
+ {
1357
+ "type": "list",
1358
+ "sub_type": "ref_text",
1359
+ "list_items": [
1360
+ "[1] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Košecka, and Alexander C Berg. A dataset for developing and benchmarking active vision. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1378-1385. IEEE, 2017. 2",
1361
+ "[2] Ido Arev, Hyun Soo Park, Yaser Sheikh, Jessica Hodgins, and Ariel Shamir. Automatic editing of footage from multiple social cameras. ACM Trans. Graph., 33(4), 2014. 2",
1362
+ "[3] Kumar Ashutosh, Santhosh Kumar Ramakrishnan, Triantafyllos Afouras, and Kristen Grauman. Video-mined task graphs for keystep recognition in instructional videos. Advances in Neural Information Processing Systems, 36, 2024. 2",
1363
+ "[4] Taivanbat Badamdorj, Mrigank Rochan, Yang Wang, and Li Cheng. Contrastive learning for unsupervised video highlight detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14042-14052, 2022. 2",
1364
+ "[5] Brandon Castellano. Pyscenedetect. https://github.com/Breakthrough/PySceneDetect.4",
1365
+ "[6] Seunghoon Cha, Jungjin Lee, Seunghwa Jeong, Younghui Kim, and Junyong Noh. Enhanced interactive $360^{\\circ}$ viewing via automatic guidance. ACM Trans. Graph., 39(5), 2020. 2",
1366
+ "[7] Jianhui Chen, Keyu Lu, Sijia Tian, and Jim Little. Learning sports camera selection from internet videos. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1682-1691. IEEE, 2019. 1, 2",
1367
+ "[8] Ricson Cheng, Ziyan Wang, and Katerina Fragkiadaki. Geometry-aware recurrent neural networks for active visual recognition. Advances in Neural Information Processing Systems, 31, 2018. 2",
1368
+ "[9] Shih-Han Chou, Yi-Chun Chen, Kuo-Hao Zeng, Hou-Ning Hu, Jianlong Fu, and Min Sun. Self-view grounding given a narrated 360 $\\{\\backslash deg\\}$ video. arXiv preprint arXiv:1711.08664, 2017. 2",
1369
+ "[10] Wen-Sheng Chu, Yale Song, and Alejandro Jaimes. Video co-summarization: Video summarization by visual co-occurrence. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3584–3592, 2015. 2",
1370
+ "[11] J. Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46, 1960. 5",
1371
+ "[12] Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, et al. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. arXiv preprint arXiv:2507.06261, 2025. 2, 7"
1372
+ ],
1373
+ "bbox": [
1374
+ 93,
1375
+ 243,
1376
+ 485,
1377
+ 898
1378
+ ],
1379
+ "page_idx": 8
1380
+ },
1381
+ {
1382
+ "type": "list",
1383
+ "sub_type": "ref_text",
1384
+ "list_items": [
1385
+ "[13] Ruoyi Du, Wenqing Yu, Heqing Wang, Ting-En Lin, Dongliang Chang, and Zhanyu Ma. Multi-view active fine-grained visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1568–1578, 2023. 2",
1386
+ "[14] Mohamed Elfeki, Liqiang Wang, and Ali Borji. Multistream dynamic video summarization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 339-349, 2022. 2",
1387
+ "[15] J. Foote and D. Kimber. Flycam: practical panoramic video and automatic camera control. In 2000 IEEE International Conference on Multimedia and Expo. ICME2000. Proceedings. Latest Advances in the Fast Changing World of Multimedia (Cat. No.00TH8532), pages 1419-1422 vol.3, 2000. 1, 2",
1388
+ "[16] Michael Gleicher and James Masanz. Towards virtual videography (poster session). In Proceedings of the Eighth ACM International Conference on Multimedia, page 375-378, New York, NY, USA, 2000. Association for Computing Machinery. 1, 2",
1389
+ "[17] Boqing Gong, Wei-Lun Chao, Kristen Grauman, and Fei Sha. Diverse sequential subset selection for supervised video summarization. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2014. 2",
1390
+ "[18] Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, et al. Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives. arXiv preprint arXiv:2311.18259, 2023. 2, 3, 5, 6, 7, 8",
1391
+ "[19] Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In Computer Vision – ECCV 2014, pages 505–520, Cham, 2014. Springer International Publishing. 2",
1392
+ "[20] Bo He, Jun Wang, Jielin Qiu, Trung Bui, Abhinav Shrivastava, and Zhaowen Wang. Align and attend: Multimodal summarization with dual contrastive losses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14867-14878, 2023. 2",
1393
+ "[21] Rachel Heck, Michael Wallick, and Michael Gleicher. Virtual videography. In Proceedings of the 14th ACM International Conference on Multimedia, page 961-962, New York, NY, USA, 2006. Association for Computing Machinery. 2",
1394
+ "[22] Hou-Ning Hu, Yen-Chen Lin, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, and Min Sun. Deep 360 pilot: Learning a deep agent for piloting through 360deg sports videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3451-3460, 2017. 1, 2",
1395
+ "[23] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. 2, 7",
1396
+ "[24] Dinesh Jayaraman and Kristen Grauman. End-to-end policy learning for active visual categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(7):1601-1614, 2019. 2",
1397
+ "[25] Hong-Wen Kang, Y. Matsushita, Xiaou Tang, and Xue-Quan Chen. Space-time video montage. In 2006 IEEE Computer"
1398
+ ],
1399
+ "bbox": [
1400
+ 516,
1401
+ 92,
1402
+ 906,
1403
+ 900
1404
+ ],
1405
+ "page_idx": 8
1406
+ },
1407
+ {
1408
+ "type": "page_number",
1409
+ "text": "11977",
1410
+ "bbox": [
1411
+ 478,
1412
+ 944,
1413
+ 519,
1414
+ 955
1415
+ ],
1416
+ "page_idx": 8
1417
+ },
1418
+ {
1419
+ "type": "list",
1420
+ "sub_type": "ref_text",
1421
+ "list_items": [
1422
+ "Society Conference on Computer Vision and Pattern Recognition (CVPR'06), pages 1331-1338, 2006. 2",
1423
+ "[26] J Landis and G Koch. The measurement of observer agreement for categorical data. Biometrics, 1977. 5",
1424
+ "[27] Yandong Li, Liqiang Wang, Tianbao Yang, and Boqing Gong. How local is the local diversity? reinforcing sequential determinantal point processes with dynamic ground sets for supervised video summarization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 151-167, 2018. 2",
1425
+ "[28] Yanghao Li, Tushar Nagarajan, Bo Xiong, and Kristen Grauman. Ego-exo: Transferring visual representations from third-person to first-person videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6943-6953, 2021. 4",
1426
+ "[29] Xudong Lin, Fabio Petroni, Gedas Bertasius, Marcus Rohrbach, Shih-Fu Chang, and Lorenzo Torresani. Learning to recognize procedural activities with distant supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13853-13863, 2022. 2",
1427
+ "[30] Zheng Lu and Kristen Grauman. Story-driven summarization for egocentric video. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, page 2714-2721, USA, 2013. IEEE Computer Society. 2",
1428
+ "[31] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM international conference on multimedia, pages 638–647, 2022. 6, 7",
1429
+ "[32] Sagnik Majumder, Tushar Nagarjan, Ziad Al-Halah, Reina Pradhan, and Kristen Grauman. Which viewpoint shows it best? Language for weakly supervising view selection in multi-view videos. In CVPR, 2025. 2, 6, 7",
1430
+ "[33] Effrosyni Mavroudi, Triantafyllos Afouras, and Lorenzo Torresani. Learning to ground instructional articles in videos through narrations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15201-15213, 2023. 2",
1431
+ "[34] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 2, 3, 5, 6, 7, 8",
1432
+ "[35] Medhini Narasimhan, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, and Cordelia Schmid. Tl; dw? summarizing instructional videos with task relevance and cross-modal saliency. In European Conference on Computer Vision, pages 540-557. Springer, 2022. 2",
1433
+ "[36] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 6",
1434
+ "[37] Rameswar Panda and Amit K Roy-Chowdhury. Collaborative summarization of topic-related videos. In Proceedings of the IEEE Conference on computer vision and pattern recognition, pages 7083-7092, 2017. 2"
1435
+ ],
1436
+ "bbox": [
1437
+ 91,
1438
+ 92,
1439
+ 483,
1440
+ 900
1441
+ ],
1442
+ "page_idx": 9
1443
+ },
1444
+ {
1445
+ "type": "list",
1446
+ "sub_type": "ref_text",
1447
+ "list_items": [
1448
+ "[38] Danila Potapov, Matthijs Douze, Zaid Harchaoui, and Cordelia Schmid. Category-specific video summarization. In Computer Vision - ECCV 2014, pages 540-555, Cham, 2014. Springer International Publishing. 2",
1449
+ "[39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6, 7",
1450
+ "[40] Santhosh K Ramakrishnan, Dinesh Jayaraman, and Kristen Grauman. Emergence of exploratory look-around behaviors through active observation completion. Science Robotics, 4 (30):eaaw6326, 2019. 2",
1451
+ "[41] Mrigank Rochan, Linwei Ye, and Yang Wang. Video summarization using fully convolutional sequence networks. In Proceedings of the European conference on computer vision (ECCV), pages 347-363, 2018. 2",
1452
+ "[42] Mrigank Rochan, Mahesh Kumar Krishna Reddy, Linwei Ye, and Yang Wang. Adaptive video highlight detection by learning from user history. In Computer Vision - ECCV 2020, pages 261-278, Cham, 2020. Springer International Publishing. 2",
1453
+ "[43] Abhimanyu Sahu and Ananda S. Chowdhury. Shot level egocentric video co-summarization. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 2887-2892, 2018. 2",
1454
+ "[44] Soroush Seifi and Tinne Tuytelaars. Attend and segment: Attention guided active semantic segmentation. In European Conference on Computer Vision, pages 305-321. Springer, 2020. 2",
1455
+ "[45] Soroush Seifi, Abhishek Jha, and Tinne Tuytelaars. Glimpse-attend-and-exlore: Self-attention for active visual exploration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16137–16146, 2021. 2",
1456
+ "[46] Gunnar A Sigurdsson, Abhinav Gupta, Cordelia Schmid, Ali Farhadi, and Karteek Alahari. Charades-ego: A large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626, 2018. 4",
1457
+ "[47] Michael Smith and Takeo Kanade. Video skimming for quick browsing based on audio and image characterization. Technical Report CMU-CS-95-186, Pittsburgh, PA, 1995. 2",
1458
+ "[48] Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. Tvsum: Summarizing web videos using titles. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5179-5187, 2015. 2",
1459
+ "[49] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman. Pano2vid: Automatic cinematography for watching 360 videos. In Asian Conference on Computer Vision, pages 154-171. Springer, 2016. 1, 2",
1460
+ "[50] Xinding Sun, J. Foote, D. Kimber, and B. S. Manjunath. Region of interest extraction and virtual camera control based on panoramic video capturing. Trans. Multi., 7(5):981-990, 2005. 2",
1461
+ "[51] Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video"
1462
+ ],
1463
+ "bbox": [
1464
+ 516,
1465
+ 92,
1466
+ 906,
1467
+ 900
1468
+ ],
1469
+ "page_idx": 9
1470
+ },
1471
+ {
1472
+ "type": "page_number",
1473
+ "text": "11978",
1474
+ "bbox": [
1475
+ 480,
1476
+ 944,
1477
+ 519,
1478
+ 955
1479
+ ],
1480
+ "page_idx": 9
1481
+ },
1482
+ {
1483
+ "type": "list",
1484
+ "sub_type": "ref_text",
1485
+ "list_items": [
1486
+ "analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1207-1216, 2019. 3",
1487
+ "[52] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 6",
1488
+ "[53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 4, 6",
1489
+ "[54] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Zun Wang, Yansong Shi, et al. Internvideo2: Scaling foundation models for multimodal video understanding. In European Conference on Computer Vision, pages 396-416. Springer, 2024. 6, 7",
1490
+ "[55] Bo Xiong and Kristen Grauman. Snap angle prediction for $360^{\\circ}$ panoramas. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 2",
1491
+ "[56] Cha Zhang, Yong Rui, Jim Crawford, and Li-Wei He. An automated end-to-end lecture capture and broadcasting system. ACM Trans. Multimedia Comput. Commun. Appl., 4(1), 2008. 1, 2",
1492
+ "[57] Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman. Summary transfer: Exemplar-based subset selection for video summarization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1059-1067, 2016. 2",
1493
+ "[58] Yiwu Zhong, Licheng Yu, Yang Bai, Shangwen Li, Xueling Yan, and Yin Li. Learning procedure-aware video representation from instructional videos and their narrations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14825-14835, 2023. 2",
1494
+ "[59] Honglu Zhou, Roberto Martín-Martín, Mubbasir Kapadia, Silvio Savarese, and Juan Carlos Niebles. Procedure-aware pretraining for instructional video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10727-10738, 2023. 2"
1495
+ ],
1496
+ "bbox": [
1497
+ 91,
1498
+ 90,
1499
+ 483,
1500
+ 642
1501
+ ],
1502
+ "page_idx": 10
1503
+ },
1504
+ {
1505
+ "type": "page_number",
1506
+ "text": "11979",
1507
+ "bbox": [
1508
+ 480,
1509
+ 944,
1510
+ 517,
1511
+ 955
1512
+ ],
1513
+ "page_idx": 10
1514
+ }
1515
+ ]
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/679bbc29-4d07-48b9-a239-1f0c15065380_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fd13f55aa931bf1ee710d1bc471ef4e2d802014c84606037fc4fdfc1fe643a52
3
+ size 2012544
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/full.md ADDED
@@ -0,0 +1,291 @@
1
+ # Switch-a-View: View Selection Learned from Unlabeled In-the-wild Videos
2
+
3
+ Sagnik Majumder $^{1}$ Tushar Nagarajan $^{1}$ Ziad Al-Halah $^{2}$ Kristen Grauman $^{1}$ $^{1}$ University of Texas at Austin $^{2}$ University of Utah
4
+
5
+ ![](images/c8d8a120cc3841d245058c09b3a3b2c27a1a260051c9ada3885ab9532ac0a810.jpg)
6
+ Training: Learn human view choices from large-scale in-the-wild videos
7
+
8
+ ![](images/9c4b2ca15bf2954f908beed6455b5a985764c00e2ed9467e0c3d950a1a437a28.jpg)
9
+ Inference: Select the best view sequence for a multi-view video
10
+ Figure 1. Given a multi-view narrated how-to video, can we select the sequence of camera viewpoints that best show the activity—automating the camerawork that is today done with manual editing? While direct supervision for this task is impractical, our SWITCH-A-VIEW approach shows how to learn typical viewpoint choice patterns from large-scale unlabeled in-the-wild instructional videos (left), then translate those patterns to novel multi-view videos (right), yielding an informative how-to that hops between the most useful ego/exo viewpoints.
11
+
12
+ # Abstract
13
+
14
+ We introduce SWITCH-A-VIEW, a model that learns to automatically select the viewpoint to display at each timepoint when creating a how-to video. The key insight of our approach is how to train such a model from unlabeled—but human-edited—video samples. We pose a pretext task that pseudo-labels segments in the training videos for their primary viewpoint (egocentric or exocentric), and then discovers the patterns between the visual and spoken content in a how-to video on the one hand and its view-switch moments on the other hand. Armed with this predictor, our model can be applied to new multi-view videos to orchestrate which viewpoint should be displayed when. We demonstrate our idea on a variety of real-world videos from HowTo100M and Ego-Exo4D, and rigorously validate its advantages.
15
+
16
+ # 1. Introduction
17
+
18
+ Video is an amazing medium for communication, and today's widely used Internet platforms make it easy to create and share content broadly. Instructional or "how-to" video is particularly compelling in this setting: YouTube, TikTok, and similar sites have democratized the ability to share our talents with others, by both showing and telling how to perform some special skill. From how to plant a garden, how to make yogurt, how to fold origami, or how to give a dog a haircut, there is no shortage of how-to nuggets produced and consumed by users of many ages and backgrounds.
21
+
22
+ Creating an effective how-to video, however, is not trivial. From potentially hours of footage from multiple cameras capturing all aspects of the instructional activity, a creator needs to edit down to the essential steps of their demonstration and decide on the camera viewpoint (view) for each temporal segment that best reveals what they want to show. For example, when showing how to cut the dog's hair, the instructor might first appear standing beside the dog—the camera more distant—then the camera may zoom close up to her using scissors and describing how to trim near the ear, then zoom back out while she shows progress across the dog's body. How-to videos often exhibit this sequential mix of "exocentric" and "egocentric-like" viewpoints to effectively recap the procedure with clear visuals.
23
+
24
+ The status quo is to either orchestrate camerawork live while filming, or do post-recording editing among the multiple available cameras—both of which are labor intensive. Work in automatic cinematography [7, 15, 16, 22, 49, 56], though inspiring, relies on heuristics or domain-specific models that are not equipped to address automatic editing of video demonstrations. How could we train an "AI how-to cameraman", which, given a stream of two or more simultaneous camera views, could hop between them intelligently?
27
+
28
+ Supervising this learning task presents a problem. There are vast amounts of positive examples of well-edited how-to videos, but those edited results hide the "negatives"—the viewpoints that were not chosen for inclusion in the final video at any given time point. Those are left on the cutting room floor. This makes it unclear how to translate the editing patterns in in-the-wild edited video to new data.
29
+
30
+ To tackle this learning challenge, we design a pretext task for learning human view preferences from varying-view instructional videos on the Web. Varying-view means that the source training videos display an arbitrary number of view switches over the course of the video (e.g., from ego to exo and back as in our example above), and contain only one viewpoint at any time. We introduce a model called SWITCH-A-VIEW that learns from such data; it uses past frames in concert with the how-to narrations spoken by the demonstrator, which are widely available in instructional videos, to learn a binary classifier indicating whether the viewpoint is going to switch or not at the current time step. Then, we deploy this pretext-trained model in multi-view, narrated video settings with limited best view labels, and decide how to orchestrate the view selection of such videos over time. In this way, our approach captures the view-switch patterns from widely diverse unlabeled in-the-wild videos, then translates those trends to automatically direct the camerawork in new instances. See Fig. 1.
31
+
32
+ We train and evaluate our approach on HowTo100M [34], an extensive repository of real-world how-to videos, and further show generalization to multi-view Ego-Exo4D [18] videos. Our findings confirm that human judges exhibit substantial agreement on what constitutes a "best view" in a how-to video, establishing that it is possible to rigorously evaluate this task. Furthermore, our results show SWITCH-A-VIEW outperforms the state-of-the-art in multi-view video view selection [32] as well as proprietary VLMs like Gemini 2.5 Pro and GPT-4o [12, 23] and other baselines.
33
+
34
+ # 2. Related work
35
+
36
+ Automatic cinematography. In automatic cinematography, systems automate the process of creating an effective video presentation given a video scene, such as controlling camera movements, angles, and transitions. Prior work targets classroom environments [16, 21, 56], group activities [2], or (pseudo-)panoramic recordings [6, 7, 9, 15, 49, 50, 55]. Different from all of the above, we tackle view selection in multi-view instructional scenarios. Moreover, we seek a lighter-weight supervision solution: whereas prior work uses supervised discriminative methods requiring large-scale best view labels [7, 22, 49] or bootstraps view selector training using large-scale multi-view videos annotated with view-agnostic narrations [32], we aim to learn view selection from readily available in-the-wild unlabeled instructional videos. Furthermore, our model is multimodal, integrating both the video content as well as its transcribed speech.
39
+
40
+ View selection in active perception. More distant from our problem, work in active perception and robotics considers how agents can intelligently select their visual input stream. This includes next-best-view selection, where an embodied agent learns to actively place a camera for recognition [1, 8, 13, 24, 40] or segmentation [44, 45]. Whereas the objective in such work is to spend less time or compute for an agent to see sufficient content, our goal is instead to choose the sequence of informative camera views for human consumption, from among the available viewpoints.
41
+
42
+ Weak supervision from Web data. Large-scale instructional data from the Web has been shown to provide weak supervision for understanding instructional activities, by aligning frames [33] and narrations [29, 33] with their step descriptions from instructional Web articles (e.g., Wiki-How), or through modeling the temporal order and interdependence of steps [3, 58, 59]. Unlike any of these methods, we tackle a distinct problem of weakly supervised view-switch detection in instructional videos, with the end goal of using the detector for view selection.
43
+
44
+ Video summarization. Temporal video summarization [4, 20, 35, 37, 42] entails creating a short but informative summary of a long video by subsampling keyframes or clips from it. While early methods are largely unsupervised [25, 30, 38, 47], more recent works derive supervision from manual labels [17, 19, 27, 41, 48, 57]. Limited work explores summarization in the context of multiple input videos [10, 14, 37, 43]. Video summarization and viewpoint selection are two entirely distinct tasks. Video summarization aims to downsample the video in time to the essential parts, whereas our task essentially requires downsampling the video in space to isolate the most informative viewpoint.
45
+
46
+ # 3. Approach
47
+
48
+ Our goal is to train a model to predict the "best view sequence" for multi-camera instructional videos — the sequence of camera viewpoints (views) that a human would most likely select to demonstrate an instructional activity (e.g., a close-up view of ingredients in a cooking video, moving to a wide-shot view when the chef speaks and gestures). To tackle this, we train a model for the proxy task of detecting "view switches" in varying-view instructional videos, which we then bootstrap to form a view selection model.
49
+
50
+ First, we formally define our pretext task (Sec. 3.1). Next, we describe how to source pseudo-labels for our pretext task by automatically classifying views in varying-view videos (Sec. 3.2). We then describe our method and how to train it to predict view-switches (Sec. 3.3).
51
+
52
+ ![](images/3b2aa182b53c02cc2334c9ded3b6ffeca75de2ec72434b7fc9dcb4fc8a08e3bc.jpg)
53
+ Figure 2. Given varying-view instructional videos—videos composed of a sequence of views chosen by human(s) to accurately show the instructional activity at all times—our goal is to train a view-switch detector $D$ that can predict if the view should switch or not, at any time in a new video. Our hypothesis is that such a detector, when trained on large-scale and in-the-wild videos, can capture human view preferences and facilitate learning best view selection in multi-view settings with limited labels. However, such in-the-wild videos lack view labels. To train nevertheless, we propose an approach comprising (a) a view pseudo-labeler (left) that given a varying-view instructional video $I$ , automatically classifies views in it and generates a pseudo-label set $\tilde{V}^I$ , and (b) a view-switch detector $D$ (right) that given the pseudo-labels $\tilde{V}^I$ and any time $t$ in $I$ , learns to predict the next view. The prediction is conditioned on the past frames, past narrations, and the next narration, where narrations are naturally occurring spoken content from the how-to demonstrator.
54
+
55
+ Finally, we describe how our view-switch detector can bootstrap learning a view selection model (Sec. 3.4 and 3.5) with limited labels.
56
+
57
+ # 3.1. View-switch detection as a pretext task
58
+
59
+ We introduce our pretext task: view-switch detection in varying-view instructional videos. Consider a varying-view instructional video $I$ , where the view changes back and forth over time between a close-up / egocentric-like (ego) view, and a wide shot / exocentric-like (exo) view. $^{1}$ This results in a sequence of varying views $V$ . The instructional video also contains a sequence of narrations $N$ , where each narration $N_{i}$ has a start and end time, $(b_{i}, e_{i})$ , and provides commentary transcribed to text. These narrations are free-form spoken language from the demonstrator, which capture their actions ("hammer the nail in there") as well as side comments ("sometimes I use my sander instead", "thanks for watching!").
60
+
61
+ We formulate the view-switch detection task as a two-class view prediction problem, where at any time $t$ in the video, the model must detect if the view should be of type ego or exo, to best showcase the activity over the next $\Delta$ seconds. More specifically, we require a model $D$ that predicts the human-preferred view $V_{(t,t + \Delta]}$ given the past video, narrations and views, as well as the next narration. Formally,
62
+
63
+ $$
64
+ D(F_{[:t]}, N_{[:t]}, V_{[:t]}, N'_{(t,t+\Delta]}) = V_{(t,t+\Delta]},
65
+ $$
66
+
67
+ where $F_{[:t]}$ denotes the past frames, $N_{[:t]}$ the past narrations, and $V_{[:t]}$ the past views. $N'_{(t,t+\Delta]}$ is the next narration, if it overlaps with the prediction interval, and an empty string otherwise. Importantly, this formulation provides a path from the next-view prediction task to the view-switch task: since the most recent past view is observed, estimating the desired next view—and comparing it with the latest past view—is equivalent to predicting whether the view switched.
70
+
71
+ While past narrations provide high-level cues about past activity steps, past frames offer more fine-grained information about the steps and how they were viewed. They together form the past context that can help anticipate the next view. The next narration is essential to disambiguate between various potential actions that the demonstrator may do next, and the language directly hints at the appropriate views (e.g., the person says "next, let's take a closer look at ..." suggesting an ego view). Thus, combining these inputs will offer valuable cues to our detector.$^{2}$
72
+
73
+ Critically, we aim to train this detector on large-scale, in-the-wild instructional videos [34, 51]. We show that training for this pretext task can enable view selection models for multi-camera settings, with limited supervision. In short, representations developed to detect when to "switch view" can be repurposed with minimal modification to select the "best view" to switch to, since they contain rich knowledge of human-selected view-switch patterns in a large variety of in-the-wild scenarios. Next, we show how to source pseudolabels to train such models.
74
+
75
+ # 3.2. Sourcing "view-switch" pseudo-labels
76
+
77
+ Instructional videos [34, 51] are an ideal source of varying-view data, however they do not come paired with information about what camera viewpoint is chosen for each segment. We therefore design a strategy to automatically identify and pseudo-label their underlying view sequences. We do this in two stages (Fig. 2 left).
78
+
79
+ First, given a video $I$ , we use an off-the-shelf scene detector (PySceneDetect [5]) to compute scene boundaries. Using this, we split the video into a sequence of contiguous shots. Next, we classify each frame in the video using a pre-trained ego vs. exo view classifier, and then aggregate the class predictions into a shot-level pseudo-label. Specifically, given a shot from $I$ , we first split it into a sequence of fixed-length clips. Next, we feed each clip to the view classifier that produces the probability that the clip is from an ego vs. exo view. We then compute the pseudo-label for the whole shot by averaging the view probabilities across all its clips. We repeat these steps for all shots, and assign each frame in $I$ the same pseudo-label as the shot it lies in, to finally obtain a pseudo-label set $\tilde{V}^I$ . Combining the classifier with the scene detector reduces the overall noise in the pseudo-labels due to classification failures at scene boundaries. We use a learned model [28] for ego-exo view classification, trained on the Charades-Ego [46] dataset. See Supp. for details.
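+
+ To make this pipeline concrete, the following is a minimal sketch of the shot-level pseudo-labeler. It assumes PySceneDetect's `detect` API; `ego_prob_of_clip`, the clip length, and the 0.5 threshold stand in for the pretrained view classifier [28] and its exact configuration, which are left to Supp.
+
+ ```python
+ # Minimal sketch of the shot-level view pseudo-labeler (Sec. 3.2).
+ # `ego_prob_of_clip` stands in for the pretrained ego-vs-exo clip
+ # classifier [28]; the clip length and 0.5 threshold are assumptions.
+ from scenedetect import detect, ContentDetector
+
+ CLIP_LEN_S = 2.0  # assumed fixed clip length within a shot
+
+ def pseudo_label_video(video_path, ego_prob_of_clip):
+     """Return one (start_s, end_s, view) triple per shot."""
+     shots = detect(video_path, ContentDetector())  # scene boundaries [5]
+     labels = []
+     for start, end in shots:
+         s, e = start.get_seconds(), end.get_seconds()
+         # Split the shot into fixed-length clips and average their
+         # ego probabilities into a single shot-level score.
+         probs, t = [], s
+         while t < e:
+             probs.append(ego_prob_of_clip(video_path, t, min(t + CLIP_LEN_S, e)))
+             t += CLIP_LEN_S
+         ego_prob = sum(probs) / len(probs)
+         # Every frame in the shot inherits this label to form the set V~^I.
+         labels.append((s, e, "ego" if ego_prob >= 0.5 else "exo"))
+     return labels
+ ```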
80
+
81
+ # 3.3. View-switch detector design
82
+
83
+ Given a video $I$ and any time $t$ in it, our view-switch detector $D$ must successfully predict the view for the future time interval $(t, t + \Delta]$ . It must do so using the frames, narrations and views from the past, and also the next narration, if it overlaps with the prediction interval (c.f. Sec. 3.1). See Fig. 2 right. In the following, we provide details on how our method extracts features from each input and then aggregates them for making a view prediction.
84
+
85
+ Frame encoding. We begin by using a frame encoder $\mathcal{E}^F$ to embed the past frames $F_{[:t]}$ and produce a visual feature sequence $f$ , where each frame $F_i$ has a feature $f_i$ . We further enhance each feature $f_i$ by using a viewpoint encoder $\mathcal{E}^V$ to embed the corresponding view $V_i^F$ into a view feature and adding it to $f_i$ . We also encode frame $F_i$ 's temporal position relative to the start time of the most recent narration using a temporal encoder $\mathcal{E}^T$ and add the encoding to the enhanced frame feature. Producing a feature per frame and augmenting it with view and temporal information helps us create a fine-grained, and view- and temporally-aware representation predictive of the next view.
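+
+ As a minimal sketch of this enhancement step (the feature dimension, vocabulary sizes, and time discretization below are our assumptions; the paper only states that $\mathcal{E}^V$ and $\mathcal{E}^T$ are learnable embedding layers):
+
+ ```python
+ import torch.nn as nn
+
+ # Sketch of the view- and temporally-aware frame features (Sec. 3.3).
+ D = 768                          # assumed feature dimension
+ view_emb = nn.Embedding(2, D)    # E^V: ego = 0, exo = 1
+ time_emb = nn.Embedding(64, D)   # E^T over discretized relative times
+
+ def enhance_frames(f, views, rel_time_bins):
+     # f: (T, D) per-frame features from the frame encoder E^F
+     # views: (T,) view id per past frame (pseudo-label or annotation)
+     # rel_time_bins: (T,) frame time relative to the start of the most
+     #                recent narration, discretized into bin indices
+     return f + view_emb(views) + time_emb(rel_time_bins)
+ ```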
86
+
87
+ Narration encoding. Next, we encode each past narration from $N_{[:t]}$ , and the next narration $N_{(t,t + \Delta ]}^{\prime}$ by using an LLM encoder. This generates a text feature sequence $n$ for the past narrations and a single text feature $n^{\prime}$ for the next narration.
88
+
89
+ Similar to our encoding of past frames, we also make the features for past narrations view-aware. To do so, we first produce a per-view count of the frames that lie in the interval of each past narration $N_{i}$. We then estimate the dominant viewpoint for the narration—called the narration view, henceforth—by setting it to the most frequent view per the per-view frame count. Next, we use our view encoder $\mathcal{E}^{V}$ to embed the narration view into a view feature. Finally, we update the narration feature $n_i$ by adding the view feature to it.
92
+
93
+ Moreover, for both past and next narrations, we provide their temporal information to our model so that it can infer the alignment between the frames and the narrations, and use it to improve its cross-modal reasoning. To this end, we first normalize the start and end time pair for each past narration $N_{i}$ and next narration $N^{\prime}$ , to be relative to the start time of the first past narration. We then compute the mean time of each pair. These means convey the temporal locations of the narrations relative to each other. Next, we encode each relative mean with the temporal encoder $\mathcal{E}^T$ and obtain a temporal feature. Finally, we update the narration features, $n$ and $n^{\prime}$ , by adding them with their temporal features.
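+
+ A small sketch of these two computations follows; the function names and the frames-per-second convention are illustrative assumptions:
+
+ ```python
+ # Sketch of the narration view and relative mean time (Sec. 3.3).
+ def narration_view(frame_views, b_i, e_i, fps=1.0):
+     """Most frequent view among the frames inside narration [b_i, e_i]."""
+     window = frame_views[int(b_i * fps): int(e_i * fps)]
+     return max(set(window), key=window.count)
+
+ def relative_mean_time(b_i, e_i, b_first):
+     """Mean time of a narration, relative to the first past narration."""
+     return ((b_i - b_first) + (e_i - b_first)) / 2.0
+ ```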
94
+
95
+ Feature aggregation and view classification. To aggregate the visual and narration features, we first add modality features to the frame features $f$ , and narration features, $n$ and $n'$ , respectively. These are modality-specific learnable embeddings that help distinguish between the visual and text modalities, and successfully do cross-modal reasoning.
96
+
97
+ We also introduce a [CLS] token in our model, and embed it with an encoder $\mathcal{E}^{\mathrm{CLS}}$ to produce a feature $c$, so that the output of our feature aggregator, which corresponds to the [CLS] token, can be used to estimate the next view. Next, we feed the frame features $f$, the past narration features $n$, the next narration feature $n'$, and the [CLS]-token feature $c$ into a feature aggregator $\mathcal{A}$. $\mathcal{A}$ comprises a transformer [53] encoder that performs self-attention on all features and extracts multi-modal cues that are predictive of the next view. Finally, we take the output feature of $\mathcal{A}$, which corresponds to the [CLS] token, and pass it to a view classification head $\mathcal{H}$ to get an estimate $\hat{V}_{(t,t+\Delta]}$ of the next view $V_{(t,t+\Delta]}$. Formally,
98
+
99
+ $$
100
+ \hat{V}_{(t,t+\Delta]} = \mathcal{H}(\mathcal{A}(f, n, n', c)[j_{\mathrm{CLS}}]), \tag{1}
101
+ $$
102
+
103
+ where $j_{\mathrm{CLS}}$ is the feature index for the [CLS] token.
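+
+ A minimal sketch of how Eq. (1) can be realized: the 8-layer transformer encoder for $\mathcal{A}$ and the 2-layer MLP for $\mathcal{H}$ follow the implementation details in Sec. 5, while the widths, head count, and sequence layout are our assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Sketch of the feature aggregator A and view head H behind Eq. (1).
+ D = 768
+ layer = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
+ aggregator = nn.TransformerEncoder(layer, num_layers=8)            # A
+ head = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, 2))  # H
+
+ def predict_next_view(f, n, n_next, c):
+     # f: (T_f, D) frame feats, n: (T_n, D) past-narration feats,
+     # n_next: (1, D) next-narration feat, c: (1, D) [CLS] feat.
+     x = torch.cat([c, f, n, n_next], dim=0).unsqueeze(0)  # (1, L, D)
+     out = aggregator(x)               # self-attention over all tokens
+     return head(out[:, 0])            # ego/exo logits at the [CLS] slot
+ ```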
104
+
105
+ # 3.4. Repurposing switch detection for view selection
106
+
107
+ Recall that in view selection, given a multi-view instructional video $I$ and any time $t$ in it, the goal is to predict the view that is preferred by humans for showing the activity in an interval $[t, t + \Delta]$ . We introduce a view selector $S$ for tackling this task. $S$ is a modification of our view-switch detector $D$ , such that $S$ additionally has access to the frames from the simultaneously captured ego and exo views during the prediction interval $[t, t + \Delta]$ .
108
+
109
+ To this end, we first use our frame encoder $\mathcal{E}^F$ to embed the ego frames $F_{[t,t+\Delta]}^{G}$ and exo frames $F_{[t,t+\Delta]}^{X}$ into visual features $f^{G}$ and $f^{X}$, respectively. Next, we append $f^{G}$ and $f^{X}$ to the input sequence of our feature aggregator $\mathcal{A}$. Finally, we treat $\mathcal{A}$'s output feature for its [CLS] token input as a representation of the best view for $[t,t+\Delta]$, and feed it to the detector's view classification head $\mathcal{H}$ to get an estimate $\ddot{V}_{[t,t+\Delta]}$ of the best view $V_{[t,t+\Delta]}$.
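+
+ Continuing the detector sketch above, the selector only extends the aggregator's input sequence with the candidate-view features; everything else is inherited from the pretrained detector.
+
+ ```python
+ import torch
+
+ # Sketch of the view selector S (Sec. 3.4), reusing `aggregator` and
+ # `head` from the detector sketch above.
+ def select_best_view(f, n, n_next, c, f_ego, f_exo):
+     # f_ego, f_exo: (T_c, D) features of the simultaneous ego and exo
+     # frames over the prediction interval [t, t + Delta]
+     x = torch.cat([c, f, n, n_next, f_ego, f_exo], dim=0).unsqueeze(0)
+     out = aggregator(x)
+     return head(out[:, 0])  # best-view logits at the [CLS] slot
+ ```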
112
+
113
+ To learn view selection we initialize $S$ with our detector's parameters, trained on the view-switch detection task, and finetune it using a small set of samples labeled for view selection. This design enables us to effectively use the knowledge from pretraining and learn view selection with limited labels. Next, we provide details for training and finetuning.
114
+
115
+ # 3.5. Model training objective
116
+
117
+ We train our view-switch detector $D$ with a view classification loss $\mathcal{L}^D$ . We set $\mathcal{L}^D$ to
118
+
119
+ $$
120
+ \mathcal{L}^{D} = \mathcal{L}_{\mathrm{CE}}\left(\hat{V}_{(t,t+\Delta]}, \tilde{V}_{(t,t+\Delta]}\right), \tag{2}
121
+ $$
122
+
123
+ where $\hat{V}_{(t,t + \Delta ]}$ is our estimated view (c.f. Sec. 3.3) and $\tilde{V}_{(t,t + \Delta ]}$ is the pseudo-label from our view pseudo-labeler (c.f. Sec. 3.2).
124
+
125
+ To train our view selector $S$ , we obtain a small training set of best view labels, $B$ , such that $B = \{V_{[t_1,t_1 + \Delta ]},\dots ,V_{[t_W,t_W + \Delta ]}\}$ , and $W$ is the label count in $B$ . For each best view label $V_{[t_w,t_w + \Delta ]}\in B$ , and the corresponding view estimate $\ddot{V}_{[t_w,t_w + \Delta ]}$ , per our view selector $S$ (c.f. Sec. 3.4), we set our view selection loss $\mathcal{L}^S$ to a cross-entropy loss, such that
126
+
127
+ $$
128
+ \mathcal{L}^{S} = \mathcal{L}_{\mathrm{CE}}\left(\ddot{V}_{[t_w,t_w+\Delta]}, V_{[t_w,t_w+\Delta]}\right). \tag{3}
129
+ $$
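+
+ Both objectives reduce to standard cross-entropy and differ only in where their targets come from; a one-function sketch:
+
+ ```python
+ import torch.nn.functional as F
+
+ # Sketch of Eqs. (2) and (3): the same cross-entropy loss, with
+ # pseudo-label targets for the detector D (Eq. 2) and the small
+ # human-labeled best-view set for the selector S (Eq. 3).
+ def view_loss(logits, targets):
+     # logits: (batch, 2) ego/exo scores, targets: (batch,) view ids
+     return F.cross_entropy(logits, targets)
+ ```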
130
+
131
+ Once trained, our framework can accurately choose the preferred view in novel multi-view videos.
132
+
133
+ # 4. Datasets and annotations
134
+
135
+ Datasets. We use two datasets in our experiments. HT100M [34] is a large-scale dataset of narrated, in-the-wild instructional videos. These videos are view-varying in nature, and the views can be broadly categorized as ego or exo. This, along with the diversity and realism of HT100M, makes it ideal for our view-switch detection task. Ego-Exo4D [18] contains multi-view videos, where each video is captured with five time-synced cameras—one is an ego camera worn by a human performing an instructional activity, and the other four are stationary exo cameras placed around the scene. Moreover, the narrate-and-act (N&A) subset of Ego-Exo4D has videos of humans narrating and performing an activity, where the narrations are free-form and match in style with HT100M, making it compatible with our task of view selection with limited labels.
136
+
137
+ Training data. To train the view-switch detector, we use 3,416 hours of HT100M videos spanning a diverse set of activities (cooking, DIY, household, etc.) and pseudo-label shots from these videos (c.f. Sec. 3.2). See Supp. for details.
138
+
139
+ Evaluation data. For evaluation, we use both HT100M and Ego-Exo4D [18], where the view-switch detection evaluation on Ego-Exo4D is zero-shot. While the training sets are automatically generated and pseudo-labeled, we ensure a gold-standard test set free of noise by manually annotating videos for our tasks. To this end, we recruit trained annotators to manually annotate the view types for HT100M and the human-preferred views for Ego-Exo4D, as follows.
140
+
141
+ For HT100M, we identify 975 hours of videos that do not overlap with our train videos above. We segment 4,487 fixed-length clips, each with length set to the prediction interval $\Delta$ (c.f. Sec. 3.1). Next, we ask trained annotators to label these clips as either ego or exo. See Supp. for full annotation instructions and more details.
142
+
143
+ For Ego-Exo4D, we create a test set containing 2.7 hours of N&A videos spanning six activity categories (cooking, bike repair, rock climbing, dancing, soccer, basketball). For each video, we use its "best-exo-view" annotation from Ego-Exo4D to generate an ego-exo view pair comprising the single ego and the best exo view. As before, we create $\Delta$ length clips from each view. We then couple the pair with its closest atomic activity description (time-stamped manual descriptions of the camera wearer's activity [18]) and ask our annotators to label the view between the two that best demonstrates the activity described in the narration (see Supp. Fig. 3). Importantly, this means that annotators specifically select the "best" view as the one that most clearly illustrates the current actions of the camera wearer, consistent with our how-to video view selection goal.
144
+
145
+ Annotator agreement on best view. To ensure annotation quality for both datasets, in addition to providing detailed annotation guidelines and concrete examples (available in Supp.), we require annotators to take qualifiers with stringent passing criteria and we solicit 9 annotators' responses for each instance. We accept an annotation only if the inter-annotator agreement is at least $78\%$ , meaning at least 7 out of 9 annotators agree. This resulted in a Cohen's kappa coefficient [11] of 0.65 for HT100M and 0.70 for Ego-Exo4D—both of which constitute "substantial" agreement [26]. This solid agreement assures the quality of our test set; despite there being some room for subjectivity in deciding the best view for a how-to, this data shows human judges are indeed able to substantially agree.
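+
+ A sketch of this agreement filter and one common way to aggregate Cohen's kappa over more than two raters (mean pairwise kappa) follows; the paper does not spell out its exact aggregation, so treat the second function as illustrative.
+
+ ```python
+ from itertools import combinations
+ from sklearn.metrics import cohen_kappa_score
+
+ # Sketch of the >=7-of-9 agreement filter and a multi-rater Cohen's
+ # kappa [11]; the pairwise aggregation is an assumption.
+ def keep_high_agreement(votes_per_instance, min_agree=7):
+     # votes_per_instance: list of 9-element lists of "ego"/"exo" votes
+     return [v for v in votes_per_instance
+             if max(v.count("ego"), v.count("exo")) >= min_agree]
+
+ def mean_pairwise_kappa(annotator_labels):
+     # annotator_labels: one label sequence per annotator, aligned over
+     # the same instances
+     kappas = [cohen_kappa_score(a, b)
+               for a, b in combinations(annotator_labels, 2)]
+     return sum(kappas) / len(kappas)
+ ```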
146
+
147
+ This results in a final total of 3,151 and 5,049 test instances (fixed-length clip-narration pairs from above), sampled from 3,677 HT100M and 33 Ego-Exo4D test videos, respectively. In Supp. we filter with even higher agreement thresholds, yielding even more selective (but smaller) test sets; trends for our method vs. baselines remain consistent.
148
+
149
+ Data for view selection with limited labels. We train and evaluate our view selector on a small dataset comprising Ego-Exo4D [18] videos.
150
+
151
+ <table><tr><td rowspan="2">Model</td><td colspan="3">HowTo100M [34]</td><td colspan="3">Ego-Exo4D [18]</td></tr><tr><td>Accuracy</td><td>AUC</td><td>AP</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>All-ego/exo</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>Random</td><td>52.0</td><td>52.0</td><td>51.0</td><td>49.3</td><td>49.3</td><td>49.7</td></tr><tr><td>Last-frame</td><td>42.3</td><td>42.3</td><td>53.4</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>First-person pronoun detector</td><td>47.8</td><td>47.8</td><td>46.4</td><td>50.3</td><td>50.3</td><td>50.1</td></tr><tr><td>Retrieval [54]-F</td><td>53.4</td><td>53.4</td><td>53.2</td><td>52.6</td><td>52.6</td><td>53.6</td></tr><tr><td>Retrieval [54]-N</td><td>52.1</td><td>52.1</td><td>51.8</td><td>52.0</td><td>52.0</td><td>50.6</td></tr><tr><td>Retrieval [54]-N&#x27;</td><td>52.6</td><td>52.6</td><td>52.9</td><td>52.1</td><td>52.1</td><td>52.6</td></tr><tr><td>SWITCH-A-VIEW (Ours)</td><td>59.4</td><td>63.8</td><td>60.5</td><td>51.2</td><td>56.4</td><td>55.4</td></tr></table>
152
+
153
+ For our training data, we follow our annotation protocol for evaluating view-switch detection on Ego-Exo4D, and collect view annotations for a total of 3.5 hours of training videos. This results in a total of 6,634 train instances. For evaluation, we use our test set from view-switch detection. This reuse is possible since a label indicates both the type (ego/exo) of the desired next view for view-switch detection as well as the desired current view for view selection. Train and test videos for this task are disjoint. See Supp. for details.
154
+
155
+ # 5. Experiments
156
+
157
+ Implementation. We set the durations of past frames to 8 seconds—corresponding to 0.23 and 2.31 switch(es) per second for HT100M and Ego-Exo4D, respectively—and past narrations to 32 seconds, and the prediction interval to $\Delta = 2$ seconds. We set the sample count for view selection to $W = 5000$. We evaluate view-switch detection on HowTo100M [34] by obtaining the views for the past frames (c.f. Sec. 3.3) from our pseudo-labeler. For Ego-Exo4D, we adopt a teacher-forcing setup and evaluate both tasks by using the ground-truth annotations for past frames and views. We implement our view-switch detector $D$ and view selector $S$ using the DINOv2 [36] encoder for our frame encoder $\mathcal{E}^F$, the Llama 2 [52] encoder for our narration encoder $\mathcal{E}^N$, an 8-layer transformer encoder [53] for our feature aggregator $\mathcal{A}$, a 2-layer MLP for the view classification head $\mathcal{H}$, and learnable embedding layers for our view encoder $\mathcal{E}^V$ and temporal encoder $\mathcal{E}^T$.
158
+
159
+ Baselines. We provide strong baselines comprising SOTA models and representations, as well as relevant heuristics. For view-switch detection, we compare against
160
+
161
+ - InternVideo2 retrieval [54]: a set of baselines that given the most recent past frame (Retrieval [54]- $F$ ), most recent past narration (Retrieval [54]- $N$ ), or next narration (Retrieval [54]- $N'$ ), first encodes [54] them into fine-grained features that capture multi-frame temporal contexts, then uses feature similarity to retrieve a nearest neighbor of the same input type from the train set, and finally outputs the next view for $F$ or $N$ , or the corresponding view for $N'$ , as its prediction.
162
+
163
+ Table 1. View-switch detection results. Evaluation on Ego-Exo4D [18] is zero-shot. All values are in %, and higher is better.
164
+
165
+ <table><tr><td>Model</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>Human performance (Upper bound)</td><td>82.3</td><td>83.5</td><td>81.7</td></tr><tr><td>All-ego/exo</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>Random</td><td>49.3</td><td>49.3</td><td>49.7</td></tr><tr><td>Last-frame</td><td>50.0</td><td>50.0</td><td>50.0</td></tr><tr><td>First-person pronoun detector</td><td>50.3</td><td>50.3</td><td>50.1</td></tr><tr><td>Retrieval [54]-F</td><td>52.3</td><td>52.3</td><td>53.6</td></tr><tr><td>Retrieval [54]-N</td><td>51.9</td><td>51.9</td><td>51.0</td></tr><tr><td>Retrieval [54]-N&#x27;</td><td>52.4</td><td>52.4</td><td>52.4</td></tr><tr><td>View-narration [54] Similarity</td><td>52.5</td><td>52.4</td><td>53.9</td></tr><tr><td>Finetuned X-CLIP [31]</td><td></td><td></td><td></td></tr><tr><td>Random negative sampling</td><td>52.1</td><td>52.0</td><td>53.1</td></tr><tr><td>Text-conditioned negative sampling</td><td>52.8</td><td>52.7</td><td>53.6</td></tr><tr><td>Proprietary VLMs</td><td></td><td></td><td></td></tr><tr><td>Gemini 2.5 Pro</td><td>51.2</td><td>51.2</td><td>51.0</td></tr><tr><td>GPT-4o</td><td>53.3</td><td>53.3</td><td>52.3</td></tr><tr><td>LangView [32]</td><td></td><td></td><td></td></tr><tr><td>-smallData</td><td>52.1</td><td>52.6</td><td>53.2</td></tr><tr><td>-bigData (privileged)</td><td>53.3</td><td>54.8</td><td>54.5</td></tr><tr><td>Ours w/o pretraining</td><td>50.1</td><td>51.6</td><td>51.3</td></tr><tr><td>SWITCH-A-VIEW (Ours)</td><td>54.0</td><td>57.3</td><td>56.0</td></tr></table>
166
+
167
+ Table 2. Results and ablation for view selection with limited labels. All values are in %; higher is better. Significance $p \leq 0.05$.
168
+
169
+ - All-ego, All-exo, Random, Last-frame: these are heuristics that use the ego view (All-ego), the exo view (All-exo), a randomly chosen (Random) view, or the view of the most recent past frame (Last-frame), as their prediction.
170
+ - First-person pronoun detector: a heuristic that predicts exo when it detects first-person pronouns like "I", "We", "My" or "Our" in the next narration, as human editors often use a wide shot that reveals their face or full body, when using such pronouns.
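+
+ The pronoun heuristic fits in a few lines; since the text does not specify what it predicts when no pronoun fires, falling back to the last observed view below is our assumption.
+
+ ```python
+ import re
+
+ # Sketch of the first-person pronoun baseline: predict an exo view
+ # whenever the next narration uses a first-person pronoun.
+ FIRST_PERSON = re.compile(r"\b(i|we|my|our)\b", re.IGNORECASE)
+
+ def pronoun_baseline(next_narration, last_view):
+     # Fallback to the last view is an assumption; the paper only
+     # defines the positive (exo) case.
+     return "exo" if FIRST_PERSON.search(next_narration) else last_view
+ ```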
171
+
172
+ For view selection with limited labels, in addition to the baselines listed above, we compare against the following:
173
+
174
+ - LangView [32]: a SOTA view selector that uses multiview videos and human-annotated narrations for weakly supervised pretraining. We finetune this model with our Ego-Exo4D labels (Sec. 4). We evaluate two versions of this baseline: LangView-bigData and LangView-smallData, which use large-scale Ego-Exo4D [32] videos, and our same small subset (Sec. 4), respectively, for pretraining. Note that the bigData variant enjoys access to $98 \times$ more training samples than our method, an advantage for the baseline.
175
+ - View-narration [54] Similarity (VN-Sim): separately computes the cosine similarity between the InternVideo2 features [54] for each view and the next narration, and picks the view most similar to the narration.
176
+ - Finetuned X-CLIP [31]: a finetuned CLIP [39]-style model that aligns the frames from the target view and the future narration. We explore two negative sampling strategies when finetuning: random and text-conditioned.
179
+
180
+ - Proprietary VLMs: we feed Gemini 2.5 Pro [12] and GPT-4o [23] all our view selection inputs and task them with choosing the best view by providing a text prompt similar to our guidelines for collecting human annotations (see Sec. 5 and Supp.).
181
+
182
+ LangView evaluates how our model fares against SOTA view selection, while the retrieval, view-narration and finetuned CLIP [39]-style baselines analyze whether SOTA video-language embeddings, whether frozen or finetuned, are sufficient for this task. The heuristics verify the challenging nature of the tasks. The proprietary VLMs evaluate if employing large-scale generalist models is enough.
183
+
184
+ Evaluation metrics. We consider three metrics: 1) Accuracy, which directly measures the agreement between our predictions and labels; 2) AUC, the area under the ROC curve; and 3) AP, the average precision (AP) of the precision vs. recall curve. We use AUC and AP to account for the possible class imbalance in our collected annotations. Moreover, for each metric, we separately compute its value for the same-view and view-switch instances in our test sets, and report the mean. This lets us account for differences in the same-view and view-switch frequency, and obtain unbiased performance measures.
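+
+ Concretely, the balanced reporting can be done by splitting the test instances on whether the ground truth switches views, scoring each split, and averaging; a sketch with sklearn metrics (array names are ours):
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_auc_score
+
+ # Sketch of the balanced protocol (Sec. 5): score same-view and
+ # view-switch instances separately, then average, so their unequal
+ # frequencies do not bias the reported metric.
+ def balanced_score(y_true, y_score, is_switch, metric=roc_auc_score):
+     y_true, y_score = np.asarray(y_true), np.asarray(y_score)
+     is_switch = np.asarray(is_switch, dtype=bool)
+     same = metric(y_true[~is_switch], y_score[~is_switch])
+     switch = metric(y_true[is_switch], y_score[is_switch])
+     return 0.5 * (same + switch)
+ ```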
185
+
186
+ View-switch detection. In Table 1, we report our view-switch detection results. The heuristics generally perform the worst on both datasets, underlining the challenging nature of the task. The Retrieval [54] baselines improve over them, indicating that our model inputs do provide cues about the view type. Among the Retrieval baselines, retrieving using the most recent past frame performs the best, showing that the past frames offer fine-grained task-relevant information beyond the narration words. Moreover, retrieving with the next narration is better than retrieving with the most recent past narration, revealing that the next narration carries more pertinent details about the desired view. This is likely because the next narration is better aligned with the time interval for which the view is being predicted.
187
+
188
+ Our method outperforms all baselines on both datasets, with the AUC margin over the best baseline, Retrieval [54]-$F$, being as high as $10.4\%$ on HowTo100M (HT100M) [34] and $3.8\%$ on Ego-Exo4D [18]. Our improvement over the Retrieval baselines shows that computing feature [54]-level similarities is not enough for this task. Instead, learning it by leveraging complementary cues from both narrations and frames is critical. Moreover, our zero-shot results on Ego-Exo4D speak to our model's efficacy vis-a-vis learning human view patterns from large-scale and in-the-wild videos, which generalize to different scenarios, without any training.
189
+
190
+ $^{4}$ Same-view instance count $= 1.6\times$ view-switch instance count for HT100M and $3.9\times$ for Ego-Exo4D.
191
+
192
+ <table><tr><td rowspan="2">Model</td><td colspan="3">HowTo100M [34]</td><td colspan="3">Ego-Exo4D [18]</td></tr><tr><td>Accuracy</td><td>AUC</td><td>AP</td><td>Accuracy</td><td>AUC</td><td>AP</td></tr><tr><td>N-only</td><td>53.5</td><td>54.4</td><td>52.3</td><td>50.0</td><td>48.7</td><td>49.0</td></tr><tr><td>N&#x27;-only</td><td>55.4</td><td>57.8</td><td>56.2</td><td>49.8</td><td>49.8</td><td>50.0</td></tr><tr><td>F-only</td><td>53.3</td><td>54.5</td><td>54.7</td><td>51.0</td><td>53.4</td><td>53.2</td></tr><tr><td>(F, N&#x27;)-only</td><td>55.5</td><td>60.1</td><td>58.1</td><td>52.1</td><td>54.2</td><td>52.6</td></tr><tr><td>(N, N&#x27;)-only</td><td>57.5</td><td>59.3</td><td>56.6</td><td>50.0</td><td>53.0</td><td>52.6</td></tr><tr><td>(F, N)-only</td><td>56.0</td><td>60.9</td><td>57.4</td><td>51.8</td><td>54.9</td><td>54.2</td></tr><tr><td>Ours</td><td>59.4</td><td>63.8</td><td>60.5</td><td>51.2</td><td>56.4</td><td>55.4</td></tr></table>
193
+
194
+ Table 3. Ablation study for view-switch detection. All values are in $\%$ and higher is better. Significance $p \leq 0.05$.
195
+
196
+ View selection. Table 2 shows our results on view selection with limited labels. For the heuristics and Retrieval [54] baselines, we observe the same performance trends as view-switch detection. The View-narration [54] Similarity (VN-Sim) baseline marginally improves over these methods, indicating that the frames from candidate views, when combined with the corresponding narration $(N^{\prime})$, provide direct cues about the preferred view. LangView [32]'s results benefit from its language-guided training, generally outperforming VN-Sim.
197
+
198
+ Our method significantly improves over all baselines, with the AUC margin over the best baseline, LangView [32]-bigData, being $2.5\%$ . Our gains over VN-Sim and Finetuned X-CLIP [31] underscore that using feature similarity to match the activity described in the next narration with a candidate view does not suffice, and instead a model like ours, which can leverage multi-modal cues from the combination of both past and candidate frames, and past and next narrations, is valuable for this task. Our improvement over the proprietary VLMs—despite their much larger size and training data—shows that task-specific experts are necessary to tackle our challenging task. Training our model from scratch with only the small set of best view labels ("ours w/o pretraining") is significantly weaker, showing that our view-switch pretraining idea is doing the heavy lifting.
199
+
200
+ Our gains over the SOTA LangView [32] show that learning view selection from language is less effective than learning it from large-scale human-edited videos, even when the videos and language are available at scale (bigData). Moreover, the insights of LangView and this work are complementary: we find that if we fine-tune SWITCH-A-VIEW with LangView's narration-based pseudo-labels, in addition to our labels (Sec. 4), we achieve further gains. See Supp. for details.
201
+
202
+ Ablations. Table 3 shows our ablation results for view-switch detection. Dropping any one input to our model degrades performance, indicating that each input plays a role. Dropping two inputs hurts the performance even more, showing that more inputs are better in any combination and suggesting our model design extracts complementary cues from them in all configurations.
203
+
204
+ ![](images/df1ea77d2faf66082b7f706a5f2968a6d35e3112ee2cba30bc1c81ff1cb1e718.jpg)
205
+ Figure 3. Left: successful view-switch detections by our model on same-view (top) and view-switch cases (bottom). Our model correctly detects view switches by anticipating the next step using past frames (same-view sample 1, and view-switch sample 2) or leveraging the content of the next narration (same-view sample 2, and view-switch sample 1 and 2). Right: successful view selections by our model on same-view (top) and view-switch cases (bottom). For view selection as well, our model can predict the desired next view by relying on the next narration (same-view sample 1, and view-switch sample 1 and 2), or anticipate it using the past narrations (same-view sample 1 and 2), or the past frames (same-view sample 1). These examples show that all three inputs play a role in our model predictions.
206
+
207
+ that vision provides fine-grained features necessary for high performance. Finally, using $N'$ instead of $N$ improves performance in some cases, showing the next narration's role.
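+
+ The ablation configurations in Table 3 amount to input masks over the three streams; the sketch below is purely illustrative, and the model call is a hypothetical placeholder rather than the actual SWITCH-A-VIEW interface.
+
+ ```python
+ # Illustrative input masking for the Table 3 ablations. A disabled input
+ # is passed as None; a real implementation might instead substitute a
+ # learned placeholder embedding so the architecture stays unchanged.
+ ABLATIONS = {
+     "N-only":       dict(frames=False, past_narr=True,  next_narr=False),
+     "N'-only":      dict(frames=False, past_narr=False, next_narr=True),
+     "F-only":       dict(frames=True,  past_narr=False, next_narr=False),
+     "(F, N')-only": dict(frames=True,  past_narr=False, next_narr=True),
+     "(N, N')-only": dict(frames=False, past_narr=True,  next_narr=True),
+     "(F, N)-only":  dict(frames=True,  past_narr=True,  next_narr=False),
+     "Ours":         dict(frames=True,  past_narr=True,  next_narr=True),
+ }
+
+ def forward_with_mask(model, batch, mask):
+     inputs = {name: (batch[name] if keep else None) for name, keep in mask.items()}
+     return model(**inputs)  # hypothetical call returning switch probabilities
+ ```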
208
+
209
+ See Supp. for more analysis, including the effect of past frame and narration durations and of sample count on model performance, as well as a scenario-level breakdown.
210
+
211
+ Qualitative examples. Fig. 3-left shows our model's successful view-switch detections on both same-view (top) and view-switch cases (bottom); see caption for details. We also notice some common failure modes with our model. For view-switch detection, our model sometimes fails when there is no next narration overlapping with the prediction interval, and neither the past frames nor narrations are predictive of the next view. In another failure type, the past views are wrongly categorized by our pseudo-labeler for HowTo100M [34] or by professional annotators for Ego-Exo4D [18]. This leads to our model getting confused and predicting the wrong next view. For view selection, in addition to these failures, our model can fail when both views look equally good. See Supp. for video examples.
214
+
215
+ # 6. Conclusion and future work
216
+
217
+ We introduced an approach for learning to select views from instructional videos by bootstrapping human-edited (but unlabeled) in-the-wild content. Results show the method's efficacy and establish a benchmark for this new task.
218
+
219
+ A potential limitation of our model is its clip-level predictions, which can lead to rapid switches between viewpoints over time. While hard cuts are in fact necessary at times to maximize informativeness, the trade-off between view informativeness and perceived viewing ease is interesting future work. Other challenges uncovered by our work are the distribution gap between edited in-the-wild and multi-view videos and the complexity of learning view selection from limited labels. In addition, we plan to generalize to continuous view selection, potentially by integrating ideas from novel view synthesis, and we will explore modeling user attention for personalized view selection.
220
+
221
+ # Acknowledgements
222
+
223
+ UT Austin is supported in part by the UT Austin IFML NSF AI Institute and the UT Austin MLL Center for Generative AI. We would also like to thank Zihui Xue for suggesting the PySceneDetect scene detector and for helpful discussions regarding the view classifier during the early stages of designing our view pseudo-labeler.
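+
+ For reference, a minimal usage sketch of the PySceneDetect [5] scene detector mentioned above is shown below; the video path and threshold are illustrative, and this is not the paper's exact pseudo-labeling pipeline.
+
+ ```python
+ # Sketch: detect hard cuts (candidate view switches) in an edited
+ # how-to video with PySceneDetect's high-level API.
+ from scenedetect import detect, ContentDetector
+
+ # threshold controls cut sensitivity; 27.0 is the library default.
+ scenes = detect("howto_clip.mp4", ContentDetector(threshold=27.0))
+ for start, end in scenes:
+     print(f"shot: {start.get_timecode()} -> {end.get_timecode()}")
+ ```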
224
+
225
+ # References
226
+
227
+ [1] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Košecka, and Alexander C Berg. A dataset for developing and benchmarking active vision. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1378-1385. IEEE, 2017. 2
228
+ [2] Ido Arev, Hyun Soo Park, Yaser Sheikh, Jessica Hodgins, and Ariel Shamir. Automatic editing of footage from multiple social cameras. ACM Trans. Graph., 33(4), 2014. 2
229
+ [3] Kumar Ashutosh, Santhosh Kumar Ramakrishnan, Triantafyllos Afouras, and Kristen Grauman. Video-mined task graphs for keystep recognition in instructional videos. Advances in Neural Information Processing Systems, 36, 2024. 2
230
+ [4] Taivanbat Badamdorj, Mrigank Rochan, Yang Wang, and Li Cheng. Contrastive learning for unsupervised video highlight detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14042-14052, 2022. 2
231
+ [5] Brandon Castellano. PySceneDetect. https://github.com/Breakthrough/PySceneDetect. 4
232
+ [6] Seunghoon Cha, Jungjin Lee, Seunghwa Jeong, Younghui Kim, and Junyong Noh. Enhanced interactive $360^{\circ}$ viewing via automatic guidance. ACM Trans. Graph., 39(5), 2020. 2
233
+ [7] Jianhui Chen, Keyu Lu, Sijia Tian, and Jim Little. Learning sports camera selection from internet videos. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1682-1691. IEEE, 2019. 1, 2
234
+ [8] Ricson Cheng, Ziyan Wang, and Katerina Fragkiadaki. Geometry-aware recurrent neural networks for active visual recognition. Advances in Neural Information Processing Systems, 31, 2018. 2
235
+ [9] Shih-Han Chou, Yi-Chun Chen, Kuo-Hao Zeng, Hou-Ning Hu, Jianlong Fu, and Min Sun. Self-view grounding given a narrated $360^{\circ}$ video. arXiv preprint arXiv:1711.08664, 2017. 2
236
+ [10] Wen-Sheng Chu, Yale Song, and Alejandro Jaimes. Video co-summarization: Video summarization by visual co-occurrence. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3584–3592, 2015. 2
237
+ [11] J. Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46, 1960. 5
238
+ [12] Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, et al. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. arXiv preprint arXiv:2507.06261, 2025. 2, 7
239
+
240
+ [13] Ruoyi Du, Wenqing Yu, Heqing Wang, Ting-En Lin, Dongliang Chang, and Zhanyu Ma. Multi-view active fine-grained visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1568–1578, 2023. 2
241
+ [14] Mohamed Elfeki, Liqiang Wang, and Ali Borji. Multistream dynamic video summarization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 339-349, 2022. 2
242
+ [15] J. Foote and D. Kimber. Flycam: practical panoramic video and automatic camera control. In 2000 IEEE International Conference on Multimedia and Expo. ICME2000. Proceedings. Latest Advances in the Fast Changing World of Multimedia (Cat. No.00TH8532), pages 1419-1422 vol.3, 2000. 1, 2
243
+ [16] Michael Gleicher and James Masanz. Towards virtual videography (poster session). In Proceedings of the Eighth ACM International Conference on Multimedia, page 375-378, New York, NY, USA, 2000. Association for Computing Machinery. 1, 2
244
+ [17] Boqing Gong, Wei-Lun Chao, Kristen Grauman, and Fei Sha. Diverse sequential subset selection for supervised video summarization. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2014. 2
245
+ [18] Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, et al. Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives. arXiv preprint arXiv:2311.18259, 2023. 2, 3, 5, 6, 7, 8
246
+ [19] Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In Computer Vision – ECCV 2014, pages 505–520, Cham, 2014. Springer International Publishing. 2
247
+ [20] Bo He, Jun Wang, Jielin Qiu, Trung Bui, Abhinav Shrivastava, and Zhaowen Wang. Align and attend: Multimodal summarization with dual contrastive losses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14867-14878, 2023. 2
248
+ [21] Rachel Heck, Michael Wallick, and Michael Gleicher. Virtual videography. In Proceedings of the 14th ACM International Conference on Multimedia, page 961-962, New York, NY, USA, 2006. Association for Computing Machinery. 2
249
+ [22] Hou-Ning Hu, Yen-Chen Lin, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, and Min Sun. Deep 360 pilot: Learning a deep agent for piloting through $360^{\circ}$ sports videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3451-3460, 2017. 1, 2
250
+ [23] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. 2, 7
251
+ [24] Dinesh Jayaraman and Kristen Grauman. End-to-end policy learning for active visual categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(7):1601-1614, 2019. 2
252
+ [25] Hong-Wen Kang, Y. Matsushita, Xiaoou Tang, and Xue-Quan Chen. Space-time video montage. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), pages 1331-1338, 2006. 2
255
+ [26] J. Landis and G. Koch. The measurement of observer agreement for categorical data. Biometrics, 1977. 5
256
+ [27] Yandong Li, Liqiang Wang, Tianbao Yang, and Boqing Gong. How local is the local diversity? reinforcing sequential determinantal point processes with dynamic ground sets for supervised video summarization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 151-167, 2018. 2
257
+ [28] Yanghao Li, Tushar Nagarajan, Bo Xiong, and Kristen Grauman. Ego-exo: Transferring visual representations from third-person to first-person videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6943-6953, 2021. 4
258
+ [29] Xudong Lin, Fabio Petroni, Gedas Bertasius, Marcus Rohrbach, Shih-Fu Chang, and Lorenzo Torresani. Learning to recognize procedural activities with distant supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13853-13863, 2022. 2
259
+ [30] Zheng Lu and Kristen Grauman. Story-driven summarization for egocentric video. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, page 2714-2721, USA, 2013. IEEE Computer Society. 2
260
+ [31] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-clip: End-to-end multi-grained contrastive learning for video-text retrieval. In Proceedings of the 30th ACM international conference on multimedia, pages 638–647, 2022. 6, 7
261
+ [32] Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, Reina Pradhan, and Kristen Grauman. Which viewpoint shows it best? Language for weakly supervising view selection in multi-view videos. In CVPR, 2025. 2, 6, 7
262
+ [33] Effrosyni Mavroudi, Triantafyllos Afouras, and Lorenzo Torresani. Learning to ground instructional articles in videos through narrations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15201-15213, 2023. 2
263
+ [34] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 2, 3, 5, 6, 7, 8
264
+ [35] Medhini Narasimhan, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, and Cordelia Schmid. Tl; dw? summarizing instructional videos with task relevance and cross-modal saliency. In European Conference on Computer Vision, pages 540-557. Springer, 2022. 2
265
+ [36] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 6
266
+ [37] Rameswar Panda and Amit K Roy-Chowdhury. Collaborative summarization of topic-related videos. In Proceedings of the IEEE Conference on computer vision and pattern recognition, pages 7083-7092, 2017. 2
267
+
268
+ [38] Danila Potapov, Matthijs Douze, Zaid Harchaoui, and Cordelia Schmid. Category-specific video summarization. In Computer Vision - ECCV 2014, pages 540-555, Cham, 2014. Springer International Publishing. 2
269
+ [39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 6, 7
270
+ [40] Santhosh K Ramakrishnan, Dinesh Jayaraman, and Kristen Grauman. Emergence of exploratory look-around behaviors through active observation completion. Science Robotics, 4 (30):eaaw6326, 2019. 2
271
+ [41] Mrigank Rochan, Linwei Ye, and Yang Wang. Video summarization using fully convolutional sequence networks. In Proceedings of the European conference on computer vision (ECCV), pages 347-363, 2018. 2
272
+ [42] Mrigank Rochan, Mahesh Kumar Krishna Reddy, Linwei Ye, and Yang Wang. Adaptive video highlight detection by learning from user history. In Computer Vision - ECCV 2020, pages 261-278, Cham, 2020. Springer International Publishing. 2
273
+ [43] Abhimanyu Sahu and Ananda S. Chowdhury. Shot level egocentric video co-summarization. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 2887-2892, 2018. 2
274
+ [44] Soroush Seifi and Tinne Tuytelaars. Attend and segment: Attention guided active semantic segmentation. In European Conference on Computer Vision, pages 305-321. Springer, 2020. 2
275
+ [45] Soroush Seifi, Abhishek Jha, and Tinne Tuytelaars. Glimpse-attend-and-explore: Self-attention for active visual exploration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16137–16146, 2021. 2
276
+ [46] Gunnar A Sigurdsson, Abhinav Gupta, Cordelia Schmid, Ali Farhadi, and Karteek Alahari. Charades-ego: A large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626, 2018. 4
277
+ [47] Michael Smith and Takeo Kanade. Video skimming for quick browsing based on audio and image characterization. Technical Report CMU-CS-95-186, Pittsburgh, PA, 1995. 2
278
+ [48] Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. Tvsum: Summarizing web videos using titles. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5179-5187, 2015. 2
279
+ [49] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman. Pano2vid: Automatic cinematography for watching 360 videos. In Asian Conference on Computer Vision, pages 154-171. Springer, 2016. 1, 2
280
+ [50] Xinding Sun, J. Foote, D. Kimber, and B. S. Manjunath. Region of interest extraction and virtual camera control based on panoramic video capturing. Trans. Multi., 7(5):981-990, 2005. 2
281
+ [51] Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1207-1216, 2019. 3
284
+ [52] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 6
285
+ [53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 4, 6
286
+ [54] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Zun Wang, Yansong Shi, et al. Internvideo2: Scaling foundation models for multimodal video understanding. In European Conference on Computer Vision, pages 396-416. Springer, 2024. 6, 7
287
+ [55] Bo Xiong and Kristen Grauman. Snap angle prediction for $360^{\circ}$ panoramas. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 2
288
+ [56] Cha Zhang, Yong Rui, Jim Crawford, and Li-Wei He. An automated end-to-end lecture capture and broadcasting system. ACM Trans. Multimedia Comput. Commun. Appl., 4(1), 2008. 1, 2
289
+ [57] Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman. Summary transfer: Exemplar-based subset selection for video summarization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1059-1067, 2016. 2
290
+ [58] Yiwu Zhong, Licheng Yu, Yang Bai, Shangwen Li, Xueling Yan, and Yin Li. Learning procedure-aware video representation from instructional videos and their narrations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14825-14835, 2023. 2
291
+ [59] Honglu Zhou, Roberto Martín-Martín, Mubbasir Kapadia, Silvio Savarese, and Juan Carlos Niebles. Procedure-aware pretraining for instructional video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10727-10738, 2023. 2
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a1bac0352e71db2b715e1546aa7562d7d09236a1cfeefba8c9a7129e09240c04
3
+ size 501515
2025/Switch-a-View_ View Selection Learned from Unlabeled In-the-wild Videos/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_content_list.json ADDED
@@ -0,0 +1,1650 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "SynAD: Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 127,
8
+ 128,
9
+ 869,
10
+ 176
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Jongsuk Kim $^{1*}$ Jaeyoung Lee $^{1*}$ Gyojin Han $^{1}$ Dong-Jae Lee $^{1}$ Minki Jeong $^{2}$ Junmo Kim $^{1}$ $^{1}$ KAIST $^{2}$ AI Center, Samsung Electronics",
17
+ "bbox": [
18
+ 107,
19
+ 202,
20
+ 887,
21
+ 239
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "{jskpop, mcneato, hangj0820, jhtwosun, junmo.kim}@kaist.ac.kr minki6.jeong@samsung.com",
28
+ "bbox": [
29
+ 107,
30
+ 241,
31
+ 883,
32
+ 258
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Abstract",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 246,
42
+ 291,
43
+ 326,
44
+ 306
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Recent advancements in deep learning and the availability of high-quality real-world driving datasets have propelled end-to-end autonomous driving. Despite this progress, relying solely on real-world data limits the variety of driving scenarios for training. Synthetic scenario generation has emerged as a promising solution to enrich the diversity of training data; however, its application within E2E AD models remains largely unexplored. This is primarily due to the absence of a designated ego vehicle and the associated sensor inputs, such as camera or LiDAR, typically provided in real-world scenarios. To address this gap, we introduce SynAD, the first framework designed to enhance real-world E2E AD models using synthetic data. Our method designates the agent with the most comprehensive driving information as the ego vehicle in a multi-agent synthetic scenario. We further project path-level scenarios onto maps and employ a newly developed Map-to-BEV Network to derive bird's-eye-view features without relying on sensor inputs. Finally, we devise a training strategy that effectively integrates these map-based synthetic data with real driving data. Experimental results demonstrate that SynAD effectively integrates all components and notably enhances safety performance. By bridging synthetic scenario generation and E2E AD, SynAD paves the way for more comprehensive and robust autonomous driving models.",
51
+ "bbox": [
52
+ 88,
53
+ 323,
54
+ 485,
55
+ 700
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1. Introduction",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 91,
65
+ 729,
66
+ 220,
67
+ 744
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Autonomous vehicles are moving from research labs to roads as deep learning and comprehensive real-world driving datasets [1] are leading to significant progress. Recent studies leverage LiDAR [16, 33] or multi-camera images [11-13] from these datasets to extract bird's-eye-view (BEV) features [3, 17, 22] for various tasks [10, 18, 21, 35]. These approaches improve the end-to-end autonomous driving (E2E AD) model's performance by incorporating per",
74
+ "bbox": [
75
+ 89,
76
+ 755,
77
+ 483,
78
+ 878
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "image",
84
+ "img_path": "images/e57ed366b71214652912617159824884b5a67f642e0911efa96037244b10b69e.jpg",
85
+ "image_caption": [
86
+ "(Training time)",
87
+ "Figure 1. Conceptual illustration of SynAD. During training, both real and synthetic data are used to generate BEV and MapBEV features for the E2E AD model, while only real data is used during testing to ensure practical applicability."
88
+ ],
89
+ "image_footnote": [],
90
+ "bbox": [
91
+ 516,
92
+ 310,
93
+ 906,
94
+ 522
95
+ ],
96
+ "page_idx": 0
97
+ },
98
+ {
99
+ "type": "text",
100
+ "text": "ception tasks (tracking and mapping), prediction tasks (motion forecasting and occupancy prediction), and planning tasks either in parallel [28] or in series [11, 12], in an end-to-end manner. To better capture the complexities of real-world driving environments, some studies [26, 36] have proposed methods incorporating 3D occupancy understanding.",
101
+ "bbox": [
102
+ 511,
103
+ 625,
104
+ 906,
105
+ 717
106
+ ],
107
+ "page_idx": 0
108
+ },
109
+ {
110
+ "type": "text",
111
+ "text": "However, relying solely on real-world datasets introduces a fundamental limitation: the high costs of data collection and labeling lead to a lack of diversity, restricting the range of scenarios available for training. To mitigate this issue, some studies [8, 34] generate paths under specific conditions and utilize CARLA simulator [6] to acquire corresponding camera data for additional model training. These approaches enable robust driving in extreme situations but still have limitations since they can only operate in virtual environments. In parallel, several studies have developed methods to generate realistic driving scenarios from real-world datasets without relying on simulators. These",
112
+ "bbox": [
113
+ 511,
114
+ 719,
115
+ 908,
116
+ 901
117
+ ],
118
+ "page_idx": 0
119
+ },
120
+ {
121
+ "type": "header",
122
+ "text": "CVF",
123
+ "bbox": [
124
+ 106,
125
+ 2,
126
+ 181,
127
+ 42
128
+ ],
129
+ "page_idx": 0
130
+ },
131
+ {
132
+ "type": "header",
133
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
134
+ "bbox": [
135
+ 238,
136
+ 0,
137
+ 807,
138
+ 46
139
+ ],
140
+ "page_idx": 0
141
+ },
142
+ {
143
+ "type": "page_footnote",
144
+ "text": "*Equal Contribution.",
145
+ "bbox": [
146
+ 109,
147
+ 886,
148
+ 220,
149
+ 900
150
+ ],
151
+ "page_idx": 0
152
+ },
153
+ {
154
+ "type": "page_number",
155
+ "text": "25197",
156
+ "bbox": [
157
+ 478,
158
+ 944,
159
+ 517,
160
+ 957
161
+ ],
162
+ "page_idx": 0
163
+ },
164
+ {
165
+ "type": "text",
166
+ "text": "works employ logic-based [38], language-guided [25, 37], and retrieval-based [5] methods to produce high-quality and diverse scenarios that satisfy specific conditions.",
167
+ "bbox": [
168
+ 89,
169
+ 90,
170
+ 480,
171
+ 136
172
+ ],
173
+ "page_idx": 1
174
+ },
175
+ {
176
+ "type": "text",
177
+ "text": "Despite the high generative capabilities of these methods, synthetic scenarios have not been effectively integrated into real-world E2E AD model training. A key limitation is that current scenario generation approaches yield only path-level outputs and overlook the designation of an ego vehicle. They also fail to generate the corresponding sensor inputs, such as multi-camera images and LiDAR data, which are required to establish the ego-centric perspective seen in real-world scenarios. Consequently, this absence restricts their integration into real-world E2E AD training pipelines.",
178
+ "bbox": [
179
+ 88,
180
+ 138,
181
+ 480,
182
+ 289
183
+ ],
184
+ "page_idx": 1
185
+ },
186
+ {
187
+ "type": "text",
188
+ "text": "To address these challenges, we propose SynAD, a novel framework that enhances real-world E2E AD models by integrating synthetic data. SynAD comprises three key components: First, we introduce an ego-centric scenario generation method specifically tailored for E2E AD training. During scenario generation, we set effective guides while designating the agent with the richest driving information as the ego vehicle. The path of the selected ego vehicle is then set as the target path and serves as additional training data for the E2E AD model. Second, we propose a Map-to-BEV Network to integrate synthetic scenarios into the E2E AD training pipeline. The Map-to-BEV Network encodes BEV features from maps that contain vehicle information from the synthetic scenarios, enabling this integration without relying on sensor data inputs. Finally, we reduce the domain gap between map-based synthetic data and real driving data by also projecting real scenarios onto a map, ensuring consistent integration as shown in Figure 1. Moreover, by selectively utilizing features extracted from each type of map at the most suitable stage, we avoid performance degradation from integrating map data and ensure the model achieves high test time performance with image-only inputs. Extensive ablation studies verify that each component of SynAD contributes effectively to the application of synthetic scenarios in E2E AD training. Our main contributions are summarized as follows:",
189
+ "bbox": [
190
+ 91,
191
+ 291,
192
+ 483,
193
+ 683
194
+ ],
195
+ "page_idx": 1
196
+ },
197
+ {
198
+ "type": "list",
199
+ "sub_type": "text",
200
+ "list_items": [
201
+ "- To overcome the lack of necessary sensor data and the absence of an ego-centric perspective in synthetic scenarios, we propose SynAD, a novel method that integrates synthetic data into real-world E2E AD models.",
202
+ "- SynAD introduces three key contributions: (1) ego-centric scenario generation method that transforms path-level scenarios into ego-centric maps by designating the most informative agent as the ego vehicle, (2) a Map-to-BEV Network that produces BEV features without relying on any sensor inputs, and (3) a training strategy that effectively utilizes both synthetic and real data.",
203
+ "- Extensive experiments demonstrate that SynAD outperforms existing methods, with ablation studies confirming the effectiveness of each component."
204
+ ],
205
+ "bbox": [
206
+ 89,
207
+ 686,
208
+ 482,
209
+ 897
210
+ ],
211
+ "page_idx": 1
212
+ },
213
+ {
214
+ "type": "text",
215
+ "text": "2. Related Works",
216
+ "text_level": 1,
217
+ "bbox": [
218
+ 513,
219
+ 89,
220
+ 663,
221
+ 104
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "2.1. Traffic Scenario Generation",
228
+ "text_level": 1,
229
+ "bbox": [
230
+ 513,
231
+ 117,
232
+ 764,
233
+ 132
234
+ ],
235
+ "page_idx": 1
236
+ },
237
+ {
238
+ "type": "text",
239
+ "text": "Traffic scenario generation is crucial for testing and improving autonomous driving systems by enabling safe and comprehensive validation in simulated environments. Recent studies [4, 8, 24, 27, 30, 34] have focused on safety-critical scenarios, which are difficult to capture in real-world driving due to cost and safety constraints. KING [8] uses a kinematic bicycle model to derive gradients of safety-critical objectives, updating paths that make the ego vehicle more likely to cause accidents. It also improves the robustness of E2E AD in synthetic driving environments based on simulators by fine-tuning these generated scenarios. Beyond purely safety-critical contexts, research on controllable scenario generation [14, 20, 23, 30, 37, 38] is also receiving great attention. These works introduce diffusion models that allow users to specify trajectory properties (e.g., reaching a goal, following speed limits) while preserving physical feasibility and natural behaviors. In addition, some studies [19, 25, 29, 37] leverage large language models to convert user queries into realistic traffic scenarios. RealGen [5] highlights a limitation in generative approaches that they often struggle to produce novel scenarios and propose combining behaviors from multiple retrieved examples for creating new scenarios. However, employing these generated scenarios to improve real-world E2E AD models remains largely unexplored.",
240
+ "bbox": [
241
+ 511,
242
+ 143,
243
+ 906,
244
+ 521
245
+ ],
246
+ "page_idx": 1
247
+ },
248
+ {
249
+ "type": "text",
250
+ "text": "2.2. End-to-End Autonomous Driving",
251
+ "text_level": 1,
252
+ "bbox": [
253
+ 513,
254
+ 542,
255
+ 805,
256
+ 559
257
+ ],
258
+ "page_idx": 1
259
+ },
260
+ {
261
+ "type": "text",
262
+ "text": "E2E AD, particularly vision-centric approaches, has become an active area of recent research. Unlike conventional AD methods [2, 7, 15, 32], which separate perception tasks and planning, vision-centric E2E methods integrate these components into a single unified model. These approaches provide interpretability and safety benefits while improving performance in each downstream task through end-to-end optimization. Planning-oriented modular design principles have driven several recent advances in E2E AD. ST-P3 [11] trains semantic occupancy prediction and planning in an end-to-end manner. UniAD [12] proposes a planning-oriented unification of tracking, online mapping, motion forecasting, occupancy prediction, and planning. Paradrive [28] achieves parallel processing across these modules, boosting runtime speed by nearly threefold. VAD [13] replaces dense rasterized scene representations with fully vectorized data to boost efficiency. Meanwhile, OccNet [26] and OccWorld [36] explore 3D occupancy representation by segmenting the scene into structured cells with semantic labels. Despite differences in network design and framework implementation, these methods all rely on BEV features derived from multi-camera inputs.",
263
+ "bbox": [
264
+ 511,
265
+ 568,
266
+ 908,
267
+ 902
268
+ ],
269
+ "page_idx": 1
270
+ },
271
+ {
272
+ "type": "page_number",
273
+ "text": "25198",
274
+ "bbox": [
275
+ 478,
276
+ 944,
277
+ 519,
278
+ 957
279
+ ],
280
+ "page_idx": 1
281
+ },
282
+ {
283
+ "type": "image",
284
+ "img_path": "images/7799caa30fe066416dbd16489b7ab44dc553055a720181f33b1d2bbf897f7977.jpg",
285
+ "image_caption": [
286
+ "Figure 2. Overview of SynAD. We generate synthetic multi-agent scenarios and convert them into ego-centric map representations $x_{\\mathrm{SM}}$ while real scenarios are similarly projected as $x_{\\mathrm{RM}}$ . To train Map-to-BEV Network, we use paired data from $x_{\\mathrm{RM}}$ and $x_{I}$ , ensuring that Map-to-BEV Network produces BEV feature consistent with the output of pretrained BEVFormer applied to multi-camera images. The synthetic scenario $x_{\\mathrm{SM}}$ can be converted into BEV feature $B_{\\mathrm{SM}}$ without any multi-camera images using our novel Map-to-BEV network. In the final E2E AD framework, we selectively apply BEV features only to modules that benefit most, thereby improving overall performance."
287
+ ],
288
+ "image_footnote": [],
289
+ "bbox": [
290
+ 94,
291
+ 88,
292
+ 903,
293
+ 349
294
+ ],
295
+ "page_idx": 2
296
+ },
297
+ {
298
+ "type": "text",
299
+ "text": "3. Method",
300
+ "text_level": 1,
301
+ "bbox": [
302
+ 89,
303
+ 452,
304
+ 183,
305
+ 468
306
+ ],
307
+ "page_idx": 2
308
+ },
309
+ {
310
+ "type": "text",
311
+ "text": "Our method aims to enhance the E2E AD model by leveraging synthetic data. First, we generate multi-agent driving scenarios that satisfy specific conditions and convert them into ego-centric scenarios by designating an ego vehicle and cropping a map centered around it. We denote this synthetic ego-centric map representation as $x_{\\mathrm{SM}}$ , which is used in E2E AD training. In parallel, we train the Map-to-BEV Network using $x_{\\mathrm{RM}}$ , constructed by projecting real-world scenarios onto a corresponding map representation. The Map-to-BEV Network aligns BEV features extracted from $x_{\\mathrm{RM}}$ with multi-camera images $x_{I}$ , enabling the use of synthetic scenarios without requiring sensor inputs. Finally, we propose a training strategy that incorporates synthetic scenarios into the E2E AD training process. Figure 2 provides an overview of our method.",
312
+ "bbox": [
313
+ 88,
314
+ 478,
315
+ 483,
316
+ 705
317
+ ],
318
+ "page_idx": 2
319
+ },
320
+ {
321
+ "type": "text",
322
+ "text": "3.1. Ego-centric Scenario Generation",
323
+ "text_level": 1,
324
+ "bbox": [
325
+ 89,
326
+ 714,
327
+ 379,
328
+ 729
329
+ ],
330
+ "page_idx": 2
331
+ },
332
+ {
333
+ "type": "text",
334
+ "text": "Realistic Scenario Generation. In an autonomous driving system, the trajectory of a vehicle is represented by its state $s$ at each time step $t$ . This state vector comprises four elements: position in 2D coordinates $(x,y)$ , speed $v$ , and heading angle $\\theta$ , represented as $s = (x,y,v,\\theta)$ . To generate realistic scenarios in ego-centric autonomous driving environments that meet desired conditions, we employ conditional diffusion models that aim to generate trajectory $\\tau$ , which includes the state of $M$ agents over $T$ timestamps:",
335
+ "bbox": [
336
+ 88,
337
+ 736,
338
+ 482,
339
+ 872
340
+ ],
341
+ "page_idx": 2
342
+ },
343
+ {
344
+ "type": "equation",
345
+ "text": "\n$$\n\\tau = \\left[ \\tau_ {1}, \\tau_ {2}, \\dots , \\tau_ {M} \\right], \\text {w h e r e} \\tau_ {i} = \\left[ s _ {i} ^ {1}, s _ {i} ^ {2}, \\dots , s _ {i} ^ {T} \\right] ^ {\\top}, \\tag {1}\n$$\n",
346
+ "text_format": "latex",
347
+ "bbox": [
348
+ 102,
349
+ 883,
350
+ 482,
351
+ 902
352
+ ],
353
+ "page_idx": 2
354
+ },
355
+ {
356
+ "type": "text",
357
+ "text": "$s_i^t$ denotes the state of agent $i$ at time $t$ , and $\\tau \\in \\mathbb{R}^{T \\times M \\times 4}$ . The diffusion model adds Gaussian noise in a forward process and then reconstructs it in a reverse process. Defining $\\tau^k$ as the trajectory at the $k$ -th diffusion step, the forward process is defined as:",
358
+ "bbox": [
359
+ 511,
360
+ 452,
361
+ 906,
362
+ 530
363
+ ],
364
+ "page_idx": 2
365
+ },
366
+ {
367
+ "type": "equation",
368
+ "text": "\n$$\nq \\left(\\tau^ {1: K} \\mid \\tau^ {0}\\right) = \\prod_ {k = 1} ^ {K} q \\left(\\tau^ {k} \\mid \\tau^ {k - 1}\\right), \\tag {2}\n$$\n",
369
+ "text_format": "latex",
370
+ "bbox": [
371
+ 553,
372
+ 537,
373
+ 906,
374
+ 578
375
+ ],
376
+ "page_idx": 2
377
+ },
378
+ {
379
+ "type": "equation",
380
+ "text": "\n$$\nq \\left(\\tau^ {k} \\mid \\tau^ {k - 1}\\right) = \\mathcal {N} \\left(\\tau^ {k}; \\sqrt {1 - \\beta_ {k}} \\tau^ {k - 1}, \\beta_ {k} I\\right), \\tag {3}\n$$\n",
381
+ "text_format": "latex",
382
+ "bbox": [
383
+ 553,
384
+ 580,
385
+ 903,
386
+ 606
387
+ ],
388
+ "page_idx": 2
389
+ },
390
+ {
391
+ "type": "text",
392
+ "text": "and $\\beta_{k}$ is the variance schedule controlling the amount of noise added at each diffusion step. Note that $\\tau^0$ represents the clean trajectory, and $\\tau^K$ represents the trajectory corrupted by random noise after $K$ diffusion steps.",
393
+ "bbox": [
394
+ 511,
395
+ 612,
396
+ 906,
397
+ 672
398
+ ],
399
+ "page_idx": 2
400
+ },
401
+ {
402
+ "type": "text",
403
+ "text": "To incorporate contextual information into the reverse diffusion process, we construct a composite feature $\\mathbf{f}$ by aggregating the image features from the past $h$ timestamps. These image features are extracted from maps that display only the road layout and environmental context, without any vehicle depictions. Then, the reverse diffusion process $p_{\\varphi}$ can be represented as follows:",
404
+ "bbox": [
405
+ 511,
406
+ 672,
407
+ 906,
408
+ 779
409
+ ],
410
+ "page_idx": 2
411
+ },
412
+ {
413
+ "type": "equation",
414
+ "text": "\n$$\np _ {\\varphi} \\left(\\tau^ {0: K} \\mid \\mathbf {f}\\right) = p \\left(\\tau^ {K}\\right) \\prod_ {k = 1} ^ {K} p _ {\\varphi} \\left(\\tau^ {k - 1} \\mid \\tau^ {k}, \\mathbf {f}\\right), \\tag {4}\n$$\n",
415
+ "text_format": "latex",
416
+ "bbox": [
417
+ 537,
418
+ 785,
419
+ 903,
420
+ 825
421
+ ],
422
+ "page_idx": 2
423
+ },
424
+ {
425
+ "type": "equation",
426
+ "text": "\n$$\np _ {\\varphi} \\left(\\tau^ {k - 1} \\mid \\tau^ {k}, \\mathbf {f}\\right) = \\mathcal {N} \\left(\\tau^ {k - 1}; \\mu_ {\\varphi} \\left(\\tau^ {k}, k, \\mathbf {f}\\right), \\Sigma_ {\\varphi} \\left(\\tau^ {k}, k, \\mathbf {f}\\right)\\right), \\tag {5}\n$$\n",
427
+ "text_format": "latex",
428
+ "bbox": [
429
+ 511,
430
+ 828,
431
+ 916,
432
+ 861
433
+ ],
434
+ "page_idx": 2
435
+ },
436
+ {
437
+ "type": "text",
438
+ "text": "where $\\tau^K\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})$ starts as random noise and is progressively denoised over $K$ steps by $p_{\\varphi}$",
439
+ "bbox": [
440
+ 511,
441
+ 869,
442
+ 906,
443
+ 901
444
+ ],
445
+ "page_idx": 2
446
+ },
447
+ {
448
+ "type": "page_number",
449
+ "text": "25199",
450
+ "bbox": [
451
+ 478,
452
+ 944,
453
+ 519,
454
+ 957
455
+ ],
456
+ "page_idx": 2
457
+ },
458
+ {
459
+ "type": "image",
460
+ "img_path": "images/83acb15766d6d77f0b95f71a39d7132c6d905be250021486e5b07423c4f80710.jpg",
461
+ "image_caption": [],
462
+ "image_footnote": [],
463
+ "bbox": [
464
+ 91,
465
+ 88,
466
+ 482,
467
+ 166
468
+ ],
469
+ "page_idx": 3
470
+ },
471
+ {
472
+ "type": "image",
473
+ "img_path": "images/8781af177424058693ce1679df864cb4fdb9c64cbb8988728f4233a452aae050.jpg",
474
+ "image_caption": [
475
+ "Figure 3. Examples of $x_{\\mathrm{SM}}$ over time. White box indicates the ego vehicle, while orange boxes denote other vehicles. The synthetic scenarios are conditioned on the existing map representation, then projected using vehicle states and size information."
476
+ ],
477
+ "image_footnote": [],
478
+ "bbox": [
479
+ 91,
480
+ 167,
481
+ 480,
482
+ 242
483
+ ],
484
+ "page_idx": 3
485
+ },
486
+ {
487
+ "type": "text",
488
+ "text": "Guided Sampling. To generate realistic scenarios under diverse conditions, we apply guided sampling during inference. We define a guide $\\mathcal{J} = \\sum w_i R_i$ as the weighted sum of functions $R_i$ that measure rule satisfaction for the $i$ -th objective. In our work, we employ three specific objectives: preventing agent collisions, preventing map collisions, and adhering to speed limits (detailed in Supp. A.2). We then modify the denoising process by applying the gradient of this guide as follows:",
489
+ "bbox": [
490
+ 89,
491
+ 334,
492
+ 483,
493
+ 470
494
+ ],
495
+ "page_idx": 3
496
+ },
497
+ {
498
+ "type": "equation",
499
+ "text": "\n$$\np _ {\\varphi} \\left(\\tau^ {k - 1} \\mid \\tau^ {k}, \\mathbf {f}\\right) = \\mathcal {N} \\left(\\tau^ {k - 1}; \\mu_ {\\varphi} + \\nabla \\mathcal {J} (\\mu_ {\\varphi}), \\Sigma_ {\\varphi}\\right). \\tag {6}\n$$\n",
500
+ "text_format": "latex",
501
+ "bbox": [
502
+ 99,
503
+ 478,
504
+ 482,
505
+ 500
506
+ ],
507
+ "page_idx": 3
508
+ },
509
+ {
510
+ "type": "text",
511
+ "text": "By modifying the reverse diffusion process, we can dynamically generate trajectories that satisfy each objective.",
512
+ "bbox": [
513
+ 89,
514
+ 507,
515
+ 482,
516
+ 537
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "Ego-centric Scenario. To train the autonomous driving model using multi-agent scenarios generated by the diffusion model, we first determine the ego-vehicle among the agents. Since synthetic scenarios should contain comprehensive driving information, we establish an ego selection rule that designates the vehicle traveling the longest distance as the ego vehicle. Accordingly, the ego index $e$ is determined as:",
523
+ "bbox": [
524
+ 89,
525
+ 555,
526
+ 483,
527
+ 675
528
+ ],
529
+ "page_idx": 3
530
+ },
531
+ {
532
+ "type": "equation",
533
+ "text": "\n$$\ne = \\arg \\max _ {i} \\sum_ {t = 1} ^ {T - 1} d \\left(s _ {i} ^ {t}, s _ {i} ^ {t + 1}\\right), \\tag {7}\n$$\n",
534
+ "text_format": "latex",
535
+ "bbox": [
536
+ 186,
537
+ 683,
538
+ 482,
539
+ 724
540
+ ],
541
+ "page_idx": 3
542
+ },
543
+ {
544
+ "type": "text",
545
+ "text": "where $d(\\cdot, \\cdot)$ represents the distance between positions of states. We then crop a fixed-size area centered on the ego vehicle to form the input map $x_{\\mathrm{SM}}^t$ for timestamp $t$ .",
546
+ "bbox": [
547
+ 89,
548
+ 734,
549
+ 482,
550
+ 780
551
+ ],
552
+ "page_idx": 3
553
+ },
554
+ {
555
+ "type": "text",
556
+ "text": "Training the planning module requires the future trajectory of the ego vehicle and the bounding boxes of other vehicles to ensure collision-free predictions. Since the generated trajectories are in the absolute coordinate system, we transform them into coordinate systems relative to the selected ego vehicles to align them with the real data. The transformation aligns the driving direction of the ego vehicle with the positive $y$ -axis and sets its center as the origin.",
557
+ "bbox": [
558
+ 89,
559
+ 780,
560
+ 483,
561
+ 901
562
+ ],
563
+ "page_idx": 3
564
+ },
565
+ {
566
+ "type": "text",
567
+ "text": "To transform an arbitrary position $(x,y)$ in the absolute coordinate system relative to the state of ego vehicle $s$ , we define the transformation function as:",
568
+ "bbox": [
569
+ 511,
570
+ 90,
571
+ 905,
572
+ 135
573
+ ],
574
+ "page_idx": 3
575
+ },
576
+ {
577
+ "type": "equation",
578
+ "text": "\n$$\nT (x, y; s) = \\left( \\begin{array}{c c} \\sin s _ {\\theta} & - \\cos s _ {\\theta} \\\\ \\cos s _ {\\theta} & \\sin s _ {\\theta} \\end{array} \\right) \\times \\left( \\begin{array}{c} x - s _ {x} \\\\ y - s _ {y} \\end{array} \\right). \\qquad (8)\n$$\n",
579
+ "text_format": "latex",
580
+ "bbox": [
581
+ 537,
582
+ 142,
583
+ 905,
584
+ 176
585
+ ],
586
+ "page_idx": 3
587
+ },
588
+ {
589
+ "type": "text",
590
+ "text": "The derivation of this specific form of the rotation matrix is included in the Supp. A.3. To obtain the target path over the next $T_{p}$ timestamps in the ego-centric coordinate frame at time $t$ , we apply the transformation $T(\\cdot; s_{e}^{t})$ to the positions of the ego vehicle. We also transform the heading angle relative to the ego vehicle's orientation at time $t$ as:",
591
+ "bbox": [
592
+ 511,
593
+ 183,
594
+ 906,
595
+ 273
596
+ ],
597
+ "page_idx": 3
598
+ },
599
+ {
600
+ "type": "equation",
601
+ "text": "\n$$\n\\mathcal {T} ^ {t} = \\left\\{T \\left(s _ {x}, s _ {y}; s _ {e} ^ {t}\\right) \\mid s = s _ {e} ^ {t + t ^ {\\prime}}, t ^ {\\prime} \\in [ T _ {p} ] \\right\\}, \\tag {9}\n$$\n",
602
+ "text_format": "latex",
603
+ "bbox": [
604
+ 555,
605
+ 282,
606
+ 903,
607
+ 306
608
+ ],
609
+ "page_idx": 3
610
+ },
611
+ {
612
+ "type": "equation",
613
+ "text": "\n$$\n\\Theta^ {t} = \\left\\{s _ {\\theta} - s _ {e, \\theta} ^ {t} \\mid s = s _ {e} ^ {t + t ^ {\\prime}}, t ^ {\\prime} \\in [ T _ {p} ] \\right\\}, \\tag {10}\n$$\n",
614
+ "text_format": "latex",
615
+ "bbox": [
616
+ 557,
617
+ 310,
618
+ 903,
619
+ 335
620
+ ],
621
+ "page_idx": 3
622
+ },
623
+ {
624
+ "type": "text",
625
+ "text": "where $[T_p]$ denotes the set of integers from 1 to $T_{p}$ . To process the bounding box information, we first obtain the bounding box coordinates $b_i^{t + t'}$ , for each vehicle $i$ at time $t + t'$ . We then apply the transformation across other vehicles and timestamps, resulting in:",
626
+ "bbox": [
627
+ 511,
628
+ 344,
629
+ 905,
630
+ 421
631
+ ],
632
+ "page_idx": 3
633
+ },
634
+ {
635
+ "type": "equation",
636
+ "text": "\n$$\n\\mathcal {B} ^ {t} = \\left\\{T (x, y; s _ {e} ^ {t}) \\mid (x, y) \\in b _ {i} ^ {t + t ^ {\\prime}}, i \\in [ M ] \\backslash \\{e \\}, t ^ {\\prime} \\in [ T _ {p} ] \\right\\}. \\tag {11}\n$$\n",
637
+ "text_format": "latex",
638
+ "bbox": [
639
+ 511,
640
+ 430,
641
+ 916,
642
+ 467
643
+ ],
644
+ "page_idx": 3
645
+ },
646
+ {
647
+ "type": "text",
648
+ "text": "Finally, each scenario provides $(\\mathcal{T}^t,\\Theta^t,\\mathcal{B}^t,w_e,h_e)$ for the input $x_{\\mathrm{SM}}^t$ where $t\\in [T - T_p]$ . Unlike real driving datasets, the ego vehicle in synthetic data can vary in size across scenarios as shown in Figure 3. Therefore, ego vehicle's width and height $(w_{e},h_{e})$ are also included in each instance.",
649
+ "bbox": [
650
+ 511,
651
+ 467,
652
+ 905,
653
+ 544
654
+ ],
655
+ "page_idx": 3
656
+ },
657
+ {
658
+ "type": "text",
659
+ "text": "3.2. Map-to-BEV Network",
660
+ "text_level": 1,
661
+ "bbox": [
662
+ 511,
663
+ 551,
664
+ 720,
665
+ 568
666
+ ],
667
+ "page_idx": 3
668
+ },
669
+ {
670
+ "type": "text",
671
+ "text": "To address the absence of sensor inputs (e.g., multi-camera images or LiDAR) in synthetic scenarios, we introduce a Map-to-BEV Network $f_{B}$ that generates BEV features directly from ego-centric map inputs. Consistent with our synthetic data generation pipeline, we derive the map input $x_{\\mathrm{RM}}$ from the real scenario. A map encoder $f_{M}$ processes $x_{\\mathrm{RM}}$ into a spatial feature, which then serves as the key and value in a Transformer encoder. A learnable query $Q_{B}$ is used as the query input, producing the mapBEV feature. The encoding process is as follows:",
672
+ "bbox": [
673
+ 511,
674
+ 574,
675
+ 906,
676
+ 726
677
+ ],
678
+ "page_idx": 3
679
+ },
680
+ {
681
+ "type": "equation",
682
+ "text": "\n$$\n\\begin{array}{l} B _ {\\mathrm {R M}} = f _ {B} \\left(Q _ {B}, x _ {\\mathrm {R M}}\\right) (12) \\\\ = \\operatorname {T r a n s f o r m e r E n c o d e r} \\left(Q _ {B}, f _ {M} \\left(x _ {\\mathrm {R M}}\\right)\\right). (13) \\\\ \\end{array}\n$$\n",
683
+ "text_format": "latex",
684
+ "bbox": [
685
+ 537,
686
+ 734,
687
+ 903,
688
+ 770
689
+ ],
690
+ "page_idx": 3
691
+ },
692
+ {
693
+ "type": "text",
694
+ "text": "This design enables the Transformer encoder to capture spatial relationships within the map feature, producing accurate map-based BEV representation. Concurrently, we utilize the pre-trained BEVFormer [17] to extract BEV features $B_{I}$ from multi-camera images $x_{I}$ , which correspond to the map input $x_{\\mathrm{RM}}$ . To align the BEV features extracted from the map $(B_{\\mathrm{RM}})$ with those extracted from multi-camera images $(B_{I})$ , we employ an L2 loss function. Formally, the",
695
+ "bbox": [
696
+ 511,
697
+ 779,
698
+ 906,
699
+ 901
700
+ ],
701
+ "page_idx": 3
702
+ },
703
+ {
704
+ "type": "page_number",
705
+ "text": "25200",
706
+ "bbox": [
707
+ 478,
708
+ 944,
709
+ 519,
710
+ 957
711
+ ],
712
+ "page_idx": 3
713
+ },
714
+ {
715
+ "type": "image",
716
+ "img_path": "images/31b59196f2deea40c78a58d57fa617f7146355ce89586e0f7b39418a4fe64712.jpg",
717
+ "image_caption": [
718
+ "Figure 4. Overview of the Map-to-BEV training. We freeze pretrained BEVFormer and align $B_{\\mathrm{RM}}$ with $B_I$ , enabling the network to generate BEV representations without sensor inputs."
719
+ ],
720
+ "image_footnote": [],
721
+ "bbox": [
722
+ 96,
723
+ 88,
724
+ 477,
725
+ 300
726
+ ],
727
+ "page_idx": 4
728
+ },
729
+ {
730
+ "type": "text",
731
+ "text": "loss function for training the Map-to-BEV Network can be expressed as follows:",
732
+ "bbox": [
733
+ 89,
734
+ 380,
735
+ 483,
736
+ 411
737
+ ],
738
+ "page_idx": 4
739
+ },
740
+ {
741
+ "type": "equation",
742
+ "text": "\n$$\n\\mathcal {L} _ {\\text {m a p}} = \\left\\| B _ {\\mathrm {R M}} - B _ {I} \\right\\| _ {2} ^ {2}. \\tag {14}\n$$\n",
743
+ "text_format": "latex",
744
+ "bbox": [
745
+ 207,
746
+ 422,
747
+ 482,
748
+ 441
749
+ ],
750
+ "page_idx": 4
751
+ },
752
+ {
753
+ "type": "text",
754
+ "text": "This step allows the Map-to-BEV Network to generate BEV features without depending on sensor inputs. Consequently, we can encode the BEV features from $x_{\\mathrm{SM}}$ , enabling E2E AD model training using synthetic scenarios.",
755
+ "bbox": [
756
+ 89,
757
+ 452,
758
+ 483,
759
+ 513
760
+ ],
761
+ "page_idx": 4
762
+ },
763
+ {
764
+ "type": "text",
765
+ "text": "3.3. Training E2E AD with Generated Scenario",
766
+ "text_level": 1,
767
+ "bbox": [
768
+ 89,
769
+ 523,
770
+ 455,
771
+ 540
772
+ ],
773
+ "page_idx": 4
774
+ },
775
+ {
776
+ "type": "text",
777
+ "text": "To train our model, we use generated data $x_{\\mathrm{SM}}$ alongside multi-camera image data $x_{I}$ and real data $x_{\\mathrm{RM}}$ . The image data $x_{I}$ is fed into BEVFormer to produce BEV features $B_{I}$ , while map data $x_{\\mathrm{RM}}$ and $x_{\\mathrm{SM}}$ pass through the Map-to-BEV Network to obtain $B_{\\mathrm{RM}}$ and $B_{\\mathrm{SM}}$ , respectively. E2E AD models typically comprise three main BEV-based modules: a perception module that handles tracking and mapping, a prediction module for motion forecasting and occupancy prediction, and a planning module. Since our map input already includes much of the perception-level information, we do not incorporate it into the perception module, and focus on integrating map data into prediction and planning modules. In the following, we provide a brief overview of each module, with additional details in Supp. B.",
778
+ "bbox": [
779
+ 88,
780
+ 546,
781
+ 483,
782
+ 758
783
+ ],
784
+ "page_idx": 4
785
+ },
786
+ {
787
+ "type": "text",
788
+ "text": "Motion Forecasting. To predict trajectories for multi-agents over multi-timestamps with possible $N$ series of waypoints, we employ MotionEncoder and MotionDecoder based on deformable cross-attention [39]. MotionEncoder produces a motion query embedding $q_{\\mathrm{motion}}$ that represents general motion patterns, agent-centered motion offsets, and relationships to ego vehicle. The MotionDecoder refines this embedding using the BEV feature, yielding multiple",
789
+ "bbox": [
790
+ 89,
791
+ 779,
792
+ 483,
793
+ 902
794
+ ],
795
+ "page_idx": 4
796
+ },
797
+ {
798
+ "type": "text",
799
+ "text": "predicted trajectories $\\hat{\\mathbf{x}}_n$ and their probabilities $p_n$ . To handle multi-hypothesis output, we find trajectory $\\hat{\\mathbf{x}}_{n^*}$ , closest to ground-truth trajectory $\\mathbf{x}_{\\mathrm{GT}}$ by minimizing average displacement error over time. We then calculate joint negative log-likelihood loss as:",
800
+ "bbox": [
801
+ 511,
802
+ 90,
803
+ 905,
804
+ 167
805
+ ],
806
+ "page_idx": 4
807
+ },
808
+ {
809
+ "type": "equation",
810
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {J N L L}} = - \\log \\left(p _ {n ^ {*}} \\cdot P \\left(\\mathbf {x} _ {\\mathrm {G T}} \\mid \\hat {\\mathbf {x}} _ {n ^ {*}}\\right)\\right), \\tag {15}\n$$\n",
811
+ "text_format": "latex",
812
+ "bbox": [
813
+ 570,
814
+ 175,
815
+ 906,
816
+ 191
817
+ ],
818
+ "page_idx": 4
819
+ },
820
+ {
821
+ "type": "equation",
822
+ "text": "\n$$\n\\text {w h e r e} \\quad n ^ {*} = \\arg \\min _ {n} \\left(\\frac {1}{T} \\sum_ {t = 1} ^ {T} \\| \\hat {\\mathbf {x}} _ {n} ^ {t} - \\mathbf {x} _ {\\mathrm {G T}} ^ {t} \\| _ {2} ^ {2}\\right). \\tag {16}\n$$\n",
823
+ "text_format": "latex",
824
+ "bbox": [
825
+ 535,
826
+ 196,
827
+ 906,
828
+ 237
829
+ ],
830
+ "page_idx": 4
831
+ },
832
+ {
833
+ "type": "text",
834
+ "text": "With the minimum final displacement error (minFDE), total motion forecasting loss $\\mathcal{L}_{\\mathrm{motion}}$ is defined as:",
835
+ "bbox": [
836
+ 511,
837
+ 244,
838
+ 905,
839
+ 273
840
+ ],
841
+ "page_idx": 4
842
+ },
843
+ {
844
+ "type": "equation",
845
+ "text": "\n$$\n\\mathcal {L} _ {\\min \\mathrm {F D E}} = \\min _ {n} \\left(\\| \\hat {\\mathbf {x}} _ {n} ^ {T} - \\mathbf {x} _ {\\mathrm {G T}} ^ {T} \\| _ {2} ^ {2}\\right), \\tag {17}\n$$\n",
846
+ "text_format": "latex",
847
+ "bbox": [
848
+ 568,
849
+ 281,
850
+ 906,
851
+ 304
852
+ ],
853
+ "page_idx": 4
854
+ },
855
+ {
856
+ "type": "equation",
857
+ "text": "\n$$\n\\mathcal {L} _ {\\text {m o t i o n}} = \\lambda_ {\\mathrm {J N L L}} \\mathcal {L} _ {\\mathrm {J N L L}} + \\lambda_ {\\min \\mathrm {F D E}} \\mathcal {L} _ {\\min \\mathrm {F D E}},\n$$\n",
858
+ "text_format": "latex",
859
+ "bbox": [
860
+ 576,
861
+ 308,
862
+ 849,
863
+ 325
864
+ ],
865
+ "page_idx": 4
866
+ },
867
+ {
868
+ "type": "text",
869
+ "text": "where $\\lambda_{\\mathrm{JNLL}}$ and $\\lambda_{\\mathrm{minFDE}}$ are corresponding loss weights.",
870
+ "bbox": [
871
+ 511,
872
+ 333,
873
+ 888,
874
+ 349
875
+ ],
876
+ "page_idx": 4
877
+ },
878
+ {
879
+ "type": "text",
880
+ "text": "Occupancy Prediction. An occupancy prediction module forecasts future occupancy maps. Using embedding $\\hat{q}_{\\mathrm{motion}}$ from motion forecasting module, we first derive temporal queries $q_{\\mathrm{temp}}$ . Then, temporal queries refine a down-scaled BEV feature $B_{\\mathrm{state}}^{t-1}$ through a transformer-based OccDecoder, producing updated BEV feature $B_{\\mathrm{state}}^t$ . Once all timestamps have been processed, we combine final BEV features with instance queries to produce occupancy maps $\\hat{O}^t$ . The predicted occupancy maps $\\hat{O} = \\{\\hat{O}^1, \\dots, \\hat{O}^T\\}$ are compared with ground-truth occupancy maps $O_{\\mathrm{GT}}$ to compute occupancy prediction loss $\\mathcal{L}_{\\mathrm{occ}}$ , which consists of dice loss $\\mathcal{L}_{\\mathrm{dice}}$ and binary cross-entropy loss $\\mathcal{L}_{\\mathrm{bce}}$ as follows:",
881
+ "bbox": [
882
+ 511,
883
+ 366,
884
+ 906,
885
+ 547
886
+ ],
887
+ "page_idx": 4
888
+ },
889
+ {
890
+ "type": "equation",
891
+ "text": "\n$$\n\\mathcal {L} _ {\\text {o c c}} = \\lambda_ {\\text {d i c e}} \\mathcal {L} _ {\\text {d i c e}} (\\hat {O}, O _ {\\mathrm {G T}}) + \\lambda_ {\\text {b c e}} \\mathcal {L} _ {\\text {b c e}} (\\hat {O}, O _ {\\mathrm {G T}}), \\tag {18}\n$$\n",
892
+ "text_format": "latex",
893
+ "bbox": [
894
+ 535,
895
+ 555,
896
+ 905,
897
+ 574
898
+ ],
899
+ "page_idx": 4
900
+ },
901
+ {
902
+ "type": "text",
903
+ "text": "where $\\lambda_{\\mathrm{dice}}$ and $\\lambda_{\\mathrm{bce}}$ are the corresponding loss weights.",
904
+ "bbox": [
905
+ 511,
906
+ 582,
907
+ 882,
908
+ 598
909
+ ],
910
+ "page_idx": 4
911
+ },
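> A minimal sketch of Eq. (18); the soft dice with smoothing term `eps`, logits input, and unit weights are our assumptions about the parameterization.

```python
import torch
import torch.nn.functional as F

def occupancy_loss(occ_logits, occ_gt, w_dice=1.0, w_bce=1.0, eps=1.0):
    """Sketch of Eq. (18): dice + BCE over predicted occupancy maps.

    occ_logits, occ_gt: (T, H, W); occ_gt holds {0, 1} floats.
    """
    probs = torch.sigmoid(occ_logits)
    inter = (probs * occ_gt).sum()
    l_dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + occ_gt.sum() + eps)
    l_bce = F.binary_cross_entropy_with_logits(occ_logits, occ_gt)
    return w_dice * l_dice + w_bce * l_bce
```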
912
+ {
913
+ "type": "text",
914
+ "text": "Planning. Following prior works [11, 12], we concatenate a high-level command embedding with a learnable parameter and pass them through a linear layer to form the initial planning query $q_{\\mathrm{plan}}$ . A Transformer-based PlanDecoder refines this query using an adapted BEV feature $B_{a}$ as the key and value inputs:",
915
+ "bbox": [
916
+ 511,
917
+ 614,
918
+ 905,
919
+ 705
920
+ ],
921
+ "page_idx": 4
922
+ },
923
+ {
924
+ "type": "equation",
925
+ "text": "\n$$\n\\hat {q} _ {\\text {p l a n}} = \\operatorname {P l a n D e c o d e r} \\left(q _ {\\text {p l a n}}, B _ {a}\\right). \\tag {19}\n$$\n",
926
+ "text_format": "latex",
927
+ "bbox": [
928
+ 598,
929
+ 714,
930
+ 903,
931
+ 732
932
+ ],
933
+ "page_idx": 4
934
+ },
935
+ {
936
+ "type": "text",
937
+ "text": "Then $\\hat{q}_{\\mathrm{plan}}$ passes through an MLP to obtain displacement vectors $\\Delta \\hat{\\tau} = \\mathrm{MLP}(\\hat{q}_{\\mathrm{plan}})$ . Taking the cumulative sum of these displacements across timesteps produces the final predicted trajectory $\\hat{\\tau}$ . The planning loss is defined as the sum of the imitation loss and the collision loss, both computed from $\\hat{\\tau}$ . The imitation loss measures the L2 distance between the $\\hat{\\tau}$ and the ground-truth trajectory $\\tau$ . For the collision loss, we obtain the ego vehicle's bounding box at timestamp $t$ as:",
938
+ "bbox": [
939
+ 511,
940
+ 739,
941
+ 906,
942
+ 876
943
+ ],
944
+ "page_idx": 4
945
+ },
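> The displacement-to-trajectory step described above is a one-line cumulative sum; the flat `(T, 2)` head layout of the MLP output is our assumption.

```python
import torch

def decode_trajectory(q_plan_refined, mlp, T=6):
    """Sketch: an MLP predicts per-step displacements from the refined
    planning query; their cumulative sum is the predicted trajectory."""
    deltas = mlp(q_plan_refined).view(T, 2)   # per-step (dx, dy)
    return torch.cumsum(deltas, dim=0)        # (T, 2) positions
```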
946
+ {
947
+ "type": "equation",
948
+ "text": "\n$$\n\\hat {b} ^ {t} (\\delta) = \\operatorname {b o x} \\left(\\hat {\\tau} ^ {t}, w _ {e} + \\delta , h _ {e} + \\delta\\right). \\tag {20}\n$$\n",
949
+ "text_format": "latex",
950
+ "bbox": [
951
+ 596,
952
+ 883,
953
+ 903,
954
+ 901
955
+ ],
956
+ "page_idx": 4
957
+ },
958
+ {
959
+ "type": "page_number",
960
+ "text": "25201",
961
+ "bbox": [
962
+ 478,
963
+ 944,
964
+ 517,
965
+ 955
966
+ ],
967
+ "page_idx": 4
968
+ },
969
+ {
970
+ "type": "text",
971
+ "text": "where $\\delta$ is a safety margin. We then compute collision loss using IoU between $\\hat{b}^t (\\delta)$ and each other vehicle's bounding box $b_{i}^{t}$ across all timesteps. Combining these losses for multiple values of $\\delta$ yields the planning loss as below:",
972
+ "bbox": [
973
+ 89,
974
+ 90,
975
+ 483,
976
+ 154
977
+ ],
978
+ "page_idx": 5
979
+ },
980
+ {
981
+ "type": "equation",
982
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {c o l}} (\\delta) = \\sum_ {i, t} \\operatorname {I o U} \\left(\\hat {b} ^ {t} (\\delta), b _ {i} ^ {t}\\right), \\tag {21}\n$$\n",
983
+ "text_format": "latex",
984
+ "bbox": [
985
+ 156,
986
+ 162,
987
+ 482,
988
+ 196
989
+ ],
990
+ "page_idx": 5
991
+ },
992
+ {
993
+ "type": "equation",
994
+ "text": "\n$$\n\\mathcal {L} _ {\\text {p l a n}} = \\| \\tau - \\hat {\\tau} \\| _ {2} ^ {2} + \\sum_ {(\\lambda_ {\\delta}, \\delta)} \\lambda_ {\\delta} \\mathcal {L} _ {\\mathrm {c o l}} (\\delta). \\tag {22}\n$$\n",
995
+ "text_format": "latex",
996
+ "bbox": [
997
+ 171,
998
+ 200,
999
+ 482,
1000
+ 234
1001
+ ],
1002
+ "page_idx": 5
1003
+ },
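> Put together, Eqs. (20)-(22) can be sketched as below. The axis-aligned boxes and the example `(lambda_delta, delta)` pairs are simplifications of ours; the paper's ego box follows the heading (Eq. 20) and its margin values are not listed in this section.

```python
import torch

def box(center, w, h):
    """Axis-aligned box (x1, y1, x2, y2) around `center` (simplified Eq. 20)."""
    x, y = center
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def planning_loss(pred_traj, gt_traj, other_boxes, w_e, h_e,
                  margins=((1.0, 0.0), (0.5, 0.5))):
    """Sketch of Eqs. (21)-(22): imitation L2 plus IoU collision terms.
    other_boxes[t] is a list of (x1, y1, x2, y2) boxes; `margins` are
    placeholder (lambda_delta, delta) pairs."""
    loss = ((pred_traj - gt_traj) ** 2).sum()
    for lam, delta in margins:
        for t, others in enumerate(other_boxes):
            ego = box(pred_traj[t], w_e + delta, h_e + delta)  # b_hat^t(delta)
            loss = loss + lam * sum(iou(ego, b) for b in others)
    return loss
```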
1004
+ {
1005
+ "type": "text",
1006
+ "text": "Note that in training with generated scenarios, $(w_{e},h_{e})$ may vary, the ground-truth trajectory $\\tau$ is taken from $\\mathcal{T}$ (Eq. 9), and each bounding box $b_{i}^{t}$ is an element of $\\mathcal{B}$ (Eq. 11).",
1007
+ "bbox": [
1008
+ 89,
1009
+ 246,
1010
+ 482,
1011
+ 291
1012
+ ],
1013
+ "page_idx": 5
1014
+ },
1015
+ {
1016
+ "type": "text",
1017
+ "text": "Finally, the loss function for E2E AD training can be expressed by incorporating scaling factors as follows:",
1018
+ "bbox": [
1019
+ 89,
1020
+ 291,
1021
+ 483,
1022
+ 323
1023
+ ],
1024
+ "page_idx": 5
1025
+ },
1026
+ {
1027
+ "type": "equation",
1028
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {E 2 E}} = \\lambda_ {\\text {m o t i o n}} \\mathcal {L} _ {\\text {m o t i o n}} + \\lambda_ {\\text {o c c}} \\mathcal {L} _ {\\text {o c c}} + \\lambda_ {\\text {p l a n}} \\mathcal {L} _ {\\text {p l a n}}. \\tag {23}\n$$\n",
1029
+ "text_format": "latex",
1030
+ "bbox": [
1031
+ 119,
1032
+ 334,
1033
+ 482,
1034
+ 351
1035
+ ],
1036
+ "page_idx": 5
1037
+ },
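> The final objective is then a straightforward weighted sum of the three task losses; the unit weights below are placeholders, not the paper's values.

```python
def e2e_loss(l_motion, l_occ, l_plan, lams=(1.0, 1.0, 1.0)):
    """Sketch of Eq. (23)."""
    return lams[0] * l_motion + lams[1] * l_occ + lams[2] * l_plan
```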
1038
+ {
1039
+ "type": "text",
1040
+ "text": "Map-Data Integration. We incorporate map-based BEV features into the motion forecasting and planning modules, where additional contextual information (e.g., road geometry, traffic structure) proves beneficial. In contrast, occupancy prediction requires high spatial precision, making 2D map data less helpful [36]; we thus exclude map inputs for this module to prevent performance degradation. Experimental results confirm that this selective integration avoids degrading overall performance and maintains strong test-time accuracy with image-only data.",
1041
+ "bbox": [
1042
+ 89,
1043
+ 368,
1044
+ 483,
1045
+ 518
1046
+ ],
1047
+ "page_idx": 5
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "text": "4. Experiments",
1052
+ "text_level": 1,
1053
+ "bbox": [
1054
+ 89,
1055
+ 532,
1056
+ 223,
1057
+ 550
1058
+ ],
1059
+ "page_idx": 5
1060
+ },
1061
+ {
1062
+ "type": "text",
1063
+ "text": "4.1. Implementation Details",
1064
+ "text_level": 1,
1065
+ "bbox": [
1066
+ 89,
1067
+ 558,
1068
+ 307,
1069
+ 574
1070
+ ],
1071
+ "page_idx": 5
1072
+ },
1073
+ {
1074
+ "type": "text",
1075
+ "text": "Scenario Generation. We conduct all experiments using nuScenes [1], a real-world driving dataset containing 1,000 scenes. Each nuScenes scene consists of 40 video frames, capturing a 20-second video at $2\\mathrm{Hz}$ . We set the future prediction timestamp $T_{p}$ to 6, which yields 34 training instances per scene. For our main results, we train the model from scratch for 5 epochs while adding 500 synthetic scenes, which is equivalent to 7.5 epochs if training solely on the original nuScenes dataset. Despite this additional data, our total training cost remains lower than that of other E2E AD methods. For ablation studies, unless otherwise noted, we use 100 synthetic scenes for training. To maintain sufficient interaction complexity, we exclude any instance that contains only a single driving agent.",
1076
+ "bbox": [
1077
+ 89,
1078
+ 580,
1079
+ 482,
1080
+ 792
1081
+ ],
1082
+ "page_idx": 5
1083
+ },
1084
+ {
1085
+ "type": "text",
1086
+ "text": "Training Details. For training the Map-to-BEV Network, we freeze the pre-trained BEVFormer [17] and update only the Map-to-BEV network parameters over 20 epochs. For the E2E AD model, we train each module from scratch while keeping the BEVFormer and the Map-to-BEV network frozen. At test time, we apply the occupancy-based",
1087
+ "bbox": [
1088
+ 89,
1089
+ 809,
1090
+ 483,
1091
+ 902
1092
+ ],
1093
+ "page_idx": 5
1094
+ },
1095
+ {
1096
+ "type": "table",
1097
+ "img_path": "images/2bf52d3f4717ad54df103fee74542ead79133391b4e6fc522f98deb30b941244.jpg",
1098
+ "table_caption": [],
1099
+ "table_footnote": [],
1100
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"3\">no collision ↓</td><td colspan=\"3\">no offroad ↓</td></tr><tr><td>rule</td><td>real</td><td>rel real</td><td>rule</td><td>real</td><td>rel real</td></tr><tr><td>BITS [31]</td><td>0.065</td><td>0.099</td><td>0.352</td><td>0.018</td><td>0.099</td><td>0.355</td></tr><tr><td>BITS+opt [31]</td><td>0.041</td><td>0.070</td><td>0.353</td><td>0.005</td><td>0.100</td><td>0.358</td></tr><tr><td>CTG [38]</td><td>0.052</td><td>0.044</td><td>0.346</td><td>0.002</td><td>0.042</td><td>0.346</td></tr><tr><td>CTG++ [37]</td><td>0.036</td><td>0.040</td><td>0.332</td><td>0.004</td><td>0.038</td><td>0.328</td></tr><tr><td>SynAD(Ours)</td><td>0.033</td><td>0.045</td><td>0.330</td><td>0.002</td><td>0.040</td><td>0.324</td></tr></table>",
1101
+ "bbox": [
1102
+ 517,
1103
+ 89,
1104
+ 903,
1105
+ 183
1106
+ ],
1107
+ "page_idx": 5
1108
+ },
1109
+ {
1110
+ "type": "table",
1111
+ "img_path": "images/270c1a69d0d3a975c453a3cb4792abce9733d778c1a60168d5ef36a88ee5ab2a.jpg",
1112
+ "table_caption": [
1113
+ "Table 1. Evaluation of synthetic scenarios with varying guidance."
1114
+ ],
1115
+ "table_footnote": [],
1116
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"4\">L2(m) ↓</td><td colspan=\"4\">Collsion Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>ST-P3† [11]</td><td>1.33</td><td>2.11</td><td>2.90</td><td>2.11</td><td>0.23</td><td>0.62</td><td>1.27</td><td>0.71</td></tr><tr><td>UniAD [12]</td><td>0.48</td><td>0.74</td><td>1.07</td><td>0.76</td><td>0.12</td><td>0.13</td><td>0.28</td><td>0.17</td></tr><tr><td>VAD [13]</td><td>0.41</td><td>0.70</td><td>1.05</td><td>0.72</td><td>0.07</td><td>0.17</td><td>0.41</td><td>0.22</td></tr><tr><td>OCCNet† [26]</td><td>1.29</td><td>2.31</td><td>2.99</td><td>2.14</td><td>0.21</td><td>0.59</td><td>1.37</td><td>0.72</td></tr><tr><td>Paradrive [28]</td><td>0.25</td><td>0.46</td><td>0.74</td><td>0.48</td><td>0.14</td><td>0.23</td><td>0.39</td><td>0.25</td></tr><tr><td>OCCWorld [36]</td><td>0.32</td><td>0.61</td><td>0.98</td><td>0.64</td><td>0.06</td><td>0.21</td><td>0.47</td><td>0.24</td></tr><tr><td>SynAD (Ours)</td><td>0.52</td><td>0.78</td><td>1.10</td><td>0.80</td><td>0.04</td><td>0.10</td><td>0.20</td><td>0.11</td></tr></table>",
1117
+ "bbox": [
1118
+ 517,
1119
+ 222,
1120
+ 903,
1121
+ 337
1122
+ ],
1123
+ "page_idx": 5
1124
+ },
1125
+ {
1126
+ "type": "text",
1127
+ "text": "Table 2. Planning performance on the nuScenes validation set. ${}^{ \\dagger }$ denotes results evaluated under the ST-P3 metric.",
1128
+ "bbox": [
1129
+ 513,
1130
+ 347,
1131
+ 903,
1132
+ 375
1133
+ ],
1134
+ "page_idx": 5
1135
+ },
1136
+ {
1137
+ "type": "text",
1138
+ "text": "optimization from UniAD [12]. All experiments are conducted on 8 NVIDIA RTX 4090 GPUs with batch size 1 per GPU. More details can be found in the Supp. C.",
1139
+ "bbox": [
1140
+ 511,
1141
+ 402,
1142
+ 905,
1143
+ 450
1144
+ ],
1145
+ "page_idx": 5
1146
+ },
1147
+ {
1148
+ "type": "text",
1149
+ "text": "4.2. Main Results",
1150
+ "text_level": 1,
1151
+ "bbox": [
1152
+ 511,
1153
+ 458,
1154
+ 651,
1155
+ 473
1156
+ ],
1157
+ "page_idx": 5
1158
+ },
1159
+ {
1160
+ "type": "text",
1161
+ "text": "We evaluate our method on the nuScenes validation set, adopting the CTG++ [37] metrics for scenario generation and the VAD [13] evaluation protocol for the E2E AD task, ensuring consistency with existing methods. Details on the reporting rules can be found in Supp. D",
1162
+ "bbox": [
1163
+ 511,
1164
+ 481,
1165
+ 905,
1166
+ 556
1167
+ ],
1168
+ "page_idx": 5
1169
+ },
1170
+ {
1171
+ "type": "text",
1172
+ "text": "Scenario Generation. In Table 1, we evaluate our generated paths using three metrics: rule, real, and rel real. The rule metric indicates how strictly the generated trajectories adhere to given rules. The real metric measures absolute similarity to real-world data using the Wasserstein distance, while rel real assesses the realism of scene-level interactions between vehicles. Our method demonstrates robust compliance with traffic constraints, as indicated by its substantial rule score. Although it has a slightly lower real score, suggesting a looser correspondence to exact real-world trajectories, it achieves a higher rel real score that highlights more sophisticated multi-agent interactions. These results show that the generated trajectories deviate from real-world paths while still capturing diverse driving behaviors, which is advantageous for building more robust E2E AD systems.",
1173
+ "bbox": [
1174
+ 511,
1175
+ 577,
1176
+ 906,
1177
+ 819
1178
+ ],
1179
+ "page_idx": 5
1180
+ },
1181
+ {
1182
+ "type": "text",
1183
+ "text": "Planning. Table 2 presents our planning performance from two perspectives: trajectory accuracy, measured by the L2 distance error from the ground truth path, and safety, represented by the collision rate with other vehicles. While",
1184
+ "bbox": [
1185
+ 511,
1186
+ 839,
1187
+ 908,
1188
+ 902
1189
+ ],
1190
+ "page_idx": 5
1191
+ },
1192
+ {
1193
+ "type": "page_number",
1194
+ "text": "25202",
1195
+ "bbox": [
1196
+ 478,
1197
+ 944,
1198
+ 519,
1199
+ 957
1200
+ ],
1201
+ "page_idx": 5
1202
+ },
1203
+ {
1204
+ "type": "table",
1205
+ "img_path": "images/3617300c56e6bdeb5025db7815cfbf8aa5c97ad40d96304b346ab0468fb6c472.jpg",
1206
+ "table_caption": [],
1207
+ "table_footnote": [],
1208
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td colspan=\"3\">Motion Forecasting ↓</td><td colspan=\"2\">Occupancy. ↑</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td></tr><tr><td>UniAD [12]</td><td>0.75</td><td>1.10</td><td>0.166</td><td>61.9</td><td>39.7</td></tr><tr><td>VAD* [13]</td><td>0.78</td><td>1.11</td><td>0.169</td><td>-</td><td>-</td></tr><tr><td>Paradrive [28]</td><td>0.73</td><td>1.08</td><td>0.162</td><td>60.0</td><td>36.4</td></tr><tr><td>SynAD (Ours)</td><td>0.69</td><td>1.01</td><td>0.154</td><td>60.5</td><td>39.6</td></tr></table>",
1209
+ "bbox": [
1210
+ 94,
1211
+ 88,
1212
+ 478,
1213
+ 176
1214
+ ],
1215
+ "page_idx": 6
1216
+ },
1217
+ {
1218
+ "type": "table",
1219
+ "img_path": "images/5f14903ab0a5407edd9bfd7e73b34b36df1e9129219e30fc1dd564c9912cb36e.jpg",
1220
+ "table_caption": [
1221
+ "Table 3. Prediction performance on the nuScenes validation set. Results reproduced in our environments. *VAD does not have an occupancy prediction module."
1222
+ ],
1223
+ "table_footnote": [],
1224
+ "table_body": "<table><tr><td colspan=\"3\">Updated Modules</td><td rowspan=\"2\" colspan=\"3\">Motion Forecasting ↓</td><td rowspan=\"2\" colspan=\"2\">Occupancy. ↑</td><td rowspan=\"2\" colspan=\"2\">Plan.(avg.) ↓</td></tr><tr><td colspan=\"2\">xRM</td><td>xSM</td></tr><tr><td>Mot.</td><td>Occ.</td><td>Plan</td><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td><td>L2</td><td>Col.</td></tr><tr><td></td><td></td><td></td><td>0.76</td><td>1.11</td><td>0.162</td><td>60.1</td><td>38.9</td><td>1.15</td><td>0.25</td></tr><tr><td></td><td></td><td></td><td>0.75</td><td>1.13</td><td>0.166</td><td>59.3</td><td>38.3</td><td>0.82</td><td>0.19</td></tr><tr><td></td><td></td><td>✓</td><td>0.77</td><td>1.15</td><td>0.168</td><td>60.2</td><td>39.0</td><td>0.79</td><td>0.18</td></tr><tr><td></td><td>✓</td><td>✓</td><td>0.77</td><td>1.14</td><td>0.167</td><td>58.4</td><td>37.6</td><td>0.78</td><td>0.18</td></tr><tr><td>✓</td><td></td><td>✓</td><td>0.73</td><td>1.06</td><td>0.157</td><td>60.2</td><td>39.2</td><td>0.77</td><td>0.14</td></tr></table>",
1225
+ "bbox": [
1226
+ 94,
1227
+ 244,
1228
+ 480,
1229
+ 325
1230
+ ],
1231
+ "page_idx": 6
1232
+ },
1233
+ {
1234
+ "type": "text",
1235
+ "text": "our SynAD model exhibits slightly higher L2 distance errors due to the broader distribution of generated behaviors, it achieves the lowest collision rate among all baselines, indicating superior collision avoidance. This trade-off stems from emphasizing more diverse, realistic interactions during scenario generation, which yields safer but not necessarily GT-matching trajectories. Since real-world driving prioritizes collision avoidance over precise path replication, our approach is particularly well-suited for practical deployment. Additionally, SynAD is the only method to incorporate variations in vehicle sizes during training, further enhancing its adaptability to real-world driving conditions.",
1236
+ "bbox": [
1237
+ 88,
1238
+ 391,
1239
+ 482,
1240
+ 571
1241
+ ],
1242
+ "page_idx": 6
1243
+ },
1244
+ {
1245
+ "type": "text",
1246
+ "text": "Prediction. Motion forecasting and occupancy prediction results provide insights into the E2E AD model's ability to interpret and anticipate the behavior of surrounding objects and agents. The results in Table 3 show that SynAD excels in accurately predicting the movements of surrounding agents and maintains a solid understanding of environmental occupancy. Even when synthetic data is introduced as a new input type, the model demonstrates robust performance during testing with image-only input, validating the effectiveness of the integration strategy.",
1247
+ "bbox": [
1248
+ 88,
1249
+ 594,
1250
+ 482,
1251
+ 746
1252
+ ],
1253
+ "page_idx": 6
1254
+ },
1255
+ {
1256
+ "type": "text",
1257
+ "text": "4.3. Ablation Studies",
1258
+ "text_level": 1,
1259
+ "bbox": [
1260
+ 89,
1261
+ 756,
1262
+ 254,
1263
+ 771
1264
+ ],
1265
+ "page_idx": 6
1266
+ },
1267
+ {
1268
+ "type": "text",
1269
+ "text": "Training Strategy. To effectively leverage the synthetic scenarios in our E2E AD framework, we use $x_{\\mathrm{RM}}$ , the real scenario projected onto the map, as a training bridge. Table 4 presents the results of this approach. First, incorporating $x_{\\mathrm{SM}}$ into the planning module training significantly improves planning performance, while adding $x_{\\mathrm{RM}}$ provides a modest additional gain. However, when we extend real map to the occupancy prediction module, performance declines,",
1270
+ "bbox": [
1271
+ 88,
1272
+ 779,
1273
+ 482,
1274
+ 902
1275
+ ],
1276
+ "page_idx": 6
1277
+ },
1278
+ {
1279
+ "type": "table",
1280
+ "img_path": "images/b813e08a096800cf2227ec702557984f32435d6caa11d0b0b775374c3badf606.jpg",
1281
+ "table_caption": [
1282
+ "Table 4. Prediction and planning performance variations based on the incorporation of $x_{\\mathrm{RM}}$ and $x_{\\mathrm{SM}}$ in each E2E AD module."
1283
+ ],
1284
+ "table_footnote": [],
1285
+ "table_body": "<table><tr><td rowspan=\"2\"># Synthetic scenes</td><td colspan=\"3\">Motion Forecasting ↓</td><td colspan=\"2\">Occupancy. ↑</td><td colspan=\"2\">Plan.(avg.) ↓</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td><td>L2</td><td>Col.</td></tr><tr><td colspan=\"8\">Baseline</td></tr><tr><td>0</td><td>0.76</td><td>1.11</td><td>0.162</td><td>60.1</td><td>38.9</td><td>1.15</td><td>0.25</td></tr><tr><td colspan=\"8\">Same step (Fair comparison)</td></tr><tr><td>100</td><td>0.73</td><td>1.06</td><td>0.157</td><td>60.2</td><td>39.2</td><td>0.77</td><td>0.14</td></tr><tr><td>300</td><td>0.72</td><td>1.02</td><td>0.153</td><td>59.4</td><td>38.6</td><td>0.81</td><td>0.13</td></tr><tr><td>500</td><td>0.73</td><td>1.03</td><td>0.155</td><td>59.4</td><td>38.7</td><td>0.85</td><td>0.14</td></tr><tr><td colspan=\"8\">Same epoch (Longer training)</td></tr><tr><td>100</td><td>0.72</td><td>1.04</td><td>0.156</td><td>60.3</td><td>39.1</td><td>0.76</td><td>0.13</td></tr><tr><td>300</td><td>0.71</td><td>1.02</td><td>0.155</td><td>60.3</td><td>39.4</td><td>0.77</td><td>0.12</td></tr><tr><td>500</td><td>0.69</td><td>1.01</td><td>0.154</td><td>60.5</td><td>39.6</td><td>0.80</td><td>0.11</td></tr></table>",
1286
+ "bbox": [
1287
+ 517,
1288
+ 89,
1289
+ 903,
1290
+ 220
1291
+ ],
1292
+ "page_idx": 6
1293
+ },
1294
+ {
1295
+ "type": "table",
1296
+ "img_path": "images/500266f96ac87d17e38454facbe153821455b6987a13b293c4a1e114f2b8438e.jpg",
1297
+ "table_caption": [
1298
+ "Table 5. Performance under different numbers of generated scenes, comparing two training protocols."
1299
+ ],
1300
+ "table_footnote": [],
1301
+ "table_body": "<table><tr><td rowspan=\"2\">Arch.</td><td rowspan=\"2\">input res.</td><td rowspan=\"2\">\\( \\mathcal{L}_{\\text{map }}^{\\text{val}} \\downarrow (\\times 10^{-2}) \\)</td><td colspan=\"3\">Motion Forecasting ↓</td><td colspan=\"2\">Plan.(avg.) ↓</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>L2</td><td>Col.</td></tr><tr><td>SwinUNETR</td><td>800</td><td>9.55</td><td>0.75</td><td>1.11</td><td>0.158</td><td>1.08</td><td>0.26</td></tr><tr><td>Ours</td><td>224</td><td>8.96</td><td>0.73</td><td>1.06</td><td>0.157</td><td>0.77</td><td>0.14</td></tr></table>",
1302
+ "bbox": [
1303
+ 517,
1304
+ 272,
1305
+ 901,
1306
+ 327
1307
+ ],
1308
+ "page_idx": 6
1309
+ },
1310
+ {
1311
+ "type": "text",
1312
+ "text": "Table 6. Performance variations based on Map-to-BEV network architectures.",
1313
+ "bbox": [
1314
+ 511,
1315
+ 337,
1316
+ 903,
1317
+ 364
1318
+ ],
1319
+ "page_idx": 6
1320
+ },
1321
+ {
1322
+ "type": "text",
1323
+ "text": "suggesting that 2D map representations alone are insufficient for this task. These results indicate that BEV features extracted from map data suffice for motion forecasting and planning but fall short for occupancy prediction. The latter often requires richer spatial information, as evidenced by OCCNet [26] and OCCWorld [36], which leverage 3D data to improve performance. Consequently, our main training strategy updates only the motion forecasting and planning modules through the map data.",
1324
+ "bbox": [
1325
+ 511,
1326
+ 392,
1327
+ 906,
1328
+ 529
1329
+ ],
1330
+ "page_idx": 6
1331
+ },
1332
+ {
1333
+ "type": "text",
1334
+ "text": "Scale of Synthetic Scenarios. Table 5 illustrates how varying their number influences performance under two training protocols: one with the same number of training steps and another with the same number of epochs. When no synthetic scenes are used, the model relies solely on multi-camera image data, forming our baseline. In the same-step protocol, incorporating a moderate amount of synthetic scene improves results, although further increases yield diminishing results. Under the same-epoch protocol, introducing more synthetic scenes consistently enhances performance, demonstrating that the model benefits from broader coverage given sufficient training iterations. Across both protocols, L2 distance tends to increase with more scenes, reflecting the broader distribution of synthetic scenarios. In particular, faster convergence under the same training steps as the baseline underscores the advantages of using the synthetic scenario.",
1335
+ "bbox": [
1336
+ 511,
1337
+ 547,
1338
+ 906,
1339
+ 806
1340
+ ],
1341
+ "page_idx": 6
1342
+ },
1343
+ {
1344
+ "type": "text",
1345
+ "text": "Map-to-BEV Network Architecture. Table 6 shows the ablation results on the different network architectures for the Map-to-BEV Network. We compare our model with SwinUNETR [9], which preserves spatial correspondence between the map and BEV features. One observation is that",
1346
+ "bbox": [
1347
+ 511,
1348
+ 824,
1349
+ 908,
1350
+ 902
1351
+ ],
1352
+ "page_idx": 6
1353
+ },
1354
+ {
1355
+ "type": "page_number",
1356
+ "text": "25203",
1357
+ "bbox": [
1358
+ 478,
1359
+ 944,
1360
+ 517,
1361
+ 955
1362
+ ],
1363
+ "page_idx": 6
1364
+ },
1365
+ {
1366
+ "type": "image",
1367
+ "img_path": "images/9907c64cca082aaa0fe19a502d629a2676f676b964130c5f1aad04a4f01c1fce.jpg",
1368
+ "image_caption": [
1369
+ "Figure 5. Qualitative result of SynAD. The performance of SynAD in an urban driving scenario is presented through six views capturing the surroundings. The front and back vehicles' motion forecasting are visualized with color-coded trajectories, where warmer colors (red) indicate more immediate movements and cooler colors (blue) represent later positions."
1370
+ ],
1371
+ "image_footnote": [],
1372
+ "bbox": [
1373
+ 93,
1374
+ 94,
1375
+ 908,
1376
+ 271
1377
+ ],
1378
+ "page_idx": 7
1379
+ },
1380
+ {
1381
+ "type": "table",
1382
+ "img_path": "images/6150845c5dd5d95a800a34de7d4e57dedb41dcba24b410c343bf532496235e95.jpg",
1383
+ "table_caption": [],
1384
+ "table_footnote": [],
1385
+ "table_body": "<table><tr><td rowspan=\"2\">agent</td><td rowspan=\"2\">Guide map</td><td rowspan=\"2\">speed</td><td colspan=\"4\">L2(m)↓</td><td colspan=\"4\">Collision Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>✓</td><td></td><td></td><td>0.48</td><td>0.73</td><td>1.05</td><td>0.75</td><td>0.05</td><td>0.15</td><td>0.35</td><td>0.18</td></tr><tr><td>✓</td><td>✓</td><td></td><td>0.49</td><td>0.74</td><td>1.06</td><td>0.76</td><td>0.05</td><td>0.12</td><td>0.27</td><td>0.15</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>0.50</td><td>0.75</td><td>1.07</td><td>0.77</td><td>0.05</td><td>0.11</td><td>0.26</td><td>0.14</td></tr></table>",
1386
+ "bbox": [
1387
+ 94,
1388
+ 351,
1389
+ 480,
1390
+ 416
1391
+ ],
1392
+ "page_idx": 7
1393
+ },
1394
+ {
1395
+ "type": "text",
1396
+ "text": "SwinUNETR requires high-resolution inputs to achieve sufficient performance, leading to higher computational costs. After training each Map-to-BEV Network variant, we evaluate BEV feature quality using an L2 loss on the validation dataset (i.e. $\\mathcal{L}_{\\mathrm{map}}^{\\mathrm{val}}$ ). Then, we compare the performance of motion forecasting and planning in E2E AD training, which are influenced by map data. The results demonstrate that our design outperforms SwinUNETR while incurring lower computational costs.",
1397
+ "bbox": [
1398
+ 89,
1399
+ 482,
1400
+ 483,
1401
+ 619
1402
+ ],
1403
+ "page_idx": 7
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "text": "Guide Composition for Scenario Generation. Table 7 presents the impact of different guide compositions from Equation 6 on planning performance. The agent guide aims to prevent collisions between agents, and the map guide prevents collisions with map components, and the speed guide enforces both minimum and maximum speeds. Incorporating the map and speed guides progressively decreases collision rates, demonstrating that these additional constraints enhance safety. While we focus on specific guide functions, extending the approach, such as using LLM- or retrieval-based guidance, is left for future work.",
1408
+ "bbox": [
1409
+ 89,
1410
+ 638,
1411
+ 483,
1412
+ 806
1413
+ ],
1414
+ "page_idx": 7
1415
+ },
1416
+ {
1417
+ "type": "text",
1418
+ "text": "Ego Selection Rule. For synthetic scenario generations, we experiment with three ego selection rules: random, dynamic, and longest. Each rule selects vehicles with a minimum movement of $1m$ over the generated timestamps to ensure meaningful training. The random rule selects any",
1419
+ "bbox": [
1420
+ 89,
1421
+ 825,
1422
+ 483,
1423
+ 902
1424
+ ],
1425
+ "page_idx": 7
1426
+ },
1427
+ {
1428
+ "type": "table",
1429
+ "img_path": "images/331a71cde1e46cd39ffae25ea4049104e8699f5ef70ef4110764a413e522fa3b.jpg",
1430
+ "table_caption": [
1431
+ "Table 7. Planning performance with varying guide functions for sampling synthetic scenarios."
1432
+ ],
1433
+ "table_footnote": [],
1434
+ "table_body": "<table><tr><td rowspan=\"2\">Rule</td><td rowspan=\"2\">Dist.</td><td colspan=\"4\">L2(m) ↓</td><td colspan=\"4\">Collsion Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>Random</td><td>3.67</td><td>0.51</td><td>0.77</td><td>1.09</td><td>0.79</td><td>0.06</td><td>0.11</td><td>0.31</td><td>0.16</td></tr><tr><td>Dynamic</td><td>3.68</td><td>0.50</td><td>0.77</td><td>1.11</td><td>0.79</td><td>0.04</td><td>0.10</td><td>0.31</td><td>0.15</td></tr><tr><td>Longest</td><td>3.70</td><td>0.50</td><td>0.75</td><td>1.07</td><td>0.77</td><td>0.05</td><td>0.11</td><td>0.26</td><td>0.14</td></tr></table>",
1435
+ "bbox": [
1436
+ 517,
1437
+ 352,
1438
+ 903,
1439
+ 420
1440
+ ],
1441
+ "page_idx": 7
1442
+ },
1443
+ {
1444
+ "type": "text",
1445
+ "text": "Table 8. Planning performance with different ego vehicle selection rules in synthetic scenarios. Dist. represents the average distance traveled between consecutive frames.",
1446
+ "bbox": [
1447
+ 511,
1448
+ 430,
1449
+ 906,
1450
+ 472
1451
+ ],
1452
+ "page_idx": 7
1453
+ },
1454
+ {
1455
+ "type": "text",
1456
+ "text": "vehicle meeting this criterion, while the dynamic rule designates the vehicle with the largest lateral ( $x$ -axis) movement. Lastly, the longest rule selects the ego vehicle that traveled the longest distance, following Equation 7. Table 8 presents both the planning performance and the average distance traveled by the ego vehicle over future timestamp $T_{p}$ . When selecting the ego vehicle based on the longest rule, the model shows the lowest collision rate. Furthermore, the longest rule achieves the lowest L2 errors with sufficient trajectory distance. Based on the results, the longest rule is set as the ego selection rule for the synthetic scenario that we incorporate into our E2E AD training.",
1457
+ "bbox": [
1458
+ 511,
1459
+ 500,
1460
+ 906,
1461
+ 681
1462
+ ],
1463
+ "page_idx": 7
1464
+ },
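> As a concrete reading of the three rules compared in Table 8 (the 1 m movement filter and the longest-distance rule follow the text and Eq. 7; interpreting "dynamic" as the lateral coordinate span is our guess):

```python
import torch

def pick_ego(trajs, rule="longest", min_move=1.0):
    """Sketch of the three ego-selection rules.
    trajs: (M, T, 2) agent positions; agents moving less than min_move
    metres over the horizon are filtered out first."""
    total = (trajs[:, 1:] - trajs[:, :-1]).norm(dim=-1).sum(-1)   # (M,)
    valid = total >= min_move
    if rule == "longest":      # greatest distance travelled (Eq. 7)
        score = total
    elif rule == "dynamic":    # largest lateral (x-axis) movement
        score = (trajs[:, :, 0].max(-1).values
                 - trajs[:, :, 0].min(-1).values)
    else:                      # "random"
        score = torch.rand(trajs.shape[0])
    score[~valid] = float("-inf")
    return int(torch.argmax(score))
```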
1465
+ {
1466
+ "type": "text",
1467
+ "text": "5. Conclusion",
1468
+ "text_level": 1,
1469
+ "bbox": [
1470
+ 511,
1471
+ 694,
1472
+ 633,
1473
+ 709
1474
+ ],
1475
+ "page_idx": 7
1476
+ },
1477
+ {
1478
+ "type": "text",
1479
+ "text": "We propose SynAD, a novel method that integrates synthetic scenarios into real-world E2E AD models. SynAD overcomes the limitations that previously confined such integrations to virtual environments like simulators. By utilizing map-based BEV feature encoding, we enable the training of synthetic scenarios without relying on sensor data such as multi-camera images or LiDAR data. Also, we propose ego-centric scenario generation methods and strategic integration approaches. Meanwhile, integrating synthetic scenarios holds significant potential for incorporation into existing E2E AD pipelines. We leave applying our integration strategy to other E2E AD methods for future work.",
1480
+ "bbox": [
1481
+ 511,
1482
+ 719,
1483
+ 906,
1484
+ 900
1485
+ ],
1486
+ "page_idx": 7
1487
+ },
1488
+ {
1489
+ "type": "page_number",
1490
+ "text": "25204",
1491
+ "bbox": [
1492
+ 478,
1493
+ 944,
1494
+ 519,
1495
+ 957
1496
+ ],
1497
+ "page_idx": 7
1498
+ },
1499
+ {
1500
+ "type": "text",
1501
+ "text": "Acknowledgments",
1502
+ "text_level": 1,
1503
+ "bbox": [
1504
+ 91,
1505
+ 90,
1506
+ 250,
1507
+ 107
1508
+ ],
1509
+ "page_idx": 8
1510
+ },
1511
+ {
1512
+ "type": "text",
1513
+ "text": "This work was supported by Samsung Electronics Co., Ltd (IO231005-07280-01).",
1514
+ "bbox": [
1515
+ 89,
1516
+ 114,
1517
+ 485,
1518
+ 146
1519
+ ],
1520
+ "page_idx": 8
1521
+ },
1522
+ {
1523
+ "type": "text",
1524
+ "text": "References",
1525
+ "text_level": 1,
1526
+ "bbox": [
1527
+ 91,
1528
+ 157,
1529
+ 187,
1530
+ 174
1531
+ ],
1532
+ "page_idx": 8
1533
+ },
1534
+ {
1535
+ "type": "list",
1536
+ "sub_type": "ref_text",
1537
+ "list_items": [
1538
+ "[1] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 1, 6",
1539
+ "[2] Long Chen, Lukas Platinsky, Stefanie Speichert, Băzej Osiński, Oliver Scheel, Yawei Ye, Hugo Grimmett, Luca Del Pero, and Peter Ondruska. What data do we need for training an av motion planner? In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 1066-1072. IEEE, 2021. 2",
1540
+ "[3] Shaoyu Chen, Tianheng Cheng, Xinggang Wang, Wenming Meng, Qian Zhang, and Wenyu Liu. Efficient and robust 2d-to-bev representation learning via geometry-guided kernel transformer. arXiv preprint arXiv:2206.04584, 2022. 1",
1541
+ "[4] Wenhao Ding, Baiming Chen, Minjun Xu, and Ding Zhao. Learning to collide: An adaptive safety-critical scenarios generating method. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2243-2250. IEEE, 2020. 2",
1542
+ "[5] Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, and Marco Pavone. Realgen: Retrieval augmented generation for controllable traffic scenarios. arXiv preprint arXiv:2312.13303, 2023. 2",
1543
+ "[6] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on robot learning, pages 1-16. PMLR, 2017. 1",
1544
+ "[7] David González, Joshu Pérez, Vicente Milanés, and Fawzi Nashashibi. A review of motion planning techniques for automated vehicles. IEEE Transactions on intelligent transportation systems, 17(4):1135-1145, 2015. 2",
1545
+ "[8] Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, and Andreas Geiger. King: Generating safety-critical driving scenarios for robust imitation via kinematics gradients. In European Conference on Computer Vision, pages 335-352. Springer, 2022. 1, 2",
1546
+ "[9] Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI brainlesion workshop, pages 272-284. Springer, 2021. 7",
1547
+ "[10] Anthony Hu, Zak Murez, Nikhil Mohan, Sofia Dudas, Jeffrey Hawke, Vijay Badrinarayanan, Roberto Cipolla, and Alex Kendall. Fiery: Future instance prediction in bird's-eye view from surround monocular cameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15273-15282, 2021. 1",
1548
+ "[11] Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, and Dacheng Tao. St-p3: End-to-end vision-based au"
1549
+ ],
1550
+ "bbox": [
1551
+ 93,
1552
+ 183,
1553
+ 485,
1554
+ 901
1555
+ ],
1556
+ "page_idx": 8
1557
+ },
1558
+ {
1559
+ "type": "list",
1560
+ "sub_type": "ref_text",
1561
+ "list_items": [
1562
+ "tonomous driving via spatial-temporal feature learning. In European Conference on Computer Vision, pages 533-549. Springer, 2022. 1, 2, 5, 6",
1563
+ "[12] Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, et al. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17853-17862, 2023. 1, 2, 5, 6, 7, 3",
1564
+ "[13] Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. Vad: Vectorized scene representation for efficient autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340-8350, 2023. 1, 2, 6, 7",
1565
+ "[14] Chiyu Jiang, Andre Cornman, Cheolho Park, Benjamin Sapp, Yin Zhou, Dragomir Anguelov, et al. Motiondiffuser: Controllable multi-agent motion prediction using diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9644-9653, 2023. 2",
1566
+ "[15] Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, and Amar Shah. Learning to drive in a day. In 2019 international conference on robotics and automation (ICRA), pages 8248-8254. IEEE, 2019. 2",
1567
+ "[16] Tarasha Khurana, Peiyun Hu, Achal Dave, Jason Ziglar, David Held, and Deva Ramanan. Differentiable raycasting for self-supervised occupancy forecasting. In European Conference on Computer Vision, pages 353-369. Springer, 2022. 1",
1568
+ "[17] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In European conference on computer vision, pages 1-18. Springer, 2022. 1, 4, 6, 3",
1569
+ "[18] Zhi Liu, Shaoyu Chen, Xiaojie Guo, Xinggang Wang, Tianheng Cheng, Hongmei Zhu, Qian Zhang, Wenyu Liu, and Yi Zhang. Vision-based uneven bev representation learning with polar rasterization and surface estimation. In Conference on Robot Learning, pages 437-446. PMLR, 2023. 1",
1570
+ "[19] Zhiyuan Liu, Leheng Li, Yuning Wang, Haotian Lin, Zhizhe Liu, Lei He, and Jianqiang Wang. Controllable traffic simulation through llm-guided hierarchical chain-of-thought reasoning. arXiv preprint arXiv:2409.15135, 2024. 2",
1571
+ "[20] Jack Lu, Kelvin Wong, Chris Zhang, Simon Suo, and Raquel Urtasun. Scenecontrol: Diffusion for controllable traffic scene generation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 16908-16914. IEEE, 2024. 2",
1572
+ "[21] Tung Phan-Minh, Elena Corina Grigore, Freddy A Boulton, Oscar Beijbom, and Eric M Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14074-14083, 2020. 1",
1573
+ "[22] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision-ECCV 2020: 16th European"
1574
+ ],
1575
+ "bbox": [
1576
+ 516,
1577
+ 92,
1578
+ 906,
1579
+ 901
1580
+ ],
1581
+ "page_idx": 8
1582
+ },
1583
+ {
1584
+ "type": "page_number",
1585
+ "text": "25205",
1586
+ "bbox": [
1587
+ 478,
1588
+ 944,
1589
+ 517,
1590
+ 955
1591
+ ],
1592
+ "page_idx": 8
1593
+ },
1594
+ {
1595
+ "type": "list",
1596
+ "sub_type": "ref_text",
1597
+ "list_items": [
1598
+ "Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16, pages 194-210. Springer, 2020. 1",
1599
+ "[23] Ethan Pronovost, Meghan Reddy Ganesina, Noureldin Hendy, Zeyu Wang, Andres Morales, Kai Wang, and Nick Roy. Scenario diffusion: Controllable driving scenario generation with diffusion. Advances in Neural Information Processing Systems, 36:68873-68894, 2023. 2",
1600
+ "[24] Davis Rempe, Jonah Philion, Leonidas J Guibas, Sanja Fidler, and Or Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17305-17315, 2022. 2",
1601
+ "[25] Bo-Kai Ruan, Hao-Tang Tsui, Yung-Hui Li, and Hong-Han Shuai. Traffic scene generation from natural language description for autonomous vehicles with large language model. arXiv preprint arXiv:2409.09575, 2024. 2",
1602
+ "[26] Wenwen Tong, Chonghao Sima, Tai Wang, Li Chen, Silei Wu, Hanming Deng, Yi Gu, Lewei Lu, Ping Luo, Dahua Lin, et al. Scene as occupancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8406-8415, 2023. 1, 2, 6, 7, 3",
1603
+ "[27] Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat, Sergio Casas, Mengye Ren, and Raquel Urtasun. Advsim: Generating safety-critical scenarios for self-driving vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9909-9918, 2021. 2",
1604
+ "[28] Xinshuo Weng, Boris Ivanovic, Yan Wang, Yue Wang, and Marco Pavone. Para-drive: Parallelized architecture for real-time autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15449-15458, 2024. 1, 2, 6, 7, 3",
1605
+ "[29] Junkai Xia, Chenxin Xu, Qingyao Xu, Chen Xie, Yanfeng Wang, and Siheng Chen. Language-driven interactive traffic trajectory generation. arXiv preprint arXiv:2405.15388, 2024. 2",
1606
+ "[30] Chejian Xu, Ding Zhao, Alberto Sangiovanni-Vincentelli, and Bo Li. Diffscene: Diffusion-based safety-critical scenario generation for autonomous vehicles. In The Second Workshop on New Frontiers in Adversarial Machine Learning, 2023. 2",
1607
+ "[31] Danfei Xu, Yuxiao Chen, Boris Ivanovic, and Marco Pavone. Bits: Bi-level imitation for traffic simulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 2929-2936. IEEE, 2023. 6",
1608
+ "[32] Wenda Xu, Qian Wang, and John M Dolan. Autonomous vehicle motion planning via recurrent spline optimization. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7730-7736. IEEE, 2021. 2",
1609
+ "[33] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8660-8669, 2019. 1",
1610
+ "[34] Jiawei Zhang, Chejian Xu, and Bo Li. Chatscene: Knowledge-enabled safety-critical scenario generation for autonomous vehicles. In Proceedings of the IEEE/CVF Con-"
1611
+ ],
1612
+ "bbox": [
1613
+ 91,
1614
+ 90,
1615
+ 482,
1616
+ 900
1617
+ ],
1618
+ "page_idx": 9
1619
+ },
1620
+ {
1621
+ "type": "list",
1622
+ "sub_type": "ref_text",
1623
+ "list_items": [
1624
+ "ference on Computer Vision and Pattern Recognition, pages 15459-15469, 2024. 1, 2",
1625
+ "[35] Yunpeng Zhang, Zheng Zhu, Wenzhao Zheng, Junjie Huang, Guan Huang, Jie Zhou, and Jiwen Lu. **Reverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743**, 2022. 1",
1626
+ "[36] Wenzhao Zheng, Weiliang Chen, Yuanhui Huang, Borui Zhang, Yueqi Duan, and Jiwen Lu. Occworld: Learning a 3d occupancy world model for autonomous driving. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 1, 2, 6, 7, 3",
1627
+ "[37] Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, and Baishakhi Ray. Language-guided traffic simulation via scene-level diffusion. In Conference on Robot Learning, pages 144-177. PMLR, 2023. 2, 6, 3",
1628
+ "[38] Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 3560-3566. IEEE, 2023. 2, 6",
1629
+ "[39] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. 5"
1630
+ ],
1631
+ "bbox": [
1632
+ 516,
1633
+ 92,
1634
+ 903,
1635
+ 457
1636
+ ],
1637
+ "page_idx": 9
1638
+ },
1639
+ {
1640
+ "type": "page_number",
1641
+ "text": "25206",
1642
+ "bbox": [
1643
+ 478,
1644
+ 945,
1645
+ 519,
1646
+ 955
1647
+ ],
1648
+ "page_idx": 9
1649
+ }
1650
+ ]
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/98f750ca-b29d-44cf-a005-9c501051463b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52326b90af288c799317dcc9ec2ead4b1403dcbde883febb78d0703638631b4a
3
+ size 923230
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/full.md ADDED
@@ -0,0 +1,347 @@
1
+ # SynAD: Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration
2
+
3
+ Jongsuk Kim $^{1*}$ Jaeyoung Lee $^{1*}$ Gyojin Han $^{1}$ Dong-Jae Lee $^{1}$ Minki Jeong $^{2}$ Junmo Kim $^{1}$ $^{1}$ KAIST $^{2}$ AI Center, Samsung Electronics
4
+
5
+ {jskpop, mcneato, hangj0820, jhtwosun, junmo.kim}@kaist.ac.kr minki6.jeong@samsung.com
6
+
7
+ # Abstract
8
+
9
+ Recent advancements in deep learning and the availability of high-quality real-world driving datasets have propelled end-to-end autonomous driving. Despite this progress, relying solely on real-world data limits the variety of driving scenarios for training. Synthetic scenario generation has emerged as a promising solution to enrich the diversity of training data; however, its application within E2E AD models remains largely unexplored. This is primarily due to the absence of a designated ego vehicle and the associated sensor inputs, such as camera or LiDAR, typically provided in real-world scenarios. To address this gap, we introduce SynAD, the first framework designed to enhance real-world E2E AD models using synthetic data. Our method designates the agent with the most comprehensive driving information as the ego vehicle in a multi-agent synthetic scenario. We further project path-level scenarios onto maps and employ a newly developed Map-to-BEV Network to derive bird's-eye-view features without relying on sensor inputs. Finally, we devise a training strategy that effectively integrates these map-based synthetic data with real driving data. Experimental results demonstrate that SynAD effectively integrates all components and notably enhances safety performance. By bridging synthetic scenario generation and E2E AD, SynAD paves the way for more comprehensive and robust autonomous driving models.
10
+
11
+ # 1. Introduction
12
+
13
+ Autonomous vehicles are moving from research labs to roads as deep learning and comprehensive real-world driving datasets [1] are leading to significant progress. Recent studies leverage LiDAR [16, 33] or multi-camera images [11-13] from these datasets to extract bird's-eye-view (BEV) features [3, 17, 22] for various tasks [10, 18, 21, 35]. These approaches improve the end-to-end autonomous driving (E2E AD) model's performance by incorporating
14
+
15
+ ![](images/e57ed366b71214652912617159824884b5a67f642e0911efa96037244b10b69e.jpg)
16
17
+ Figure 1. Conceptual illustration of SynAD. During training, both real and synthetic data are used to generate BEV and MapBEV features for the E2E AD model, while only real data is used during testing to ensure practical applicability.
18
+
19
+ perception tasks (tracking and mapping), prediction tasks (motion forecasting and occupancy prediction), and planning tasks either in parallel [28] or in series [11, 12], in an end-to-end manner. To better capture the complexities of real-world driving environments, some studies [26, 36] have proposed methods incorporating 3D occupancy understanding.
20
+
21
+ However, relying solely on real-world datasets introduces a fundamental limitation: the high costs of data collection and labeling lead to a lack of diversity, restricting the range of scenarios available for training. To mitigate this issue, some studies [8, 34] generate paths under specific conditions and utilize CARLA simulator [6] to acquire corresponding camera data for additional model training. These approaches enable robust driving in extreme situations but still have limitations since they can only operate in virtual environments. In parallel, several studies have developed methods to generate realistic driving scenarios from real-world datasets without relying on simulators. These
22
+
23
+ works employ logic-based [38], language-guided [25, 37], and retrieval-based [5] methods to produce high-quality and diverse scenarios that satisfy specific conditions.
24
+
25
+ Despite the high generative capabilities of these methods, synthetic scenarios have not been effectively integrated into real-world E2E AD model training. A key limitation is that current scenario generation approaches yield only path-level outputs and overlook the designation of an ego vehicle. They also fail to generate the corresponding sensor inputs, such as multi-camera images and LiDAR data, which are required to establish the ego-centric perspective seen in real-world scenarios. Consequently, this absence restricts their integration into real-world E2E AD training pipelines.
26
+
27
+ To address these challenges, we propose SynAD, a novel framework that enhances real-world E2E AD models by integrating synthetic data. SynAD comprises three key components: First, we introduce an ego-centric scenario generation method specifically tailored for E2E AD training. During scenario generation, we set effective guides while designating the agent with the richest driving information as the ego vehicle. The path of the selected ego vehicle is then set as the target path and serves as additional training data for the E2E AD model. Second, we propose a Map-to-BEV Network to integrate synthetic scenarios into the E2E AD training pipeline. The Map-to-BEV Network encodes BEV features from maps that contain vehicle information from the synthetic scenarios, enabling this integration without relying on sensor data inputs. Finally, we reduce the domain gap between map-based synthetic data and real driving data by also projecting real scenarios onto a map, ensuring consistent integration as shown in Figure 1. Moreover, by selectively utilizing features extracted from each type of map at the most suitable stage, we avoid performance degradation from integrating map data and ensure the model achieves high test time performance with image-only inputs. Extensive ablation studies verify that each component of SynAD contributes effectively to the application of synthetic scenarios in E2E AD training. Our main contributions are summarized as follows:
28
+
29
+ - To overcome the lack of necessary sensor data and the absence of an ego-centric perspective in synthetic scenarios, we propose SynAD, a novel method that integrates synthetic data into real-world E2E AD models.
30
+ - SynAD introduces three key contributions: (1) ego-centric scenario generation method that transforms path-level scenarios into ego-centric maps by designating the most informative agent as the ego vehicle, (2) a Map-to-BEV Network that produces BEV features without relying on any sensor inputs, and (3) a training strategy that effectively utilizes both synthetic and real data.
31
+ - Extensive experiments demonstrate that SynAD outperforms existing methods, with ablation studies confirming the effectiveness of each component.
32
+
33
+ # 2. Related Works
34
+
35
+ # 2.1. Traffic Scenario Generation
36
+
37
+ Traffic scenario generation is crucial for testing and improving autonomous driving systems by enabling safe and comprehensive validation in simulated environments. Recent studies [4, 8, 24, 27, 30, 34] have focused on safety-critical scenarios, which are difficult to capture in real-world driving due to cost and safety constraints. KING [8] uses a kinematic bicycle model to derive gradients of safety-critical objectives, updating paths that make the ego vehicle more likely to cause accidents. It also improves the robustness of E2E AD in simulator-based synthetic driving environments by fine-tuning on these generated scenarios. Beyond purely safety-critical contexts, research on controllable scenario generation [14, 20, 23, 30, 37, 38] is also receiving great attention. These works introduce diffusion models that allow users to specify trajectory properties (e.g., reaching a goal, following speed limits) while preserving physical feasibility and natural behaviors. In addition, some studies [19, 25, 29, 37] leverage large language models to convert user queries into realistic traffic scenarios. RealGen [5] highlights a limitation of generative approaches, namely that they often struggle to produce novel scenarios, and proposes combining behaviors from multiple retrieved examples to create new scenarios. However, employing these generated scenarios to improve real-world E2E AD models remains largely unexplored.
38
+
39
+ # 2.2. End-to-End Autonomous Driving
40
+
41
+ E2E AD, particularly vision-centric approaches, has become an active area of recent research. Unlike conventional AD methods [2, 7, 15, 32], which separate perception tasks and planning, vision-centric E2E methods integrate these components into a single unified model. These approaches provide interpretability and safety benefits while improving performance in each downstream task through end-to-end optimization. Planning-oriented modular design principles have driven several recent advances in E2E AD. ST-P3 [11] trains semantic occupancy prediction and planning in an end-to-end manner. UniAD [12] proposes a planning-oriented unification of tracking, online mapping, motion forecasting, occupancy prediction, and planning. Paradrive [28] achieves parallel processing across these modules, boosting runtime speed by nearly threefold. VAD [13] replaces dense rasterized scene representations with fully vectorized data to boost efficiency. Meanwhile, OccNet [26] and OccWorld [36] explore 3D occupancy representation by segmenting the scene into structured cells with semantic labels. Despite differences in network design and framework implementation, these methods all rely on BEV features derived from multi-camera inputs.
42
+
43
+ ![](images/7799caa30fe066416dbd16489b7ab44dc553055a720181f33b1d2bbf897f7977.jpg)
44
+ Figure 2. Overview of SynAD. We generate synthetic multi-agent scenarios and convert them into ego-centric map representations $x_{\mathrm{SM}}$ while real scenarios are similarly projected as $x_{\mathrm{RM}}$ . To train Map-to-BEV Network, we use paired data from $x_{\mathrm{RM}}$ and $x_{I}$ , ensuring that Map-to-BEV Network produces BEV feature consistent with the output of pretrained BEVFormer applied to multi-camera images. The synthetic scenario $x_{\mathrm{SM}}$ can be converted into BEV feature $B_{\mathrm{SM}}$ without any multi-camera images using our novel Map-to-BEV network. In the final E2E AD framework, we selectively apply BEV features only to modules that benefit most, thereby improving overall performance.
45
+
46
+ # 3. Method
47
+
48
+ Our method aims to enhance the E2E AD model by leveraging synthetic data. First, we generate multi-agent driving scenarios that satisfy specific conditions and convert them into ego-centric scenarios by designating an ego vehicle and cropping a map centered around it. We denote this synthetic ego-centric map representation as $x_{\mathrm{SM}}$ , which is used in E2E AD training. In parallel, we train the Map-to-BEV Network using $x_{\mathrm{RM}}$ , constructed by projecting real-world scenarios onto a corresponding map representation. The Map-to-BEV Network aligns BEV features extracted from $x_{\mathrm{RM}}$ with multi-camera images $x_{I}$ , enabling the use of synthetic scenarios without requiring sensor inputs. Finally, we propose a training strategy that incorporates synthetic scenarios into the E2E AD training process. Figure 2 provides an overview of our method.
49
+
50
+ # 3.1. Ego-centric Scenario Generation
51
+
52
+ Realistic Scenario Generation. In an autonomous driving system, the trajectory of a vehicle is represented by its state $s$ at each time step $t$ . This state vector comprises four elements: position in 2D coordinates $(x,y)$ , speed $v$ , and heading angle $\theta$ , represented as $s = (x,y,v,\theta)$ . To generate realistic scenarios in ego-centric autonomous driving environments that meet desired conditions, we employ conditional diffusion models that aim to generate trajectory $\tau$ , which includes the state of $M$ agents over $T$ timestamps:
53
+
54
+ $$
55
+ \tau = \left[ \tau_ {1}, \tau_ {2}, \dots , \tau_ {M} \right], \text {w h e r e} \tau_ {i} = \left[ s _ {i} ^ {1}, s _ {i} ^ {2}, \dots , s _ {i} ^ {T} \right] ^ {\top}, \tag {1}
56
+ $$
57
+
58
+ Here, $s_i^t$ denotes the state of agent $i$ at time $t$ , and $\tau \in \mathbb{R}^{T \times M \times 4}$ . The diffusion model adds Gaussian noise in a forward process and then reconstructs it in a reverse process. Defining $\tau^k$ as the trajectory at the $k$ -th diffusion step, the forward process is defined as:
59
+
60
+ $$
61
+ q \left(\tau^ {1: K} \mid \tau^ {0}\right) = \prod_ {k = 1} ^ {K} q \left(\tau^ {k} \mid \tau^ {k - 1}\right), \tag {2}
62
+ $$
63
+
64
+ $$
65
+ q \left(\tau^ {k} \mid \tau^ {k - 1}\right) = \mathcal {N} \left(\tau^ {k}; \sqrt {1 - \beta_ {k}} \tau^ {k - 1}, \beta_ {k} I\right), \tag {3}
66
+ $$
67
+
68
+ and $\beta_{k}$ is the variance schedule controlling the amount of noise added at each diffusion step. Note that $\tau^0$ represents the clean trajectory, and $\tau^K$ represents the trajectory corrupted by random noise after $K$ diffusion steps.
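For intuition, the forward process in Eqs. (2)-(3) admits the standard closed form used in DDPM-style models; the sketch below is our illustration under that standard parameterization, not released code.

```python
import torch

def q_sample(tau0, k, betas):
    """Sketch of the forward process (Eqs. 2-3) in closed form:
    tau^k = sqrt(abar_k) * tau^0 + sqrt(1 - abar_k) * eps,
    where abar_k is the cumulative product of (1 - beta_j)."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    eps = torch.randn_like(tau0)
    return alpha_bar[k].sqrt() * tau0 + (1.0 - alpha_bar[k]).sqrt() * eps
```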
69
+
70
+ To incorporate contextual information into the reverse diffusion process, we construct a composite feature $\mathbf{f}$ by aggregating the image features from the past $h$ timestamps. These image features are extracted from maps that display only the road layout and environmental context, without any vehicle depictions. Then, the reverse diffusion process $p_{\varphi}$ can be represented as follows:
71
+
72
+ $$
73
+ p _ {\varphi} \left(\tau^ {0: K} \mid \mathbf {f}\right) = p \left(\tau^ {K}\right) \prod_ {k = 1} ^ {K} p _ {\varphi} \left(\tau^ {k - 1} \mid \tau^ {k}, \mathbf {f}\right), \tag {4}
74
+ $$
75
+
76
+ $$
77
+ p _ {\varphi} \left(\tau^ {k - 1} \mid \tau^ {k}, \mathbf {f}\right) = \mathcal {N} \left(\tau^ {k - 1}; \mu_ {\varphi} \left(\tau^ {k}, k, \mathbf {f}\right), \Sigma_ {\varphi} \left(\tau^ {k}, k, \mathbf {f}\right)\right), \tag {5}
78
+ $$
79
+
80
+ where $\tau^K\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ starts as random noise and is progressively denoised over $K$ steps by $p_{\varphi}$.
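+ To make sampling concrete, a minimal sketch of this reverse loop follows; the denoiser interface `model(tau, k, f)` returning the Gaussian mean and standard deviation is our assumption, not the paper's API.
+ 
+ ```python
+ import torch
+ 
+ @torch.no_grad()
+ def sample_trajectories(model, f, T, M, K):
+     """Reverse diffusion (Eqs. 4-5): denoise tau^K ~ N(0, I) over K steps,
+     conditioning every step on the context feature f."""
+     tau = torch.randn(T, M, 4)                    # tau^K: pure noise
+     for k in range(K, 0, -1):
+         mu, sigma = model(tau, k, f)              # mu_phi, Sigma_phi (assumed interface)
+         noise = torch.randn_like(tau) if k > 1 else torch.zeros_like(tau)
+         tau = mu + sigma * noise                  # sample tau^{k-1}
+     return tau                                    # tau^0: clean trajectories
+ ```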
81
+
82
+ ![](images/83acb15766d6d77f0b95f71a39d7132c6d905be250021486e5b07423c4f80710.jpg)
83
+
84
+ ![](images/8781af177424058693ce1679df864cb4fdb9c64cbb8988728f4233a452aae050.jpg)
85
+ Figure 3. Examples of $x_{\mathrm{SM}}$ over time. The white box indicates the ego vehicle, while orange boxes denote other vehicles. The synthetic scenarios are conditioned on the existing map representation and then projected using vehicle states and size information.
86
+
87
+ Guided Sampling. To generate realistic scenarios under diverse conditions, we apply guided sampling during inference. We define a guide $\mathcal{J} = \sum_i w_i R_i$ as the weighted sum of functions $R_i$ that measure rule satisfaction for the $i$-th objective. In our work, we employ three specific objectives: preventing agent collisions, preventing map collisions, and adhering to speed limits (detailed in Supp. A.2). We then modify the denoising process by applying the gradient of this guide as follows:
88
+
89
+ $$
90
+ p _ {\varphi} \left(\tau^ {k - 1} \mid \tau^ {k}, \mathbf {f}\right) = \mathcal {N} \left(\tau^ {k - 1}; \mu_ {\varphi} + \nabla \mathcal {J} (\mu_ {\varphi}), \Sigma_ {\varphi}\right). \tag {6}
91
+ $$
92
+
93
+ By modifying the reverse diffusion process, we can dynamically generate trajectories that satisfy each objective.
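+ A minimal sketch of this guided step is shown below; it assumes the guide functions are differentiable in the trajectory, and the simple speed guide is an illustrative stand-in for the rules detailed in Supp. A.2.
+ 
+ ```python
+ import torch
+ 
+ def guided_step(mu, sigma, guides, weights):
+     """Shift the denoiser mean by the guide gradient (Eq. 6), then sample.
+ 
+     mu, sigma: Gaussian parameters predicted at the current diffusion step.
+     guides: differentiable rule-satisfaction functions R_i; weights: w_i.
+     """
+     mu = mu.detach().requires_grad_(True)
+     J = sum(w * R(mu) for w, R in zip(weights, guides))    # J = sum_i w_i R_i
+     (grad_J,) = torch.autograd.grad(J, mu)
+     mu_guided = mu.detach() + grad_J                       # mu_phi + grad J(mu_phi)
+     return mu_guided + sigma * torch.randn_like(mu_guided) # sample tau^{k-1}
+ 
+ # illustrative speed guide: penalize speeds above an assumed 15 m/s limit
+ speed_guide = lambda tau: -(((tau[..., 2] - 15.0).clamp(min=0)) ** 2).mean()
+ ```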
94
+
95
+ Ego-centric Scenario. To train the autonomous driving model using multi-agent scenarios generated by the diffusion model, we first determine the ego vehicle among the agents. Since synthetic scenarios should contain comprehensive driving information, we establish an ego selection rule that designates the vehicle traveling the longest distance as the ego vehicle. Accordingly, the ego index $e$ is determined as:
96
+
97
+ $$
98
+ e = \arg \max _ {i} \sum_ {t = 1} ^ {T - 1} d \left(s _ {i} ^ {t}, s _ {i} ^ {t + 1}\right), \tag {7}
99
+ $$
100
+
101
+ where $d(\cdot, \cdot)$ denotes the distance between the positions of two states. We then crop a fixed-size area centered on the ego vehicle to form the input map $x_{\mathrm{SM}}^t$ for timestamp $t$.
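+ A minimal sketch of this selection rule (Eq. 7), assuming the generated states are stored in a NumPy array:
+ 
+ ```python
+ import numpy as np
+ 
+ def select_ego(states):
+     """Pick the agent with the longest traveled path (Eq. 7).
+ 
+     states: array of shape (T, M, 4) holding (x, y, v, theta) per agent.
+     """
+     xy = states[..., :2]                                  # positions, (T, M, 2)
+     steps = np.linalg.norm(np.diff(xy, axis=0), axis=-1)  # d(s^t, s^{t+1}), (T-1, M)
+     return int(steps.sum(axis=0).argmax())                # ego index e
+ ```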
102
+
103
+ Training the planning module requires the future trajectory of the ego vehicle and the bounding boxes of other vehicles to ensure collision-free predictions. Since the generated trajectories are in the absolute coordinate system, we transform them into coordinate systems relative to the selected ego vehicles to align them with the real data. The transformation aligns the driving direction of the ego vehicle with the positive $y$ -axis and sets its center as the origin.
104
+
105
+ To transform an arbitrary position $(x,y)$ in the absolute coordinate system relative to the state of ego vehicle $s$ , we define the transformation function as:
106
+
107
+ $$
108
+ T(x, y; s) = \begin{pmatrix} \sin s_{\theta} & -\cos s_{\theta} \\ \cos s_{\theta} & \sin s_{\theta} \end{pmatrix} \begin{pmatrix} x - s_{x} \\ y - s_{y} \end{pmatrix}. \tag{8}
109
+ $$
110
+
111
+ The derivation of this specific form of the rotation matrix is included in the Supp. A.3. To obtain the target path over the next $T_{p}$ timestamps in the ego-centric coordinate frame at time $t$ , we apply the transformation $T(\cdot; s_{e}^{t})$ to the positions of the ego vehicle. We also transform the heading angle relative to the ego vehicle's orientation at time $t$ as:
112
+
113
+ $$
114
+ \mathcal {T} ^ {t} = \left\{T \left(s _ {x}, s _ {y}; s _ {e} ^ {t}\right) \mid s = s _ {e} ^ {t + t ^ {\prime}}, t ^ {\prime} \in [ T _ {p} ] \right\}, \tag {9}
115
+ $$
116
+
117
+ $$
118
+ \Theta^ {t} = \left\{s _ {\theta} - s _ {e, \theta} ^ {t} \mid s = s _ {e} ^ {t + t ^ {\prime}}, t ^ {\prime} \in [ T _ {p} ] \right\}, \tag {10}
119
+ $$
120
+
121
+ where $[T_p]$ denotes the set of integers from 1 to $T_{p}$. To process the bounding box information, we first obtain the bounding box coordinates $b_i^{t + t'}$ for each vehicle $i$ at time $t + t'$. We then apply the transformation across the other vehicles and timestamps, resulting in:
122
+
123
+ $$
124
+ \mathcal {B} ^ {t} = \left\{T (x, y; s _ {e} ^ {t}) \mid (x, y) \in b _ {i} ^ {t + t ^ {\prime}}, i \in [ M ] \backslash \{e \}, t ^ {\prime} \in [ T _ {p} ] \right\}. \tag {11}
125
+ $$
126
+
127
+ Finally, each scenario provides $(\mathcal{T}^t,\Theta^t,\mathcal{B}^t,w_e,h_e)$ for the input $x_{\mathrm{SM}}^t$, where $t\in [T - T_p]$. Unlike in real driving datasets, the ego vehicle in synthetic data can vary in size across scenarios, as shown in Figure 3. Therefore, the ego vehicle's width and height $(w_{e},h_{e})$ are also included in each instance.
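+ Putting Eqs. (8)-(11) together, the conversion can be sketched as follows; array shapes and helper names are our assumptions. The bounding box corners of Eq. (11) are transformed with the same function.
+ 
+ ```python
+ import numpy as np
+ 
+ def to_ego_frame(xy, ego_state):
+     """Eq. (8): map absolute positions into the ego frame, so the ego
+     heading aligns with the +y axis and its center becomes the origin.
+ 
+     xy: (..., 2) absolute positions; ego_state: (x, y, v, theta).
+     """
+     ex, ey, _, eth = ego_state
+     R = np.array([[np.sin(eth), -np.cos(eth)],
+                   [np.cos(eth),  np.sin(eth)]])
+     return (xy - np.array([ex, ey])) @ R.T
+ 
+ def ego_targets(states, e, t, Tp):
+     """Eqs. (9)-(10): future ego path and relative headings at time t."""
+     ego_now = states[t, e]                        # s_e^t
+     future = states[t + 1 : t + 1 + Tp, e]        # s_e^{t+t'}, t' in [Tp]
+     path = to_ego_frame(future[:, :2], ego_now)   # T^t
+     headings = future[:, 3] - ego_now[3]          # Theta^t
+     return path, headings
+ ```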
128
+
129
+ # 3.2. Map-to-BEV Network
130
+
131
+ To address the absence of sensor inputs (e.g., multi-camera images or LiDAR) in synthetic scenarios, we introduce a Map-to-BEV Network $f_{B}$ that generates BEV features directly from ego-centric map inputs. Consistent with our synthetic data generation pipeline, we derive the map input $x_{\mathrm{RM}}$ from the real scenario. A map encoder $f_{M}$ processes $x_{\mathrm{RM}}$ into a spatial feature, which then serves as the key and value in a Transformer encoder. A learnable query $Q_{B}$ is used as the query input, producing the map-based BEV feature. The encoding process is as follows:
132
+
133
+ $$
134
+ \begin{aligned} B_{\mathrm{RM}} &= f_{B}\left(Q_{B}, x_{\mathrm{RM}}\right) \\ &= \operatorname{TransformerEncoder}\left(Q_{B}, f_{M}\left(x_{\mathrm{RM}}\right)\right). \end{aligned} \tag{12, 13}
135
+ $$
136
+
137
+ This design enables the Transformer encoder to capture spatial relationships within the map feature, producing accurate map-based BEV representation. Concurrently, we utilize the pre-trained BEVFormer [17] to extract BEV features $B_{I}$ from multi-camera images $x_{I}$ , which correspond to the map input $x_{\mathrm{RM}}$ . To align the BEV features extracted from the map $(B_{\mathrm{RM}})$ with those extracted from multi-camera images $(B_{I})$ , we employ an L2 loss function. Formally, the
138
+
139
+ ![](images/31b59196f2deea40c78a58d57fa617f7146355ce89586e0f7b39418a4fe64712.jpg)
140
+ Figure 4. Overview of the Map-to-BEV training. We freeze the pretrained BEVFormer and align $B_{\mathrm{RM}}$ with $B_I$, enabling the network to generate BEV representations without sensor inputs.
141
+
142
+ loss function for training the Map-to-BEV Network can be expressed as follows:
143
+
144
+ $$
145
+ \mathcal{L}_{\mathrm{map}} = \left\| B_{\mathrm{RM}} - B_{I} \right\|_{2}^{2}. \tag{14}
146
+ $$
147
+
148
+ This step allows the Map-to-BEV Network to generate BEV features without depending on sensor inputs. Consequently, we can encode the BEV features from $x_{\mathrm{SM}}$ , enabling E2E AD model training using synthetic scenarios.
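+ The structure of Eqs. (12)-(14) can be sketched in PyTorch as below; the toy map encoder, layer sizes, and query count are illustrative assumptions rather than the paper's configuration. Since $Q_B$ attends to the map feature as key and value, `nn.TransformerDecoder` provides the required cross-attention pattern.
+ 
+ ```python
+ import torch
+ import torch.nn as nn
+ 
+ class MapToBEV(nn.Module):
+     def __init__(self, d=256, n_queries=2500):
+         super().__init__()
+         self.f_M = nn.Sequential(                           # toy map encoder f_M
+             nn.Conv2d(3, d, kernel_size=4, stride=4), nn.ReLU(),
+             nn.Conv2d(d, d, kernel_size=4, stride=4))
+         self.Q_B = nn.Parameter(torch.randn(n_queries, d))  # learnable query Q_B
+         layer = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
+         self.attn = nn.TransformerDecoder(layer, num_layers=3)
+ 
+     def forward(self, x_map):                               # x_map: (B, 3, H, W)
+         kv = self.f_M(x_map).flatten(2).transpose(1, 2)     # map feature as key/value
+         q = self.Q_B.expand(x_map.size(0), -1, -1)          # broadcast queries
+         return self.attn(q, kv)                             # B_RM: (B, n_queries, d)
+ 
+ # Alignment against the frozen BEVFormer feature B_I (Eq. 14):
+ # loss_map = ((map_to_bev(x_RM) - B_I.detach()) ** 2).mean()
+ ```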
149
+
150
+ # 3.3. Training E2E AD with Generated Scenario
151
+
152
+ To train our model, we use generated data $x_{\mathrm{SM}}$ alongside multi-camera image data $x_{I}$ and real data $x_{\mathrm{RM}}$. The image data $x_{I}$ is fed into BEVFormer to produce BEV features $B_{I}$, while the map data $x_{\mathrm{RM}}$ and $x_{\mathrm{SM}}$ pass through the Map-to-BEV Network to obtain $B_{\mathrm{RM}}$ and $B_{\mathrm{SM}}$, respectively. E2E AD models typically comprise three main BEV-based modules: a perception module that handles tracking and mapping, a prediction module for motion forecasting and occupancy prediction, and a planning module. Since our map input already includes much of the perception-level information, we do not incorporate it into the perception module and instead focus on integrating map data into the prediction and planning modules. In the following, we provide a brief overview of each module, with additional details in Supp. B.
153
+
154
+ Motion Forecasting. To predict trajectories for multiple agents over multiple timestamps with $N$ candidate series of waypoints, we employ a MotionEncoder and a MotionDecoder based on deformable cross-attention [39]. The MotionEncoder produces a motion query embedding $q_{\mathrm{motion}}$ that represents general motion patterns, agent-centered motion offsets, and relationships to the ego vehicle. The MotionDecoder refines this embedding using the BEV feature, yielding multiple
155
+
156
+ predicted trajectories $\hat{\mathbf{x}}_n$ and their probabilities $p_n$. To handle the multi-hypothesis output, we find the trajectory $\hat{\mathbf{x}}_{n^*}$ closest to the ground-truth trajectory $\mathbf{x}_{\mathrm{GT}}$ by minimizing the average displacement error over time. We then calculate the joint negative log-likelihood loss as:
157
+
158
+ $$
159
+ \mathcal{L}_{\mathrm{JNLL}} = -\log \left( p_{n^{*}} \cdot P\left( \mathbf{x}_{\mathrm{GT}} \mid \hat{\mathbf{x}}_{n^{*}} \right) \right), \tag{15}
160
+ $$
161
+
162
+ $$
163
+ \text{where} \quad n^{*} = \arg\min_{n} \left( \frac{1}{T} \sum_{t=1}^{T} \left\| \hat{\mathbf{x}}_{n}^{t} - \mathbf{x}_{\mathrm{GT}}^{t} \right\|_{2}^{2} \right). \tag{16}
164
+ $$
165
+
166
+ With the minimum final displacement error (minFDE), the total motion forecasting loss $\mathcal{L}_{\mathrm{motion}}$ is defined as:
167
+
168
+ $$
169
+ \mathcal{L}_{\min\mathrm{FDE}} = \min_{n} \left( \left\| \hat{\mathbf{x}}_{n}^{T} - \mathbf{x}_{\mathrm{GT}}^{T} \right\|_{2}^{2} \right), \tag{17}
170
+ $$
171
+
172
+ $$
173
+ \mathcal{L}_{\mathrm{motion}} = \lambda_{\mathrm{JNLL}} \mathcal{L}_{\mathrm{JNLL}} + \lambda_{\min\mathrm{FDE}} \mathcal{L}_{\min\mathrm{FDE}},
174
+ $$
175
+
176
+ where $\lambda_{\mathrm{JNLL}}$ and $\lambda_{\mathrm{minFDE}}$ are the corresponding loss weights.
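+ A compact sketch of Eqs. (15)-(17) follows; it assumes a unit-variance Gaussian for $P(\mathbf{x}_{\mathrm{GT}} \mid \hat{\mathbf{x}}_{n^*})$, which the paper may parameterize differently.
+ 
+ ```python
+ import torch
+ 
+ def motion_loss(traj_hat, probs, traj_gt, lam_jnll=1.0, lam_fde=1.0):
+     """traj_hat: (N, T, 2) hypotheses; probs: (N,); traj_gt: (T, 2)."""
+     ade = ((traj_hat - traj_gt) ** 2).sum(-1).mean(-1)           # Eq. (16), per hypothesis
+     n_star = ade.argmin()
+     log_lik = -0.5 * ((traj_hat[n_star] - traj_gt) ** 2).sum()   # log-Gaussian, up to a constant
+     l_jnll = -(probs[n_star].log() + log_lik)                    # Eq. (15)
+     l_fde = ((traj_hat[:, -1] - traj_gt[-1]) ** 2).sum(-1).min() # Eq. (17)
+     return lam_jnll * l_jnll + lam_fde * l_fde
+ ```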
177
+
178
+ Occupancy Prediction. An occupancy prediction module forecasts future occupancy maps. Using the embedding $\hat{q}_{\mathrm{motion}}$ from the motion forecasting module, we first derive temporal queries $q_{\mathrm{temp}}$. These temporal queries refine a down-scaled BEV feature $B_{\mathrm{state}}^{t-1}$ through a transformer-based OccDecoder, producing the updated BEV feature $B_{\mathrm{state}}^t$. Once all timestamps have been processed, we combine the final BEV features with instance queries to produce occupancy maps $\hat{O}^t$. The predicted occupancy maps $\hat{O} = \{\hat{O}^1, \dots, \hat{O}^T\}$ are compared with the ground-truth occupancy maps $O_{\mathrm{GT}}$ to compute the occupancy prediction loss $\mathcal{L}_{\mathrm{occ}}$, which consists of a dice loss $\mathcal{L}_{\mathrm{dice}}$ and a binary cross-entropy loss $\mathcal{L}_{\mathrm{bce}}$ as follows:
179
+
180
+ $$
181
+ \mathcal {L} _ {\text {o c c}} = \lambda_ {\text {d i c e}} \mathcal {L} _ {\text {d i c e}} (\hat {O}, O _ {\mathrm {G T}}) + \lambda_ {\text {b c e}} \mathcal {L} _ {\text {b c e}} (\hat {O}, O _ {\mathrm {G T}}), \tag {18}
182
+ $$
183
+
184
+ where $\lambda_{\mathrm{dice}}$ and $\lambda_{\mathrm{bce}}$ are the corresponding loss weights.
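+ A minimal sketch of Eq. (18); by assumption, `o_hat` holds occupancy logits and `o_gt` binary targets, both of shape (T, H, W).
+ 
+ ```python
+ import torch
+ import torch.nn.functional as F
+ 
+ def occ_loss(o_hat, o_gt, lam_dice=1.0, lam_bce=1.0, eps=1e-6):
+     """Eq. (18): dice + binary cross-entropy over occupancy maps."""
+     p = o_hat.sigmoid()
+     inter = (p * o_gt).sum()
+     dice = 1.0 - (2 * inter + eps) / (p.sum() + o_gt.sum() + eps)
+     bce = F.binary_cross_entropy_with_logits(o_hat, o_gt.float())
+     return lam_dice * dice + lam_bce * bce
+ ```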
185
+
186
+ Planning. Following prior works [11, 12], we concatenate a high-level command embedding with a learnable parameter and pass them through a linear layer to form the initial planning query $q_{\mathrm{plan}}$ . A Transformer-based PlanDecoder refines this query using an adapted BEV feature $B_{a}$ as the key and value inputs:
187
+
188
+ $$
189
+ \hat {q} _ {\text {p l a n}} = \operatorname {P l a n D e c o d e r} \left(q _ {\text {p l a n}}, B _ {a}\right). \tag {19}
190
+ $$
191
+
192
+ Then $\hat{q}_{\mathrm{plan}}$ passes through an MLP to obtain displacement vectors $\Delta \hat{\tau} = \mathrm{MLP}(\hat{q}_{\mathrm{plan}})$. Taking the cumulative sum of these displacements across timesteps produces the final predicted trajectory $\hat{\tau}$. The planning loss is defined as the sum of the imitation loss and the collision loss, both computed from $\hat{\tau}$. The imitation loss measures the L2 distance between $\hat{\tau}$ and the ground-truth trajectory $\tau$. For the collision loss, we obtain the ego vehicle's bounding box at timestamp $t$ as:
193
+
194
+ $$
195
+ \hat{b}^{t}(\delta) = \operatorname{box}\left( \hat{\tau}^{t}, w_{e} + \delta, h_{e} + \delta \right), \tag{20}
196
+ $$
197
+
198
+ where $\delta$ is a safety margin. We then compute collision loss using IoU between $\hat{b}^t (\delta)$ and each other vehicle's bounding box $b_{i}^{t}$ across all timesteps. Combining these losses for multiple values of $\delta$ yields the planning loss as below:
199
+
200
+ $$
201
+ \mathcal {L} _ {\mathrm {c o l}} (\delta) = \sum_ {i, t} \operatorname {I o U} \left(\hat {b} ^ {t} (\delta), b _ {i} ^ {t}\right), \tag {21}
202
+ $$
203
+
204
+ $$
205
+ \mathcal {L} _ {\text {p l a n}} = \| \tau - \hat {\tau} \| _ {2} ^ {2} + \sum_ {(\lambda_ {\delta}, \delta)} \lambda_ {\delta} \mathcal {L} _ {\mathrm {c o l}} (\delta). \tag {22}
206
+ $$
207
+
208
+ Note that when training with generated scenarios, $(w_{e},h_{e})$ may vary across scenarios; the ground-truth trajectory $\tau$ is taken from $\mathcal{T}$ (Eq. 9), and each bounding box $b_{i}^{t}$ is an element of $\mathcal{B}$ (Eq. 11).
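+ The sketch below illustrates Eqs. (20)-(22), with `traj_hat` being the cumulative sum of the MLP displacements. For brevity it uses axis-aligned boxes in place of the heading-aware `box` operator of Eq. (20), and the $(\lambda_{\delta}, \delta)$ pairs shown are illustrative.
+ 
+ ```python
+ import torch
+ 
+ def aabb(center, w, h):
+     """Axis-aligned stand-in for the box of Eq. (20): (x0, y0, x1, y1)."""
+     half = torch.tensor([w / 2, h / 2])
+     return torch.cat([center - half, center + half])
+ 
+ def iou(a, b):
+     lo, hi = torch.maximum(a[:2], b[:2]), torch.minimum(a[2:], b[2:])
+     inter = (hi - lo).clamp(min=0).prod()
+     area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
+     return inter / (area(a) + area(b) - inter + 1e-6)
+ 
+ def plan_loss(traj_hat, traj_gt, other_boxes, w_e, h_e,
+               deltas=((0.5, 0.0), (0.25, 0.5))):  # illustrative (lambda, delta) pairs
+     """Eq. (22): imitation L2 plus margin-inflated collision IoU (Eqs. 20-21).
+ 
+     other_boxes: iterable of per-vehicle box tensors of shape (T, 4).
+     """
+     loss = ((traj_hat - traj_gt) ** 2).sum()
+     for lam, d in deltas:
+         for t in range(traj_hat.size(0)):
+             ego = aabb(traj_hat[t], w_e + d, h_e + d)
+             loss = loss + lam * sum(iou(ego, boxes[t]) for boxes in other_boxes)
+     return loss
+ ```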
209
+
210
+ Finally, the loss function for E2E AD training can be expressed by incorporating scaling factors as follows:
211
+
212
+ $$
213
+ \mathcal {L} _ {\mathrm {E 2 E}} = \lambda_ {\text {m o t i o n}} \mathcal {L} _ {\text {m o t i o n}} + \lambda_ {\text {o c c}} \mathcal {L} _ {\text {o c c}} + \lambda_ {\text {p l a n}} \mathcal {L} _ {\text {p l a n}}. \tag {23}
214
+ $$
215
+
216
+ Map-Data Integration. We incorporate map-based BEV features into the motion forecasting and planning modules, where additional contextual information (e.g., road geometry, traffic structure) proves beneficial. In contrast, occupancy prediction requires high spatial precision, making 2D map data less helpful [36]; we thus exclude map inputs for this module to prevent performance degradation. Experimental results confirm that this selective integration avoids degrading overall performance and maintains strong test-time accuracy with image-only data.
217
+
218
+ # 4. Experiments
219
+
220
+ # 4.1. Implementation Details
221
+
222
+ Scenario Generation. We conduct all experiments using nuScenes [1], a real-world driving dataset containing 1,000 scenes. Each nuScenes scene consists of 40 video frames, capturing a 20-second video at $2\mathrm{Hz}$ . We set the future prediction timestamp $T_{p}$ to 6, which yields 34 training instances per scene. For our main results, we train the model from scratch for 5 epochs while adding 500 synthetic scenes, which is equivalent to 7.5 epochs if training solely on the original nuScenes dataset. Despite this additional data, our total training cost remains lower than that of other E2E AD methods. For ablation studies, unless otherwise noted, we use 100 synthetic scenes for training. To maintain sufficient interaction complexity, we exclude any instance that contains only a single driving agent.
223
+
224
+ Training Details. For training the Map-to-BEV Network, we freeze the pre-trained BEVFormer [17] and update only the Map-to-BEV network parameters over 20 epochs. For the E2E AD model, we train each module from scratch while keeping the BEVFormer and the Map-to-BEV network frozen. At test time, we apply the occupancy-based
225
+
226
+ <table><tr><td rowspan="2">Method</td><td colspan="3">no collision ↓</td><td colspan="3">no offroad ↓</td></tr><tr><td>rule</td><td>real</td><td>rel real</td><td>rule</td><td>real</td><td>rel real</td></tr><tr><td>BITS [31]</td><td>0.065</td><td>0.099</td><td>0.352</td><td>0.018</td><td>0.099</td><td>0.355</td></tr><tr><td>BITS+opt [31]</td><td>0.041</td><td>0.070</td><td>0.353</td><td>0.005</td><td>0.100</td><td>0.358</td></tr><tr><td>CTG [38]</td><td>0.052</td><td>0.044</td><td>0.346</td><td>0.002</td><td>0.042</td><td>0.346</td></tr><tr><td>CTG++ [37]</td><td>0.036</td><td>0.040</td><td>0.332</td><td>0.004</td><td>0.038</td><td>0.328</td></tr><tr><td>SynAD(Ours)</td><td>0.033</td><td>0.045</td><td>0.330</td><td>0.002</td><td>0.040</td><td>0.324</td></tr></table>
227
+
228
+ Table 1. Evaluation of synthetic scenarios with varying guidance.
229
+
230
+ <table><tr><td rowspan="2">Method</td><td colspan="4">L2(m) ↓</td><td colspan="4">Collsion Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>ST-P3† [11]</td><td>1.33</td><td>2.11</td><td>2.90</td><td>2.11</td><td>0.23</td><td>0.62</td><td>1.27</td><td>0.71</td></tr><tr><td>UniAD [12]</td><td>0.48</td><td>0.74</td><td>1.07</td><td>0.76</td><td>0.12</td><td>0.13</td><td>0.28</td><td>0.17</td></tr><tr><td>VAD [13]</td><td>0.41</td><td>0.70</td><td>1.05</td><td>0.72</td><td>0.07</td><td>0.17</td><td>0.41</td><td>0.22</td></tr><tr><td>OCCNet† [26]</td><td>1.29</td><td>2.31</td><td>2.99</td><td>2.14</td><td>0.21</td><td>0.59</td><td>1.37</td><td>0.72</td></tr><tr><td>Paradrive [28]</td><td>0.25</td><td>0.46</td><td>0.74</td><td>0.48</td><td>0.14</td><td>0.23</td><td>0.39</td><td>0.25</td></tr><tr><td>OCCWorld [36]</td><td>0.32</td><td>0.61</td><td>0.98</td><td>0.64</td><td>0.06</td><td>0.21</td><td>0.47</td><td>0.24</td></tr><tr><td>SynAD (Ours)</td><td>0.52</td><td>0.78</td><td>1.10</td><td>0.80</td><td>0.04</td><td>0.10</td><td>0.20</td><td>0.11</td></tr></table>
231
+
232
+ Table 2. Planning performance on the nuScenes validation set. ${}^{ \dagger }$ denotes results evaluated under the ST-P3 metric.
233
+
234
+ optimization from UniAD [12]. All experiments are conducted on 8 NVIDIA RTX 4090 GPUs with batch size 1 per GPU. More details can be found in the Supp. C.
235
+
236
+ # 4.2. Main Results
237
+
238
+ We evaluate our method on the nuScenes validation set, adopting the CTG++ [37] metrics for scenario generation and the VAD [13] evaluation protocol for the E2E AD task, ensuring consistency with existing methods. Details on the reporting rules can be found in Supp. D.
239
+
240
+ Scenario Generation. In Table 1, we evaluate our generated paths using three metrics: rule, real, and rel real (lower is better for all three). The rule metric indicates how strictly the generated trajectories adhere to the given rules. The real metric measures absolute similarity to real-world data using the Wasserstein distance, while rel real assesses the realism of scene-level interactions between vehicles. Our method demonstrates robust compliance with traffic constraints, as indicated by its strong rule scores. Although its real score is slightly worse, suggesting a looser correspondence to exact real-world trajectories, it achieves the best rel real score, highlighting more sophisticated multi-agent interactions. These results show that the generated trajectories deviate from real-world paths while still capturing diverse driving behaviors, which is advantageous for building more robust E2E AD systems.
241
+
242
+ Planning. Table 2 presents our planning performance from two perspectives: trajectory accuracy, measured by the L2 distance error from the ground truth path, and safety, represented by the collision rate with other vehicles. While
243
+
244
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Motion Forecasting ↓</td><td colspan="2">Occupancy. ↑</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td></tr><tr><td>UniAD [12]</td><td>0.75</td><td>1.10</td><td>0.166</td><td>61.9</td><td>39.7</td></tr><tr><td>VAD* [13]</td><td>0.78</td><td>1.11</td><td>0.169</td><td>-</td><td>-</td></tr><tr><td>Paradrive [28]</td><td>0.73</td><td>1.08</td><td>0.162</td><td>60.0</td><td>36.4</td></tr><tr><td>SynAD (Ours)</td><td>0.69</td><td>1.01</td><td>0.154</td><td>60.5</td><td>39.6</td></tr></table>
245
+
246
+ Table 3. Prediction performance on the nuScenes validation set. Results are reproduced in our environment. *VAD does not have an occupancy prediction module.
247
+
248
+ <table><tr><td colspan="3">Updated Modules</td><td rowspan="2" colspan="3">Motion Forecasting ↓</td><td rowspan="2" colspan="2">Occupancy. ↑</td><td rowspan="2" colspan="2">Plan.(avg.) ↓</td></tr><tr><td colspan="2">xRM</td><td>xSM</td></tr><tr><td>Mot.</td><td>Occ.</td><td>Plan</td><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td><td>L2</td><td>Col.</td></tr><tr><td></td><td></td><td></td><td>0.76</td><td>1.11</td><td>0.162</td><td>60.1</td><td>38.9</td><td>1.15</td><td>0.25</td></tr><tr><td></td><td></td><td></td><td>0.75</td><td>1.13</td><td>0.166</td><td>59.3</td><td>38.3</td><td>0.82</td><td>0.19</td></tr><tr><td></td><td></td><td>✓</td><td>0.77</td><td>1.15</td><td>0.168</td><td>60.2</td><td>39.0</td><td>0.79</td><td>0.18</td></tr><tr><td></td><td>✓</td><td>✓</td><td>0.77</td><td>1.14</td><td>0.167</td><td>58.4</td><td>37.6</td><td>0.78</td><td>0.18</td></tr><tr><td>✓</td><td></td><td>✓</td><td>0.73</td><td>1.06</td><td>0.157</td><td>60.2</td><td>39.2</td><td>0.77</td><td>0.14</td></tr></table>
249
+
250
+ our SynAD model exhibits slightly higher L2 distance errors due to the broader distribution of generated behaviors, it achieves the lowest collision rate among all baselines, indicating superior collision avoidance. This trade-off stems from emphasizing more diverse, realistic interactions during scenario generation, which yields safer but not necessarily GT-matching trajectories. Since real-world driving prioritizes collision avoidance over precise path replication, our approach is particularly well-suited for practical deployment. Additionally, SynAD is the only method to incorporate variations in vehicle sizes during training, further enhancing its adaptability to real-world driving conditions.
251
+
252
+ Prediction. Motion forecasting and occupancy prediction results provide insights into the E2E AD model's ability to interpret and anticipate the behavior of surrounding objects and agents. The results in Table 3 show that SynAD excels in accurately predicting the movements of surrounding agents and maintains a solid understanding of environmental occupancy. Even when synthetic data is introduced as a new input type, the model demonstrates robust performance during testing with image-only input, validating the effectiveness of the integration strategy.
253
+
254
+ # 4.3. Ablation Studies
255
+
256
+ Training Strategy. To effectively leverage the synthetic scenarios in our E2E AD framework, we use $x_{\mathrm{RM}}$, the real scenario projected onto the map, as a training bridge. Table 4 presents the results of this approach. First, incorporating $x_{\mathrm{SM}}$ into the planning module training significantly improves planning performance, while adding $x_{\mathrm{RM}}$ provides a modest additional gain. However, when we extend the real map data to the occupancy prediction module, performance declines,
257
+
258
+ Table 4. Prediction and planning performance variations based on the incorporation of $x_{\mathrm{RM}}$ and $x_{\mathrm{SM}}$ in each E2E AD module.
259
+
260
+ <table><tr><td rowspan="2"># Synthetic scenes</td><td colspan="3">Motion Forecasting ↓</td><td colspan="2">Occupancy. ↑</td><td colspan="2">Plan.(avg.) ↓</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>IoU-n</td><td>IoU-f</td><td>L2</td><td>Col.</td></tr><tr><td colspan="8">Baseline</td></tr><tr><td>0</td><td>0.76</td><td>1.11</td><td>0.162</td><td>60.1</td><td>38.9</td><td>1.15</td><td>0.25</td></tr><tr><td colspan="8">Same step (Fair comparison)</td></tr><tr><td>100</td><td>0.73</td><td>1.06</td><td>0.157</td><td>60.2</td><td>39.2</td><td>0.77</td><td>0.14</td></tr><tr><td>300</td><td>0.72</td><td>1.02</td><td>0.153</td><td>59.4</td><td>38.6</td><td>0.81</td><td>0.13</td></tr><tr><td>500</td><td>0.73</td><td>1.03</td><td>0.155</td><td>59.4</td><td>38.7</td><td>0.85</td><td>0.14</td></tr><tr><td colspan="8">Same epoch (Longer training)</td></tr><tr><td>100</td><td>0.72</td><td>1.04</td><td>0.156</td><td>60.3</td><td>39.1</td><td>0.76</td><td>0.13</td></tr><tr><td>300</td><td>0.71</td><td>1.02</td><td>0.155</td><td>60.3</td><td>39.4</td><td>0.77</td><td>0.12</td></tr><tr><td>500</td><td>0.69</td><td>1.01</td><td>0.154</td><td>60.5</td><td>39.6</td><td>0.80</td><td>0.11</td></tr></table>
261
+
262
+ Table 5. Performance under different numbers of generated scenes, comparing two training protocols.
263
+
264
+ <table><tr><td rowspan="2">Arch.</td><td rowspan="2">input res.</td><td rowspan="2">\( \mathcal{L}_{\text{map }}^{\text{val}} \downarrow (\times 10^{-2}) \)</td><td colspan="3">Motion Forecasting ↓</td><td colspan="2">Plan.(avg.) ↓</td></tr><tr><td>minADE</td><td>minFDE</td><td>MR</td><td>L2</td><td>Col.</td></tr><tr><td>SwinUNETR</td><td>800</td><td>9.55</td><td>0.75</td><td>1.11</td><td>0.158</td><td>1.08</td><td>0.26</td></tr><tr><td>Ours</td><td>224</td><td>8.96</td><td>0.73</td><td>1.06</td><td>0.157</td><td>0.77</td><td>0.14</td></tr></table>
265
+
266
+ Table 6. Performance variations based on Map-to-BEV network architectures.
267
+
268
+ suggesting that 2D map representations alone are insufficient for this task. These results indicate that BEV features extracted from map data suffice for motion forecasting and planning but fall short for occupancy prediction. The latter often requires richer spatial information, as evidenced by OCCNet [26] and OCCWorld [36], which leverage 3D data to improve performance. Consequently, our main training strategy updates only the motion forecasting and planning modules through the map data.
269
+
270
+ Scale of Synthetic Scenarios. Table 5 illustrates how varying the number of synthetic scenes influences performance under two training protocols: one with the same number of training steps and another with the same number of epochs. When no synthetic scenes are used, the model relies solely on multi-camera image data, forming our baseline. In the same-step protocol, incorporating a moderate number of synthetic scenes improves results, although further increases yield diminishing returns. Under the same-epoch protocol, introducing more synthetic scenes consistently enhances performance, demonstrating that the model benefits from broader coverage given sufficient training iterations. Across both protocols, the L2 distance tends to increase with more scenes, reflecting the broader distribution of synthetic scenarios. In particular, the faster convergence at the same number of training steps as the baseline underscores the advantage of using synthetic scenarios.
271
+
272
+ Map-to-BEV Network Architecture. Table 6 shows the ablation results on the different network architectures for the Map-to-BEV Network. We compare our model with SwinUNETR [9], which preserves spatial correspondence between the map and BEV features. One observation is that
273
+
274
+ ![](images/9907c64cca082aaa0fe19a502d629a2676f676b964130c5f1aad04a4f01c1fce.jpg)
275
+ Figure 5. Qualitative result of SynAD. The performance of SynAD in an urban driving scenario is presented through six views capturing the surroundings. The motion forecasts of the front and rear vehicles are visualized with color-coded trajectories, where warmer colors (red) indicate more immediate movements and cooler colors (blue) represent later positions.
276
+
277
+ <table><tr><td rowspan="2">agent</td><td rowspan="2">Guide map</td><td rowspan="2">speed</td><td colspan="4">L2(m)↓</td><td colspan="4">Collision Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>✓</td><td></td><td></td><td>0.48</td><td>0.73</td><td>1.05</td><td>0.75</td><td>0.05</td><td>0.15</td><td>0.35</td><td>0.18</td></tr><tr><td>✓</td><td>✓</td><td></td><td>0.49</td><td>0.74</td><td>1.06</td><td>0.76</td><td>0.05</td><td>0.12</td><td>0.27</td><td>0.15</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>0.50</td><td>0.75</td><td>1.07</td><td>0.77</td><td>0.05</td><td>0.11</td><td>0.26</td><td>0.14</td></tr></table>
278
+
279
+ SwinUNETR requires high-resolution inputs to achieve sufficient performance, leading to higher computational costs. After training each Map-to-BEV Network variant, we evaluate BEV feature quality using an L2 loss on the validation dataset (i.e., $\mathcal{L}_{\mathrm{map}}^{\mathrm{val}}$). Then, we compare the performance of motion forecasting and planning in E2E AD training, which are influenced by the map data. The results demonstrate that our design outperforms SwinUNETR while incurring lower computational costs.
280
+
281
+ Guide Composition for Scenario Generation. Table 7 presents the impact of different guide compositions from Equation 6 on planning performance. The agent guide prevents collisions between agents, the map guide prevents collisions with map components, and the speed guide enforces both minimum and maximum speeds. Incorporating the map and speed guides progressively decreases collision rates, demonstrating that these additional constraints enhance safety. While we focus on specific guide functions, extending the approach, for example with LLM- or retrieval-based guidance, is left for future work.
282
+
283
+ Ego Selection Rule. For synthetic scenario generation, we experiment with three ego selection rules: random, dynamic, and longest. Each rule selects vehicles with a minimum movement of $1\,\mathrm{m}$ over the generated timestamps to ensure meaningful training. The random rule selects any
284
+
285
+ Table 7. Planning performance with varying guide functions for sampling synthetic scenarios.
286
+
287
+ <table><tr><td rowspan="2">Rule</td><td rowspan="2">Dist.</td><td colspan="4">L2(m) ↓</td><td colspan="4">Collsion Rate(%) ↓</td></tr><tr><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td><td>1s</td><td>2s</td><td>3s</td><td>Avg.</td></tr><tr><td>Random</td><td>3.67</td><td>0.51</td><td>0.77</td><td>1.09</td><td>0.79</td><td>0.06</td><td>0.11</td><td>0.31</td><td>0.16</td></tr><tr><td>Dynamic</td><td>3.68</td><td>0.50</td><td>0.77</td><td>1.11</td><td>0.79</td><td>0.04</td><td>0.10</td><td>0.31</td><td>0.15</td></tr><tr><td>Longest</td><td>3.70</td><td>0.50</td><td>0.75</td><td>1.07</td><td>0.77</td><td>0.05</td><td>0.11</td><td>0.26</td><td>0.14</td></tr></table>
288
+
289
+ Table 8. Planning performance with different ego vehicle selection rules in synthetic scenarios. Dist. represents the average distance traveled between consecutive frames.
290
+
291
+ vehicle meeting this criterion, while the dynamic rule designates the vehicle with the largest lateral ($x$-axis) movement. Lastly, the longest rule selects the ego vehicle that traveled the longest distance, following Equation 7. Table 8 presents both the planning performance and the average distance traveled by the ego vehicle over the $T_{p}$ future timestamps. When selecting the ego vehicle based on the longest rule, the model shows the lowest collision rate. Furthermore, the longest rule achieves the lowest L2 errors with sufficient trajectory distance. Based on these results, the longest rule is set as the ego selection rule for the synthetic scenarios that we incorporate into our E2E AD training.
292
+
293
+ # 5. Conclusion
294
+
295
+ We propose SynAD, a novel method that integrates synthetic scenarios into real-world E2E AD models. SynAD overcomes the limitations that previously confined such integration to virtual environments such as simulators. By utilizing map-based BEV feature encoding, we enable training on synthetic scenarios without relying on sensor data such as multi-camera images or LiDAR. We also propose an ego-centric scenario generation method and a strategic integration approach. Moreover, integrating synthetic scenarios holds significant potential for incorporation into existing E2E AD pipelines; we leave applying our integration strategy to other E2E AD methods for future work.
296
+
297
+ # Acknowledgments
298
+
299
+ This work was supported by Samsung Electronics Co., Ltd (IO231005-07280-01).
300
+
301
+ # References
302
+
303
+ [1] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 1, 6
304
+ [2] Long Chen, Lukas Platinsky, Stefanie Speichert, Błażej Osiński, Oliver Scheel, Yawei Ye, Hugo Grimmett, Luca Del Pero, and Peter Ondruska. What data do we need for training an av motion planner? In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 1066-1072. IEEE, 2021. 2
305
+ [3] Shaoyu Chen, Tianheng Cheng, Xinggang Wang, Wenming Meng, Qian Zhang, and Wenyu Liu. Efficient and robust 2d-to-bev representation learning via geometry-guided kernel transformer. arXiv preprint arXiv:2206.04584, 2022. 1
306
+ [4] Wenhao Ding, Baiming Chen, Minjun Xu, and Ding Zhao. Learning to collide: An adaptive safety-critical scenarios generating method. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2243-2250. IEEE, 2020. 2
307
+ [5] Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, and Marco Pavone. Realgen: Retrieval augmented generation for controllable traffic scenarios. arXiv preprint arXiv:2312.13303, 2023. 2
308
+ [6] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on robot learning, pages 1-16. PMLR, 2017. 1
309
+ [7] David González, Joshu Pérez, Vicente Milanés, and Fawzi Nashashibi. A review of motion planning techniques for automated vehicles. IEEE Transactions on intelligent transportation systems, 17(4):1135-1145, 2015. 2
310
+ [8] Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, and Andreas Geiger. King: Generating safety-critical driving scenarios for robust imitation via kinematics gradients. In European Conference on Computer Vision, pages 335-352. Springer, 2022. 1, 2
311
+ [9] Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI brainlesion workshop, pages 272-284. Springer, 2021. 7
312
+ [10] Anthony Hu, Zak Murez, Nikhil Mohan, Sofia Dudas, Jeffrey Hawke, Vijay Badrinarayanan, Roberto Cipolla, and Alex Kendall. Fiery: Future instance prediction in bird's-eye view from surround monocular cameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15273-15282, 2021. 1
313
+ [11] Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, and Dacheng Tao. St-p3: End-to-end vision-based au
314
+
315
+ tonomous driving via spatial-temporal feature learning. In European Conference on Computer Vision, pages 533-549. Springer, 2022. 1, 2, 5, 6
316
+ [12] Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, et al. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17853-17862, 2023. 1, 2, 5, 6, 7, 3
317
+ [13] Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. Vad: Vectorized scene representation for efficient autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340-8350, 2023. 1, 2, 6, 7
318
+ [14] Chiyu Jiang, Andre Cornman, Cheolho Park, Benjamin Sapp, Yin Zhou, Dragomir Anguelov, et al. Motiondiffuser: Controllable multi-agent motion prediction using diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9644-9653, 2023. 2
319
+ [15] Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, and Amar Shah. Learning to drive in a day. In 2019 international conference on robotics and automation (ICRA), pages 8248-8254. IEEE, 2019. 2
320
+ [16] Tarasha Khurana, Peiyun Hu, Achal Dave, Jason Ziglar, David Held, and Deva Ramanan. Differentiable raycasting for self-supervised occupancy forecasting. In European Conference on Computer Vision, pages 353-369. Springer, 2022. 1
321
+ [17] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai. Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In European conference on computer vision, pages 1-18. Springer, 2022. 1, 4, 6, 3
322
+ [18] Zhi Liu, Shaoyu Chen, Xiaojie Guo, Xinggang Wang, Tianheng Cheng, Hongmei Zhu, Qian Zhang, Wenyu Liu, and Yi Zhang. Vision-based uneven bev representation learning with polar rasterization and surface estimation. In Conference on Robot Learning, pages 437-446. PMLR, 2023. 1
323
+ [19] Zhiyuan Liu, Leheng Li, Yuning Wang, Haotian Lin, Zhizhe Liu, Lei He, and Jianqiang Wang. Controllable traffic simulation through llm-guided hierarchical chain-of-thought reasoning. arXiv preprint arXiv:2409.15135, 2024. 2
324
+ [20] Jack Lu, Kelvin Wong, Chris Zhang, Simon Suo, and Raquel Urtasun. Scenecontrol: Diffusion for controllable traffic scene generation. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 16908-16914. IEEE, 2024. 2
325
+ [21] Tung Phan-Minh, Elena Corina Grigore, Freddy A Boulton, Oscar Beijbom, and Eric M Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14074-14083, 2020. 1
326
+ [22] Jonah Philion and Sanja Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision-ECCV 2020: 16th European
327
+
328
+ Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16, pages 194-210. Springer, 2020. 1
329
+ [23] Ethan Pronovost, Meghan Reddy Ganesina, Noureldin Hendy, Zeyu Wang, Andres Morales, Kai Wang, and Nick Roy. Scenario diffusion: Controllable driving scenario generation with diffusion. Advances in Neural Information Processing Systems, 36:68873-68894, 2023. 2
330
+ [24] Davis Rempe, Jonah Philion, Leonidas J Guibas, Sanja Fidler, and Or Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17305-17315, 2022. 2
331
+ [25] Bo-Kai Ruan, Hao-Tang Tsui, Yung-Hui Li, and Hong-Han Shuai. Traffic scene generation from natural language description for autonomous vehicles with large language model. arXiv preprint arXiv:2409.09575, 2024. 2
332
+ [26] Wenwen Tong, Chonghao Sima, Tai Wang, Li Chen, Silei Wu, Hanming Deng, Yi Gu, Lewei Lu, Ping Luo, Dahua Lin, et al. Scene as occupancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8406-8415, 2023. 1, 2, 6, 7, 3
333
+ [27] Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat, Sergio Casas, Mengye Ren, and Raquel Urtasun. Advsim: Generating safety-critical scenarios for self-driving vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9909-9918, 2021. 2
334
+ [28] Xinshuo Weng, Boris Ivanovic, Yan Wang, Yue Wang, and Marco Pavone. Para-drive: Parallelized architecture for real-time autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15449-15458, 2024. 1, 2, 6, 7, 3
335
+ [29] Junkai Xia, Chenxin Xu, Qingyao Xu, Chen Xie, Yanfeng Wang, and Siheng Chen. Language-driven interactive traffic trajectory generation. arXiv preprint arXiv:2405.15388, 2024. 2
336
+ [30] Chejian Xu, Ding Zhao, Alberto Sangiovanni-Vincentelli, and Bo Li. Diffscene: Diffusion-based safety-critical scenario generation for autonomous vehicles. In The Second Workshop on New Frontiers in Adversarial Machine Learning, 2023. 2
337
+ [31] Danfei Xu, Yuxiao Chen, Boris Ivanovic, and Marco Pavone. Bits: Bi-level imitation for traffic simulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 2929-2936. IEEE, 2023. 6
338
+ [32] Wenda Xu, Qian Wang, and John M Dolan. Autonomous vehicle motion planning via recurrent spline optimization. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7730-7736. IEEE, 2021. 2
339
+ [33] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8660-8669, 2019. 1
340
+ [34] Jiawei Zhang, Chejian Xu, and Bo Li. Chatscene: Knowledge-enabled safety-critical scenario generation for autonomous vehicles. In Proceedings of the IEEE/CVF Con-
341
+
342
+ ference on Computer Vision and Pattern Recognition, pages 15459-15469, 2024. 1, 2
343
+ [35] Yunpeng Zhang, Zheng Zhu, Wenzhao Zheng, Junjie Huang, Guan Huang, Jie Zhou, and Jiwen Lu. Beverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv preprint arXiv:2205.09743, 2022. 1
344
+ [36] Wenzhao Zheng, Weiliang Chen, Yuanhui Huang, Borui Zhang, Yueqi Duan, and Jiwen Lu. Occworld: Learning a 3d occupancy world model for autonomous driving. In European Conference on Computer Vision, pages 55-72. Springer, 2025. 1, 2, 6, 7, 3
345
+ [37] Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, and Baishakhi Ray. Language-guided traffic simulation via scene-level diffusion. In Conference on Robot Learning, pages 144-177. PMLR, 2023. 2, 6, 3
346
+ [38] Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 3560-3566. IEEE, 2023. 2, 6
347
+ [39] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. 5
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c101805be2bdbd1458ea751df46cafe0c60e6e35ed818e3fbf206cb74544e517
3
+ size 578059
2025/SynAD_ Enhancing Real-World End-to-End Autonomous Driving Models through Synthetic Data Integration/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_content_list.json ADDED
@@ -0,0 +1,1907 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "SynCity: Training-Free Generation of 3D Worlds",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 248,
8
+ 130,
9
+ 750,
10
+ 152
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Paul Engstler*",
17
+ "bbox": [
18
+ 158,
19
+ 180,
20
+ 274,
21
+ 199
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Aleksandar Shtedritski*",
28
+ "bbox": [
29
+ 320,
30
+ 180,
31
+ 509,
32
+ 196
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Iro Laina",
39
+ "bbox": [
40
+ 555,
41
+ 181,
42
+ 630,
43
+ 196
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Andrea Vedaldi",
50
+ "bbox": [
51
+ 436,
52
+ 199,
53
+ 560,
54
+ 215
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "Visual Geometry Group, University of Oxford",
61
+ "bbox": [
62
+ 316,
63
+ 215,
64
+ 681,
65
+ 234
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "{paule,suny,iro,chrisr,vedaldi}@robots.ox.ac.uk",
72
+ "bbox": [
73
+ 292,
74
+ 236,
75
+ 707,
76
+ 250
77
+ ],
78
+ "page_idx": 0
79
+ },
80
+ {
81
+ "type": "image",
82
+ "img_path": "images/7418ddcb2383f7c60ea809e8e6bf33b5b403682c695d7e3b21cf772d326bfc32.jpg",
83
+ "image_caption": [
84
+ "Figure 1. We introduce SynCity, a novel method for generating complex and freely navigable 3D worlds from a prompt. It is training-free and leverages powerful language, 2D, and 3D generators through new prompt engineering strategies."
85
+ ],
86
+ "image_footnote": [],
87
+ "bbox": [
88
+ 133,
89
+ 292,
90
+ 867,
91
+ 592
92
+ ],
93
+ "page_idx": 0
94
+ },
95
+ {
96
+ "type": "text",
97
+ "text": "Abstract",
98
+ "text_level": 1,
99
+ "bbox": [
100
+ 245,
101
+ 660,
102
+ 323,
103
+ 676
104
+ ],
105
+ "page_idx": 0
106
+ },
107
+ {
108
+ "type": "text",
109
+ "text": "We propose SynCity, a method for generating explorable 3D worlds from textual descriptions. Our approach leverages pre-trained textual, image, and 3D generators without requiring fine-tuning or inference-time optimization. While most 3D generators are object-centric and unable to create large-scale worlds, we demonstrate how 2D and 3D generators can be combined to produce ever-expanding scenes. The world is generated tile by tile, with each new tile created within its context and seamlessly integrated into the scene. SynCity enables fine-grained control over the appearance and layout of the generated worlds, which are both detailed and diverse.",
110
+ "bbox": [
111
+ 88,
112
+ 684,
113
+ 485,
114
+ 864
115
+ ],
116
+ "page_idx": 0
117
+ },
118
+ {
119
+ "type": "text",
120
+ "text": "1. Introduction",
121
+ "text_level": 1,
122
+ "bbox": [
123
+ 514,
124
+ 660,
125
+ 645,
126
+ 676
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "text",
132
+ "text": "We consider the problem of generating 3D worlds from textual descriptions. Generating 3D content, for example, for video games, virtual reality, and simulation, is highly laborious and time-consuming. This is particularly true for large 3D scenes, even though these are often in the background and may have limited artistic significance. Automating their generation is thus particularly appealing.",
133
+ "bbox": [
134
+ 511,
135
+ 686,
136
+ 906,
137
+ 794
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "text",
143
+ "text": "The advent of modern generative AI has already impacted 3D generation, and in particular, the generation of 3D objects. DreamFusion [40] was among the first to adapt diffusion-based 2D image generators [44] to create 3D objects. Subsequent advancements fine-tuned 2D image generators to produce multiple consistent views of an object [14, 33, 48, 53] and learned few-view 3D reconstruc",
144
+ "bbox": [
145
+ 511,
146
+ 795,
147
+ 908,
148
+ 902
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "header",
154
+ "text": "CVF",
155
+ "bbox": [
156
+ 106,
157
+ 2,
158
+ 181,
159
+ 42
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "header",
165
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
166
+ "bbox": [
167
+ 238,
168
+ 0,
169
+ 807,
170
+ 46
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "page_footnote",
176
+ "text": "*Denotes equal contribution. Project page: https://research.paulengstler.com/syncity/",
177
+ "bbox": [
178
+ 89,
179
+ 875,
180
+ 480,
181
+ 902
182
+ ],
183
+ "page_idx": 0
184
+ },
185
+ {
186
+ "type": "page_number",
187
+ "text": "27585",
188
+ "bbox": [
189
+ 478,
190
+ 944,
191
+ 519,
192
+ 957
193
+ ],
194
+ "page_idx": 0
195
+ },
196
+ {
197
+ "type": "text",
198
+ "text": "tion networks [25, 65]. More recently, the focus has shifted to methods that learn 3D latent spaces [10, 26, 27, 63, 69]. 3D latents can be sampled to generate 3D objects directly and with better geometry. Furthermore, by making these 3D latent generators conditional on an image prompt, they can be easily combined with 2D image generators, which can generally be trained on a much larger scale.",
199
+ "bbox": [
200
+ 89,
201
+ 90,
202
+ 480,
203
+ 196
204
+ ],
205
+ "page_idx": 1
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "In addition to objects, there is also ample literature on generating 3D scenes. Most scene generators are image-based and progressively reconstruct scenes by expanding from an initial image [8, 12, 13, 17, 31, 38, 43, 67, 70], combining depth prediction, image and depth outpainting, and 3D reconstruction using NeRF [36] or 3D Gaussian Splating [20]. The main advantage of these approaches is that they also leverage powerful 2D image generators to create the various views of the scene. These 2D generators understand complex textual prompts and result in vibrant 3D scenes with good artistic quality. However, it is challenging to keep the scene consistent while expanding it. As a consequence, while the reconstructed scenes can envelop the observer in a $360^{\\circ}$ manner, it is often not possible to 'walk' into the scene for more than a few steps [22].",
210
+ "bbox": [
211
+ 89,
212
+ 199,
213
+ 480,
214
+ 425
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "A key challenge in generating scenes beyond these '3D bubbles' is maintaining consistency incrementally without drifting. We argue that latent-space 3D generators might help, as they can regularize and constrain the reconstructed geometry, including hallucinating shapes and textures in regions behind the visible sides of objects. Some evidence comes from recent works like BlockDiffusion [60] and LT3SD [34], which learn to generate large coherent 3D scenes in latent space. However, they do not leverage 2D generators trained on billions of images, which severely hinders scene diversity and limits their ability to generalize and follow instructions.",
221
+ "bbox": [
222
+ 89,
223
+ 426,
224
+ 480,
225
+ 607
226
+ ],
227
+ "page_idx": 1
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "In this work, we aim to combine the strengths of latent 3D and 2D generators to create large, high-quality 3D scenes that can be navigated freely (Fig. 1). First, we note that, while 3D generators like TRELLIS [63] are trained for object-level reconstruction, they can reconstruct fairly complex compositions of multiple objects. Borrowing ideas from video game world building, we show that TRELLIS can effectively generate, if not an entire world, at least tiles representing local regions of the world. In particular, we show that, if we prompt the model with an 'isometric' view of a tile, we can effectively generate it in 3D.",
232
+ "bbox": [
233
+ 89,
234
+ 609,
235
+ 480,
236
+ 776
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "text",
242
+ "text": "Given this basic ability, we then address the problem of generating the images of the tiles that form a larger scene. To do so, we build on a text-to-image generator (Flux [21]) and introduce a new way of prompting it that reliably causes it to output tile-like images, with a consistent isometric framing. This regular and stable framing is the key reason why the reconstructed 3D tiles can fit together seamlessly.",
243
+ "bbox": [
244
+ 89,
245
+ 777,
246
+ 480,
247
+ 883
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "text",
253
+ "text": "In addition to framing, we propose two additional mech",
254
+ "bbox": [
255
+ 109,
256
+ 885,
257
+ 480,
258
+ 900
259
+ ],
260
+ "page_idx": 1
261
+ },
262
+ {
263
+ "type": "text",
264
+ "text": "anisms to make tiles fit together well. First, we encourage consistency in appearance by using previously generated tiles to provide context for the image generator, where each new tile inpaints a missing region in a 2D isometric view of the scene. Second, we enforce geometric consistency by blending the 3D representations of neighboring tiles using the 3D generative model.",
265
+ "bbox": [
266
+ 511,
267
+ 90,
268
+ 903,
269
+ 196
270
+ ],
271
+ "page_idx": 1
272
+ },
273
+ {
274
+ "type": "text",
275
+ "text": "Tile-specific textual prompts control the generation of the tiles. Tile prompts can also be generated automatically, utilizing a large language model (ChatGPT [37]), so that an entire scene can be obtained from a single 'world' prompt.",
276
+ "bbox": [
277
+ 511,
278
+ 198,
279
+ 903,
280
+ 258
281
+ ],
282
+ "page_idx": 1
283
+ },
284
+ {
285
+ "type": "text",
286
+ "text": "As we show in the experiments, SynCity can, in this way, leverage off-the-shelf components (i.e., language, 2D and 3D generators) to produce vibrant, detailed, and coherent 3D worlds that can be navigated freely.",
287
+ "bbox": [
288
+ 511,
289
+ 260,
290
+ 903,
291
+ 320
292
+ ],
293
+ "page_idx": 1
294
+ },
295
+ {
296
+ "type": "text",
297
+ "text": "2. Related Work",
298
+ "text_level": 1,
299
+ "bbox": [
300
+ 513,
301
+ 337,
302
+ 653,
303
+ 352
304
+ ],
305
+ "page_idx": 1
306
+ },
307
+ {
308
+ "type": "text",
309
+ "text": "Novel view synthesis for scenes. Expanding an image beyond its boundaries has been a long-standing task in computer vision. Early methods that sought to expand object-centric scenes relied on layer-structured representations [24, 35, 49, 51, 55, 56], which disregard the scene's actual geometry. SynSin [59] is a pivotal work where image features are projected and used as conditioning to generate novel views, achieving geometric and semantic consistency. ZeroNVS [45] introduces high-quality results with fine-grained control of the camera but remains object-centric. GenWarp [46] integrates semantic information through cross-attention when generating a novel view.",
310
+ "bbox": [
311
+ 511,
312
+ 363,
313
+ 903,
314
+ 544
315
+ ],
316
+ "page_idx": 1
317
+ },
318
+ {
319
+ "type": "text",
320
+ "text": "A challenge for these methods is to avoid semantic drift and maintain object persistence. To obtain a 3D representation, the generated views need to be transferred into such a representation, e.g., NeRF [36] or Gaussians [18, 20], where any geometric conflicts must be resolved.",
321
+ "bbox": [
322
+ 511,
323
+ 546,
324
+ 903,
325
+ 621
326
+ ],
327
+ "page_idx": 1
328
+ },
329
+ {
330
+ "type": "text",
331
+ "text": "Image projection-based scene generation. A different line of work follows the paradigm of building the 3D representation of a scene sequentially using 2D image generators [8, 13, 17, 23, 38, 43, 58, 61, 67, 70]. Most of these methods employ an image generator to outpaint the existing scene using predefined camera poses. The results are then fused in 3D with depth prediction models. Text2Room [17] generates meshes of indoor scenes. As the bounds of the mesh delimit the scene, it can be freely explored. LucidDreamer [8] and Text2Immersion [38] go beyond indoor scenes, but their generated scenes reveal geometric inconsistencies when stepping away from the camera poses used to generate the scene. Invisible Stitch [12] addresses this issue by inpainting depth (rather than naively aligning it), and RealmDreamer [50] proposes multiple optimization losses to refine the generated scene. Despite these improvements, the resulting scenes still suffer from geometric artifacts and remain relatively small. WonderJourney [67] introduces",
332
+ "bbox": [
333
+ 511,
334
+ 628,
335
+ 903,
336
+ 900
337
+ ],
338
+ "page_idx": 1
339
+ },
340
+ {
341
+ "type": "page_number",
342
+ "text": "27586",
343
+ "bbox": [
344
+ 478,
345
+ 945,
346
+ 517,
347
+ 955
348
+ ],
349
+ "page_idx": 1
350
+ },
351
+ {
352
+ "type": "image",
353
+ "img_path": "images/619d7467f9e6ca5817f332992a7203a69bcba819c73fe62c9aed0d08791f4d14.jpg",
354
+ "image_caption": [
355
+ "Figure 2. Overview of SynCity. 2D prompting: To generate a new tile, we first render a view of where the tile should be placed, including context from neighboring tiles. 3D prompting: We extract the new tile image and construct an image prompt for TRELLIS by adding a wider base under the tile. 3D blending: The 3D model output by TRELLIS is often not well blended with the rest of the scene. To address this, we render a view of the new tile alongside each neighboring tile and inpaint the region between them using an image inpainting model. Next, we condition on this well-blended view to refine the region between the two 3D tiles. Finally, the new tile is added to the world."
356
+ ],
357
+ "image_footnote": [],
358
+ "bbox": [
359
+ 106,
360
+ 99,
361
+ 906,
362
+ 415
363
+ ],
364
+ "page_idx": 2
365
+ },
366
+ {
367
+ "type": "text",
368
+ "text": "novel ideas for depth fusion, such as grouping objects at similar disparity to planes and refining sky depth, enabling extensive 'scene journeys' where independent representations are built between scene 'keyframes'. However, these are not merged into one coherent scene. WonderWorld [68] leverages these improvements to build a single scene, allowing interactive updates, but the true extent of the generated scenes remains limited. Other works use panoramas [52, 57] or implicit representations [2, 45], but the freedom of movement remains constricted.",
369
+ "bbox": [
370
+ 88,
371
+ 521,
372
+ 483,
373
+ 672
374
+ ],
375
+ "page_idx": 2
376
+ },
377
+ {
378
+ "type": "text",
379
+ "text": "Procedural scene generation. Further methods permit long-range fly-overs over nature [5-7, 28, 31] or cities [30, 47, 64]. These methods usually generate procedural unbounded images (e.g., the terrain makeup or a city layout). While these methods create realistic-looking images, they are often monotonous as they are domain-specific and thus highly constrained in the variety they can generate.",
380
+ "bbox": [
381
+ 89,
382
+ 681,
383
+ 483,
384
+ 787
385
+ ],
386
+ "page_idx": 2
387
+ },
388
+ {
389
+ "type": "text",
390
+ "text": "3D scene generation. Instead of merely generating images of a scene or outpainting it only in 2D, other methods generate the 3D representation directly. Set-the-Scene [9] adds a layer of control to the layout of NeRF scenes by defining object proxies. BlockFusion [60] learns a network to diffuse small blocks to extend a mesh auto-regressively. A 2D layout conditioning is used to control the generation process,",
391
+ "bbox": [
392
+ 89,
393
+ 794,
394
+ 483,
395
+ 901
396
+ ],
397
+ "page_idx": 2
398
+ },
399
+ {
400
+ "type": "text",
401
+ "text": "allowing users to generate scenes of rooms, a village, and a city. While the method allows building large-scale scenes, the variety of the objects it generates is severely limited as it requires domain-specific 3D training data. Furthermore, it generates textured meshes. LT3SD [34] learns a diffusion model that generates 3D environments in a patch-by-patch and coarse-to-fine fashion. However, this method is only trained to produce indoor scenes.",
402
+ "bbox": [
403
+ 511,
404
+ 521,
405
+ 906,
406
+ 642
407
+ ],
408
+ "page_idx": 2
409
+ },
410
+ {
411
+ "type": "text",
412
+ "text": "At the same time, the synthesis of complex, high-fidelity objects has been enabled by the rapid progress in the fields of text-to-3D and image-to-3D generation [26, 27, 29, 32, 41, 42, 48, 54, 63, 71, 73]. Trained on large-scale curated subsets of 3D datasets such as Objaverse-XL [11], these models can generate a large variety of different objects.",
413
+ "bbox": [
414
+ 511,
415
+ 643,
416
+ 906,
417
+ 734
418
+ ],
419
+ "page_idx": 2
420
+ },
421
+ {
422
+ "type": "text",
423
+ "text": "While prior works have utilized 3D object generators to composite a scene from objects [19, 66], we generate complete chunks of a scene that are seamlessly fused together, which is a fundamentally new approach to scene generation.",
424
+ "bbox": [
425
+ 511,
426
+ 734,
427
+ 908,
428
+ 810
429
+ ],
430
+ "page_idx": 2
431
+ },
432
+ {
433
+ "type": "text",
434
+ "text": "3. Method",
435
+ "text_level": 1,
436
+ "bbox": [
437
+ 513,
438
+ 828,
439
+ 604,
440
+ 843
441
+ ],
442
+ "page_idx": 2
443
+ },
444
+ {
445
+ "type": "text",
446
+ "text": "Our goal is to generate a 3D world $\\mathcal{G}$ from an initial textual prompt $p_0$ . Our method, SynCity, leverages prompt engineering in combination with off-the-shelf language, 2D, and",
447
+ "bbox": [
448
+ 511,
449
+ 854,
450
+ 906,
451
+ 901
452
+ ],
453
+ "page_idx": 2
454
+ },
455
+ {
456
+ "type": "page_number",
457
+ "text": "27587",
458
+ "bbox": [
459
+ 478,
460
+ 944,
461
+ 517,
462
+ 955
463
+ ],
464
+ "page_idx": 2
465
+ },
466
+ {
467
+ "type": "image",
468
+ "img_path": "images/61b9173701d1dc881a13cda8750f423e186b9ae8fece69c0b203f82575a4ad03.jpg",
469
+ "image_caption": [
470
+ "Figure 3. Left: Progressive generation of world tiles $\\mathcal{T}$ . Right: Isometric framing of a tile for image-based prompting."
471
+ ],
472
+ "image_footnote": [],
473
+ "bbox": [
474
+ 106,
475
+ 87,
476
+ 473,
477
+ 170
478
+ ],
479
+ "page_idx": 3
480
+ },
481
+ {
482
+ "type": "text",
483
+ "text": "3D generators to create the entire world automatically, with no need to retrain the models.",
484
+ "bbox": [
485
+ 89,
486
+ 234,
487
+ 482,
488
+ 263
489
+ ],
490
+ "page_idx": 3
491
+ },
492
+ {
493
+ "type": "text",
494
+ "text": "We structure the world as a $W \\times H$ grid $\\mathcal{T} = \\{0, \\dots, W - 1\\} \\times \\{0, \\dots, H - 1\\}$ of square tiles, each of which can contain several complex 3D objects (e.g., buildings, bridges, trees) as well as the ground surface. We generate the world progressively, tile by tile, as shown in Fig. 3. When generating tile $(x, y) \\in \\mathcal{T}$ , tiles $\\mathcal{T}(x, y) = \\{(x', y') \\in \\mathcal{T} : y' < y \\vee (y' = y \\wedge x' < x)\\}$ have already been generated.",
495
+ "bbox": [
496
+ 89,
497
+ 265,
498
+ 483,
499
+ 385
500
+ ],
501
+ "page_idx": 3
502
+ },
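The raster-scan generation order described in this paragraph is simple to make concrete; below is a minimal Python sketch (helper names are ours, not the paper's).

```python
# Minimal sketch of the tile ordering: tile (x, y) is preceded by every
# tile in an earlier row, and by the tiles to its west in the same row.

def tile_order(W: int, H: int):
    """Yield grid coordinates in the order the tiles are generated."""
    for y in range(H):
        for x in range(W):
            yield (x, y)

def is_generated_before(x, y, xp, yp):
    """True iff (x', y') is in T(x, y): y' < y, or y' = y and x' < x."""
    return yp < y or (yp == y and xp < x)

# Example: in a 2x2 world, (1, 0) is generated before (0, 1).
assert list(tile_order(2, 2)) == [(0, 0), (1, 0), (0, 1), (1, 1)]
```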
503
+ {
504
+ "type": "text",
505
+ "text": "An overview of our approach is shown in Fig. 2. The first step is to expand the world description $p_0$ into tile-specific prompts (Sec. 3.1). The second step is to pass these tile-specific prompts to a 2D image generator and inpainter to create an isometric view of each tile, accounting for the part of the world generated so far (Sec. 3.2). The third step is to extract image prompts from these isometric views and use them as input to an image-to-3D generator to reconstruct each tile's geometry and appearance in 3D (Sec. 3.3). The final step is to align and blend the 3D reconstructions of the tiles to create a coherent 3D world (Sec. 3.4).",
506
+ "bbox": [
507
+ 88,
508
+ 386,
509
+ 483,
510
+ 551
511
+ ],
512
+ "page_idx": 3
513
+ },
514
+ {
515
+ "type": "text",
516
+ "text": "3.1. Prompting the Language Model",
517
+ "text_level": 1,
518
+ "bbox": [
519
+ 89,
520
+ 561,
521
+ 372,
522
+ 577
523
+ ],
524
+ "page_idx": 3
525
+ },
526
+ {
527
+ "type": "text",
528
+ "text": "The goal of language prompting is to take a high-level textual description of the world $p_0$ and expand it into a set $p$ of tile-specific textual prompts that can be used to generate the 3D world. Specifically, $p$ is a collection of sub-prompts $p_{xy} \\in \\Sigma^*$ , one for each tile, and a world-level 'style' prompt $p_\\star \\in \\Sigma^*$ , so that we can write $p = \\{p_{xy}\\}_{(x,y) \\in \\mathcal{T}} \\cup \\{p_\\star\\}$ , where $\\Sigma^*$ is the set of all possible strings.",
529
+ "bbox": [
530
+ 89,
531
+ 583,
532
+ 482,
533
+ 686
534
+ ],
535
+ "page_idx": 3
536
+ },
537
+ {
538
+ "type": "text",
539
+ "text": "The prompt $p$ can be constructed manually (allowing control over the content of each tile) or generated by a large language model (LLM) such as ChatGPT [37] from a 'seed' prompt. For the latter, we employ in-context learning, prompting ChatGPT o3-mini-high to generate a grid-like world with tile-specific descriptions after providing it with a template in JSON format (see Appendix A.1).",
540
+ "bbox": [
541
+ 89,
542
+ 688,
543
+ 483,
544
+ 794
545
+ ],
546
+ "page_idx": 3
547
+ },
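A hedged sketch of this expansion step follows; `call_llm` is a hypothetical stand-in for the ChatGPT call, and the JSON template only mirrors the Appendix A.1 format in spirit.

```python
# Hypothetical sketch of expanding a seed prompt p0 into a world-level
# style prompt p_star and per-tile prompts p_xy via in-context learning.

import json

def expand_world_prompt(p0: str, W: int, H: int, call_llm):
    template = {
        "style": "<world-level style prompt p_star>",
        "tiles": [{"x": 0, "y": 0, "prompt": "<tile description p_xy>"}],
    }
    request = (
        f"Expand this world description into a {W}x{H} grid of tiles. "
        f"Answer with JSON following this template:\n{json.dumps(template)}"
        f"\n\nWorld description: {p0}"
    )
    reply = json.loads(call_llm(request))      # placeholder LLM call
    p_star = reply["style"]
    p = {(t["x"], t["y"]): t["prompt"] for t in reply["tiles"]}
    return p, p_star
```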
548
+ {
549
+ "type": "text",
550
+ "text": "3.2. Prompting the 2D Generator",
551
+ "text_level": 1,
552
+ "bbox": [
553
+ 89,
554
+ 803,
555
+ 349,
556
+ 819
557
+ ],
558
+ "page_idx": 3
559
+ },
560
+ {
561
+ "type": "text",
562
+ "text": "We use the language prompts $p$ from Sec. 3.1 to prompt an off-the-shelf 2D image generator $\\Phi_{2\\mathrm{D}}$ to output a 2D image $I(x,y)$ of each tile to be generated, as shown in Fig. 4. The image $I(x,y)$ must satisfy several constraints: (1) It must reflect the tile-specific prompt $p_{xy}$ of the target tile as well",
563
+ "bbox": [
564
+ 89,
565
+ 825,
566
+ 483,
567
+ 901
568
+ ],
569
+ "page_idx": 3
570
+ },
571
+ {
572
+ "type": "text",
573
+ "text": "as the world-level prompt $p_{\\star}$ . (2) It must be suitable for prompting the image-to-3D generator in the next step. (3) It must be consistent with the previously generated tiles.",
574
+ "bbox": [
575
+ 511,
576
+ 90,
577
+ 905,
578
+ 136
579
+ ],
580
+ "page_idx": 3
581
+ },
582
+ {
583
+ "type": "text",
584
+ "text": "We propose a prompting strategy designed to satisfy these constraints. The image is drawn as a sample $I(x,y) \\sim \\Phi_{2\\mathrm{D}}(q,B,M)$ from the 2D image generator $\\Phi_{2\\mathrm{D}}$ , where $q = p_{xy} \\cdot p_{\\star}$ is a prompt that combines the tile-specific and world-level descriptions. The generator $\\Phi_{2\\mathrm{D}}$ is also provided with a base image $B$ and an inpainting mask $M$ to constrain its output, as explained next.",
585
+ "bbox": [
586
+ 511,
587
+ 136,
588
+ 905,
589
+ 242
590
+ ],
591
+ "page_idx": 3
592
+ },
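As a sketch of this sampling step: the paper's actual inpainter is the Flux ControlNet of [1]; a generic diffusers inpainting pipeline and a placeholder model id stand in here.

```python
# Sketch of drawing I(x, y) ~ Phi_2D(q, B, M) with a generic diffusers
# inpainting pipeline (stand-in for the Flux ControlNet inpainter [1]).

import torch
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

def generate_tile_image(p_xy: str, p_star: str, B, M):
    """B: PIL base image (context render); M: PIL mask of region to fill."""
    q = f"{p_xy}. {p_star}"                       # q = p_xy . p_star
    return pipe(prompt=q, image=B, mask_image=M).images[0]
```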
593
+ {
594
+ "type": "text",
595
+ "text": "Tile inpainting. To satisfy constraint (2), we encourage the image generator to produce regular tiles so that the imaged-to-3D model outputs tiles with regular geometry that fit well together. We assume that tiles have a square basis of unit size and are imaged in an 'isometric' manner. This framing of the tiles is conducive to generating regular 3D tiles. Furthermore, it is a common choice in video games and might have been observed by the image generator during training, as these models are often trained on game-like data.",
596
+ "bbox": [
597
+ 511,
598
+ 248,
599
+ 905,
600
+ 383
601
+ ],
602
+ "page_idx": 3
603
+ },
604
+ {
605
+ "type": "text",
606
+ "text": "While we could fine-tune the image generator $\\Phi_{2\\mathrm{D}}$ to produce such images, we demonstrate below that this effect can be achieved through prompt engineering alone, avoiding the need for retraining. We only assume that $\\Phi_{2\\mathrm{D}}$ is capable of inpainting—a common feature of modern image generators, and provide it with inputs $B$ and $M$ , as shown in Fig. 4. Specifically, $B$ is set to be the image of the base of the tile, represented as a square, gray slab imaged from a fixed isometric vantage point. The mask $M$ is a binary mask covering a cube on top of the base.",
607
+ "bbox": [
608
+ 511,
609
+ 383,
610
+ 905,
611
+ 534
612
+ ],
613
+ "page_idx": 3
614
+ },
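A minimal PIL sketch of constructing such a $B$ and $M$; the framing and all coordinates below are illustrative, not the paper's exact setup.

```python
# Illustrative construction of the base image B (gray slab seen from a
# fixed isometric vantage point) and the binary mask M covering the
# volume above the slab. Coordinates are made up for this sketch.

from PIL import Image, ImageDraw

def make_base_and_mask(size=1024):
    B = Image.new("RGB", (size, size), "white")
    M = Image.new("L", (size, size), 0)
    cx, top, w, h = size // 2, 640, 720, 360
    slab = [(cx, top), (cx + w // 2, top + h // 2),
            (cx, top + h), (cx - w // 2, top + h // 2)]
    ImageDraw.Draw(B).polygon(slab, fill=(128, 128, 128))  # gray slab
    m = ImageDraw.Draw(M)
    m.polygon(slab, fill=255)                    # slab top face
    m.rectangle([cx - w // 2, 140,               # region above the slab,
                 cx + w // 2, top + h // 2],     # i.e. the tile's 'cube'
                fill=255)
    return B, M
```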
615
+ {
616
+ "type": "text",
617
+ "text": "Figure 4 shows the result of prompting the model in this manner, as well as what happens if signals $B$ and $M$ are removed: the viewpoint and general frame of the tile become random and unsuitable for 3D generation.",
618
+ "bbox": [
619
+ 511,
620
+ 535,
621
+ 905,
622
+ 595
623
+ ],
624
+ "page_idx": 3
625
+ },
626
+ {
627
+ "type": "image",
628
+ "img_path": "images/7d9d758c64f08f772efbc63a891776fa2bb9eb01907020bc9920beb6f3b71c39.jpg",
629
+ "image_caption": [
630
+ "Figure 4. Left: Generation of the 2D image prompt for the first world tile at $x = 0$ and $y = 0$ . The image generator $\\Phi_{2\\mathrm{D}}$ is conditioned on $q = p_{00} \\cdot p_{\\star}$ and tasked with inpainting the base image $B$ within the masked region $M$ . Right: Without 'framing' the image using $B$ and $M$ , the output image is unsuitable for tiling."
631
+ ],
632
+ "image_footnote": [],
633
+ "bbox": [
634
+ 517,
635
+ 608,
636
+ 898,
637
+ 705
638
+ ],
639
+ "page_idx": 3
640
+ },
641
+ {
642
+ "type": "text",
643
+ "text": "Context-aware generation. Except for the first tile $(0,0)$ , tiles are generated in the context of the part of the world generated before. To account for this context, for tiles with $x,y > 0$ , we modify the base image $B$ and the mask $M$ as shown in Fig. 5. For the base image $B$ , instead of the slab, we render the previously generated portion of the 3D",
644
+ "bbox": [
645
+ 511,
646
+ 810,
647
+ 905,
648
+ 900
649
+ ],
650
+ "page_idx": 3
651
+ },
652
+ {
653
+ "type": "page_number",
654
+ "text": "27588",
655
+ "bbox": [
656
+ 478,
657
+ 944,
658
+ 517,
659
+ 955
660
+ ],
661
+ "page_idx": 3
662
+ },
663
+ {
664
+ "type": "image",
665
+ "img_path": "images/b29a3c18e2c8226d9d62dbfbe635a31d97e4868ecfeefa7b61a728d350c983d4.jpg",
666
+ "image_caption": [
667
+ "Figure 5. Left: Base image $B$ and inpainting mask $M$ (white overlay) used to prompt the image generator $\\Phi_{2D}$ to generate an image for a world tile at $x > 0, y > 0$ . Right: Result of inpainting."
668
+ ],
669
+ "image_footnote": [],
670
+ "bbox": [
671
+ 93,
672
+ 88,
673
+ 281,
674
+ 205
675
+ ],
676
+ "page_idx": 4
677
+ },
678
+ {
679
+ "type": "image",
680
+ "img_path": "images/3fcc2d012a5de2e545b0151afb6b5267eeef592dab923b261adaca0cf092108e.jpg",
681
+ "image_caption": [
682
+ "Figure 7. Left: Isolating the image of the new tile from $I(x,y)$ . Right: Adding a slightly larger base underneath the tile."
683
+ ],
684
+ "image_footnote": [],
685
+ "bbox": [
686
+ 289,
687
+ 88,
688
+ 482,
689
+ 205
690
+ ],
691
+ "page_idx": 4
692
+ },
693
+ {
694
+ "type": "text",
695
+ "text": "world (which is constructed as described next in Secs. 3.3 and 3.4), providing context to the inpainting network. We also modify the mask $M$ to avoid covering tiles already generated to the left (west), i.e., for a tile $(i,j) \\in \\mathcal{T}$ , these are tiles $\\{(x,y) \\in \\mathcal{T} : x < i \\wedge y = j\\}$ . See the appendix for a comparison of different masking schemes.",
696
+ "bbox": [
697
+ 89,
698
+ 282,
699
+ 483,
700
+ 375
701
+ ],
702
+ "page_idx": 4
703
+ },
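As a tiny sketch of this bookkeeping (function name ours): when generating tile $(i, j)$, the west tiles stay unmasked so the inpainter only fills the new tile's region.

```python
# The already-generated west neighbors {(x, y) : x < i, y = j} of tile
# (i, j), which are excluded from the inpainting mask M.

def west_tiles(i: int, j: int):
    return [(x, j) for x in range(i)]
```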
704
+ {
705
+ "type": "text",
706
+ "text": "To ensure continuity of the ground, before rendering the contextual image, we trim any 3D geometry that is sufficiently high to occlude the tile we wish to generate, as shown in Fig. 6 (observe the trimmed structures in Fig. 5).",
707
+ "bbox": [
708
+ 89,
709
+ 375,
710
+ 483,
711
+ 436
712
+ ],
713
+ "page_idx": 4
714
+ },
715
+ {
716
+ "type": "image",
717
+ "img_path": "images/4392d5e68a2282a69ae3118b454c62818e07937d4f495ee4c90bb4191df5494c.jpg",
718
+ "image_caption": [
719
+ "Figure 6. Trimming tall structures for 2D prompting"
720
+ ],
721
+ "image_footnote": [],
722
+ "bbox": [
723
+ 142,
724
+ 446,
725
+ 277,
726
+ 549
727
+ ],
728
+ "page_idx": 4
729
+ },
730
+ {
731
+ "type": "image",
732
+ "img_path": "images/fbd1843fc804906894cc07e039fef52bf16cde5b8440e8f221d9bd6475217614.jpg",
733
+ "image_caption": [],
734
+ "image_footnote": [],
735
+ "bbox": [
736
+ 295,
737
+ 446,
738
+ 431,
739
+ 549
740
+ ],
741
+ "page_idx": 4
742
+ },
743
+ {
744
+ "type": "text",
745
+ "text": "The appendix discusses a special case for tiles at the boundaries of the world (see Appendix A.2).",
746
+ "bbox": [
747
+ 89,
748
+ 590,
749
+ 482,
750
+ 622
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "text",
756
+ "text": "3.3. Prompting the 3D Generator",
757
+ "text_level": 1,
758
+ "bbox": [
759
+ 89,
760
+ 630,
761
+ 349,
762
+ 647
763
+ ],
764
+ "page_idx": 4
765
+ },
766
+ {
767
+ "type": "text",
768
+ "text": "Given the tile image $I(x,y)$ obtained from the 2D image generator in Sec. 3.2, the next goal is to generate a corresponding 3D reconstruction $G(x,y)$ of the tile using an image-to-3D model $\\Phi_{\\mathrm{3D}}$ . We opt for a robust 3D generator and select TRELLIS [63] due to its strong performance, ability to generate both shape and texture, and latent space structure, which is easy to manipulate for blending, as we show later in Sec. 3.4. TRELLIS produces the reconstructions $G(x,y)$ in the form of 3D Gaussian Splats (3DGS).",
769
+ "bbox": [
770
+ 89,
771
+ 652,
772
+ 482,
773
+ 787
774
+ ],
775
+ "page_idx": 4
776
+ },
777
+ {
778
+ "type": "text",
779
+ "text": "Thus, 3D reconstruction amounts to drawing a sample $G(x,y) \\sim \\Phi_{\\mathrm{3D}}(J(x,y))$ from the image-to-3D generator $\\Phi_{\\mathrm{3D}}$ . Rather than conditioning on the image $I(x,y)$ , we use a pre-processed version $J(x,y)$ , as described next.",
780
+ "bbox": [
781
+ 89,
782
+ 789,
783
+ 483,
784
+ 849
785
+ ],
786
+ "page_idx": 4
787
+ },
788
+ {
789
+ "type": "text",
790
+ "text": "2D foreground extraction and rebasing. Recall that the image $I(x,y)$ output by the 2D generator in Sec. 3.2 is an image of the tile and its context. However, the 3D generator",
791
+ "bbox": [
792
+ 89,
793
+ 854,
794
+ 483,
795
+ 900
796
+ ],
797
+ "page_idx": 4
798
+ },
799
+ {
800
+ "type": "image",
801
+ "img_path": "images/1d3c92958393f3d4bccfdd4b9b5d630735e57ba06b3072eb22ce4dd23b621641.jpg",
802
+ "image_caption": [],
803
+ "image_footnote": [],
804
+ "bbox": [
805
+ 517,
806
+ 89,
807
+ 901,
808
+ 205
809
+ ],
810
+ "page_idx": 4
811
+ },
812
+ {
813
+ "type": "text",
814
+ "text": "$\\Phi_{3D}$ expects the input image to focus only on the object that needs to be reconstructed, i.e., the new tile. The first step is to extract from $I(x,y)$ only the part that corresponds to the new tile, which we achieve using rembg [15] with alpha matting [4], as shown in Fig. 7.",
815
+ "bbox": [
816
+ 511,
817
+ 272,
818
+ 905,
819
+ 348
820
+ ],
821
+ "page_idx": 4
822
+ },
823
+ {
824
+ "type": "text",
825
+ "text": "The resulting image is narrowly cropped around the new tile. Similar to Sec. 3.2, we found it beneficial to include a slab base for the tile, an operation we call 'rebasing,' as shown in Fig. 7. We simply compose the image of the tile with a slightly larger gray slab (in 2D) to obtain $J(x,y)$ , which effectively provides a 'frame' for the 3D generator to work with. The base is reconstructed as part of the tile's geometry, which can be used for validation and as an easy-to-detect handle for further 3D processing.",
826
+ "bbox": [
827
+ 511,
828
+ 348,
829
+ 905,
830
+ 484
831
+ ],
832
+ "page_idx": 4
833
+ },
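A hedged sketch of extraction and rebasing: rembg [15] with alpha matting [4] cuts the tile out of $I(x,y)$, and the cut-out is composited onto a slightly larger gray slab to form $J(x,y)$. The slab geometry below is illustrative.

```python
# Foreground extraction and 'rebasing' sketch: cut out the tile, then
# composite it over a slightly larger gray slab (illustrative geometry).

from PIL import Image, ImageDraw
from rembg import remove

def extract_and_rebase(I: Image.Image) -> Image.Image:
    fg = remove(I, alpha_matting=True)        # RGBA tile cut-out
    fg = fg.crop(fg.getbbox())                # tight crop around the tile
    w, h = fg.size
    pad = int(0.06 * w)                       # slightly larger base
    bw = w + 2 * pad
    J = Image.new("RGBA", (bw, h + pad), (0, 0, 0, 0))
    cy = h + pad - bw // 8                    # slab center height
    ImageDraw.Draw(J).polygon(
        [(bw // 2, cy - bw // 4), (bw - 1, cy),
         (bw // 2, cy + bw // 4), (0, cy)],
        fill=(128, 128, 128, 255))            # gray slab under the tile
    J.alpha_composite(fg, (pad, 0))
    return J
```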
834
+ {
835
+ "type": "text",
836
+ "text": "The 'rebased' image $J(x,y)$ is fed to the 3D generator $\\Phi_{\\mathrm{3D}}$ to obtain the 3D reconstruction $G(x,y)$ of the tile, in the form of 3D Gaussian Splats. The effect of rebasing on the 3D result is shown in Fig. 8.",
837
+ "bbox": [
838
+ 511,
839
+ 484,
840
+ 906,
841
+ 546
842
+ ],
843
+ "page_idx": 4
844
+ },
845
+ {
846
+ "type": "image",
847
+ "img_path": "images/ee57d0ae2a2fd1326b7cf21360aae9b2239b925a01ec165dd42744d2cfc9a279.jpg",
848
+ "image_caption": [
849
+ "Figure 8. Top: 3D reconstruction using a tight base. Bottom: 3D reconstruction with a slightly larger base, which helps to keep the tile's geometry above ground (see the back of the reconstruction) and creates an easy-to-detect 3D base."
850
+ ],
851
+ "image_footnote": [],
852
+ "bbox": [
853
+ 514,
854
+ 561,
855
+ 910,
856
+ 734
857
+ ],
858
+ "page_idx": 4
859
+ },
860
+ {
861
+ "type": "text",
862
+ "text": "3D geometric validation. Because the generators are imperfect, we verify the 3D reconstruction $G(x,y)$ to ensure that it is of sufficient quality. If not, we discard it and regenerate the tile using a different random seed. To verify the tile, we use a few heuristics to check that the tile's ge",
863
+ "bbox": [
864
+ 511,
865
+ 825,
866
+ 905,
867
+ 900
868
+ ],
869
+ "page_idx": 4
870
+ },
871
+ {
872
+ "type": "page_number",
873
+ "text": "27589",
874
+ "bbox": [
875
+ 478,
876
+ 944,
877
+ 519,
878
+ 955
879
+ ],
880
+ "page_idx": 4
881
+ },
882
+ {
883
+ "type": "text",
884
+ "text": "ometry occupies a square region of sufficient size and that the base of the tile has been reconstructed faithfully. Please see Appendix A.3 for more details.",
885
+ "bbox": [
886
+ 89,
887
+ 90,
888
+ 482,
889
+ 137
890
+ ],
891
+ "page_idx": 5
892
+ },
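A hedged sketch of such checks on a binary occupancy grid of the reconstructed tile (the paper's actual heuristics are in Appendix A.3; thresholds here are illustrative). It mirrors the metrics of Tab. 2: base area, squareness of the footprint, and completeness of the border.

```python
# Illustrative geometric validation of a reconstructed tile.

import numpy as np

def validate_tile(occ: np.ndarray, min_area: int = 3000) -> bool:
    """occ: (R, R, R) boolean voxels; occ[..., 0] is the base layer."""
    base = occ[..., 0]
    area = int(base.sum())
    if area == 0:
        return False
    xs, ys = np.nonzero(base)
    w = xs.max() - xs.min() + 1
    d = ys.max() - ys.min() + 1
    squareness = min(w, d) / max(w, d)       # 1.0 for a square footprint
    rim = np.concatenate([base[0], base[-1], base[:, 0], base[:, -1]])
    completeness = rim.mean()                # fraction of filled rim voxels
    return area >= min_area and squareness > 0.95 and completeness > 0.9
```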
893
+ {
894
+ "type": "text",
895
+ "text": "3D post-processing. At this point, we have verified the 3D reconstruction $G(x,y)$ for the tile as a mixture of 3D Gaussians. However, the actual 3D footprint, orientation, and size of the tile are controlled by the 3D generator and are inconsistent. The post-processing step utilizes simple heuristics to crop the subset of 3D Gaussians that form the actual tile, remove the extended base, rescale them to fill the full tile, and reorient them to face the same way as the 2D image prompt. We explain this in more detail in Appendix A.4.",
896
+ "bbox": [
897
+ 89,
898
+ 142,
899
+ 483,
900
+ 280
901
+ ],
902
+ "page_idx": 5
903
+ },
904
+ {
905
+ "type": "text",
906
+ "text": "3.4. 3D Blending",
907
+ "text_level": 1,
908
+ "bbox": [
909
+ 89,
910
+ 289,
911
+ 225,
912
+ 306
913
+ ],
914
+ "page_idx": 5
915
+ },
916
+ {
917
+ "type": "text",
918
+ "text": "At this stage of the pipeline, we have reconstructed all 3D tiles $G(x,y)$ , $(x,y) \\in \\mathcal{T}$ . Due to the prompting and post-processing steps in Secs. 3.2 and 3.3, the tiles are already approximately aligned and oriented correctly, including being roughly level with the ground.",
919
+ "bbox": [
920
+ 89,
921
+ 311,
922
+ 483,
923
+ 386
924
+ ],
925
+ "page_idx": 5
926
+ },
927
+ {
928
+ "type": "text",
929
+ "text": "However, they are not perfectly aligned, particularly at their boundaries. This misalignment arises because TREL-LIS does not reconstruct the input images exactly, and only a single view of each tile is provided, which only indirectly controls the reconstruction of the back of the tile. Further, while $G(x,y)$ is represented as 3DGS, these imperfections are not easily addressed in that representation space.",
930
+ "bbox": [
931
+ 89,
932
+ 387,
933
+ 483,
934
+ 493
935
+ ],
936
+ "page_idx": 5
937
+ },
938
+ {
939
+ "type": "text",
940
+ "text": "We address this issue by blending the tiles in TRELLIS latent space to create a coherent and continuous 3D scene. As explained next, we first repaint the boundary region in 2D. Then, we align the tiles in the latent voxel grid of $\\Phi_{3\\mathrm{D}}$ . Finally, we resample the voxel features in a narrow boundary region between two tiles.",
941
+ "bbox": [
942
+ 89,
943
+ 493,
944
+ 483,
945
+ 585
946
+ ],
947
+ "page_idx": 5
948
+ },
949
+ {
950
+ "type": "text",
951
+ "text": "Blending in 2D. To blend the latents of two neighboring tiles, we first predict the appearance of the boundary between the two tiles. To achieve this, we place the two 3D tiles next to each other, render a frontal view, and inpaint the middle region of the rendering (Fig. 2) using $\\Phi_{2\\mathrm{D}}$ . This results in a blended image, which we use to condition $\\Phi_{3\\mathrm{D}}$ .",
952
+ "bbox": [
953
+ 89,
954
+ 590,
955
+ 482,
956
+ 681
957
+ ],
958
+ "page_idx": 5
959
+ },
960
+ {
961
+ "type": "text",
962
+ "text": "Unifying the tile size in 3D latent space. Recall that, due to the rebasing, $G(x,y)$ contains a 3D base. While we have removed the base in 3DGS space, we have yet to do the same in the latent space. We use the same cuts applied in 3DGS space to crop the latents, rounding them to account for the discrete nature of the latent voxel grid.",
963
+ "bbox": [
964
+ 89,
965
+ 688,
966
+ 482,
967
+ 779
968
+ ],
969
+ "page_idx": 5
970
+ },
971
+ {
972
+ "type": "text",
973
+ "text": "Because these cuts might differ for two neighboring tiles, $\\gamma^1$ and $\\gamma^2$ , we may need to upsample the latents to ensure $\\gamma^1, \\gamma^2 \\in \\mathbb{R}^{D \\times R \\times R \\times R}$ . Here, $\\gamma$ represents $D$ -dimensional features in the $R$ -sized 3D grid that TRELLIS denoises.",
974
+ "bbox": [
975
+ 89,
976
+ 779,
977
+ 482,
978
+ 839
979
+ ],
980
+ "page_idx": 5
981
+ },
982
+ {
983
+ "type": "text",
984
+ "text": "We found that naively upsampling the latents by interpolation leads to poor reconstructions, as shown in Fig. 9. Instead, we propose a different scheme where we resample the features (called structured latents in [62]) after upsam",
985
+ "bbox": [
986
+ 89,
987
+ 839,
988
+ 483,
989
+ 901
990
+ ],
991
+ "page_idx": 5
992
+ },
993
+ {
994
+ "type": "image",
995
+ "img_path": "images/344713b811212ab31832aa331308e561ca83466e920e3f012feb127a7f3b7f65.jpg",
996
+ "image_caption": [
997
+ "Figure 9. Upsampling sparse latents. We need to resize or up-sample sparse latents in order to stitch them. Due to the sparsity of the latents and the behaviour of the latent decoder, naively resampling in latent space leads to artifacts. Our proposed resizing of the sparse latents better preserves textures and fine structures."
998
+ ],
999
+ "image_footnote": [],
1000
+ "bbox": [
1001
+ 526,
1002
+ 88,
1003
+ 893,
1004
+ 189
1005
+ ],
1006
+ "page_idx": 5
1007
+ },
1008
+ {
1009
+ "type": "text",
1010
+ "text": "pling the latent voxel grid of each tile. First, we upsample the cropped occupancy volume that TRELLIS predicted to the original resolution $V \\in \\{0,1\\}^{R \\times R \\times R}$ . Next, we denoise a new set of latents $\\gamma$ on the upsampled occupancy volume. To preserve the details and textures of the original 3D tile, we render it from multiple views and jointly condition the structured latent inference on all of them. In practice, when denoising with multiple conditioning views, at each timestep, the denoising step is computed as the average denoising step across all views. We show that this upsampling scheme leads to superior reconstructions in Fig. 9.",
1011
+ "bbox": [
1012
+ 511,
1013
+ 299,
1014
+ 906,
1015
+ 465
1016
+ ],
1017
+ "page_idx": 5
1018
+ },
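A sketch of the multi-view conditioned denoising just described: at each timestep, the applied step is the average of the per-view denoising steps. `denoise_step` is a hypothetical wrapper around the TRELLIS structured-latent denoiser conditioned on a single rendered view.

```python
# Average the denoising step across all conditioning views per timestep.

import torch

def multi_view_denoise(latent, views, timesteps, denoise_step):
    for t in timesteps:
        steps = torch.stack([denoise_step(latent, t, v) for v in views])
        latent = steps.mean(dim=0)   # average step across all views
    return latent
```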
1019
+ {
1020
+ "type": "text",
1021
+ "text": "Now, mirroring the base cropping operation in 3DGS space, we have tiles of matching sizes in latent space.",
1022
+ "bbox": [
1023
+ 511,
1024
+ 467,
1025
+ 905,
1026
+ 498
1027
+ ],
1028
+ "page_idx": 5
1029
+ },
1030
+ {
1031
+ "type": "text",
1032
+ "text": "Blending in 3D. Finally, we use $\\Phi_{\\mathrm{3D}}$ to blend tiles. We take the latents of the two tiles $\\gamma^1$ and $\\gamma^2$ , where $\\gamma^1, \\gamma^2 \\in \\mathbb{R}^{D \\times R \\times R \\times R}$ after upsampling. We combine them into a new volume $\\gamma$ , where the side where they meet is in the center:",
1033
+ "bbox": [
1034
+ 511,
1035
+ 505,
1036
+ 906,
1037
+ 579
1038
+ ],
1039
+ "page_idx": 5
1040
+ },
1041
+ {
1042
+ "type": "equation",
1043
+ "text": "\n$$\n\\gamma_ {:, x, y, z} = \\left\\{ \\begin{array}{l l} \\gamma_ {:, x + R / 2, y, z} ^ {1}, & \\text {i f} x < R / 2, \\\\ \\gamma_ {:, x - R / 2, y, z} ^ {2}, & \\text {i f} x \\geq R / 2. \\end{array} \\right.\n$$\n",
1044
+ "text_format": "latex",
1045
+ "bbox": [
1046
+ 575,
1047
+ 593,
1048
+ 841,
1049
+ 635
1050
+ ],
1051
+ "page_idx": 5
1052
+ },
1053
+ {
1054
+ "type": "text",
1055
+ "text": "We apply the denoising function $\\Omega$ , which is the latent denoiser of $\\Phi_{3D}$ , to the volume $\\gamma$ , but only within the middle region where we have applied the stitch, i.e., for $x \\in [R/2 - r, R/2 + r]$ for some $r < R/2$ , while keeping the rest fixed. Formally, we initialize $\\tilde{\\gamma} \\sim \\mathcal{N}(0, I)$ and at each denoising step $t$ , we update $\\tilde{\\gamma}$ as:",
1056
+ "bbox": [
1057
+ 511,
1058
+ 648,
1059
+ 906,
1060
+ 742
1061
+ ],
1062
+ "page_idx": 5
1063
+ },
1064
+ {
1065
+ "type": "equation",
1066
+ "text": "\n$$\n\\tilde {\\gamma} _ {t + 1,: x, y, z} = \\left\\{ \\begin{array}{l l} \\Omega (\\tilde {\\gamma} _ {t,: x, y, z}), & \\text {i f} | x - R / 2 | \\leq r, \\\\ \\gamma_ {t + 1,: x, y, z}, & \\text {o t h e r w i s e}. \\end{array} \\right.\n$$\n",
1067
+ "text_format": "latex",
1068
+ "bbox": [
1069
+ 544,
1070
+ 755,
1071
+ 870,
1072
+ 797
1073
+ ],
1074
+ "page_idx": 5
1075
+ },
1076
+ {
1077
+ "type": "text",
1078
+ "text": "Here, $\\gamma_{t}$ is obtained by adding noise to the original $\\gamma$ at the corresponding noise level for step $t$ . In practice, we only update the structured latents, keeping the sparse structure (latent voxel grid) fixed: The low spatial resolution of the sparse structure ( $R = 16$ , compared to $R = 64$ for the structured latents) is too coarse for choosing an adequate $r$ .",
1079
+ "bbox": [
1080
+ 511,
1081
+ 809,
1082
+ 906,
1083
+ 902
1084
+ ],
1085
+ "page_idx": 5
1086
+ },
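A sketch of the two equations above: stitch the facing halves of the tile latents, then re-denoise only the band $|x - R/2| \leq r$, resetting everything else to the noised original at each step. `denoiser` (standing in for $\Omega$) and `add_noise` are hypothetical wrappers around the TRELLIS latent denoiser and its noise schedule.

```python
# Latent-space stitching plus masked 'repaint'-style denoising of the seam.

import torch

def blend_latents(g1, g2, r, timesteps, denoiser, add_noise):
    D, R = g1.shape[0], g1.shape[1]
    # Right half of tile 1 meets left half of tile 2 at x = R/2.
    gamma = torch.cat([g1[:, R // 2:], g2[:, : R // 2]], dim=1)
    seam = torch.zeros(R, dtype=torch.bool)
    seam[R // 2 - r : R // 2 + r + 1] = True    # |x - R/2| <= r
    tilde = torch.randn_like(gamma)
    for t in timesteps:
        tilde = denoiser(tilde, t)              # Omega step everywhere
        fixed = add_noise(gamma, t)             # gamma_{t+1}: noised original
        tilde[:, ~seam] = fixed[:, ~seam]       # keep it outside the seam
    return tilde
```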
1087
+ {
1088
+ "type": "page_number",
1089
+ "text": "27590",
1090
+ "bbox": [
1091
+ 478,
1092
+ 944,
1093
+ 519,
1094
+ 957
1095
+ ],
1096
+ "page_idx": 5
1097
+ },
1098
+ {
1099
+ "type": "table",
1100
+ "img_path": "images/9ad3e17ebe27ca2bbc53b4fc0c71d9bb80096e17aec581f2627e6f94c28aec0e.jpg",
1101
+ "table_caption": [],
1102
+ "table_footnote": [],
1103
+ "table_body": "<table><tr><td colspan=\"5\">Win Rate (%)</td></tr><tr><td>Overall</td><td>Geometry</td><td>Exploration</td><td>Diversity</td><td>Realism</td></tr><tr><td>90.9</td><td>81.8</td><td>90.9</td><td>90.9</td><td>86.4</td></tr></table>",
1104
+ "bbox": [
1105
+ 99,
1106
+ 88,
1107
+ 475,
1108
+ 147
1109
+ ],
1110
+ "page_idx": 6
1111
+ },
1112
+ {
1113
+ "type": "image",
1114
+ "img_path": "images/9c2a00b23f4ed6887b163eb7b2b5a179ecc7e9e2b703f284e8cd9fa1d8162d74.jpg",
1115
+ "image_caption": [
1116
+ "Figure 10. Left: $2 \\times 2$ grid generated with our method, not taking context into account—here, the scale of the buildings is not consistent. Right: Generated with our method using the same prompts, where context is taken into account as described in Sec. 3.2."
1117
+ ],
1118
+ "image_footnote": [],
1119
+ "bbox": [
1120
+ 93,
1121
+ 226,
1122
+ 486,
1123
+ 308
1124
+ ],
1125
+ "page_idx": 6
1126
+ },
1127
+ {
1128
+ "type": "text",
1129
+ "text": "4. Experiments",
1130
+ "text_level": 1,
1131
+ "bbox": [
1132
+ 89,
1133
+ 414,
1134
+ 223,
1135
+ 431
1136
+ ],
1137
+ "page_idx": 6
1138
+ },
1139
+ {
1140
+ "type": "text",
1141
+ "text": "Experimental details. We generate the text prompts using ChatGPT o3-mini-high. For the 2D inpainter, we use the Flux ControlNet of [1].",
1142
+ "bbox": [
1143
+ 89,
1144
+ 439,
1145
+ 483,
1146
+ 486
1147
+ ],
1148
+ "page_idx": 6
1149
+ },
1150
+ {
1151
+ "type": "text",
1152
+ "text": "Human preference. We evaluate human preference for the results generated by our method compared to those obtained with BlockFusion [60]. In particular, we compare a 'city' scene, showing the entire scene as well as close-up detail views. As seen in Tab. 1, participants $(n = 22)$ find our method better overall, with superior geometry, realism, and diversity.",
1153
+ "bbox": [
1154
+ 89,
1155
+ 491,
1156
+ 483,
1157
+ 598
1158
+ ],
1159
+ "page_idx": 6
1160
+ },
1161
+ {
1162
+ "type": "text",
1163
+ "text": "4.1. Ablations",
1164
+ "text_level": 1,
1165
+ "bbox": [
1166
+ 89,
1167
+ 608,
1168
+ 202,
1169
+ 623
1170
+ ],
1171
+ "page_idx": 6
1172
+ },
1173
+ {
1174
+ "type": "text",
1175
+ "text": "Here, we ablate several components of our approach to demonstrate the importance of each.",
1176
+ "bbox": [
1177
+ 89,
1178
+ 630,
1179
+ 482,
1180
+ 661
1181
+ ],
1182
+ "page_idx": 6
1183
+ },
1184
+ {
1185
+ "type": "text",
1186
+ "text": "Building a grid. A naive approach to generating a 3D scene involves querying the image generator to produce an image of a large-scale scene (using our 2D image prompt setup) and then obtaining the entire 3D world directly with TRELLIS. To achieve the same level of control provided by our method, the textual prompt needs to be highly detailed and include layout instructions. However, we found that neither precise nor abstract prompts were effective at steering the generations of Flux (for details, see Appendix A.4). This highlights the effectiveness of our grid-based approach in generating highly detailed 3D worlds at scale.",
1187
+ "bbox": [
1188
+ 89,
1189
+ 667,
1190
+ 482,
1191
+ 834
1192
+ ],
1193
+ "page_idx": 6
1194
+ },
1195
+ {
1196
+ "type": "text",
1197
+ "text": "2D prompting context. We remove the context from neighboring tiles, as described in Sec. 3.2. When this is done, each tile is sampled independently, and the relative scale between objects becomes inconsistent (Fig. 10).",
1198
+ "bbox": [
1199
+ 89,
1200
+ 839,
1201
+ 483,
1202
+ 902
1203
+ ],
1204
+ "page_idx": 6
1205
+ },
1206
+ {
1207
+ "type": "table",
1208
+ "img_path": "images/01cd3fbe18ea3efae6718173e0fdfc5c85f329ccae09bf7a6e576944c7d4e7a0.jpg",
1209
+ "table_caption": [
1210
+ "Table 1. Win rates of our method against BlockFusion. We asked participants to select which scene they prefer overall, as well as which one has better geometry, would be more interesting to explore, is more diverse, and has better realism."
1211
+ ],
1212
+ "table_footnote": [],
1213
+ "table_body": "<table><tr><td>Method</td><td>Base Area</td><td>Squareness ↑</td><td>Completeness ↑</td></tr><tr><td>No Rebasing</td><td>2271</td><td>0.92</td><td>0.73</td></tr><tr><td>Ours</td><td>4096</td><td>1.00</td><td>1.00</td></tr></table>",
1214
+ "bbox": [
1215
+ 514,
1216
+ 88,
1217
+ 905,
1218
+ 147
1219
+ ],
1220
+ "page_idx": 6
1221
+ },
1222
+ {
1223
+ "type": "table",
1224
+ "img_path": "images/8d1e98e2b2158581b66a55677acce0ae7b984b27f69f43375e150a8e8a0d6282.jpg",
1225
+ "table_caption": [
1226
+ "Table 2. Average tile 3D geometry metrics for an approach without rebasing and our method. Rebasing is crucial to ensure a tile is square and its base has been reconstructed faithfully. The metrics we use are the area of the base in voxels, a measure for the 'squareness' of the base, and how many border voxels have been faithfully reconstructed. For details, please refer to the appendix."
1227
+ ],
1228
+ "table_footnote": [],
1229
+ "table_body": "<table><tr><td>Method</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>FID ↓</td><td>KID ↓</td></tr><tr><td>Naïve upsampling</td><td>0.5914</td><td>0.3093</td><td>200.5</td><td>0.243</td></tr><tr><td>Ours (single frame)</td><td>0.3517</td><td>0.5149</td><td>111.6</td><td>0.069</td></tr><tr><td>Ours (multi frame)</td><td>0.3212</td><td>0.5312</td><td>89.1</td><td>0.051</td></tr></table>",
1230
+ "bbox": [
1231
+ 524,
1232
+ 255,
1233
+ 895,
1234
+ 325
1235
+ ],
1236
+ "page_idx": 6
1237
+ },
1238
+ {
1250
+ "type": "text",
1251
+ "text": "Rebasing. To place tiles on a grid, they need to be square (otherwise the grid would be jagged), and their base must be reconstructed faithfully (clearly delimiting where the tile stops). Without rebasing, the geometry generated by TREL-LIS might extend beyond the base, making the tile's 'true' extent difficult to detect, as shown in Fig. 8. We ablate the effect of rebasing using a small $2 \\times 2$ scene to minimize the effect of error accumulation. As seen in Tab. 2, without rebasing, TREL-LIS generates tiles that are, on average, neither perfectly square nor have a solid border.",
1252
+ "bbox": [
1253
+ 511,
1254
+ 434,
1255
+ 906,
1256
+ 585
1257
+ ],
1258
+ "page_idx": 6
1259
+ },
1260
+ {
1261
+ "type": "text",
1262
+ "text": "Latent upsampling. We sample 10 random views from each of 200 tiles generated by TRELLIS and compute perceptual metrics in Tab. 3 when upsampling latents with our proposed approach in Sec. 3.4 versus naive interpolation. Our proposed method leads to better results across a range of metrics, even when using a single conditioning frame.",
1263
+ "bbox": [
1264
+ 511,
1265
+ 593,
1266
+ 905,
1267
+ 684
1268
+ ],
1269
+ "page_idx": 6
1270
+ },
1271
+ {
1272
+ "type": "text",
1273
+ "text": "3D blending. In Fig. 12, we generate a scene without applying 3D blending (Sec. 3.4), resulting in visible discontinuities between the tiles.",
1274
+ "bbox": [
1275
+ 511,
1276
+ 691,
1277
+ 905,
1278
+ 737
1279
+ ],
1280
+ "page_idx": 6
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "text": "4.2. Qualitative Results",
1285
+ "text_level": 1,
1286
+ "bbox": [
1287
+ 511,
1288
+ 750,
1289
+ 694,
1290
+ 766
1291
+ ],
1292
+ "page_idx": 6
1293
+ },
1294
+ {
1295
+ "type": "text",
1296
+ "text": "We present example scenes generated by our method in Fig. 1. Additionally, we show detailed views highlighting the quality and diversity of the scenes. Please see the appendix for many more examples.",
1297
+ "bbox": [
1298
+ 511,
1299
+ 772,
1300
+ 905,
1301
+ 834
1302
+ ],
1303
+ "page_idx": 6
1304
+ },
1305
+ {
1306
+ "type": "text",
1307
+ "text": "Exploring a generated world. SynCity produces explorable worlds that are easy to navigate. To demonstrate this, we sample trajectories and 'walk into' the generated 3D worlds (Fig. 11). Pre-made skybox textures are added",
1308
+ "bbox": [
1309
+ 511,
1310
+ 839,
1311
+ 906,
1312
+ 901
1313
+ ],
1314
+ "page_idx": 6
1315
+ },
1316
+ {
1317
+ "type": "page_number",
1318
+ "text": "27591",
1319
+ "bbox": [
1320
+ 478,
1321
+ 944,
1322
+ 517,
1323
+ 957
1324
+ ],
1325
+ "page_idx": 6
1326
+ },
1327
+ {
1328
+ "type": "image",
1329
+ "img_path": "images/4949890d9511b0f49b759ba5c818102f5a6eaa8e16a6e42c8abc143abb41d08e.jpg",
1330
+ "image_caption": [],
1331
+ "image_footnote": [],
1332
+ "bbox": [
1333
+ 96,
1334
+ 88,
1335
+ 205,
1336
+ 175
1337
+ ],
1338
+ "page_idx": 7
1339
+ },
1340
+ {
1341
+ "type": "image",
1342
+ "img_path": "images/dcd0a6ea556e2e67826a8425cc557a1fa220ea7c302bd39655fb6936e5d58ba6.jpg",
1343
+ "image_caption": [],
1344
+ "image_footnote": [],
1345
+ "bbox": [
1346
+ 210,
1347
+ 88,
1348
+ 321,
1349
+ 175
1350
+ ],
1351
+ "page_idx": 7
1352
+ },
1353
+ {
1354
+ "type": "image",
1355
+ "img_path": "images/9209c41848fe4c005ab121ec1a1aca7fe2415e3dff280c9634c0db1bed8e1387.jpg",
1356
+ "image_caption": [],
1357
+ "image_footnote": [],
1358
+ "bbox": [
1359
+ 326,
1360
+ 88,
1361
+ 437,
1362
+ 175
1363
+ ],
1364
+ "page_idx": 7
1365
+ },
1366
+ {
1367
+ "type": "image",
1368
+ "img_path": "images/cc7338cffa49f244652417102d6bbb8ea47247478e44bbddce71ce88e6894f4b.jpg",
1369
+ "image_caption": [],
1370
+ "image_footnote": [],
1371
+ "bbox": [
1372
+ 442,
1373
+ 88,
1374
+ 553,
1375
+ 175
1376
+ ],
1377
+ "page_idx": 7
1378
+ },
1379
+ {
1380
+ "type": "image",
1381
+ "img_path": "images/601f125b72b6435cf8cd282e935b92f130b7e334b8d12e91c93d59015f0e0223.jpg",
1382
+ "image_caption": [],
1383
+ "image_footnote": [],
1384
+ "bbox": [
1385
+ 558,
1386
+ 88,
1387
+ 669,
1388
+ 175
1389
+ ],
1390
+ "page_idx": 7
1391
+ },
1392
+ {
1393
+ "type": "image",
1394
+ "img_path": "images/943da4a153285a43c9a96216a754fc278eec7b996c4519585377a2a7ac539956.jpg",
1395
+ "image_caption": [],
1396
+ "image_footnote": [],
1397
+ "bbox": [
1398
+ 674,
1399
+ 88,
1400
+ 785,
1401
+ 175
1402
+ ],
1403
+ "page_idx": 7
1404
+ },
1405
+ {
1406
+ "type": "image",
1407
+ "img_path": "images/223bfe20f6031ba604a9d3f5f569d8a75888b5e44539414b956c93b79030c5ec.jpg",
1408
+ "image_caption": [],
1409
+ "image_footnote": [],
1410
+ "bbox": [
1411
+ 790,
1412
+ 88,
1413
+ 901,
1414
+ 175
1415
+ ],
1416
+ "page_idx": 7
1417
+ },
1418
+ {
1419
+ "type": "image",
1420
+ "img_path": "images/dbadda5f459d18a4e623ac68ca7ba6feb0e8af6bcaf275656100812bfdf7e7ce.jpg",
1421
+ "image_caption": [],
1422
+ "image_footnote": [],
1423
+ "bbox": [
1424
+ 96,
1425
+ 178,
1426
+ 207,
1427
+ 267
1428
+ ],
1429
+ "page_idx": 7
1430
+ },
1431
+ {
1432
+ "type": "image",
1433
+ "img_path": "images/11a99a57aee2464c5c0e5169849c49abce8668d7562c02616425e95883af283b.jpg",
1434
+ "image_caption": [],
1435
+ "image_footnote": [],
1436
+ "bbox": [
1437
+ 210,
1438
+ 178,
1439
+ 321,
1440
+ 267
1441
+ ],
1442
+ "page_idx": 7
1443
+ },
1444
+ {
1445
+ "type": "image",
1446
+ "img_path": "images/0826b10a6f103b48a2a5e5dfaf05b45dcfad8411868c2b214bf2f625c8afdfa1.jpg",
1447
+ "image_caption": [],
1448
+ "image_footnote": [],
1449
+ "bbox": [
1450
+ 326,
1451
+ 178,
1452
+ 437,
1453
+ 267
1454
+ ],
1455
+ "page_idx": 7
1456
+ },
1457
+ {
1458
+ "type": "image",
1459
+ "img_path": "images/7886c7d428333c77765d5fa3abba36966322ad1036c70a59ef272fb73d84a19d.jpg",
1460
+ "image_caption": [],
1461
+ "image_footnote": [],
1462
+ "bbox": [
1463
+ 442,
1464
+ 178,
1465
+ 553,
1466
+ 267
1467
+ ],
1468
+ "page_idx": 7
1469
+ },
1470
+ {
1471
+ "type": "image",
1472
+ "img_path": "images/b0daf9f185a12ab80f4284719923f4591e4e89774536f565a5ad61feba952ff5.jpg",
1473
+ "image_caption": [],
1474
+ "image_footnote": [],
1475
+ "bbox": [
1476
+ 558,
1477
+ 178,
1478
+ 669,
1479
+ 267
1480
+ ],
1481
+ "page_idx": 7
1482
+ },
1483
+ {
1484
+ "type": "image",
1485
+ "img_path": "images/a8a376174a44204dcca1aafb22e8abe0fedee2c5f2e313dcd96578277e1ffbca.jpg",
1486
+ "image_caption": [],
1487
+ "image_footnote": [],
1488
+ "bbox": [
1489
+ 674,
1490
+ 178,
1491
+ 785,
1492
+ 267
1493
+ ],
1494
+ "page_idx": 7
1495
+ },
1496
+ {
1497
+ "type": "image",
1498
+ "img_path": "images/c50955da3b398b91a5a85b275c98a2978f3dd1800e3819daa3a88f04cd1d1d1d.jpg",
1499
+ "image_caption": [],
1500
+ "image_footnote": [],
1501
+ "bbox": [
1502
+ 790,
1503
+ 178,
1504
+ 901,
1505
+ 267
1506
+ ],
1507
+ "page_idx": 7
1508
+ },
1509
+ {
1510
+ "type": "image",
1511
+ "img_path": "images/d1d5ef7362afee5c84c2a90981f0a43c6f9fbd31ddd9b1027edd3bedf5fd4866.jpg",
1512
+ "image_caption": [
1513
+ "Figure 11. Exploring a 3D world. We show camera walk-throughs that explore the generated 3D worlds. Please refer to the supplement for accompanying videos."
1514
+ ],
1515
+ "image_footnote": [],
1516
+ "bbox": [
1517
+ 94,
1518
+ 270,
1519
+ 207,
1520
+ 356
1521
+ ],
1522
+ "page_idx": 7
1523
+ },
1524
+ {
1525
+ "type": "image",
1526
+ "img_path": "images/859888a2add68234c85d596dd7243bf275696725d4e17db9cfa0b0903ca65ebd.jpg",
1527
+ "image_caption": [],
1528
+ "image_footnote": [],
1529
+ "bbox": [
1530
+ 210,
1531
+ 270,
1532
+ 321,
1533
+ 354
1534
+ ],
1535
+ "page_idx": 7
1536
+ },
1537
+ {
1538
+ "type": "image",
1539
+ "img_path": "images/5568cacb63542a7a8ef9abe44b2dbe56011af62fe74ee235a9d0e0da4500ae16.jpg",
1540
+ "image_caption": [],
1541
+ "image_footnote": [],
1542
+ "bbox": [
1543
+ 326,
1544
+ 270,
1545
+ 437,
1546
+ 354
1547
+ ],
1548
+ "page_idx": 7
1549
+ },
1550
+ {
1551
+ "type": "image",
1552
+ "img_path": "images/5582bc0fd5656cbb61f48d3a382cec8a8199fd7ea66d29b3f5d9adcee88dfe27.jpg",
1553
+ "image_caption": [],
1554
+ "image_footnote": [],
1555
+ "bbox": [
1556
+ 442,
1557
+ 270,
1558
+ 553,
1559
+ 354
1560
+ ],
1561
+ "page_idx": 7
1562
+ },
1563
+ {
1564
+ "type": "image",
1565
+ "img_path": "images/633cc8baa924ff7eff9ac8f2eed36833fab109a7bdd6a3f56824af78a9abdfde.jpg",
1566
+ "image_caption": [],
1567
+ "image_footnote": [],
1568
+ "bbox": [
1569
+ 558,
1570
+ 270,
1571
+ 669,
1572
+ 354
1573
+ ],
1574
+ "page_idx": 7
1575
+ },
1576
+ {
1577
+ "type": "image",
1578
+ "img_path": "images/97a6b06639440a1c90af0deeb3ac4d2cda3d81dc14dfc091016da3561bdb58ea.jpg",
1579
+ "image_caption": [],
1580
+ "image_footnote": [],
1581
+ "bbox": [
1582
+ 674,
1583
+ 270,
1584
+ 785,
1585
+ 354
1586
+ ],
1587
+ "page_idx": 7
1588
+ },
1589
+ {
1590
+ "type": "image",
1591
+ "img_path": "images/e8b3cfb9b6dd72e9ec80897a2f11e5f28aa4cd93dbcfb16a7d0779aa2b3dce2c.jpg",
1592
+ "image_caption": [],
1593
+ "image_footnote": [],
1594
+ "bbox": [
1595
+ 790,
1596
+ 270,
1597
+ 901,
1598
+ 354
1599
+ ],
1600
+ "page_idx": 7
1601
+ },
1602
+ {
1603
+ "type": "image",
1604
+ "img_path": "images/b7c35b1e0ea24ff69d2f8c40cc9040b66b07164cc652be64fe345896413c9c50.jpg",
1605
+ "image_caption": [
1606
+ "Before blending",
1607
+ "Figure 12. Left: Tiles before the 3D blending step. Right: After the 3D blending step. Previously visible boundaries between tiles are now well-blended, resulting in a more coherent appearance."
1608
+ ],
1609
+ "image_footnote": [],
1610
+ "bbox": [
1611
+ 96,
1612
+ 435,
1613
+ 292,
1614
+ 566
1615
+ ],
1616
+ "page_idx": 7
1617
+ },
1618
+ {
1619
+ "type": "image",
1620
+ "img_path": "images/e6f698cb82686647a1a7c0fe13a35311b8c62e27663ff5b4ba60bbad0cc93dd2.jpg",
1621
+ "image_caption": [
1622
+ "After blending"
1623
+ ],
1624
+ "image_footnote": [],
1625
+ "bbox": [
1626
+ 295,
1627
+ 436,
1628
+ 475,
1629
+ 566
1630
+ ],
1631
+ "page_idx": 7
1632
+ },
1633
+ {
1634
+ "type": "text",
1635
+ "text": "for visual effect. Unlike videos generated by world models such as [39], our results are guaranteed to be consistent and do not suffer from semantic drift. Unlike other systems that only generate a 'bubble,' our method creates spaces sufficiently large to be navigated in a meaningful way.",
1636
+ "bbox": [
1637
+ 89,
1638
+ 642,
1639
+ 483,
1640
+ 719
1641
+ ],
1642
+ "page_idx": 7
1643
+ },
1644
+ {
1645
+ "type": "text",
1646
+ "text": "5. Conclusion",
1647
+ "text_level": 1,
1648
+ "bbox": [
1649
+ 89,
1650
+ 737,
1651
+ 209,
1652
+ 752
1653
+ ],
1654
+ "page_idx": 7
1655
+ },
1656
+ {
1657
+ "type": "text",
1658
+ "text": "We have introduced SynCity, an approach to generate complex, diverse, high-quality, and fully textured 3D worlds with fine-grained control over their layout and appearance. SynCity creates worlds by autoregressively generating tiles on a grid, which can be scaled to arbitrary sizes. By accounting for local context when generating individual tiles and applying a novel blending operation, the tiles smoothly integrate with one another, creating seamless and coherent scenes. SynCity is flexible: It can either generate worlds",
1659
+ "bbox": [
1660
+ 89,
1661
+ 763,
1662
+ 483,
1663
+ 900
1664
+ ],
1665
+ "page_idx": 7
1666
+ },
1667
+ {
1668
+ "type": "text",
1669
+ "text": "from a brief 'world' text prompt, but also allows fine-grained control of individual tiles via tile-specific instructions. Despite offering this high degree of control, SynCity maintains an overall stylistic and thematic consistency of the generated world. The rich detail of the generated worlds can be fully explored, without being restricted to a single '3D bubble' as in many prior works.",
1670
+ "bbox": [
1671
+ 511,
1672
+ 422,
1673
+ 906,
1674
+ 527
1675
+ ],
1676
+ "page_idx": 7
1677
+ },
1678
+ {
1679
+ "type": "text",
1680
+ "text": "We have demonstrated the effectiveness of off-the-shelf generation by utilizing pre-trained language, 2D, and 3D generators through carefully designed prompting strategies. This eliminates the need to retrain any of these components, which would be complicated by the lack of large-scale scene datasets. Nevertheless, we expect that once these do become available, fine-tuning some components could result in further improvement of the method's performance, and would further simplify the alignment and rebasing steps of the pipeline. Future work could also consider relaxing the rigid grid structure and establishing a greater coherence between tiles. In terms of structure, the tiles could be randomly shifted and scaled. To ensure a coherent global theme that is carried into fine-grained local details, a coarse-to-fine approach could be employed. Here, a theme could inform a coarse representation of a grid, which then influences the generation of individual tiles on a local level.",
1681
+ "bbox": [
1682
+ 511,
1683
+ 532,
1684
+ 908,
1685
+ 790
1686
+ ],
1687
+ "page_idx": 7
1688
+ },
1689
+ {
1690
+ "type": "text",
1691
+ "text": "Ethics. For details on ethics, data protection, and copyright, please see https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html.",
1692
+ "bbox": [
1693
+ 511,
1694
+ 799,
1695
+ 906,
1696
+ 844
1697
+ ],
1698
+ "page_idx": 7
1699
+ },
1700
+ {
1701
+ "type": "text",
1702
+ "text": "Acknowledgments. The authors of this work are supported by ERC 101001212-UNION, AIMS EP/S024050/1, and Meta Research.",
1703
+ "bbox": [
1704
+ 511,
1705
+ 854,
1706
+ 908,
1707
+ 900
1708
+ ],
1709
+ "page_idx": 7
1710
+ },
1711
+ {
1712
+ "type": "page_number",
1713
+ "text": "27592",
1714
+ "bbox": [
1715
+ 478,
1716
+ 944,
1717
+ 519,
1718
+ 957
1719
+ ],
1720
+ "page_idx": 7
1721
+ },
1722
+ {
1723
+ "type": "text",
1724
+ "text": "References",
1725
+ "text_level": 1,
1726
+ "bbox": [
1727
+ 91,
1728
+ 89,
1729
+ 187,
1730
+ 104
1731
+ ],
1732
+ "page_idx": 8
1733
+ },
1734
+ {
1735
+ "type": "list",
1736
+ "sub_type": "ref_text",
1737
+ "list_items": [
1738
+ "[1] AlimamaCreative. Flux-controlnet-inpainting. https://github.com/alimama-creative/FLUX-Controlnet-Inpainting, 2024. GitHub repository. 7",
1739
+ "[2] Miguel Angel Bautista et al. GAUDI: A neural architect for immersive 3D scene generation. arXiv.cs, abs/2207.13751, 2022. 3",
1740
+ "[3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 7",
1741
+ "[4] Ron Brinkmann. The art and science of digital compositing: Techniques for visual effects, animation and motion graphics. Morgan Kaufmann, 2008. 5",
1742
+ "[5] Shengqu Cai, Eric Ryan Chan, Songyou Peng, Mohamad Shahbazi, Anton Obukhov, Luc Van Gool and Gordon Wetzstein. DiffDreamer: Towards consistent unsupervised single-view scene extrapolation with conditional diffusion models. In Proc. ICCV, 2023. 3",
1743
+ "[6] Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola and Noah Snavely. Persistent nature: A generative model of unbounded 3d worlds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20863-20874, 2023.",
1744
+ "[7] Zhaoxi Chen, Guangcong Wang and Ziwei Liu. Scenedreamer: Unbounded 3d scene generation from 2d image collections. IEEE transactions on pattern analysis and machine intelligence, 45(12):15562-15576, 2023. 3",
1745
+ "[8] Jaeyoung Chung, Suyoung Lee, Hyeongjin Nam, Jaerin Lee and Kyoung Mu Lee. LucidDreamer: Domain-free generation of 3d gaussian splatting scenes. In arXiv, 2023. 2",
1746
+ "[9] Dana Cohen-Bar, Elad Richardson, Gal Metzer, Raja Giryes and Daniel Cohen-Or. Set-the-scene: Global-local training for generating controllable NeRF scenes. In Proc. ICCV Workshops, 2023. 3",
1747
+ "[10] Deemos. Rodin text-to-3D gen-1 (0525) v0.5, 2024. 2",
1748
+ "[11] Matt Deitke et al. Objverse-XL: A universe of 10M+ 3D objects. CoRR, abs/2307.05663, 2023. 3",
1749
+ "[12] Paul Engstler, Andrea Vedaldi, Iro Laina and Christian Rupprecht. Invisible stitch: Generating smooth 3D scenes with depth inpainting. In Proceedings of the International Conference on 3D Vision (3DV), 2025. 2",
1750
+ "[13] Rafail Fridman, Amit Abecasis, Yoni Kasten and Tali Dekel. Scenescape: Text-driven consistent scene generation. CoRR, abs/2302.01133, 2023. 2",
1751
+ "[14] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron and Ben Poole. CAT3D: create anything in 3d with multi-view diffusion models. arXiv, 2405.10314, 2024. 1",
1752
+ "[15] Daniel Gatis. rembg, 2025. 5",
1753
+ "[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017. 7",
1754
+ "[17] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson and Matthias Nießner. Text2Room: Extracting textured 3D"
1755
+ ],
1756
+ "bbox": [
1757
+ 93,
1758
+ 114,
1759
+ 483,
1760
+ 900
1761
+ ],
1762
+ "page_idx": 8
1763
+ },
1764
+ {
1765
+ "type": "list",
1766
+ "sub_type": "ref_text",
1767
+ "list_items": [
1768
+ "meshes from 2D text-to-image models. In Proc. ICCV, 2023. 2",
1769
+ "[18] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger and Shenghua Gao. 2d gaussian splattering for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 conference papers, pages 1-11, 2024. 2",
1770
+ "[19] Zehuan Huang, Yuan-Chen Guo, Xingqiao An, Yunhan Yang, Yangguang Li, Zi-Xin Zou, Ding Liang, Xihui Liu, Yan-Pei Cao and Lu Sheng. Midi: Multi-instance diffusion for single image to 3d scene generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 23646-23657, 2025. 3",
1771
+ "[20] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler and George Drettakis. 3D Gaussian Splatting for real-time radiance field rendering. Proc. SIGGRAPH, 42(4), 2023. 2",
1772
+ "[21] Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2023.2",
1773
+ "[22] World Labs. Generating worlds, 2024. 2",
1774
+ "[23] Jiabao Lei, Jiapeng Tang and Kui Jia. RGBD2: generative scene synthesis via incremental view inpainting using RGBD diffusion models. In Proc. CVPR, 2023. 2",
1775
+ "[24] Jiaxin Li, Zijian Feng, Qi She, Henghui Ding, Changhu Wang and Gim Hee Lee. MINE: towards continuous depth MPI with NeRF for novel view synthesis. In Proc. ICCV, 2021. 2",
1776
+ "[25] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. Proc. ICLR, 2024. 2",
1777
+ "[26] Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan and Xiaoxiao Long. CraftsMan: high-fidelity mesh generation with 3d native generation and interactive geometry refiner. arXiv, 2405.14979, 2024. 2, 3",
1778
+ "[27] Yangguang Li et al. TripoSG: high-fidelity 3D shape synthesis using large-scale rectified flow models. arXiv, 2502.06608, 2025. 2, 3",
1779
+ "[28] Zhengqi Li, Qianqian Wang, Noah Snavely and Angjoo Kanazawa. Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In European Conference on Computer Vision, pages 515-534. Springer, 2022. 3",
1780
+ "[29] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. arXiv.cs, abs/2211.10440, 2022. 3",
1781
+ "[30] Chieh Hubert Lin, Hsin-Ying Lee, Willi Menapace, Menglei Chai, Aliaksandr Siarohin, Ming-Hsuan Yang and Sergey Tulyakov. Infiniticity: Infinite-scale city synthesis. In Proceedings of the IEEE/CVF international conference on computer vision, pages 22808-22818, 2023. 3",
1782
+ "[31] A Liu, R Tucker, V Jampani, A Makadia and N Snavely.... Infinite nature: Perpetual view generation of natural scenes from a single image. In Proc. ICCV, 2021. 2, 3",
1783
+ "[32] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023. 3"
1784
+ ],
1785
+ "bbox": [
1786
+ 516,
1787
+ 93,
1788
+ 906,
1789
+ 900
1790
+ ],
1791
+ "page_idx": 8
1792
+ },
1793
+ {
1794
+ "type": "page_number",
1795
+ "text": "27593",
1796
+ "bbox": [
1797
+ 478,
1798
+ 944,
1799
+ 517,
1800
+ 955
1801
+ ],
1802
+ "page_idx": 8
1803
+ },
1804
+ {
1805
+ "type": "list",
1806
+ "sub_type": "ref_text",
1807
+ "list_items": [
1808
+ "[33] Luke Melas-Kyriazio, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni and Filippos Kokkinos. IM-3D: Iterative multiview diffusion and reconstruction for high-quality 3D generation. In Proceedings of the International Conference on Machine Learning (ICML), 2024. 1",
1809
+ "[34] Quan Meng, Lei Li, Matthias Nießner and Angela Dai. LT3SD: latent trees for 3D scene diffusion. arXiv, 2409.08215, 2024. 2, 3",
1810
+ "[35] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng and Abhishek Kar. Local light field fusion: practical view synthesis with prescriptive sampling guidelines. Proc. SIGGRAPH, 38(4), 2019. 2",
1811
+ "[36] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 2",
1812
+ "[37] OpenAI et al. GPT-4 technical report. arXiv, 2303.08774, 2024. 2, 4",
1813
+ "[38] Hao Ouyang, Kathryn Heal, Stephen Lombardi and Tiancheng Sun. Text2Immersion: Generative immersive scene with 3D gaussians. arXiv.cs, abs/2312.09242, 2023. 2",
1814
+ "[39] Jack Parker-Holder et al. Genie 2: A large-scale foundation world model, 2024. 8",
1815
+ "[40] Ben Poole, Ajay Jain, Jonathan T. Barron and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023. 1",
1816
+ "[41] Guocheng Qian et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. arXiv.cs, abs/2306.17843, 2023. 3",
1817
+ "[42] Amit Raj et al. DreamBooth3D: subject-driven text-to-3D generation. In Proc. ICCV, 2023. 3",
1818
+ "[43] Chris Rockwell, David F. Fouhey and Justin Johnson. PixelSynth: Generating a 3D-consistent experience from a single image. In Proc. ICCV, 2021. 2",
1819
+ "[44] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022. 1",
1820
+ "[45] Kyle Sargent et al. ZeroNVS: Zero-shot 360-degree view synthesis from a single real image. arXiv.cs, abs/2310.17994, 2023. 2, 3",
1821
+ "[46] Junyoung Seo, Kazumi Fukuda, Takashi Shibuya, Takuya Narihira, Naoki Murata, Shoukang Hu, Chieh-Hsin Lai, Seungryong Kim and Yuki Mitsufuji. GenWarp: single image to novel views with semantic-preserving generative warping. arXiv, 2405.17251, 2024. 2",
1822
+ "[47] Yu Shang, Yuming Lin, Yu Zheng, Hangyu Fan, Jingtao Ding, Jie Feng, Jiansheng Chen, Li Tian and Yong Li. Urbanworld: An urban world model for 3d city generation. arXiv preprint arXiv:2407.11965, 2024. 3",
1823
+ "[48] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024. 1, 3",
1824
+ "[49] Meng-Li Shih, Shih-Yang Su, Johannes Kopf and Jia-Bin Huang. 3d photography using context-aware layered depth inpainting. In Proc. CVPR, 2020. 2"
1825
+ ],
1826
+ "bbox": [
1827
+ 91,
1828
+ 90,
1829
+ 480,
1830
+ 898
1831
+ ],
1832
+ "page_idx": 9
1833
+ },
1834
+ {
1835
+ "type": "list",
1836
+ "sub_type": "ref_text",
1837
+ "list_items": [
1838
+ "[50] Jaidev Shriram, Alex Trevithick, Lingjie Liu and Ravi Ramamoorthi. RealmDreamer: text-driven 3d scene generation with inpainting and depth diffusion. In Proc. 3DV, 2025. 2",
1839
+ "[51] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 175-184, 2019. 2",
1840
+ "[52] Gabriela Ben Melech Stan et al. LDM3D: Latent diffusion model for 3D. arXiv.cs, 2305.10853, 2023. 3",
1841
+ "[53] Stanislaw Szymanowicz, Christian Rupprecht and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3D generative models from 2D data. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1",
1842
+ "[54] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng and Ziwei Liu. LGM: Large multi-view Gaussian model for high-resolution 3D content creation. arXiv, 2402.05054, 2024. 3",
1843
+ "[55] Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 551-560, 2020. 2",
1844
+ "[56] Shubham Tulsiani, Richard Tucker and Noah Snavely. Layer-structured 3d scene inference via view synthesis. In Proc. ECCV, 2018. 2",
1845
+ "[57] Guangcong Wang, Peng Wang, Zhaoxi Chen, Wenping Wang, Chen Change Loy and Ziwei Liu. PERF: panoramic neural radiance field from a single panorama. tpami, 2024. 3",
1846
+ "[58] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems, 36:8406-8441, 2023. 2",
1847
+ "[59] Olivia Wiles, Georgia Gkioxari, Richard Szeliski and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In Proc. CVPR, 2020. 2",
1848
+ "[60] Zhennan Wu et al. BlockFusion: Expandable 3D scene generation using latent tri-plane extrapolation. arXiv.cs, 2024. 2, 3, 7, 14",
1849
+ "[61] Jianfeng Xiang, Jiaolong Yang, Binbin Huang and Xin Tong. 3D-aware image generation using 2D diffusion models. arXiv.cs, abs/2303.17905, 2023. 2",
1850
+ "[62] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 6",
1851
+ "[63] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong and Jiaolong Yang. Structured 3D latents for scalable and versatile 3D generation. arXiv, 2412.01506, 2024. 2, 3, 5",
1852
+ "[64] Haozhe Xie, Zhaoxi Chen, Fangzhou Hong and Ziwei Liu. Citydreamer: Compositional generative model of unbounded 3d cities. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9666-9675, 2024. 3",
1853
+ "[65] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen and Gordon Wetzstein."
1854
+ ],
1855
+ "bbox": [
1856
+ 516,
1857
+ 90,
1858
+ 903,
1859
+ 900
1860
+ ],
1861
+ "page_idx": 9
1862
+ },
1863
+ {
1864
+ "type": "page_number",
1865
+ "text": "27594",
1866
+ "bbox": [
1867
+ 478,
1868
+ 944,
1869
+ 519,
1870
+ 955
1871
+ ],
1872
+ "page_idx": 9
1873
+ },
1874
+ {
1875
+ "type": "list",
1876
+ "sub_type": "ref_text",
1877
+ "list_items": [
1878
+ "GRM: Large gaussian reconstruction model for efficient 3D reconstruction and generation. arXiv, 2403.14621, 2024. 2",
1879
+ "[66] Kaixin Yao, Longwen Zhang, Xinhao Yan, Yan Zeng, Qixuan Zhang, Lan Xu, Wei Yang, Jiayuan Gu and Jingyi Yu. Cast: Component-aligned 3d scene reconstruction from an rgb image. ACM Transactions on Graphics (TOG), 44(4): 1-19, 2025. 3",
1880
+ "[67] Hong-Xing Yu et al. Wonderjourney: Going from anywhere to everywhere. arXiv.cs, abs/2312.03884, 2023. 2",
1881
+ "[68] Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T. Freeman and Jiajun Wu. Wonderworld: Interactive 3D scene generation from a single image. arXiv preprint arXiv:2406.09394, 2024. 3",
1882
+ "[69] Biao Zhang, Jiapeng Tang, Matthias Niessner and Peter Wonka. 3DShape2VecSet: A 3D shape representation for neural fields and generative diffusion models. In ACM Transactions on Graphics, 2023. 2",
1883
+ "[70] Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang and Jing Liao. Text2NeRF: Text-driven 3D scene generation with neural radiance fields. arXiv.cs, abs/2305.11588, 2023. 2",
1884
+ "[71] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 3",
1885
+ "[72] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. CVPR, pages 586-595, 2018. 7, 13",
1886
+ "[73] Zibo Zhao et al. Hunyuan3d 2.0: Scaling diffusion models for high resolution textured 3d assets generation. arXiv preprint arXiv:2501.12202, 2025. 3"
1887
+ ],
1888
+ "bbox": [
1889
+ 91,
1890
+ 90,
1891
+ 482,
1892
+ 544
1893
+ ],
1894
+ "page_idx": 10
1895
+ },
1896
+ {
1897
+ "type": "page_number",
1898
+ "text": "27595",
1899
+ "bbox": [
1900
+ 478,
1901
+ 944,
1902
+ 517,
1903
+ 955
1904
+ ],
1905
+ "page_idx": 10
1906
+ }
1907
+ ]
2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynCity_ Training-Free Generation of 3D Worlds/0d7bb992-f306-41cb-ba08-811e8364f528_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a8a817a0c5bc94678e84fde310846884e625f07ba21195b38dfdbf3429d0cf8
3
+ size 2978471
2025/SynCity_ Training-Free Generation of 3D Worlds/full.md ADDED
@@ -0,0 +1,371 @@
1
+ # SynCity: Training-Free Generation of 3D Worlds
2
+
3
+ Paul Engstler*
4
+
5
+ Aleksandar Shtedritski*
6
+
7
+ Iro Laina
8
+
9
+ Andrea Vedaldi
10
+
11
+ Visual Geometry Group, University of Oxford
12
+
13
+ {paule,suny,iro,chrisr,vedaldi}@robots.ox.ac.uk
14
+
15
+ ![](images/7418ddcb2383f7c60ea809e8e6bf33b5b403682c695d7e3b21cf772d326bfc32.jpg)
16
+ Figure 1. We introduce SynCity, a novel method for generating complex and freely navigable 3D worlds from a prompt. It is training-free and leverages powerful language, 2D, and 3D generators through new prompt engineering strategies.
17
+
18
+ # Abstract
19
+
20
+ We propose SynCity, a method for generating explorable 3D worlds from textual descriptions. Our approach leverages pre-trained textual, image, and 3D generators without requiring fine-tuning or inference-time optimization. While most 3D generators are object-centric and unable to create large-scale worlds, we demonstrate how 2D and 3D generators can be combined to produce ever-expanding scenes. The world is generated tile by tile, with each new tile created within its context and seamlessly integrated into the scene. SynCity enables fine-grained control over the appearance and layout of the generated worlds, which are both detailed and diverse.
21
+
22
+ # 1. Introduction
23
+
24
+ We consider the problem of generating 3D worlds from textual descriptions. Generating 3D content, for example, for video games, virtual reality, and simulation, is highly laborious and time-consuming. This is particularly true for large 3D scenes, even though these are often in the background and may have limited artistic significance. Automating their generation is thus particularly appealing.
25
+
26
+ The advent of modern generative AI has already impacted 3D generation, and in particular, the generation of 3D objects. DreamFusion [40] was among the first to adapt diffusion-based 2D image generators [44] to create 3D objects. Subsequent advancements fine-tuned 2D image generators to produce multiple consistent views of an object [14, 33, 48, 53] and learned few-view 3D reconstruction networks [25, 65]. More recently, the focus has shifted to methods that learn 3D latent spaces [10, 26, 27, 63, 69]. 3D latents can be sampled to generate 3D objects directly and with better geometry. Furthermore, by making these 3D latent generators conditional on an image prompt, they can be easily combined with 2D image generators, which can generally be trained on a much larger scale.
29
+
30
+ In addition to objects, there is also ample literature on generating 3D scenes. Most scene generators are image-based and progressively reconstruct scenes by expanding from an initial image [8, 12, 13, 17, 31, 38, 43, 67, 70], combining depth prediction, image and depth outpainting, and 3D reconstruction using NeRF [36] or 3D Gaussian Splatting [20]. The main advantage of these approaches is that they also leverage powerful 2D image generators to create the various views of the scene. These 2D generators understand complex textual prompts and result in vibrant 3D scenes with good artistic quality. However, it is challenging to keep the scene consistent while expanding it. As a consequence, while the reconstructed scenes can envelop the observer in a $360^{\circ}$ manner, it is often not possible to 'walk' into the scene for more than a few steps [22].
31
+
32
+ A key challenge in generating scenes beyond these '3D bubbles' is maintaining consistency incrementally without drifting. We argue that latent-space 3D generators might help, as they can regularize and constrain the reconstructed geometry, including hallucinating shapes and textures in regions behind the visible sides of objects. Some evidence comes from recent works like BlockFusion [60] and LT3SD [34], which learn to generate large coherent 3D scenes in latent space. However, they do not leverage 2D generators trained on billions of images, which severely hinders scene diversity and limits their ability to generalize and follow instructions.
33
+
34
+ In this work, we aim to combine the strengths of latent 3D and 2D generators to create large, high-quality 3D scenes that can be navigated freely (Fig. 1). First, we note that, while 3D generators like TRELLIS [63] are trained for object-level reconstruction, they can reconstruct fairly complex compositions of multiple objects. Borrowing ideas from video game world building, we show that TRELLIS can effectively generate, if not an entire world, at least tiles representing local regions of the world. In particular, we show that, if we prompt the model with an 'isometric' view of a tile, we can effectively generate it in 3D.
35
+
36
+ Given this basic ability, we then address the problem of generating the images of the tiles that form a larger scene. To do so, we build on a text-to-image generator (Flux [21]) and introduce a new way of prompting it that reliably causes it to output tile-like images, with a consistent isometric framing. This regular and stable framing is the key reason why the reconstructed 3D tiles can fit together seamlessly.
37
+
38
+ In addition to framing, we propose two additional mechanisms to make tiles fit together well. First, we encourage consistency in appearance by using previously generated tiles to provide context for the image generator, where each new tile inpaints a missing region in a 2D isometric view of the scene. Second, we enforce geometric consistency by blending the 3D representations of neighboring tiles using the 3D generative model.
41
+
42
+ Tile-specific textual prompts control the generation of the tiles. Tile prompts can also be generated automatically, utilizing a large language model (ChatGPT [37]), so that an entire scene can be obtained from a single 'world' prompt.
43
+
44
+ As we show in the experiments, SynCity can, in this way, leverage off-the-shelf components (i.e., language, 2D and 3D generators) to produce vibrant, detailed, and coherent 3D worlds that can be navigated freely.
45
+
46
+ # 2. Related Work
47
+
48
+ Novel view synthesis for scenes. Expanding an image beyond its boundaries has been a long-standing task in computer vision. Early methods that sought to expand object-centric scenes relied on layer-structured representations [24, 35, 49, 51, 55, 56], which disregard the scene's actual geometry. SynSin [59] is a pivotal work where image features are projected and used as conditioning to generate novel views, achieving geometric and semantic consistency. ZeroNVS [45] achieves high-quality results with fine-grained control of the camera but remains object-centric. GenWarp [46] integrates semantic information through cross-attention when generating a novel view.
49
+
50
+ A challenge for these methods is to avoid semantic drift and maintain object persistence. To obtain a 3D representation, the generated views need to be transferred into such a representation, e.g., NeRF [36] or Gaussians [18, 20], where any geometric conflicts must be resolved.
51
+
52
+ Image projection-based scene generation. A different line of work follows the paradigm of building the 3D representation of a scene sequentially using 2D image generators [8, 13, 17, 23, 38, 43, 58, 61, 67, 70]. Most of these methods employ an image generator to outpaint the existing scene using predefined camera poses. The results are then fused in 3D with depth prediction models. Text2Room [17] generates meshes of indoor scenes. As the bounds of the mesh delimit the scene, it can be freely explored. LucidDreamer [8] and Text2Immersion [38] go beyond indoor scenes, but their generated scenes reveal geometric inconsistencies when stepping away from the camera poses used to generate the scene. Invisible Stitch [12] addresses this issue by inpainting depth (rather than naively aligning it), and RealmDreamer [50] proposes multiple optimization losses to refine the generated scene. Despite these improvements, the resulting scenes still suffer from geometric artifacts and remain relatively small.
53
+
54
+ ![](images/619d7467f9e6ca5817f332992a7203a69bcba819c73fe62c9aed0d08791f4d14.jpg)
55
+ Figure 2. Overview of SynCity. 2D prompting: To generate a new tile, we first render a view of where the tile should be placed, including context from neighboring tiles. 3D prompting: We extract the new tile image and construct an image prompt for TRELLIS by adding a wider base under the tile. 3D blending: The 3D model output by TRELLIS is often not well blended with the rest of the scene. To address this, we render a view of the new tile alongside each neighboring tile and inpaint the region between them using an image inpainting model. Next, we condition on this well-blended view to refine the region between the two 3D tiles. Finally, the new tile is added to the world.
56
+
57
+ WonderJourney [67] introduces novel ideas for depth fusion, such as grouping objects at similar disparity to planes and refining sky depth, enabling extensive 'scene journeys' where independent representations are built between scene 'keyframes'. However, these are not merged into one coherent scene. WonderWorld [68] leverages these improvements to build a single scene, allowing interactive updates, but the true extent of the generated scenes remains limited. Other works use panoramas [52, 57] or implicit representations [2, 45], but the freedom of movement remains constricted.
58
+
59
+ Procedural scene generation. Further methods permit long-range fly-overs over nature [5-7, 28, 31] or cities [30, 47, 64]. These methods usually generate procedural unbounded images (e.g., the terrain makeup or a city layout). While these methods create realistic-looking images, they are often monotonous as they are domain-specific and thus highly constrained in the variety they can generate.
60
+
61
+ 3D scene generation. Instead of merely generating images of a scene or outpainting it only in 2D, other methods generate the 3D representation directly. Set-the-Scene [9] adds a layer of control to the layout of NeRF scenes by defining object proxies. BlockFusion [60] learns a network to diffuse small blocks to extend a mesh auto-regressively. A 2D layout conditioning is used to control the generation process, allowing users to generate scenes of rooms, a village, and a city. While the method allows building large-scale scenes, the variety of the objects it generates is severely limited as it requires domain-specific 3D training data. Furthermore, it generates untextured meshes. LT3SD [34] learns a diffusion model that generates 3D environments in a patch-by-patch and coarse-to-fine fashion. However, this method is only trained to produce indoor scenes.
64
+
65
+ At the same time, the synthesis of complex, high-fidelity objects has been enabled by the rapid progress in the fields of text-to-3D and image-to-3D generation [26, 27, 29, 32, 41, 42, 48, 54, 63, 71, 73]. Trained on large-scale curated subsets of 3D datasets such as Objaverse-XL [11], these models can generate a large variety of different objects.
66
+
67
+ While prior works have utilized 3D object generators to composite a scene from objects [19, 66], we generate complete chunks of a scene that are seamlessly fused together, which is a fundamentally new approach to scene generation.
68
+
69
+ # 3. Method
70
+
71
+ Our goal is to generate a 3D world $\mathcal{G}$ from an initial textual prompt $p_0$.
72
+
73
+ ![](images/61b9173701d1dc881a13cda8750f423e186b9ae8fece69c0b203f82575a4ad03.jpg)
74
+ Figure 3. Left: Progressive generation of world tiles $\mathcal{T}$ . Right: Isometric framing of a tile for image-based prompting.
75
+
76
+ Our method, SynCity, leverages prompt engineering in combination with off-the-shelf language, 2D, and 3D generators to create the entire world automatically, with no need to retrain the models.
77
+
78
+ We structure the world as a $W \times H$ grid $\mathcal{T} = \{0, \dots, W - 1\} \times \{0, \dots, H - 1\}$ of square tiles, each of which can contain several complex 3D objects (e.g., buildings, bridges, trees) as well as the ground surface. We generate the world progressively, tile by tile, as shown in Fig. 3. When generating tile $(x, y) \in \mathcal{T}$ , tiles $\mathcal{T}(x, y) = \{(x', y') \in \mathcal{T} : y' < y \vee (y' = y \wedge x' < x)\}$ have already been generated.
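+
+ As a concrete illustration of this ordering, the following minimal Python sketch (the helper names are ours, not from the SynCity code) enumerates the tiles in raster order and computes the predecessor set $\mathcal{T}(x, y)$:
+
+ ```python
+ # Minimal sketch of the raster-scan generation order described above.
+ # W, H and the helper names are illustrative assumptions.
+ from itertools import product
+
+ W, H = 4, 3  # world size in tiles
+
+ def generation_order(w, h):
+     """Tiles are generated row by row: (0,0), (1,0), ..., (w-1, h-1)."""
+     return [(x, y) for y, x in product(range(h), range(w))]
+
+ def predecessors(x, y):
+     """T(x, y): the tiles already generated when tile (x, y) is created."""
+     return {(xp, yp) for (xp, yp) in generation_order(W, H)
+             if yp < y or (yp == y and xp < x)}
+
+ assert predecessors(0, 0) == set()
+ assert (3, 0) in predecessors(0, 1)  # the whole first row precedes row 1
+ ```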
79
+
80
+ An overview of our approach is shown in Fig. 2. The first step is to expand the world description $p_0$ into tile-specific prompts (Sec. 3.1). The second step is to pass these tile-specific prompts to a 2D image generator and inpainter to create an isometric view of each tile, accounting for the part of the world generated so far (Sec. 3.2). The third step is to extract image prompts from these isometric views and use them as input to an image-to-3D generator to reconstruct each tile's geometry and appearance in 3D (Sec. 3.3). The final step is to align and blend the 3D reconstructions of the tiles to create a coherent 3D world (Sec. 3.4).
81
+
82
+ # 3.1. Prompting the Language Model
83
+
84
+ The goal of language prompting is to take a high-level textual description of the world $p_0$ and expand it into a set $p$ of tile-specific textual prompts that can be used to generate the 3D world. Specifically, $p$ is a collection of sub-prompts $p_{xy} \in \Sigma^*$ , one for each tile, and a world-level 'style' prompt $p_\star \in \Sigma^*$ , so that we can write $p = \{p_{xy}\}_{(x,y) \in \mathcal{T}} \cup \{p_\star\}$ , where $\Sigma^*$ is the set of all possible strings.
85
+
86
+ The prompt $p$ can be constructed manually (allowing control over the content of each tile) or generated by a large language model (LLM) such as ChatGPT [37] from a 'seed' prompt. For the latter, we employ in-context learning, prompting ChatGPT o3-mini-high to generate a grid-like world with tile-specific descriptions after providing it with a template in JSON format (see Appendix A.1).
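+
+ For illustration only (the actual JSON template is given in Appendix A.1), a prompt collection of this shape could look as follows, with the world-level style prompt $p_\star$ and one entry per tile:
+
+ ```python
+ # Hypothetical example of the prompt collection p = {p_xy} plus p_star;
+ # the real template used to prompt ChatGPT is shown in Appendix A.1.
+ world_prompts = {
+     "style": "isometric medieval village, warm afternoon light",  # p_star
+     "tiles": [
+         {"x": 0, "y": 0, "prompt": "a stone well on a grassy square"},
+         {"x": 1, "y": 0, "prompt": "a half-timbered bakery with a chimney"},
+         # ... one entry for each of the W x H tiles
+     ],
+ }
+ ```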
87
+
88
+ # 3.2. Prompting the 2D Generator
89
+
90
+ We use the language prompts $p$ from Sec. 3.1 to prompt an off-the-shelf 2D image generator $\Phi_{2\mathrm{D}}$ to output a 2D image $I(x,y)$ of each tile to be generated, as shown in Fig. 4. The image $I(x,y)$ must satisfy several constraints: (1) It must reflect the tile-specific prompt $p_{xy}$ of the target tile as well as the world-level prompt $p_{\star}$. (2) It must be suitable for prompting the image-to-3D generator in the next step. (3) It must be consistent with the previously generated tiles.
93
+
94
+ We propose a prompting strategy designed to satisfy these constraints. The image is drawn as a sample $I(x,y) \sim \Phi_{2\mathrm{D}}(q,B,M)$ from the 2D image generator $\Phi_{2\mathrm{D}}$ , where $q = p_{xy} \cdot p_{\star}$ is a prompt that combines the tile-specific and world-level descriptions. The generator $\Phi_{2\mathrm{D}}$ is also provided with a base image $B$ and an inpainting mask $M$ to constrain its output, as explained next.
95
+
96
+ Tile inpainting. To satisfy constraint (2), we encourage the image generator to produce regular tiles so that the image-to-3D model outputs tiles with regular geometry that fit well together. We assume that tiles have a square basis of unit size and are imaged in an 'isometric' manner. This framing of the tiles is conducive to generating regular 3D tiles. Furthermore, it is a common choice in video games and might have been observed by the image generator during training, as these models are often trained on game-like data.
97
+
98
+ While we could fine-tune the image generator $\Phi_{2\mathrm{D}}$ to produce such images, we demonstrate below that this effect can be achieved through prompt engineering alone, avoiding the need for retraining. We only assume that $\Phi_{2\mathrm{D}}$ is capable of inpainting (a common feature of modern image generators) and provide it with inputs $B$ and $M$, as shown in Fig. 4. Specifically, $B$ is set to be the image of the base of the tile, represented as a square, gray slab imaged from a fixed isometric vantage point. The mask $M$ is a binary mask covering a cube on top of the base.
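+
+ A minimal sketch of this framing is shown below; `phi_2d_inpaint` is a hypothetical stand-in for the Flux-based inpainter, and the slab coordinates are illustrative rather than the values used in the paper:
+
+ ```python
+ # Sketch of constructing the base image B (a gray slab seen from a fixed
+ # isometric vantage point) and the binary mask M (a cube-shaped region on
+ # top of it). `phi_2d_inpaint` is a placeholder, not a real API.
+ from PIL import Image, ImageDraw
+
+ SIZE = 1024
+ B = Image.new("RGB", (SIZE, SIZE), "white")
+ # Diamond-shaped top face of the slab; corner positions are illustrative.
+ ImageDraw.Draw(B).polygon(
+     [(512, 700), (812, 780), (512, 860), (212, 780)], fill=(128, 128, 128))
+
+ M = Image.new("L", (SIZE, SIZE), 0)
+ ImageDraw.Draw(M).rectangle([212, 240, 812, 780], fill=255)  # region to fill
+
+ def phi_2d_inpaint(prompt, base, mask):
+     raise NotImplementedError("stand-in for the 2D inpainting generator")
+
+ # q = p_xy + p_star; the generator fills the masked cube with the new tile:
+ # I_00 = phi_2d_inpaint("a stone well. isometric village style", B, M)
+ ```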
99
+
100
+ Figure 4 shows the result of prompting the model in this manner, as well as what happens if signals $B$ and $M$ are removed: the viewpoint and general frame of the tile become random and unsuitable for 3D generation.
101
+
102
+ ![](images/7d9d758c64f08f772efbc63a891776fa2bb9eb01907020bc9920beb6f3b71c39.jpg)
103
+ Figure 4. Left: Generation of the 2D image prompt for the first world tile at $x = 0$ and $y = 0$ . The image generator $\Phi_{2\mathrm{D}}$ is conditioned on $q = p_{00} \cdot p_{\star}$ and tasked with inpainting the base image $B$ within the masked region $M$ . Right: Without 'framing' the image using $B$ and $M$ , the output image is unsuitable for tiling.
104
+
105
+ Context-aware generation. Except for the first tile $(0,0)$, tiles are generated in the context of the part of the world generated before. To account for this context, for tiles with $x,y > 0$, we modify the base image $B$ and the mask $M$ as shown in Fig. 5.
106
+
107
+ ![](images/b29a3c18e2c8226d9d62dbfbe635a31d97e4868ecfeefa7b61a728d350c983d4.jpg)
108
+ Figure 5. Left: Base image $B$ and inpainting mask $M$ (white overlay) used to prompt the image generator $\Phi_{2D}$ to generate an image for a world tile at $x > 0, y > 0$ . Right: Result of inpainting.
109
+
110
+ ![](images/3fcc2d012a5de2e545b0151afb6b5267eeef592dab923b261adaca0cf092108e.jpg)
111
+ Figure 7. Left: Isolating the image of the new tile from $I(x,y)$ . Right: Adding a slightly larger base underneath the tile.
112
+
113
+ For the base image $B$, instead of the slab, we render the previously generated portion of the 3D world (which is constructed as described next in Secs. 3.3 and 3.4), providing context to the inpainting network. We also modify the mask $M$ to avoid covering tiles already generated to the left (west), i.e., for a tile $(i,j) \in \mathcal{T}$, these are tiles $\{(x,y) \in \mathcal{T} : x < i \wedge y = j\}$. See the appendix for a comparison of different masking schemes.
114
+
115
+ To ensure continuity of the ground, before rendering the contextual image, we trim any 3D geometry that is sufficiently high to occlude the tile we wish to generate, as shown in Fig. 6 (observe the trimmed structures in Fig. 5).
116
+
117
+ ![](images/4392d5e68a2282a69ae3118b454c62818e07937d4f495ee4c90bb4191df5494c.jpg)
118
+ Figure 6. Trimming tall structures for 2D prompting.
119
+
120
+ ![](images/fbd1843fc804906894cc07e039fef52bf16cde5b8440e8f221d9bd6475217614.jpg)
121
+
122
+ The appendix discusses a special case for tiles at the boundaries of the world (see Appendix A.2).
123
+
124
+ # 3.3. Prompting the 3D Generator
125
+
126
+ Given the tile image $I(x,y)$ obtained from the 2D image generator in Sec. 3.2, the next goal is to generate a corresponding 3D reconstruction $G(x,y)$ of the tile using an image-to-3D model $\Phi_{\mathrm{3D}}$ . We opt for a robust 3D generator and select TRELLIS [63] due to its strong performance, ability to generate both shape and texture, and latent space structure, which is easy to manipulate for blending, as we show later in Sec. 3.4. TRELLIS produces the reconstructions $G(x,y)$ in the form of 3D Gaussian Splats (3DGS).
127
+
128
+ Thus, 3D reconstruction amounts to drawing a sample $G(x,y) \sim \Phi_{\mathrm{3D}}(J(x,y))$ from the image-to-3D generator $\Phi_{\mathrm{3D}}$ . Rather than conditioning on the image $I(x,y)$ , we use a pre-processed version $J(x,y)$ , as described next.
129
+
130
+ 2D foreground extraction and rebasing. Recall that the image $I(x,y)$ output by the 2D generator in Sec. 3.2 is an image of the tile and its context.
131
+
132
+ ![](images/1d3c92958393f3d4bccfdd4b9b5d630735e57ba06b3072eb22ce4dd23b621641.jpg)
133
+
134
+ However, the 3D generator $\Phi_{3D}$ expects the input image to focus only on the object that needs to be reconstructed, i.e., the new tile. The first step is to extract from $I(x,y)$ only the part that corresponds to the new tile, which we achieve using rembg [15] with alpha matting [4], as shown in Fig. 7.
135
+
136
+ The resulting image is narrowly cropped around the new tile. Similar to Sec. 3.2, we found it beneficial to include a slab base for the tile, an operation we call 'rebasing,' as shown in Fig. 7. We simply compose the image of the tile with a slightly larger gray slab (in 2D) to obtain $J(x,y)$ , which effectively provides a 'frame' for the 3D generator to work with. The base is reconstructed as part of the tile's geometry, which can be used for validation and as an easy-to-detect handle for further 3D processing.
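+
+ A small sketch of extraction and rebasing with rembg [15] and PIL; file names, the slab polygon, and its placement are illustrative assumptions:
+
+ ```python
+ # Extract the new tile from I(x, y) and composite it onto a slightly larger
+ # gray slab to form the image prompt J(x, y).
+ from PIL import Image, ImageDraw
+ from rembg import remove  # pip install rembg
+
+ I_xy = Image.open("tile_view.png").convert("RGB")
+ tile = remove(I_xy, alpha_matting=True)  # RGBA cut-out of the new tile
+ tile = tile.crop(tile.getbbox())         # narrow crop around the foreground
+
+ J = Image.new("RGBA", (1024, 1024), (255, 255, 255, 255))
+ # The 'rebasing' slab, drawn slightly larger than the tile footprint.
+ ImageDraw.Draw(J).polygon(
+     [(512, 640), (872, 760), (512, 880), (152, 760)], fill=(128, 128, 128, 255))
+ J.alpha_composite(tile, dest=(512 - tile.width // 2,
+                               max(0, 760 - tile.height)))
+ J.convert("RGB").save("tile_rebased.png")  # J(x, y) for the 3D generator
+ ```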
137
+
138
+ The 'rebased' image $J(x,y)$ is fed to the 3D generator $\Phi_{\mathrm{3D}}$ to obtain the 3D reconstruction $G(x,y)$ of the tile, in the form of 3D Gaussian Splats. The effect of rebasing on the 3D result is shown in Fig. 8.
139
+
140
+ ![](images/ee57d0ae2a2fd1326b7cf21360aae9b2239b925a01ec165dd42744d2cfc9a279.jpg)
141
+ Figure 8. Top: 3D reconstruction using a tight base. Bottom: 3D reconstruction with a slightly larger base, which helps to keep the tile's geometry above ground (see the back of the reconstruction) and creates an easy-to-detect 3D base.
142
+
143
+ 3D geometric validation. Because the generators are imperfect, we verify the 3D reconstruction $G(x,y)$ to ensure that it is of sufficient quality. If not, we discard it and regenerate the tile using a different random seed. To verify the tile, we use a few heuristics to check that the tile's geometry occupies a square region of sufficient size and that the base of the tile has been reconstructed faithfully. Please see Appendix A.3 for more details.
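+
+ The exact heuristics are described in Appendix A.3; the sketch below shows plausible checks of this kind on a binary occupancy grid, mirroring the base-area, squareness, and completeness quantities of Tab. 2 (the thresholds are assumptions):
+
+ ```python
+ # Hedged sketch of tile validation on an R x R x R occupancy grid (z up).
+ # Thresholds and exact definitions are assumptions, not the paper's values.
+ import numpy as np
+
+ def validate_tile(occ: np.ndarray, min_area: int = 3000) -> bool:
+     base = occ[:, :, 0]          # bottom slice should contain the slab base
+     area = int(base.sum())
+     if area < min_area:          # base too small: reject and re-sample
+         return False
+     xs, ys = np.nonzero(base)
+     w, h = xs.ptp() + 1, ys.ptp() + 1
+     squareness = min(w, h) / max(w, h)   # 1.0 for a perfectly square base
+     completeness = area / (w * h)        # fraction of the bounding box filled
+     return squareness >= 0.95 and completeness >= 0.95
+
+ print(validate_tile(np.ones((64, 64, 64), dtype=bool)))  # True
+ ```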
146
+
147
+ 3D post-processing. At this point, we have verified the 3D reconstruction $G(x,y)$ for the tile as a mixture of 3D Gaussians. However, the actual 3D footprint, orientation, and size of the tile are controlled by the 3D generator and are inconsistent. The post-processing step utilizes simple heuristics to crop the subset of 3D Gaussians that form the actual tile, remove the extended base, rescale them to fill the full tile, and reorient them to face the same way as the 2D image prompt. We explain this in more detail in Appendix A.4.
148
+
149
+ # 3.4. 3D Blending
150
+
151
+ At this stage of the pipeline, we have reconstructed all 3D tiles $G(x,y)$ , $(x,y) \in \mathcal{T}$ . Due to the prompting and post-processing steps in Secs. 3.2 and 3.3, the tiles are already approximately aligned and oriented correctly, including being roughly level with the ground.
152
+
153
+ However, they are not perfectly aligned, particularly at their boundaries. This misalignment arises because TRELLIS does not reconstruct the input images exactly, and only a single view of each tile is provided, which only indirectly controls the reconstruction of the back of the tile. Further, while $G(x,y)$ is represented as 3DGS, these imperfections are not easily addressed in that representation space.
154
+
155
+ We address this issue by blending the tiles in TRELLIS latent space to create a coherent and continuous 3D scene. As explained next, we first repaint the boundary region in 2D. Then, we align the tiles in the latent voxel grid of $\Phi_{3\mathrm{D}}$ . Finally, we resample the voxel features in a narrow boundary region between two tiles.
156
+
157
+ Blending in 2D. To blend the latents of two neighboring tiles, we first predict the appearance of the boundary between the two tiles. To achieve this, we place the two 3D tiles next to each other, render a frontal view, and inpaint the middle region of the rendering (Fig. 2) using $\Phi_{2\mathrm{D}}$ . This results in a blended image, which we use to condition $\Phi_{3\mathrm{D}}$ .
158
+
159
+ Unifying the tile size in 3D latent space. Recall that, due to the rebasing, $G(x,y)$ contains a 3D base. While we have removed the base in 3DGS space, we have yet to do the same in the latent space. We use the same cuts applied in 3DGS space to crop the latents, rounding them to account for the discrete nature of the latent voxel grid.
160
+
161
+ Because these cuts might differ for two neighboring tiles, $\gamma^1$ and $\gamma^2$ , we may need to upsample the latents to ensure $\gamma^1, \gamma^2 \in \mathbb{R}^{D \times R \times R \times R}$ . Here, $\gamma$ represents $D$ -dimensional features in the $R$ -sized 3D grid that TRELLIS denoises.
162
+
163
+ We found that naively upsampling the latents by interpolation leads to poor reconstructions, as shown in Fig. 9.
164
+
165
+ ![](images/344713b811212ab31832aa331308e561ca83466e920e3f012feb127a7f3b7f65.jpg)
166
+ Figure 9. Upsampling sparse latents. We need to resize or up-sample sparse latents in order to stitch them. Due to the sparsity of the latents and the behaviour of the latent decoder, naively resampling in latent space leads to artifacts. Our proposed resizing of the sparse latents better preserves textures and fine structures.
167
+
168
+ Instead, we propose a different scheme where we resample the features (called structured latents in [62]) after upsampling the latent voxel grid of each tile. First, we upsample the cropped occupancy volume that TRELLIS predicted to the original resolution $V \in \{0,1\}^{R \times R \times R}$. Next, we denoise a new set of latents $\gamma$ on the upsampled occupancy volume. To preserve the details and textures of the original 3D tile, we render it from multiple views and jointly condition the structured latent inference on all of them. In practice, when denoising with multiple conditioning views, at each timestep, the denoising step is computed as the average denoising step across all views. We show that this upsampling scheme leads to superior reconstructions in Fig. 9.
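+
+ In code, this multi-view averaging can be sketched as follows; `denoise_step` is a placeholder for one view-conditioned denoising step of the structured-latent model:
+
+ ```python
+ # Sketch of multi-view-conditioned denoising: at each timestep, the applied
+ # update is the mean of the per-view denoising steps.
+ import torch
+
+ def denoise_step(latents, view, t):
+     raise NotImplementedError("stand-in for one conditional denoising step")
+
+ def multi_view_denoise(latents, views, num_steps):
+     for t in range(num_steps):
+         steps = torch.stack([denoise_step(latents, v, t) for v in views])
+         latents = latents + steps.mean(dim=0)  # average step across views
+     return latents
+ ```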
169
+
170
+ Now, mirroring the base cropping operation in 3DGS space, we have tiles of matching sizes in latent space.
171
+
172
+ Blending in 3D. Finally, we use $\Phi_{\mathrm{3D}}$ to blend tiles. We take the latents of the two tiles $\gamma^1$ and $\gamma^2$ , where $\gamma^1, \gamma^2 \in \mathbb{R}^{D \times R \times R \times R}$ after upsampling. We combine them into a new volume $\gamma$ , where the side where they meet is in the center:
173
+
174
+ $$
175
+ \gamma_{:,x,y,z} = \begin{cases} \gamma^{1}_{:,x+R/2,y,z}, & \text{if } x < R/2, \\ \gamma^{2}_{:,x-R/2,y,z}, & \text{if } x \geq R/2. \end{cases}
176
+ $$
177
+
178
+ We apply the denoising function $\Omega$ , which is the latent denoiser of $\Phi_{3D}$ , to the volume $\gamma$ , but only within the middle region where we have applied the stitch, i.e., for $x \in [R/2 - r, R/2 + r]$ for some $r < R/2$ , while keeping the rest fixed. Formally, we initialize $\tilde{\gamma} \sim \mathcal{N}(0, I)$ and at each denoising step $t$ , we update $\tilde{\gamma}$ as:
179
+
180
+ $$
181
+ \tilde{\gamma}_{t+1,:,x,y,z} = \begin{cases} \Omega(\tilde{\gamma}_{t,:,x,y,z}), & \text{if } |x - R/2| \leq r, \\ \gamma_{t+1,:,x,y,z}, & \text{otherwise}. \end{cases}
182
+ $$
183
+
184
+ Here, $\gamma_{t}$ is obtained by adding noise to the original $\gamma$ at the corresponding noise level for step $t$ . In practice, we only update the structured latents, keeping the sparse structure (latent voxel grid) fixed: The low spatial resolution of the sparse structure ( $R = 16$ , compared to $R = 64$ for the structured latents) is too coarse for choosing an adequate $r$ .
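+
+ The two update rules above can be sketched as follows for latents of shape $(D, R, R, R)$; `Omega` (one denoising step) and `add_noise` (forward noising to a given step) are placeholders for the corresponding TRELLIS components:
+
+ ```python
+ # Sketch of stitching two tile latents and re-denoising only the seam region
+ # |x - R/2| <= r, following the equations above.
+ import torch
+
+ def stitch(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
+     R = g1.shape[1]
+     g = torch.empty_like(g1)
+     g[:, : R // 2] = g1[:, R // 2 :]  # right half of tile 1 ...
+     g[:, R // 2 :] = g2[:, : R // 2]  # ... meets left half of tile 2
+     return g
+
+ def blend_seam(g, Omega, add_noise, r, num_steps):
+     R = g.shape[1]
+     seam = (torch.arange(R) - R // 2).abs() <= r  # |x - R/2| <= r, x axis
+     tilde = torch.randn_like(g)                   # initialise from noise
+     for t in range(num_steps):
+         tilde = Omega(tilde, t)                          # denoise everywhere,
+         tilde[:, ~seam] = add_noise(g, t + 1)[:, ~seam]  # pin gamma outside
+     return tilde
+ ```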
185
+
186
+ <table><tr><td colspan="5">Win Rate (%)</td></tr><tr><td>Overall</td><td>Geometry</td><td>Exploration</td><td>Diversity</td><td>Realism</td></tr><tr><td>90.9</td><td>81.8</td><td>90.9</td><td>90.9</td><td>86.4</td></tr></table>
187
+
188
+ ![](images/9c2a00b23f4ed6887b163eb7b2b5a179ecc7e9e2b703f284e8cd9fa1d8162d74.jpg)
189
+ Figure 10. Left: $2 \times 2$ grid generated with our method, not taking context into account—here, the scale of the buildings is not consistent. Right: Generated with our method using the same prompts, where context is taken into account as described in Sec. 3.2.
190
+
191
+ # 4. Experiments
192
+
193
+ Experimental details. We generate the text prompts using ChatGPT o3-mini-high. For the 2D inpainter, we use the Flux ControlNet of [1].
194
+
195
+ Human preference. We evaluate human preference for the results generated by our method compared to those obtained with BlockFusion [60]. In particular, we compare a 'city' scene, showing the entire scene as well as close-up detail views. As seen in Tab. 1, participants $(n = 22)$ find our method better overall, with superior geometry, realism, and diversity.
196
+
197
+ # 4.1. Ablations
198
+
199
+ Here, we ablate several components of our approach to demonstrate the importance of each.
200
+
201
+ Building a grid. A naive approach to generating a 3D scene involves querying the image generator to produce an image of a large-scale scene (using our 2D image prompt setup) and then obtaining the entire 3D world directly with TRELLIS. To achieve the same level of control provided by our method, the textual prompt needs to be highly detailed and include layout instructions. However, we found that neither precise nor abstract prompts were effective at steering the generations of Flux (for details, see Appendix A.4). This highlights the effectiveness of our grid-based approach in generating highly detailed 3D worlds at scale.
202
+
203
+ 2D prompting context. We remove the context from neighboring tiles, as described in Sec. 3.2. When this is done, each tile is sampled independently, and the relative scale between objects becomes inconsistent (Fig. 10).
204
+
205
+ Table 1. Win rates of our method against BlockFusion. We asked participants to select which scene they prefer overall, as well as which one has better geometry, would be more interesting to explore, is more diverse, and has better realism.
206
+
207
+ <table><tr><td>Method</td><td>Base Area</td><td>Squareness ↑</td><td>Completeness ↑</td></tr><tr><td>No Rebasing</td><td>2271</td><td>0.92</td><td>0.73</td></tr><tr><td>Ours</td><td>4096</td><td>1.00</td><td>1.00</td></tr></table>
208
+
209
+ Table 2. Average tile 3D geometry metrics for an approach without rebasing and our method. Rebasing is crucial to ensure a tile is square and its base has been reconstructed faithfully. The metrics we use are the area of the base in voxels, a measure for the 'squareness' of the base, and how many border voxels have been faithfully reconstructed. For details, please refer to the appendix.
210
+
211
+ <table><tr><td>Method</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>FID ↓</td><td>KID ↓</td></tr><tr><td>Naïve upsampling</td><td>0.5914</td><td>0.3093</td><td>200.5</td><td>0.243</td></tr><tr><td>Ours (single frame)</td><td>0.3517</td><td>0.5149</td><td>111.6</td><td>0.069</td></tr><tr><td>Ours (multi frame)</td><td>0.3212</td><td>0.5312</td><td>89.1</td><td>0.051</td></tr></table>
212
+
213
+ Table 3. Perceptual metrics for our method and the naive approach for 3D latent upsampling. Lower values for LPIPS [72], FID [16], and KID [3] are better, while higher values for SSIM are better. We see that even using a single conditioning frame leads to better results, and multiple frames further improve performance.
214
+
215
+ Rebasing. To place tiles on a grid, they need to be square (otherwise the grid would be jagged), and their base must be reconstructed faithfully (clearly delimiting where the tile stops). Without rebasing, the geometry generated by TRELLIS might extend beyond the base, making the tile's 'true' extent difficult to detect, as shown in Fig. 8. We ablate the effect of rebasing using a small $2 \times 2$ scene to minimize the effect of error accumulation. As seen in Tab. 2, without rebasing, TRELLIS generates tiles that, on average, are not perfectly square and do not have a solid border.
216
+
217
+ Latent upsampling. We sample 10 random views from each of 200 tiles generated by TRELLIS and compute perceptual metrics in Tab. 3 when upsampling latents with our proposed approach in Sec. 3.4 versus naive interpolation. Our proposed method leads to better results across a range of metrics, even when using a single conditioning frame.
218
+
219
+ 3D blending. In Fig. 12, we generate a scene without applying 3D blending (Sec. 3.4), resulting in visible discontinuities between the tiles.
220
+
221
+ # 4.2. Qualitative Results
222
+
223
+ We present example scenes generated by our method in Fig. 1. Additionally, we show detailed views highlighting the quality and diversity of the scenes. Please see the appendix for many more examples.
224
+
225
+ Exploring a generated world. SynCity produces explorable worlds that are easy to navigate. To demonstrate this, we sample trajectories and 'walk into' the generated 3D worlds (Fig. 11).
226
+
227
+ ![](images/4949890d9511b0f49b759ba5c818102f5a6eaa8e16a6e42c8abc143abb41d08e.jpg)
228
+
229
+ ![](images/dcd0a6ea556e2e67826a8425cc557a1fa220ea7c302bd39655fb6936e5d58ba6.jpg)
230
+
231
+ ![](images/9209c41848fe4c005ab121ec1a1aca7fe2415e3dff280c9634c0db1bed8e1387.jpg)
232
+
233
+ ![](images/cc7338cffa49f244652417102d6bbb8ea47247478e44bbddce71ce88e6894f4b.jpg)
234
+
235
+ ![](images/601f125b72b6435cf8cd282e935b92f130b7e334b8d12e91c93d59015f0e0223.jpg)
236
+
237
+ ![](images/943da4a153285a43c9a96216a754fc278eec7b996c4519585377a2a7ac539956.jpg)
238
+
239
+ ![](images/223bfe20f6031ba604a9d3f5f569d8a75888b5e44539414b956c93b79030c5ec.jpg)
240
+
241
+ ![](images/dbadda5f459d18a4e623ac68ca7ba6feb0e8af6bcaf275656100812bfdf7e7ce.jpg)
242
+
243
+ ![](images/11a99a57aee2464c5c0e5169849c49abce8668d7562c02616425e95883af283b.jpg)
244
+
245
+ ![](images/0826b10a6f103b48a2a5e5dfaf05b45dcfad8411868c2b214bf2f625c8afdfa1.jpg)
246
+
247
+ ![](images/7886c7d428333c77765d5fa3abba36966322ad1036c70a59ef272fb73d84a19d.jpg)
248
+
249
+ ![](images/b0daf9f185a12ab80f4284719923f4591e4e89774536f565a5ad61feba952ff5.jpg)
250
+
251
+ ![](images/a8a376174a44204dcca1aafb22e8abe0fedee2c5f2e313dcd96578277e1ffbca.jpg)
252
+
253
+ ![](images/c50955da3b398b91a5a85b275c98a2978f3dd1800e3819daa3a88f04cd1d1d1d.jpg)
254
+
255
+ ![](images/d1d5ef7362afee5c84c2a90981f0a43c6f9fbd31ddd9b1027edd3bedf5fd4866.jpg)
256
+ Figure 11. Exploring a 3D world. We show camera walk-throughs that explore the generated 3D worlds. Please refer to the supplement for accompanying videos.
257
+
258
+ ![](images/859888a2add68234c85d596dd7243bf275696725d4e17db9cfa0b0903ca65ebd.jpg)
259
+
260
+ ![](images/5568cacb63542a7a8ef9abe44b2dbe56011af62fe74ee235a9d0e0da4500ae16.jpg)
261
+
262
+ ![](images/5582bc0fd5656cbb61f48d3a382cec8a8199fd7ea66d29b3f5d9adcee88dfe27.jpg)
263
+
264
+ ![](images/633cc8baa924ff7eff9ac8f2eed36833fab109a7bdd6a3f56824af78a9abdfde.jpg)
265
+
266
+ ![](images/97a6b06639440a1c90af0deeb3ac4d2cda3d81dc14dfc091016da3561bdb58ea.jpg)
267
+
268
+ ![](images/e8b3cfb9b6dd72e9ec80897a2f11e5f28aa4cd93dbcfb16a7d0779aa2b3dce2c.jpg)
269
+
270
+ ![](images/b7c35b1e0ea24ff69d2f8c40cc9040b66b07164cc652be64fe345896413c9c50.jpg)
271
+ Before blending
272
+ Figure 12. Left: Tiles before the 3D blending step. Right: After the 3D blending step. Previously visible boundaries between tiles are now well-blended, resulting in a more coherent appearance.
273
+
274
+ ![](images/e6f698cb82686647a1a7c0fe13a35311b8c62e27663ff5b4ba60bbad0cc93dd2.jpg)
275
+ After blending
276
+
277
+ Pre-made skybox textures are added for visual effect. Unlike videos generated by world models such as [39], our results are guaranteed to be consistent and do not suffer from semantic drift. Unlike other systems that only generate a 'bubble,' our method creates spaces sufficiently large to be navigated in a meaningful way.
278
+
279
+ # 5. Conclusion
280
+
281
+ We have introduced SynCity, an approach to generate complex, diverse, high-quality, and fully textured 3D worlds with fine-grained control over their layout and appearance. SynCity creates worlds by autoregressively generating tiles on a grid, which can be scaled to arbitrary sizes. By accounting for local context when generating individual tiles and applying a novel blending operation, the tiles smoothly integrate with one another, creating seamless and coherent scenes. SynCity is flexible: it can generate worlds from a brief 'world' text prompt, but also allows fine-grained control of individual tiles via tile-specific instructions. Despite offering this high degree of control, SynCity maintains an overall stylistic and thematic consistency of the generated world. The rich detail of the generated worlds can be fully explored, without being restricted to a single '3D bubble' as in many prior works.
284
+
285
+ We have demonstrated the effectiveness of off-the-shelf generation by utilizing pre-trained language, 2D, and 3D generators through carefully designed prompting strategies. This eliminates the need to retrain any of these components, which would be complicated by the lack of large-scale scene datasets. Nevertheless, we expect that once these do become available, fine-tuning some components could result in further improvement of the method's performance, and would further simplify the alignment and rebasing steps of the pipeline. Future work could also consider relaxing the rigid grid structure and establishing a greater coherence between tiles. In terms of structure, the tiles could be randomly shifted and scaled. To ensure a coherent global theme that is carried into fine-grained local details, a coarse-to-fine approach could be employed. Here, a theme could inform a coarse representation of a grid, which then influences the generation of individual tiles on a local level.
286
+
287
+ Ethics. For details on ethics, data protection, and copyright, please see https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html.
288
+
289
+ Acknowledgments. The authors of this work are supported by ERC 101001212-UNION, AIMS EP/S024050/1, and Meta Research.
290
+
291
+ # References
292
+
293
+ [1] AlimamaCreative. Flux-controlnet-inpainting. https://github.com/alimama-creative/FLUX-Controlnet-Inpainting, 2024. GitHub repository. 7
294
+ [2] Miguel Angel Bautista et al. GAUDI: A neural architect for immersive 3D scene generation. arXiv.cs, abs/2207.13751, 2022. 3
295
+ [3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 7
296
+ [4] Ron Brinkmann. The art and science of digital compositing: Techniques for visual effects, animation and motion graphics. Morgan Kaufmann, 2008. 5
297
+ [5] Shengqu Cai, Eric Ryan Chan, Songyou Peng, Mohamad Shahbazi, Anton Obukhov, Luc Van Gool and Gordon Wetzstein. DiffDreamer: Towards consistent unsupervised single-view scene extrapolation with conditional diffusion models. In Proc. ICCV, 2023. 3
298
+ [6] Lucy Chai, Richard Tucker, Zhengqi Li, Phillip Isola and Noah Snavely. Persistent nature: A generative model of unbounded 3d worlds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20863-20874, 2023.
299
+ [7] Zhaoxi Chen, Guangcong Wang and Ziwei Liu. Scenedreamer: Unbounded 3d scene generation from 2d image collections. IEEE transactions on pattern analysis and machine intelligence, 45(12):15562-15576, 2023. 3
300
+ [8] Jaeyoung Chung, Suyoung Lee, Hyeongjin Nam, Jaerin Lee and Kyoung Mu Lee. LucidDreamer: Domain-free generation of 3d gaussian splatting scenes. In arXiv, 2023. 2
301
+ [9] Dana Cohen-Bar, Elad Richardson, Gal Metzer, Raja Giryes and Daniel Cohen-Or. Set-the-scene: Global-local training for generating controllable NeRF scenes. In Proc. ICCV Workshops, 2023. 3
302
+ [10] Deemos. Rodin text-to-3D gen-1 (0525) v0.5, 2024. 2
303
+ [11] Matt Deitke et al. Objaverse-XL: A universe of 10M+ 3D objects. CoRR, abs/2307.05663, 2023. 3
304
+ [12] Paul Engstler, Andrea Vedaldi, Iro Laina and Christian Rupprecht. Invisible stitch: Generating smooth 3D scenes with depth inpainting. In Proceedings of the International Conference on 3D Vision (3DV), 2025. 2
305
+ [13] Rafail Fridman, Amit Abecasis, Yoni Kasten and Tali Dekel. Scenescape: Text-driven consistent scene generation. CoRR, abs/2302.01133, 2023. 2
306
+ [14] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron and Ben Poole. CAT3D: create anything in 3d with multi-view diffusion models. arXiv, 2405.10314, 2024. 1
307
+ [15] Daniel Gatis. rembg, 2025. 5
308
+ [16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Proc. NeurIPS, 2017. 7
309
+ [17] Lukas Hollein, Ang Cao, Andrew Owens, Justin Johnson and Matthias Nießner. Text2Room: Extracting textured 3D
310
+
311
+ meshes from 2D text-to-image models. In Proc. ICCV, 2023. 2
312
+ [18] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 conference papers, pages 1-11, 2024. 2
313
+ [19] Zehuan Huang, Yuan-Chen Guo, Xingqiao An, Yunhan Yang, Yangguang Li, Zi-Xin Zou, Ding Liang, Xihui Liu, Yan-Pei Cao and Lu Sheng. Midi: Multi-instance diffusion for single image to 3d scene generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 23646-23657, 2025. 3
314
+ [20] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler and George Drettakis. 3D Gaussian Splatting for real-time radiance field rendering. Proc. SIGGRAPH, 42(4), 2023. 2
315
+ [21] Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2023. 2
316
+ [22] World Labs. Generating worlds, 2024. 2
317
+ [23] Jiabao Lei, Jiapeng Tang and Kui Jia. RGBD2: generative scene synthesis via incremental view inpainting using RGBD diffusion models. In Proc. CVPR, 2023. 2
318
+ [24] Jiaxin Li, Zijian Feng, Qi She, Henghui Ding, Changhu Wang and Gim Hee Lee. MINE: towards continuous depth MPI with NeRF for novel view synthesis. In Proc. ICCV, 2021. 2
319
+ [25] Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich and Sai Bi. Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model. Proc. ICLR, 2024. 2
320
+ [26] Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan and Xiaoxiao Long. CraftsMan: high-fidelity mesh generation with 3d native generation and interactive geometry refiner. arXiv, 2405.14979, 2024. 2, 3
321
+ [27] Yangguang Li et al. TripoSG: high-fidelity 3D shape synthesis using large-scale rectified flow models. arXiv, 2502.06608, 2025. 2, 3
322
+ [28] Zhengqi Li, Qianqian Wang, Noah Snavely and Angjoo Kanazawa. Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In European Conference on Computer Vision, pages 515-534. Springer, 2022. 3
323
+ [29] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. arXiv.cs, abs/2211.10440, 2022. 3
324
+ [30] Chieh Hubert Lin, Hsin-Ying Lee, Willi Menapace, Menglei Chai, Aliaksandr Siarohin, Ming-Hsuan Yang and Sergey Tulyakov. InfiniCity: Infinite-scale city synthesis. In Proceedings of the IEEE/CVF international conference on computer vision, pages 22808-22818, 2023. 3
325
+ [31] A. Liu, R. Tucker, V. Jampani, A. Makadia, N. Snavely, et al. Infinite nature: Perpetual view generation of natural scenes from a single image. In Proc. ICCV, 2021. 2, 3
326
+ [32] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In Proc. ICCV, 2023. 3
327
+
328
+ [33] Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni and Filippos Kokkinos. IM-3D: Iterative multiview diffusion and reconstruction for high-quality 3D generation. In Proceedings of the International Conference on Machine Learning (ICML), 2024. 1
329
+ [34] Quan Meng, Lei Li, Matthias Nießner and Angela Dai. LT3SD: latent trees for 3D scene diffusion. arXiv, 2409.08215, 2024. 2, 3
330
+ [35] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng and Abhishek Kar. Local light field fusion: practical view synthesis with prescriptive sampling guidelines. Proc. SIGGRAPH, 38(4), 2019. 2
331
+ [36] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. ECCV, 2020. 2
332
+ [37] OpenAI et al. GPT-4 technical report. arXiv, 2303.08774, 2024. 2, 4
333
+ [38] Hao Ouyang, Kathryn Heal, Stephen Lombardi and Tiancheng Sun. Text2Immersion: Generative immersive scene with 3D gaussians. arXiv.cs, abs/2312.09242, 2023. 2
334
+ [39] Jack Parker-Holder et al. Genie 2: A large-scale foundation world model, 2024. 8
335
+ [40] Ben Poole, Ajay Jain, Jonathan T. Barron and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In Proc. ICLR, 2023. 1
336
+ [41] Guocheng Qian et al. Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors. arXiv.cs, abs/2306.17843, 2023. 3
337
+ [42] Amit Raj et al. DreamBooth3D: subject-driven text-to-3D generation. In Proc. ICCV, 2023. 3
338
+ [43] Chris Rockwell, David F. Fouhey and Justin Johnson. PixelSynth: Generating a 3D-consistent experience from a single image. In Proc. ICCV, 2021. 2
339
+ [44] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 2022. 1
340
+ [45] Kyle Sargent et al. ZeroNVS: Zero-shot 360-degree view synthesis from a single real image. arXiv.cs, abs/2310.17994, 2023. 2, 3
341
+ [46] Junyoung Seo, Kazumi Fukuda, Takashi Shibuya, Takuya Narihira, Naoki Murata, Shoukang Hu, Chieh-Hsin Lai, Seungryong Kim and Yuki Mitsufuji. GenWarp: single image to novel views with semantic-preserving generative warping. arXiv, 2405.17251, 2024. 2
342
+ [47] Yu Shang, Yuming Lin, Yu Zheng, Hangyu Fan, Jingtao Ding, Jie Feng, Jiansheng Chen, Li Tian and Yong Li. Urbanworld: An urban world model for 3d city generation. arXiv preprint arXiv:2407.11965, 2024. 3
343
+ [48] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li and Xiao Yang. MVDream: Multi-view diffusion for 3D generation. In Proc. ICLR, 2024. 1, 3
344
+ [49] Meng-Li Shih, Shih-Yang Su, Johannes Kopf and Jia-Bin Huang. 3d photography using context-aware layered depth inpainting. In Proc. CVPR, 2020. 2
345
+ [50] Jaidev Shriram, Alex Trevithick, Lingjie Liu and Ravi Ramamoorthi. RealmDreamer: text-driven 3d scene generation with inpainting and depth diffusion. In Proc. 3DV, 2025. 2
347
+ [51] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 175-184, 2019. 2
348
+ [52] Gabriela Ben Melech Stan et al. LDM3D: Latent diffusion model for 3D. arXiv.cs, 2305.10853, 2023. 3
349
+ [53] Stanislaw Szymanowicz, Christian Rupprecht and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3D generative models from 2D data. In Proceedings of the International Conference on Computer Vision (ICCV), 2023. 1
350
+ [54] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng and Ziwei Liu. LGM: Large multi-view Gaussian model for high-resolution 3D content creation. arXiv, 2402.05054, 2024. 3
351
+ [55] Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 551-560, 2020. 2
352
+ [56] Shubham Tulsiani, Richard Tucker and Noah Snavely. Layer-structured 3d scene inference via view synthesis. In Proc. ECCV, 2018. 2
353
+ [57] Guangcong Wang, Peng Wang, Zhaoxi Chen, Wenping Wang, Chen Change Loy and Ziwei Liu. PERF: panoramic neural radiance field from a single panorama. IEEE TPAMI, 2024. 3
354
+ [58] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems, 36:8406-8441, 2023. 2
355
+ [59] Olivia Wiles, Georgia Gkioxari, Richard Szeliski and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In Proc. CVPR, 2020. 2
356
+ [60] Zhennan Wu et al. BlockFusion: Expandable 3D scene generation using latent tri-plane extrapolation. arXiv.cs, 2024. 2, 3, 7, 14
357
+ [61] Jianfeng Xiang, Jiaolong Yang, Binbin Huang and Xin Tong. 3D-aware image generation using 2D diffusion models. arXiv.cs, abs/2303.17905, 2023. 2
358
+ [62] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 6
359
+ [63] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong and Jiaolong Yang. Structured 3D latents for scalable and versatile 3D generation. arXiv, 2412.01506, 2024. 2, 3, 5
360
+ [64] Haozhe Xie, Zhaoxi Chen, Fangzhou Hong and Ziwei Liu. Citydreamer: Compositional generative model of unbounded 3d cities. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9666-9675, 2024. 3
361
+ [65] Yinghao Xu, Zifan Shi, Wang Yifan, Hansheng Chen, Ceyuan Yang, Sida Peng, Yujun Shen and Gordon Wetzstein. GRM: Large gaussian reconstruction model for efficient 3D reconstruction and generation. arXiv, 2403.14621, 2024. 2
364
+ [66] Kaixin Yao, Longwen Zhang, Xinhao Yan, Yan Zeng, Qixuan Zhang, Lan Xu, Wei Yang, Jiayuan Gu and Jingyi Yu. Cast: Component-aligned 3d scene reconstruction from an rgb image. ACM Transactions on Graphics (TOG), 44(4): 1-19, 2025. 3
365
+ [67] Hong-Xing Yu et al. Wonderjourney: Going from anywhere to everywhere. arXiv.cs, abs/2312.03884, 2023. 2
366
+ [68] Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T. Freeman and Jiajun Wu. Wonderworld: Interactive 3D scene generation from a single image. arXiv preprint arXiv:2406.09394, 2024. 3
367
+ [69] Biao Zhang, Jiapeng Tang, Matthias Niessner and Peter Wonka. 3DShape2VecSet: A 3D shape representation for neural fields and generative diffusion models. In ACM Transactions on Graphics, 2023. 2
368
+ [70] Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang and Jing Liao. Text2NeRF: Text-driven 3D scene generation with neural radiance fields. arXiv.cs, abs/2305.11588, 2023. 2
369
+ [71] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 3
370
+ [72] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. CVPR, pages 586-595, 2018. 7, 13
371
+ [73] Zibo Zhao et al. Hunyuan3d 2.0: Scaling diffusion models for high resolution textured 3d assets generation. arXiv preprint arXiv:2501.12202, 2025. 3
2025/SynCity_ Training-Free Generation of 3D Worlds/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7f3d93d1774c579c274332384547512b6fa58aa95d61e118abf8e152ae6b433
3
+ size 665515
2025/SynCity_ Training-Free Generation of 3D Worlds/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/1b1abe0d-673c-4757-97e7-7262600b5102_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8171b9417c7c196f77624d84cdeb15281d559f48b5f751462f42d2b2c2d38f3a
3
+ size 10519470
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/full.md ADDED
@@ -0,0 +1,385 @@
1
+ # SynFER: Towards Boosting Facial Expression Recognition with Synthetic Data
2
+
3
+ Xilin He $^{1,9*}$ , Cheng Luo $^{4,5*}$ , Xiaole Xian $^{1}$ , Bing Li $^{4}$ , Muhammad Haris Khan $^{8}$ , Zongyuan Ge $^{5}$ , Weicheng Xie $^{1,2,3\dagger}$ , Siyang Song $^{6\dagger}$ , Linlin Shen $^{7\dagger}$ , Bernard Ghanem $^{4}$ , Xiangyu Yue $^{9}$ , $^{1}$ School of Computer Science & Software Engineering, Shenzhen University, China $^{2}$ Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China $^{3}$ Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen University, China $^{4}$ KAUST, $^{5}$ Monash University, $^{6}$ School of Computer Science, University of Exeter, UK $^{7}$ Computer Vision Institute, School of Artificial Intelligence, Shenzhen University, China $^{8}$ MBZUAI $^{9}$ The Chinese University of Hong Kong
4
+
5
+ # Abstract
6
+
7
+ Facial expression datasets remain limited in scale due to the subjectivity of annotations and the labor-intensive nature of data collection. This limitation poses a significant challenge for developing modern deep learning-based facial expression analysis models, particularly foundation models, that rely on large-scale data for optimal performance. To tackle the overarching and complex challenge, instead of introducing a new large-scale dataset, we introduce SynFER (Synthesis of Facial Expressions with Refined Control), a novel synthetic framework for synthesizing facial expression image data based on high-level textual descriptions as well as more fine-grained and precise control through facial action units. To ensure the quality and reliability of the synthetic data, we propose a semantic guidance technique to steer the generation process and a pseudo-label generator to help rectify the facial expression labels for the synthetic images. To demonstrate the generation fidelity and the effectiveness of the synthetic data from SynFER, we conduct extensive experiments on representation learning using both synthetic data and real-world data. Results validate the efficacy of our approach and the synthetic data. Notably, our approach achieves a $67.23\%$ classification accuracy on AffectNet when training solely with synthetic data equivalent to the AffectNet training set size, which increases to $69.84\%$ when scaling up to five times the original size. Code is available here.
8
+
9
+ # 1. Introduction
10
+
11
+ Facial Expression Recognition (FER) is at the forefront of advancing AI's ability to interpret human emotions, opening new frontiers for various human-centered applications.
12
+
13
+ From automatic emotion detection to early interventions in mental health [63], accurate pain assessment [33], and enhancing human-computer interaction [1], the potential impact of FER systems is profound [57, 67, 91]. In recent years, learning-based FER models have gained significant traction due to their promising performances [24, 42, 86]. However, despite recent advancements in network architectures and learning methodologies, the progress of existing FER models has been hindered by the inadequate scale and quality of available training data, underscoring the need to expand datasets with high-quality data to push the boundaries of FER capabilities.
14
+
15
+ Existing FER datasets, such as $\mathrm{CK + }$ (953 sequences) [53], FER-2013 (30,000 $48\times 48$ images) [5], RAF-DB (29,672 images) [43], AFEW (113,355 images) [19], and SFEW (1,766 images) [18], are small compared to popular image datasets for general image processing (e.g., ImageNet [17] with 1.4 million images and Laion [68] with billion-level data). While AffectNet [58] compiles a large number of facial images from the web, it still suffers from vital drawbacks. A considerable portion of AffectNet's images are low-quality, and its annotations often contain incorrect labels, which impairs the training process of FER models [38, 78]. Consequently, the absence of high-quality and large-scale FER datasets has delayed the development of FER foundation models. However, collecting a large-scale FER dataset with high-quality facial images and meticulous annotations is almost an unrealistic endeavor due to substantial financial and time costs, ethical concerns around facial data collection, and limited resources for large-scale acquisition. Additionally, the subjective interpretation of facial expressions results in inconsistent labeling by annotators, which exacerbates variability and hinders the creation of reliable datasets.
16
+
17
+ To address the challenges in developing FER models, we
18
+
19
+ ![](images/57b3b499570b6645c37fbb26452cda91047ebb839f952eed1e5ce496a3bcac05.jpg)
20
+ Figure 1. (a) Examples of synthetic facial expression data generated by our SynFER model, (b) Comparison of training paradigms: training with real-world data versus training with synthetic facial expression data and (c) Performance boost from SynFER generated Data in supervised, self-supervised, zero-shot, and few-shot (5-shot) learning tasks.
21
+
22
+ turn to explore the generation paradigm to synthesize high-quality facial expression images paired with reliable labels. This approach draws inspiration from successful strategies employed to expand annotated datasets for other computer vision tasks, such as semantic segmentation [4, 14, 39] and depth estimation [2, 15, 30]. These advances leverage powerful generative models such as Stable Diffusion [64] and DALL-E [6], which capture intricate natural image patterns. By tapping into these models, researchers have generated realistic images with their corresponding annotations, thereby boosting model performance. However, applying diffusion models to synthesize facial expression images with reliable FER labels presents two major challenges. (1) the training sets used by these generative models often lack diverse facial expression data, limiting their ability to produce images that capture subtle and nuanced emotional semantics; and (2) prior approaches to generate annotations for synthetic images focused on tangible attributes such as pixel-wise layouts, or depth maps. In contrast, facial expressions convey abstract and subjective emotions, making the generation of precise and reliable expression labels much more complex. To the best of our knowledge, none of the existing methods can simultaneously conduct fine-grained control for facial expression generation and generate robust categorical facial expression labels leveraging diffusion-based features.
23
+
24
+ In this paper, instead of a dataset, we present SynFER, a leading data synthesis pipeline capable of synthesizing unlimited and realistic facial expression images paired with reliable expression labels, to drive advancements in FER models. To address the shortcomings of existing FER datasets, which often lack expression-related text paired with facial images, we introduce FEText, a unique hybrid dataset created by curating and filtering data from existing FER and high-quality face datasets. This vision-language dataset serves as the foundation for training our generative model to synthesize facial expression data. To ensure fine-grained control and faithful generation of facial expression images, we inject facial action unit (FAU) information and semantic guidance from external pre-trained FER models. Building upon this, we propose FERAnno, the first diffusion-based label calibrator for FER, which automatically generates reliable annotations for the synthesized images. Together, these innovations position SynFER as a powerful tool for producing large-scale, high-quality facial expression data, offering a significant resource for the development of FER models.
27
+
28
+ We investigate the effectiveness of the synthetic data across various learning paradigms, demonstrating consistent improvements in model performance. As shown in Fig. 1(c), training with the synthetic data yields significant performance boosts in supervised, self-supervised, zero-shot, and few-shot learning. Notably, pre-training on the synthetic data improves the self-supervised learning model MoCo v3 [13] on AffectNet, surpassing real-world data pre-training. In supervised learning, SynFER improves accuracy by $+1.55\%$ for the state-of-the-art FER model, POSTER++ [56], on AffectNet. We further explore the performance scaling of the synthetic data, revealing further gains as dataset size increases. Our key contributions are:
29
+
30
+ - We introduce FEText, the first dataset of facial expression-related image-text pairs, providing a crucial resource for advancing FER tasks.
31
+ - We propose SynFER, the first diffusion-based data synthesis pipeline for FER, integrating FAU information and semantic guidance to achieve fine-grained control and faithful expression generation. Additionally, FERAnno, a novel diffusion-based label calibrator, is designed to automatically refine and enhance the annotations of synthesized facial expression images.
32
+ - Extensive experiments across six datasets and four learning paradigms demonstrate the effectiveness of the proposed SynFER, validating the quality and scalability of its synthesized output.
33
+
34
+ # 2. Related Work
35
+
36
+ Facial Expression Recognition (FER): Recent success in deep learning (DL) has largely boosted the performance of the FER task, despite the substantial data requirements for
37
+
38
+ training DL models. To address the limited training data in FER, previous methods mainly focus on developing different learning paradigms, including semi-supervised learning [16, 41, 81], transfer learning [45, 66] and multi-task learning [44, 52]. For example, Ada-CM [41] learns a confidence margin to make full use of the unlabeled facial expression data in a semi-supervised manner. Despite achieving performance gains for FER, these methods remain constrained by limited data. Recently, researchers have explored an alternative data-driven perspective of introducing large-scale face datasets from other facial analysis tasks (e.g., face recognition [84]). Meta-Face2Exp [84] utilizes large-scale face recognition data to enhance FER by matching the feature distribution between face recognition and FER. However, face data drawn from these datasets lack diverse facial expressions, and thereby couldn't fully unlock the potential of large-scale data in FER.
39
+
40
+ Synthetic Data: Recently, growing attention has been paid to the advanced generative models (e.g., Generative Adversarial Networks (GANs) [28] and Diffusion Models [65]), which are typically flexible to synthesize training images for a wider range of downstream tasks, including classification [3, 26], face recognition [7, 37], semantic segmentation [60, 74, 75] and human pose estimation [25, 90]. In particular, some studies pioneer the capabilities of powerful pre-trained diffusion generative models on natural images [46, 60, 75]. For example, DatasetDM [75] further introduces a generalized perception decoder to parse the rich latent space of the pre-trained diffusion model for various downstream tasks. Despite the growing adoption of diffusion models in synthetic data generation, their application to anatomically grounded facial expression synthesis remains critically underexplored. While recent diffusion-based methods like FineFace [72] and EmojiDiff [34] achieve visually compelling facial expression synthesis through AU-guided editing or reference-driven generation, their utility as synthetic training data for FER remains unverified. However, these methods prioritize perceptual quality over functional utility, lacking explicit mechanisms to align generated expressions with FER label semantics or to validate downstream task performance. In this paper, we attempt to investigate the potential and feasibility of synthetic images for FER tasks.
41
+
42
+ # 3. Preliminaries
43
+
44
+ Diffusion models include a forward process that adds Gaussian noise $\epsilon$ to convert a clean sample $x_0$ to noise sample $x_{T}$ , and a backward process that iteratively performs denoising from $x_{T}$ to $x_0$ , where $T$ represents the total number of timesteps. The forward process of injecting noise is:
45
+
46
+ $$
47
+ x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\, \epsilon \tag{1}
48
+ $$
49
+
50
+ $x_{t}$ is the noise feature at timestep $t$ and $\alpha_{t}$ is a predetermined hyperparameter for sampling $x_{t}$ with a given noise schedule [69]. In the backward denoising process, given input noise $x_{t}$ sampled from a Gaussian distribution, a learnable network $\epsilon_{\theta}$ estimates the noise at each timestep $t$ with condition $c$. The feature at the previous timestep, $x_{t-1}$, is:
53
+
54
+ $$
55
+ x_{t-1} = \frac{\sqrt{\alpha_{t-1}}}{\sqrt{\alpha_t}}\, x_t + \sqrt{\alpha_{t-1}} \left( \sqrt{\frac{1}{\alpha_{t-1}} - 1} - \sqrt{\frac{1}{\alpha_t} - 1} \right) \epsilon_{\theta}\left(x_t, t, c\right) \tag{2}
56
+ $$
57
+
58
+ During training, the noise estimation network $\epsilon_{\theta}$ is guided to conduct denoising with condition $c$ by the learning objective:
59
+
60
+ $$
61
+ \min_{\theta}\, \mathbb{E}_{x_0,\, \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),\, c,\, t}\, \left\| \epsilon - \epsilon_{\theta}(x_t, c, t) \right\|_2^2 \tag{3}
62
+ $$
63
+
64
+ With its powerful capability to model complex data distributions, the diffusion model serves as the foundation for generating high-quality FER data. Our SynFER framework is the pioneering work that explores the use of diffusion models to synthesize affective modalities.
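To make the preliminaries concrete, the following is a minimal PyTorch sketch of Eqs. (1)-(3); the linear beta schedule, the variable names, and the network signature `eps_net(x_t, t, c)` are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of Eqs. (1)-(3); schedule and signatures are assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas = torch.cumprod(1.0 - betas, dim=0)  # cumulative schedule alpha_t

def forward_noise(x0, t, eps):
    """Eq. (1): x_t = sqrt(alpha_t) x_0 + sqrt(1 - alpha_t) eps."""
    a_t = alphas[t]
    return a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps

def ddim_step(x_t, t, eps_pred):
    """Eq. (2): deterministic update from x_t to x_{t-1}."""
    a_t, a_prev = alphas[t], alphas[t - 1]
    scale = (1.0 / a_prev - 1.0) ** 0.5 - (1.0 / a_t - 1.0) ** 0.5
    return (a_prev.sqrt() / a_t.sqrt()) * x_t + a_prev.sqrt() * scale * eps_pred

def diffusion_loss(eps_net, x0, c):
    """Eq. (3): noise-prediction objective for a conditional denoiser."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    x_t = forward_noise(x0, t.view(-1, 1, 1, 1), eps)
    return ((eps - eps_net(x_t, t, c)) ** 2).mean()
```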
65
+
66
+ # 4. Methodology
67
+
68
+ We first introduce i) the overall synthetic pipeline for generating facial expression image-label pairs. Next, we detail ii) our approach for producing high-fidelity facial expression images, which are controlled through high-level text descriptions (Sec.4.2.1), fine-grained facial action units corresponding to localized facial muscles (Sec.4.2.2), and a semantic guidance technique (Sec.4.3). Finally, we introduce iii) the FER annotation coder (FERAnno), a crucial component that thoroughly understands the synthetic facial expression data and automatically generates accurate annotations accordingly (Sec.4.4). This pipeline ensures both precision and reliability in facial expression generation and labeling.
69
+
70
+ # 4.1. Overall Pipeline for FER Data Synthesis
71
+
72
+ We introduce the overall pipeline for FER data synthesis (Fig. 2). The process starts with a coarse human portrait description assigned to a specific facial expression. ChatGPT enriches this description with details such as facial appearance, subtle facial muscle movements, and contextual cues. Simultaneously, facial action unit annotations are generated based on prior FAU-FE knowledge [21], aligning them with emotion categories to serve as explicit control signals for guiding the facial expression image synthesis. Once the facial expression label, facial action unit labels, and expanded textual prompt are prepared, these inputs condition our diffusion model to generate high-fidelity FER images, with semantic guidance ensuring accurate FER semantics. During the denoising process, FERAnno automatically produces pseudo labels for the generated images. To further improve labeling accuracy, we ensemble FERAnno with existing FER models, which collaborate to vote on the accuracy of the predefined FER labels. In cases where discrepancies arise, the predefined label is replaced by averaging
73
+
74
+ ![](images/1ec87817656d60dfb8ec96cdf79e15aa7c3a0de73821871021c67ce093431597.jpg)
75
+ Figure 2. Overall pipeline of our FER data synthesis process.
76
+
77
+ the predictions from the ensemble experts. This mechanism effectively reduces the risk of inconsistent or uncertain annotations, ensuring that the final synthesized data is precise and dependable for downstream applications.
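The label-verification step described above can be sketched as follows; the agreement threshold, function names, and the seven-class setup are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch of ensemble label calibration: vote on the predefined label,
# and fall back to the averaged expert prediction on disagreement.
import numpy as np

def calibrate_label(predefined_label, expert_probs, agree_thresh=0.5):
    """expert_probs: list of per-expert probability vectors over 7 classes."""
    votes = [int(np.argmax(p)) for p in expert_probs]
    agreement = votes.count(predefined_label) / len(votes)
    if agreement >= agree_thresh:
        return predefined_label              # experts confirm the label
    mean_probs = np.mean(expert_probs, axis=0)
    return int(np.argmax(mean_probs))        # replace via averaged predictions
```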
78
+
79
+ # 4.2. Diffusion Model Training for FER Data
80
+
81
+ # 4.2.1. FEText Data Construction
82
+
83
+ To address the lack of facial expression image-text pairs for diffusion model training, we introduce FEText (Fig. 3), the first hybrid image-text dataset for FER. It combines face images from FFHQ [36], CelebA-HQ [35], AffectNet [58], and SFEW [18], each paired with captions generated by a multi-modal large language model (MLLM). FEText includes 400K curated pairs tailored for facial expression tasks.
84
+
85
+ Resolution Alignment: Due to variations in image resolution across different datasets, we first utilize a super-resolution model [48] to standardize the resolutions of images from AffectNet and SFEW. Specifically, we incorporate high-resolution images from FFHQ and CelebA-HQ datasets to preserve the model's capacity for high-fidelity image generation. This dual approach allows the model to not only maintain the fidelity of the generated images but also to learn and incorporate the facial expression semantics from AffectNet and SFEW.
86
+
87
+ Textual Caption Annotation: To generate a textual caption for each face image, we employ the open-source multi-modal language model ShareGPT-4V [11], guiding it with carefully crafted instructions. To ensure that the generated captions are both context-aware and expressive, we clearly define the model's role and provide examples of detailed facial expression descriptions within the prompts. This approach enables the model to generate precise, emotion-reflective captions for the input images.
88
+
89
+ With FEText obtained, we then fine-tune the diffusion model on it using the diffusion loss in Eq. 3. The detailed fine-tuning strategy can be found in the supplementary material.
90
+
91
+ # 4.2.2. Explicit Control Signals via Facial Action Units
92
+
93
+ While fine-tuning the diffusion model using facial expression captions provides general language-based guidance for facial expression generation, it lacks the precision needed to capture fine-grained facial details, such as localized muscle movements. To this end, we propose to incorporate more explicit control signals through Facial Action Units (FAUs), each of which represents a specific facial muscle movement. Inspired by IP-Adapter [80], we apply a decoupled cross-attention module to integrate FAU embeddings with the diffusion model's generation process. These embeddings are derived by mapping discrete FAU labels into high-dimensional representations using a Multi-Layer Perceptron, referred to as the AU adapter. FAU labels for each image in the FEText dataset are annotated using the widely adopted FAU detection model, OpenGraphAU [54]. With the diffusion model's parameters frozen, we train the AU adapter to guide the model in recovering facial images based on the annotated FAU labels, using the objective in Eq. 3.
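A rough sketch of this AU-adapter idea is given below; the 41-AU input size, token count, hidden width, and module layout are assumptions for illustration (the decoupled attention follows the IP-Adapter pattern of a frozen text branch plus a trainable AU branch).

```python
# Sketch of an AU adapter with IP-Adapter-style decoupled cross-attention.
import torch
import torch.nn as nn

class AUAdapter(nn.Module):
    """Maps binary FAU labels to a few high-dimensional condition tokens."""
    def __init__(self, num_aus=41, dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        self.mlp = nn.Sequential(
            nn.Linear(num_aus, dim * num_tokens),
            nn.GELU(),
            nn.Linear(dim * num_tokens, dim * num_tokens),
        )

    def forward(self, au_labels):            # au_labels: [B, num_aus] in {0, 1}
        tokens = self.mlp(au_labels.float())
        return tokens.view(au_labels.shape[0], self.num_tokens, -1)

class DecoupledCrossAttention(nn.Module):
    """Frozen text-attention branch plus a trainable AU-attention branch."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_au = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, c_text, c_au, au_scale=1.0):
        out_text, _ = self.attn_text(x, c_text, c_text)   # kept frozen
        out_au, _ = self.attn_au(x, c_au, c_au)           # trained on FEText
        return out_text + au_scale * out_au
```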
94
+
95
+ # 4.3. Semantic Guidance for Precise Expression Control
96
+
97
+ Due to the imbalanced distribution of FER labels in the training data and the potential ambiguity between certain facial expressions [88], such as disgust, relying solely on textual and FAU conditions might not guarantee the faithful generation of these expressions. To address this issue, we propose incorporating semantic guidance on the textual embeddings $c^{\mathrm{text}}$ during the later stages of the denoising process. We leverage external knowledge from open-source FER models to steer the generation process, ensuring a more accurate and faithful synthesis of hard-to-distinguish facial expressions.
98
+
99
+ Layout Initialization: During inference, we select a random face image $x^{s}$ from FEText and invert it to initialize the noise sample $x_{T}^{s}$ (Eq. 1). Since early diffusion stages shape the global layout of the image [55, 61, 87], this strategy helps preserve the natural facial structure, ensuring the
100
+
101
+ ![](images/d2908e1f25b602571aafddef6d20d7a728b065be5030b4623d5934ad30985542.jpg)
102
+ Image-text pair in LAION-5B
103
+
104
+ Becoming More Than a Good Bible Study Girl: Living the Faith after Bible Class Is Over...
105
+
106
+ ![](images/a0ff6f5481039ec1e3b220f82f37094be87a8a84ce2297689f6dac12e4659888.jpg)
107
+ Image-text pair in FEText
108
+
109
+ The facial expression is one of intense anger or rage. The furrowed brow, narrowed eyes, wrinkled nose, and open mouth with exposed teeth all convey a sense of aggression and hostility...
110
+
111
+ ![](images/264d1c1db0d1806d0c012fdb67cad7f5860bd8ff5f20c602e650e69deb0a490a.jpg)
112
+ Raw image
113
+
114
+ ![](images/fe070ad7731e9d221f06aeb1f7d1cca5d8ff05fd28875779e278244bb1e1ff8d.jpg)
115
+ 1. Resolution Alignment
116
+
117
+ ![](images/4ab056ce8f2ba10ec896dac70d5c3135106da3aeda014bb78f2e6daa16c1390e.jpg)
118
+ FEText Construction Pipeline
119
+ Super-resolution image
120
+
121
+ User: <Super-resolution Image> Act as an affective computing expert. State the facial expression type and facial emotional affective features (eyes, nose, cheek, eyebrow, mouth) information and gaze. There are only seven types of facial expression Happy, Sad, Neutral, Surprised, Fear, Angry, Disgusted.
122
+
123
+ ![](images/4d2a96feb8bb1c574bf3c4890807ff607b5e5118c99a29a01200bbe6a437a00f.jpg)
124
+ 2.Textual Caption Annotation
125
+ ShareGPT4V
126
+
127
+ ![](images/2aca9e370bf723c6a15e67ad6de6b4c1d4ca62fe95eb19ebec9b5c3c571e71a7.jpg)
128
+
129
+ ![](images/4bfb9373db325891c830871b755655d3caca447b5eb877bea2c530b9fe22d8bc.jpg)
130
+ Textual caption
131
+
132
+ ShareGPT4V: The facial expression is one of intense anger or rage. The furrowed brow, narrowed eyes, wrinkled nose, and open mouth with exposed teeth all convey a sense of aggression and hostility. The raised upper lip and stretched lips further emphasize the intensity of the emotion.
133
+
134
+ generated images are coherent, high-quality, and visually consistent with real-world expressions.
135
+
136
+ Semantic Guidance: In the early steps of the diffusion process, generation is conditioned on the original textual condition $c^{\mathrm{text}}$. To further induce the generation of facial expression images corresponding to their FER labels $y$, we iteratively update the textual condition in the subsequent timesteps. Specifically, a facial expression classifier $f(\cdot)$ is utilized for the injection of complex semantics. To guide the generated images towards the specific class $y$, we update the textual embeddings. Given an intermediate denoised sample $x_{t}$ at timestep $t$, following Eq. 15 in DDPM [32], we first estimate the one-step prediction of the original image $\hat{x}_0$ as:
137
+
138
+ $$
139
+ \hat{x}_0 = \left( x_t - \sqrt{1 - \bar{\alpha}_t}\, \epsilon_{\theta}\left(x_t, t, c^{\text{text}}, c^{\text{au}}\right) \right) / \sqrt{\bar{\alpha}_t} \tag{4}
140
+ $$
141
+
142
+ We then calculate the classification loss with:
143
+
144
+ $$
145
+ \mathcal{L}_g = -\, y \log\left( h\left( f(\hat{x}_0) \right)_i \right) \tag{5}
146
+ $$
147
+
148
+ Given the guidance loss $\mathcal{L}_g$ , the textual embedding is updated with the corresponding gradient:
149
+
150
+ $$
151
+ c_{t-1}^{\text{text}} = c_t^{\text{text}} + \lambda_g \frac{\nabla_{c_t^{\text{text}}} \mathcal{L}_g}{\left\| \nabla_{c_t^{\text{text}}} \mathcal{L}_g \right\|_2} \tag{6}
152
+ $$
153
+
154
+ where $\lambda_{g}$ and $c_{t-1}^{\text{text}}$ denote the step size and the updated textual embedding at timestep $t - 1$, respectively. In the later steps of the diffusion process, the noise estimator network $\epsilon_{\theta}$ is conditioned on the updated textual embeddings rather than the original ones.
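A compact sketch of the guidance loop in Eqs. (4)-(6) follows; `eps_net`, `classifier`, and `lambda_g` are stand-ins for the components named in the text, and the update is written in the loss-decreasing direction of the normalized gradient.

```python
# Sketch of one semantic-guidance update on the textual embedding.
import torch
import torch.nn.functional as F

def semantic_guidance_step(x_t, t, c_text, c_au, eps_net, classifier,
                           alphas, y, lambda_g=0.1):
    c_text = c_text.detach().requires_grad_(True)
    a_t = alphas[t]
    eps = eps_net(x_t, t, c_text, c_au)
    x0_hat = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()   # Eq. (4)
    loss = F.cross_entropy(classifier(x0_hat), y)            # Eq. (5)
    grad = torch.autograd.grad(loss, c_text)[0]              # Eq. (6)
    return (c_text - lambda_g * grad / grad.norm(p=2)).detach()
```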
155
+
156
+ # 4.4. Diffusion-based Label Calibrator (FERAnno)
157
+
158
+ To ensure semantic alignment between each synthesized face image and its assigned facial expression label, we introduce FERAnno, a label calibration framework designed to validate the consistency of the generated data. By analyzing the facial patterns of each synthesized image, FERAnno categorizes them and compares the post-categorized
159
+
160
+ ![](images/6b848f1359710ad5522169545f145695de51d621e693d46f82a2c2754e406732.jpg)
161
+ Figure 3. Overview of our FEText data construction pipeline.
162
+ Figure 4. Overview of our FERAnno pseudo-label generator.
163
+
164
+ labels with their pre-assigned facial expression labels. This verification process helps identify and filter out samples with mismatched labels, preventing them from negatively impacting downstream FER model training. Specifically, FERAnno is a diffusion-based label calibrator equipped with a deep understanding of facial semantics. It leverages the multi-scale intermediate features and cross-attention maps inherent in the diffusion model to predict accurate FER labels, as depicted in Fig. 4. This ensures only high-quality, correctly labeled samples are included in the training pipeline, leading to more reliable model performance.
165
+
166
+ Image Inversion: To extract facial features and cross-attention maps with the diffusion model $\epsilon_{\theta}$, we first invert the generated image $x_0$ back to the noise sample $x_{t}$ at a denoising timestep $t$, following a predefined scheduler, as described in Eq. 1. To preserve facial details, we set $t = 1$ during the inversion process, ensuring that the facial features remain as close as possible to the original generated image $x_0$. This partially denoised sample is then passed through the trained denoising network, allowing us to extract rich facial features and cross-attention maps from intermediate layers, which are critical for capturing detailed facial patterns.
167
+
168
+ Feature Extraction: Given the inverted noise sample $x_{1}$ and the corresponding textual condition $c^{\mathrm{text}}$ and AU condition $c^{\mathrm{au}}$, we can extract the multi-scale feature representations and textual cross-attention maps from the U-Net $\epsilon_{\theta}$ as $\{\mathcal{F},\mathcal{A}\} = \epsilon_{\theta}(x_1,t_1,c^{\mathrm{text}},c^{\mathrm{au}})$, where $\mathcal{F}$ and $\mathcal{A}$ denote the multi-scale feature representations and the cross-attention maps, respectively. $\mathcal{F}$ contains multi-scale feature maps from different layers of the U-Net $\epsilon_{\theta}$ with four different resolutions. $\mathcal{A}$ contains the cross-attention maps drawn from the 16 cross-attention blocks in $\epsilon_{\theta}$. Both the feature representation $\mathcal{F}$ and the cross-attention maps $\mathcal{A}$ are regrouped according to their resolutions.
171
+
172
+ Multi-scale Features and Attention Maps Fusion: Given that the multi-scale feature maps $\mathcal{F}$ capture global information essential for image generation, and the cross-attention maps provide class-discriminative information as well as relationships between object locations [9, 70], FER-Anno fuses both features and attention maps within a dual-branch encoder architecture for pseudo-label annotation. An overview of this architecture is shown in Fig. 4. We first compute the mean of the regrouped attention maps, denoted as $\mathcal{A}_{\mathrm{reg}}$ , yielding a set of averaged attention maps $\bar{\mathcal{A}}$ . Both the feature maps $\mathcal{F}$ and the averaged attention maps $\bar{\mathcal{A}}$ are then passed through a residual convolution block to prepare them for further processing. To effectively integrate information at different scales, we introduce a bidirectional cross-attention block to fuse the features and attention maps. $1\times 1$ convolutions are employed at various stages to adapt the fusion across multiple resolution layers. Finally, the fused feature maps and attention maps are concatenated and passed through a linear layer, which outputs a probability vector for predicting facial expression classes.
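A simplified single-resolution sketch of this dual-branch fusion is shown below; the channel widths and the pooling/readout are assumptions, and the multi-resolution regrouping with $1\times 1$ convolutions described above is omitted for brevity.

```python
# Sketch of bidirectional cross-attention fusion of U-Net features (F)
# and averaged cross-attention maps (A) for FER pseudo-labeling.
import torch
import torch.nn as nn

class BiCrossAttnFusion(nn.Module):
    def __init__(self, feat_ch=320, attn_ch=8, dim=256, heads=4, num_classes=7):
        super().__init__()
        self.proj_f = nn.Conv2d(feat_ch, dim, 1)   # 1x1 conv to a shared width
        self.proj_a = nn.Conv2d(attn_ch, dim, 1)
        self.f2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a2f = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, feats, attns):
        f = self.proj_f(feats).flatten(2).transpose(1, 2)   # [B, HW, dim]
        a = self.proj_a(attns).flatten(2).transpose(1, 2)
        f2, _ = self.f2a(f, a, a)      # features attend to attention maps
        a2, _ = self.a2f(a, f, f)      # attention maps attend to features
        pooled = torch.cat([f2.mean(dim=1), a2.mean(dim=1)], dim=-1)
        return self.head(pooled)       # logits over facial expression classes
```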
173
+
174
+ # 5. Experiments
175
+
176
+ We conduct extensive experiments to evaluate both the generation quality of our synthetic data (Sec. 5.1) and its effectiveness in FER tasks (Sec. 5.2). Implementation details are in the appendix.
177
+
178
+ Evaluation metrics: In the experimental evaluation, we employ both objective metrics and user studies to comprehensively assess the generation quality. For synthetic image quality, we utilize Fréchet Inception Distance (FID) [82] to measure distribution similarity with real data, FaceScore (FS) [47] for facial quality assessment, and Human Preference Score v2 (HPSv2) [76] with Multi-dimensional Preference Score (MPS) [85] for human perception evaluation. Facial expression accuracy is quantified using pre-trained classifiers and Facial Action Unit Accuracy (FAU Acc.) via AU detection models. For the subjective user study, subjects are asked to select the image with better expression alignment and face fidelity from pairs of synthetic images generated by SynFER and other baselines. Downstream task performance is evaluated through linear probing accuracy in self-supervised learning, classification accuracy in supervised settings, and both Weighted Average Recall (WAR) and Unweighted Average Recall (UAR) for zero-shot recognition. Few-shot learning capabilities are measured using standard n-way k-shot protocols across compound expression datasets.
181
+
182
+ # 5.1. Generation Quality
183
+
184
+ We present both objective metrics and subjective user studies, comparing our method to state-of-the-art (SOTA) diffusion models [10, 40, 64] and the latest facial expression generation technique, FineFace [72]. We compute FID between the synthesized images and the test set of AffectNet [58]. Tab. 1 shows that our method outperforms popular diffusion models and the SOTA facial expression generation method FineFace [72] across all metrics of image quality, human preference and facial expression accuracy. Notably, the advantages of SynFER in both FER Acc. and FAU Acc. indicate its outstanding controllability in facial expression generation.
185
+
186
+ # 5.2. Effectiveness of Synthetic Dataset
187
+
188
+ Self-supervised Representation Learning: We train self-supervised learning (SSL) models, including BYOL [29], MoCo v3 [13], and SimCLR [12], using real-world data, our synthetic data, and a combination of both. The linear probe performances are evaluated on three widely used facial expression recognition (FER) datasets: RAF-DB [43], AffectNet [58], and SFEW [18], with results reported in Tab. 2. All SSL models are trained with a ResNet-50 architecture [31]. Notably, SOTA self-supervised facial representation learning methods, such as MCF [73], FRA [27], and PCL [51], are pre-trained on much larger face datasets like LAION-Face [89], VGGFace2 [8], and VoxCeleb [59]. However, these models underperform on FER tasks compared to ours, highlighting that existing large-scale face datasets may lack the high-quality and diverse facial expression patterns required for accurate FER. Results show that combining real-world and synthetic data consistently boosts SSL baselines. Remarkably, even when MoCo v3 is trained solely on our synthetic data, it achieves a $2.12\%$ improvement on RAF-DB, underscoring the effectiveness of our approach in capturing critical facial expression details that are essential for FER.
189
+
190
+ Supervised Representation Learning: We validate the effectiveness of SynFER for supervised representation learning by evaluating its performance on RAF-DB and AffectNet (Tab. 3). We compare with SOTA FER models, including Ada-DF [50], POSTER++ [56], and APViT [77]. The results demonstrate that incorporating synthetic data consistently enhances both baseline models and the latest SOTAs in supervised facial expression recognition. Notably, APViT benefits from the synthetic data with improvements of $0.27\%$ on RAF-DB and $0.32\%$ on AffectNet. While the improvements in supervised learning are more modest compared to self-supervised learning, they remain consistent across datasets.
191
+
192
+ <table><tr><td rowspan="2">Method</td><td colspan="6">Objective Metrics</td><td colspan="2">User study (Ours vs. )(%)</td></tr><tr><td>FID (↓)</td><td>HPSv2(↑)</td><td>FS(↑)</td><td>MPS (↓)</td><td>FER Acc.(↑)</td><td>FAU Acc.(↑)</td><td>EA (↑)</td><td>FF (↑)</td></tr><tr><td>Stable Diffusion</td><td>88.40</td><td>0.263</td><td>2.01</td><td>2.00</td><td>20.06</td><td>87.72</td><td>2.86</td><td>1.79</td></tr><tr><td>PixelArt</td><td>145.23</td><td>0.271</td><td>3.79</td><td>5.26</td><td>15.52</td><td>84.57</td><td>24.26</td><td>10.00</td></tr><tr><td>PlayGround</td><td>81.76</td><td>0.265</td><td>2.86</td><td>3.73</td><td>21.56</td><td>87.28</td><td>7.50</td><td>5.00</td></tr><tr><td>FineFace</td><td>74.61</td><td>0.268</td><td>3.29</td><td>1.48</td><td>38.05</td><td>89.68</td><td>5.73</td><td>6.41</td></tr><tr><td>SynFER</td><td>16.32</td><td>0.280</td><td>4.26</td><td>0.50</td><td>55.14</td><td>93.31</td><td>59.64</td><td>76.79</td></tr></table>
193
+
194
+ Table 1. Generation quality comparisons. 'Ours vs.' shows the proportion of users who prefer our method over the alternative. An MPS above 1.00 and results above $50\%$ in the user study indicate SynFER outperforms the counterpart. FS, FER Acc., FAU Acc., EA and FF denote FaceScore [47], FER accuracy, facial action unit accuracy, expression alignment and face fidelity, respectively.
195
+
196
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Pre-train Data</td><td rowspan="2">RAF-DB</td><td rowspan="2">AffectNet</td><td rowspan="2">SFEW</td></tr><tr><td>Dataset</td><td>Scale</td></tr><tr><td>MCF</td><td>Laion-Face</td><td>20M</td><td>65.22</td><td>-</td><td>32.61</td></tr><tr><td>FRA</td><td>VGGFace2</td><td>3.3M</td><td>73.89</td><td>57.38</td><td>-</td></tr><tr><td>PCL</td><td>VoxCeleb</td><td>1.8M</td><td>74.47</td><td>-</td><td>39.68</td></tr><tr><td>SimCLR</td><td>AffectNet</td><td>0.2M</td><td>78.65</td><td>48.36</td><td>46.79</td></tr><tr><td>SimCLR</td><td>Ours</td><td>1.0M</td><td>80.24 (+1.59)</td><td>52.05 (+3.69)</td><td>47.62 (+0.83)</td></tr><tr><td>SimCLR</td><td>AffectNet+Ours</td><td>1.2M</td><td>81.52 (+2.87)</td><td>54.37 (+6.01)</td><td>48.52 (+1.73)</td></tr><tr><td>BYOL</td><td>AffectNet</td><td>0.2M</td><td>78.24</td><td>50.04</td><td>48.70</td></tr><tr><td>BYOL</td><td>Ours</td><td>1.0M</td><td>80.96 (+2.72)</td><td>53.13 (+3.09)</td><td>51.35 (+2.65)</td></tr><tr><td>BYOL</td><td>AffectNet+Ours</td><td>1.2M</td><td>81.25 (+3.01)</td><td>54.95 (+4.91)</td><td>51.70 (+3.00)</td></tr><tr><td>MoCo v3</td><td>AffectNet</td><td>0.2M</td><td>79.05</td><td>51.03</td><td>49.34</td></tr><tr><td>MoCo v3</td><td>Ours</td><td>1.0M</td><td>81.17 (+2.12)</td><td>55.56(+4.53)</td><td>50.78 (+1.44)</td></tr><tr><td>MoCo v3</td><td>AffectNet+Ours</td><td>1.2M</td><td>81.68 (+2.63)</td><td>57.84 (+6.81)</td><td>51.26 (+1.92)</td></tr></table>
197
+
198
+ Table 2. Linear probe performance comparisons of SSL models on three FER datasets.
199
+
200
+ <table><tr><td>Method</td><td>RAF-DB</td><td>AffectNet</td></tr><tr><td>ResNet-18</td><td>87.48</td><td>50.32</td></tr><tr><td>ResNet-18 + Ours</td><td>87.97</td><td>51.65</td></tr><tr><td>Ada-DF</td><td>90.94</td><td>65.34</td></tr><tr><td>Ada-DF + Ours</td><td>91.21</td><td>66.82</td></tr><tr><td>POSTER++</td><td>91.59</td><td>67.49</td></tr><tr><td>POSTER++ + Ours</td><td>91.95</td><td>69.04</td></tr><tr><td>APViT</td><td>91.78</td><td>66.94</td></tr><tr><td>APViT + Ours</td><td>92.05</td><td>67.26</td></tr><tr><td>FERAnno</td><td>92.56</td><td>70.38</td></tr></table>
201
+
202
+ Table 3. Comparison of supervised learning models (with and without our synthetic data) and the label calibrator FERAnno.
203
+
204
+ <table><tr><td rowspan="2">Method</td><td colspan="2">CFEE_C</td><td colspan="2">EmotionNet_C</td><td colspan="2">RAF_C</td></tr><tr><td>1-shot</td><td>5-shot</td><td>1-shot</td><td>5-shot</td><td>1-shot</td><td>5-shot</td></tr><tr><td>InfoPatch</td><td>54.19</td><td>67.29</td><td>48.14</td><td>59.84</td><td>41.02</td><td>57.98</td></tr><tr><td>InfoPatch*</td><td>55.21</td><td>68.73</td><td>48.52</td><td>61.16</td><td>41.88</td><td>59.54</td></tr><tr><td>LR+DC</td><td>53.20</td><td>64.18</td><td>52.09</td><td>60.12</td><td>42.90</td><td>56.74</td></tr><tr><td>LR+DC*</td><td>54.65</td><td>65.28</td><td>51.96</td><td>60.14</td><td>43.87</td><td>57.90</td></tr><tr><td>STARTUP</td><td>54.89</td><td>67.79</td><td>52.61</td><td>61.95</td><td>43.97</td><td>59.14</td></tr><tr><td>STARTUP*</td><td>56.25</td><td>69.93</td><td>52.87</td><td>62.12</td><td>45.18</td><td>61.23</td></tr><tr><td>CDNet</td><td>56.99</td><td>68.98</td><td>55.16</td><td>63.03</td><td>46.07</td><td>63.03</td></tr><tr><td>CDNet*</td><td>57.74</td><td>70.64</td><td>56.79</td><td>65.63</td><td>46.97</td><td>64.34</td></tr></table>
205
+
206
+ Table 4. Comparisons with SOTA few-shot learning methods on 5-way few-shot FER tasks with a $95\%$ confidence interval. (*) indicates training with both real-world and synthetic data.
207
+
208
+ This is likely due to the stricter distribution alignment required in supervised learning between synthetic training data and real-world test data. In the following section on scaling behavior analysis, we provide further insights, showcasing the use of the distribution alignment technique, Real-Fake [83], to alleviate this problem.
209
+
210
+ Few-shot Learning: We explore the potential of synthetic
211
+
212
+ ![](images/2ab43c6fd7ad0dba9dc83876f51dcdf41c6e7ff0bc5f7891aef4eb79439cf46f.jpg)
213
+ Figure 5. Examples of generated samples.
214
+
215
+ data to enhance few-shot learning, as presented in Tab. 4. Following the protocol from CDNet [92], we train models on five basic expression datasets and evaluate on three compound expression datasets: CFEE_C [20], EmotionNet_C [22], and RAF_C [43]. We compare it against SOTA few-shot learning methods, including InfoPatch [49], LR+DC [79], and STARTUP [62]. The results clearly demonstrate that integrating synthetic data consistently enhances few-shot FER performance across key metrics. This highlights the potential of synthetic data in data-limited scenarios, allowing models to better generalize to complex, real-world expressions in few-shot tasks.
216
+
217
+ # 5.3. Ablation Study
218
+
219
+ Effectiveness of FAU Control: Fig. 5 shows that samples generated with FAU control (third column) exhibit facial expressions that more accurately match their assigned labels compared to those generated with only text guidance (second column). For example, the 'fear' expression, driven by FAUs like Inner Brow Raiser and Lip Stretcher, becomes more distinct (third column, second row), making it easier to differentiate from other emotions such as 'surprise.' Similarly, 'disgust' is more pronounced with FAUs like Lid Tightener. Without FAU control, facial expressions (second column) tend to blur, as different categories show overlapping features. Quantitative results in Tab. 5 highlight the impact of FAU control: FER accuracy increases from $34.62\%$ to $48.74\%$, and FAU detection accuracy rises from $88.91\%$ to $92.37\%$. This also translates into improved downstream performance on RAF-DB and AffectNet.
220
+
221
+ Effectiveness of Semantic Guidance: We further explore the impact of semantic guidance (SG) on both generation quality and supervised representation learning (Fig. 5 and Tab. 5). By updating text embeddings to better align with
222
+
223
+ ![](images/62a0726a8cb668b5253efb288b46bd217721a730a8c50e0d8a15d982e5252e04.jpg)
224
+ (a) Synthetic Data Scaling in Self-Supervised Learning
225
+
226
+ ![](images/dae795f7996288d835d05cf2d76f0427f686093e49055e5d64ed7d364ca5b046.jpg)
227
+ (b) Synthetic Data Scaling in Supervised Learning
228
+ Figure 6. Scaling up the synthetic dataset with MoCo v3 (ResNet-50) [13], and linear probe performance is evaluated on AffectNet and RAF-DB. The SOTA FER model, POSTER++ [56], is trained using supervised learning (with and without the Real-Fake technique [83]) on our synthetic dataset and evaluated on the same two target FER datasets. $\star$ denotes the model's performance when trained on the corresponding real data.
229
+
230
+ ![](images/005bd8b1dda0a8c0c9bae5b6ddc24c97943d826cd9e2d92d74d193affead9b3c.jpg)
231
+ (c) Synthetic Data Scaling in Supervised Learning (with Real-Fake technique [83])
232
+
233
+ <table><tr><td>Method</td><td>HPSv2</td><td>FE Acc.</td><td>AU Acc.</td><td>RAF-DB</td><td>AffectNet</td></tr><tr><td>Real-world Data</td><td>-</td><td>-</td><td>-</td><td>91.59</td><td>67.49</td></tr><tr><td>SD</td><td>0.263</td><td>20.06</td><td>87.72</td><td>89.42</td><td>65.36</td></tr><tr><td>w/ FEText</td><td>0.267</td><td>34.62</td><td>88.91</td><td>90.54</td><td>66.62</td></tr><tr><td>w/ FEText+FAUs</td><td>0.275</td><td>48.74</td><td>92.37</td><td>91.68</td><td>67.68</td></tr><tr><td>w/ FEText+FAUs+SG</td><td>0.280</td><td>55.14</td><td>93.31</td><td>91.95</td><td>68.13</td></tr></table>
234
+
235
+ Table 5. Ablation study on AU injection and semantic guidance (SG) on both generation quality and supervised learning. SD denotes Stable Diffusion [65], which is used as a baseline.
236
+
237
+ the target facial expression category, SG improves the accuracy of the generated expressions by $6.4\%$ , compared to static text and FAUs. The samples in the last column of Fig. 5 show more exaggerated facial expressions than those in the third column, with SG enhancing the intensity.
238
+
239
+ Reliability of FERAnno: We assess the reliability of FERAnno as a label calibrator by evaluating its performance on two FER datasets and visualizing its attention maps in Tab. 3 and Fig. 7. FERAnno outperforms the second-best FER models by $+0.51\%$ on RAF-DB and $+1.34\%$ on AffectNet. The attention maps (Fig. 7) further show FERAnno's ability to accurately locate facial expression-related patterns, e.g., jaw drops and furrowed eyebrows, highlighting its semantic understanding.
240
+
241
+ ![](images/cf79ee6ef5b192258f8a947f7f69914624b56b6cd6106c7fe57909de570ba2d5.jpg)
242
+ Figure 7. Synthesis images and attention maps in the fine-tuned diffusion model.
243
+
244
+ Synthetic Data Scaling Analysis: Following [23, 71], we investigate the scaling behavior of synthetic data in both self-supervised and supervised learning paradigms. We train models exclusively on synthetic images, without combining real-world data. The results in Fig. 6 (a)-(b) show a stronger scaling effect in self-supervised learning compared to supervised learning, where performance improves significantly with more data. While SynFER focuses on addressing FER data scarcity, aligning the synthetic data distribution with real-world data is crucial for supervised tasks. To further explore this, we apply the Real-Fake technique [83] for real and synthetic data distribution alignment, and present the results in Fig. 6 (c). Compared to standard supervised learning, Real-Fake demonstrates a clear performance boost, which is likely due to the need for better distribution alignment in supervised learning [83].
247
+
248
+ # 6. Conclusion
249
+
250
+ We propose a synthetic data framework, SynFER, for facial expression recognition to address the data shortage in the field. We introduce the first facial expression-related image-text pair dataset, FEText. We inject facial action unit information and external knowledge from existing FER models to ensure fine-grained control and faithful generation of the facial expression images. To incorporate the generated images into training, we propose a diffusion-based label calibrator to help rectify the annotations for the synthesized images. After constructing the data synthesis pipeline, we show the effectiveness of the synthetic data across different learning paradigms. Limitations of this work are discussed in detail in the supplementary material.
251
+
252
+ # 7. Acknowledgments
253
+
254
+ The work was supported by the National Natural Science Foundation of China under grants no. 62276170, 82261138629, 62306061, the Science and Technology Project of Guangdong Province under grants no. 2023A1515010688, the Science and Technology Innovation Commission of Shenzhen under grant no. JCYJ20220531101412030, Open Research Fund from Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) under Grant No. GML-KF-24-11, and Guangdong Provincial Key Laboratory under grant no. 2023B1212060076. The research reported in this publication was supported by funding from King Abdullah University of Science and Technology - Center
255
+
256
+ of Excellence for Generative AI, under award number 5940.
257
+
258
+ # References
259
+
260
+ [1] Faiza Abdat, Choubeila Maaoui, and Alain Pruski. Humancomputer interaction using emotion recognition from facial expression. In 2011 UKSim 5th European Symposium on Computer Modeling and Simulation, pages 196-201. IEEE, 2011. 1
261
+ [2] Amir Atapour-Abarghouei and Toby P Breckon. Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2800-2810, 2018. 2
262
+ [3] Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J Fleet. Synthetic data from diffusion models improves imagenet classification. arXiv preprint arXiv:2304.08466, 2023. 3
263
+ [4] Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126, 2021. 2
264
+ [5] Emad Barsoum, Cha Zhang, Cristian Canton Ferrer, and Zhengyou Zhang. Training deep networks for facial expression recognition with crowd-sourced label distribution. In Proceedings of the 18th ACM international conference on multimodal interaction, pages 279-283, 2016. 1
265
+ [6] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf, 2(3):8, 2023. 2
266
+ [7] Fadi Boutros, Jonas Henry Grebe, Arjan Kuijper, and Naser Damer. Idiff-face: Synthetic-based face recognition through fizzy identity-conditioned diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19650-19661, 2023. 3
267
+ [8] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67-74. IEEE, 2018. 6
268
+ [9] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650-9660, 2021. 6
269
+ [10] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 6
270
+ [11] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793, 2023. 4
271
+ [1] Faiza Abdat, Choubeila Maaoui, and Alain Pruski. Humancomputer interaction using emotion recognition from facial expression. In 2011 UKSim 5th European Symposium on Computer Modeling and Simulation, pages 196-201. IEEE, 2011. 1
272
+ [2] Amir Atapour-Abarghouei and Toby P Breckon. Real-time monocular depth estimation using synthetic data with domain adaptation via image style transfer. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2800–2810, 2018. 2
273
+ [3] Shekoofeh Azizi, Simon Kornblith, Chitwan Sahara, Mohammad Norouzi, and David J Fleet. Synthetic data from diffusion models improves imagenet classification. arXiv preprint arXiv:2304.08466, 2023. 3
274
+ [4] Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126, 2021. 2
275
+ [5] Emad Barsoum, Cha Zhang, Cristian Canton Ferrer, and Zhengyou Zhang. Training deep networks for facial expression recognition with crowd-sourced label distribution. In Proceedings of the 18th ACM international conference on multimodal interaction, pages 279-283, 2016. 1
276
+ [6] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openaia.com/papers/dall-e-3.pdf, 2(3):8, 2023. 2
277
+ [7] Fadi Boutros, Jonas Henry Grebe, Arjan Kuijper, and Naser Damer. Idiff-face: Synthetic-based face recognition through fizzy identity-conditioned diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19650-19661, 2023. 3
278
+ [8] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67-74. IEEE, 2018. 6
279
+ [9] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650-9660, 2021. 6
280
+ [10] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-alpha: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023. 6
281
+ [11] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793, 2023. 4
282
+
283
+ [12] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 6
284
+
285
+ [13] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9640-9649, 2021. 2, 6, 8
286
+
287
+ [14] Yuhua Chen, Wen Li, Xiaoran Chen, and Luc Van Gool. Learning semantic segmentation from synthetic data: A geometrically guided input-output adaptation approach. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1841-1850, 2019. 2
288
+
289
+ [15] Bin Cheng, Inderjot Singh Saggu, Raunak Shah, Gaurav Bansal, and Dinesh Bharadia. S3net: Semantic-aware self-supervised depth estimation with monocular videos and synthetic data. In European Conference on Computer Vision, pages 52-69. Springer, 2020. 2
290
+
291
+ [16] Yunseong Cho, Chanwoo Kim, Hoseong Cho, Yunhoe Ku, Eunseo Kim, Muhammadjon Boboev, Joonseok Lee, and Seungryul Baek. Rmfer: Semi-supervised contrastive learning for facial expression recognition with reaction mashup video. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5913-5922, 2024. 3
292
+
293
+ [17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1
294
+
295
+ [18] Abhinav Dhall, Roland Goecke, Simon Lucey, and Tom Gedeon. Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark. In 2011 IEEE international conference on computer vision workshops (ICCV workshops), pages 2106-2112. IEEE, 2011. 1, 4, 6
296
+
297
+ [19] Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, and Tom Gedeon. From individual to group-level emotion recognition: Emotiw 5.0. In Proceedings of the 19th ACM international conference on multimodal interaction, pages 524-528, 2017. 1
298
+
299
+ [20] Shichuan Du, Yong Tao, and Aleix M Martinez. Compound facial expressions of emotion. Proceedings of the national academy of sciences, 111(15):E1454-E1462, 2014. 7
300
+
301
+ [21] Paul Ekman and Wallace V Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978. 3
302
+
303
+ [22] C Fabian Benitez-Quiroz, Ramprakash Srinivasan, and Aleix M Martinez. Emotionet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5562-5570, 2016. 7
304
+
305
+ [23] Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, and Yonglong Tian. Scaling laws of synthetic images for model training ... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7382-7392, 2024. 8
306
+
307
+ [24] Amir Hossein Farzaneh and Xiaojun Qi. Facial expression recognition in the wild via deep attentive center loss. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2402-2411, 2021. 1
310
+ [25] Runyang Feng, Yixing Gao, Tze Ho Elden Tse, Xueqing Ma, and Hyung Jin Chang. Diffpose: Spatiotemporal diffusion model for video-based human pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14861-14872, 2023. 3
311
+ [26] Maayan Frid-Adar, Eyal Klang, Michal Amitai, Jacob Goldberger, and Hayit Greenspan. Synthetic data augmentation using gan for improved liver lesion classification. In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018), pages 289–293. IEEE, 2018. 3
312
+ [27] Zheng Gao and Ioannis Patras. Self-supervised facial representation learning with facial region awareness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2081-2092, 2024. 6
313
+ [28] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020. 3
314
+ [29] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020. 6
315
+ [30] Vitor Guizilini, Kuan-Hui Lee, Rares Ambrus, and Adrien Gaidon. Learning optical flow, depth, and scene flow without real-world labels. IEEE Robotics and Automation Letters, 7 (2):3491-3498, 2022. 2
316
+ [31] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6
317
+ [32] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 5
318
+ [33] Yuhao Huang, Jay Gopal, Bina Kakusa, Alice H Li, Weichen Huang, Jeffrey B Wang, Amit Persad, Ashwin Ramayya, Josef Parvizi, Vivek P Buch, et al. Naturalistic acute pain states decoded from neural and facial dynamics. bioRxiv, 2024. 1
319
+ [34] Liangwei Jiang, Ruida Li, Zhifeng Zhang, Shuo Fang, and Chenguang Ma. Emojidiff: Advanced facial expression control with high identity preservation in portrait generation. arXiv preprint arXiv:2412.01254, 2024. 3
320
+ [35] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. 4
321
+ [36] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401-4410, 2019. 4
322
+ [37] Minchul Kim, Feng Liu, Anil Jain, and Xiaoming Liu. Dcface: Synthetic face generation with dual condition diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12715-12725, 2023. 3
323
+
324
+ [38] Nhat Le, Khanh Nguyen, Quang Tran, Erman Tjiputra, Bac Le, and Anh Nguyen. Uncertainty-aware label distribution learning for facial expression recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 6088-6097, 2023. 1
325
+ [39] Daiqing Li, Huan Ling, Seung Wook Kim, Karsten Kreis, Sanja Fidler, and Antonio Torralba. Bigdatasetgan: Synthesizing Imagenet with pixel-wise annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21330-21340, 2022. 2
326
+ [40] Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, and Suhail Doshi. Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation. arXiv preprint arXiv:2402.17245, 2024. 6
327
+ [41] Hangyu Li, Nannan Wang, Xi Yang, Xiaoyu Wang, and Xinbo Gao. Towards semi-supervised deep facial expression recognition with an adaptive confidence margin. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4166-4175, 2022. 3
328
+ [42] Shan Li and Weihong Deng. Deep facial expression recognition: A survey. IEEE transactions on affective computing, 13(3):1195-1215, 2020. 1
329
+ [43] Shan Li, Weihong Deng, and JunPing Du. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2852-2861, 2017. 1, 6, 7
330
+ [44] Ximan Li, Weihong Deng, Shan Li, and Yong Li. Compound expression recognition in-the-wild with au-assisted meta multi-task learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5734-5743, 2023. 3
331
+ [45] Yingjian Li, Zheng Zhang, Bingzhi Chen, Guangming Lu, and David Zhang. Deep margin-sensitive representation learning for cross-domain facial expression recognition. IEEE Transactions on Multimedia, 25:1359-1373, 2022. 3
332
+ [46] Ziyi Li, Qinye Zhou, Xiaoyun Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. Open-vocabulary object segmentation with diffusion models. 2023. 3
333
+ [47] Zhenyi Liao, Qingsong Xie, Chen Chen, Hannan Lu, and Zhijie Deng. Facescore: Benchmarking and enhancing face quality in human generation. 2024. 6, 7
334
+ [48] Xinqi Lin, Jingwen He, Ziyan Chen, Zhaoyang Lyu, Ben Fei, Bo Dai, Wanli Ouyang, Yu Qiao, and Chao Dong. Diffbir: Towards blind image restoration with generative diffusion prior. arXiv preprint arXiv:2308.15070, 2023. 4
335
+ [49] Chen Liu, Yanwei Fu, Chengming Xu, Siqian Yang, Jilin Li, Chengjie Wang, and Li Zhang. Learning a few-shot embedding model with contrastive learning. In Proceedings of the AAAI conference on artificial intelligence, pages 8635-8643, 2021. 7
336
+ [50] Shu Liu, Yan Xu, Tongming Wan, and Xiaoyan Kui. A dual-branch adaptive distribution fusion framework for real-world facial expression recognition. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE, 2023. 6
337
+ [51] Yuanyuan Liu, Wenbin Wang, Yibing Zhan, Shaoze Feng, Kejun Liu, and Zhe Chen. Pose-disentangled contrastive learning for self-supervised facial representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9717-9728, 2023. 6
340
+ [52] Yang Liu, Xingming Zhang, Janne Kauttonen, and Guoying Zhao. Uncertain facial expression recognition via multi-task assisted correction. IEEE Transactions on Multimedia, 2023. 3
341
+ [53] Patrick Lucey, Jeffrey F Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE computer society conference on computer vision and pattern recognition-workshops, pages 94-101. IEEE, 2010. 1
342
+ [54] Cheng Luo, Siyang Song, Weicheng Xie, Linlin Shen, and Hatice Gunes. Learning multi-dimensional edge feature-based au relation graph for facial action unit recognition. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 1239–1246, 2022. 4
343
+ [55] Jiafeng Mao, Xueting Wang, and Kiyoharu Aizawa. Guided image synthesis via initial image editing in diffusion model. In Proceedings of the 31st ACM International Conference on Multimedia, pages 5321-5329, 2023. 4
344
+ [56] Jiawei Mao, Rui Xu, Xuesong Yin, Yuanqi Chang, Binling Nie, Aibin Huang, and Yigang Wang. Poster: A simpler and stronger facial expression recognition network. Pattern Recognition, page 110951, 2024. 2, 6, 8
345
+ [57] Anam Moin, Farhan Aadil, Zeeshan Ali, and Dongwann Kang. Emotion recognition framework using multiple modalities for an effective human-computer interaction. The Journal of Supercomputing, 79(8):9320-9349, 2023. 1
346
+ [58] Ali Mollahosseini, Behzad Hasani, and Mohammad H Mahoor. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1):18-31, 2017. 1, 4, 6
347
+ [59] Arsha Nagrani, Joon Son Chung, Weidi Xie, and Andrew Zisserman. Voxceleb: Large-scale speaker verification in the wild. Computer Speech & Language, 60:101027, 2020. 6
348
+ [60] Quang Nguyen, Truong Vu, Anh Tran, and Khoi Nguyen. Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation. In Advances in Neural Information Processing Systems, pages 76872-76892. Curran Associates, Inc., 2023. 3
349
+ [61] Zhihong Pan, Riccardo Gherardi, Xiufeng Xie, and Stephen Huang. Effective real image editing with accelerated iterative diffusion inversion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 15912-15921, 2023. 4
350
+ [62] Cheng Perng Phoo and Bharath Hariharan. Self-training for few-shot transfer across extreme task differences. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. 7
351
+ [63] Fabien Ringeval, Björn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, et al. Avec 2019 workshop and challenge: state-of-mind, detecting depression with ai, and cross-cultural affect recognition. In Proceedings of the 9th International on Audio/visual Emotion Challenge and Workshop, pages 3-12, 2019. 1
354
+ [64] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 2, 6
355
+ [65] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 3, 8
356
+ [66] Delian Ruan, Rongyun Mo, Yan Yan, Si Chen, Jing-Hao Xue, and Hanzi Wang. Adaptive deep disturbance-disentangled learning for facial expression recognition. International Journal of Computer Vision, 130(2):455-477, 2022. 3
357
+ [67] Muhammad Sajjad, Fath U Min Ullah, Mohib Ullah, Georgia Christodoulou, Faouzi Alaya Cheikh, Mohammad Hijji, Khan Muhammad, and Joel JPC Rodrigues. A comprehensive survey on deep facial expression recognition: challenges, applications, and future guidelines. Alexandria Engineering Journal, 68:817-840, 2023. 1
358
+ [68] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. 1
359
+ [69] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 3
360
+ [70] Raphael Tang, Linqing Liu, Akshit Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, and Ferhan Ture. What the daam: Interpreting stable diffusion using cross attention. arXiv preprint arXiv:2210.04885, 2022. 6
361
+ [71] Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, and Dilip Krishnan. Stablerep: Synthetic images from text-to-image models make strong visual representation learners. Advances in Neural Information Processing Systems, 36, 2024. 8
362
+ [72] Tuomas Varanka, Huai-Qian Khor, Yante Li, Mengting Wei, Hanwei Kung, Nicu Sebe, and Guoying Zhao. Towards localized fine-grained control for facial expression generation. arXiv preprint arXiv:2407.20175, 2024. 3, 6
363
+ [73] Yue Wang, Jinlong Peng, Jiangning Zhang, Ran Yi, Liang Liu, Yabiao Wang, and Chengjie Wang. Toward high quality facial representation learning. 2023. 6
364
+ [74] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1206-1217, 2023. 3
365
+ [75] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. Datasetdm: Synthesizing data with perception annotations using diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 3
368
+ [76] Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. 6
369
+ [77] Fanglei Xue, Qiangchang Wang, Zichang Tan, Zhongsong Ma, and Guodong Guo. Vision transformer with attentive pooling for robust facial expression recognition. IEEE Transactions on Affective Computing, 14(4):3244-3256, 2022. 6
370
+ [78] Huan Yan, Yu Gu, Xiang Zhang, Yantong Wang, Yusheng Ji, and Fuji Ren. Mitigating label-noise for facial expression recognition in the wild. In 2022 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6, 2022. 1
371
+ [79] Shuo Yang, Lu Liu, and Min Xu. Free lunch for few-shot learning: Distribution calibration. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. 7
372
+ [80] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721, 2023. 4
373
+ [81] Jun Yu, Zhongpeng Cai, Renda Li, Gongpeng Zhao, Guochen Xie, Jichao Zhu, Wangyuan Zhu, Qiang Ling, Lei Wang, Cong Wang, Luyu Qiu, and Wei Zheng. Exploring large-scale unlabeled faces to enhance facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 5803-5810, 2023. 3
374
+ [82] Yu Yu, Weibin Zhang, and Yun Deng. Frechet inception distance (fid) for evaluating gans. China University of Mining Technology Beijing Graduate School, 3(11), 2021. 6
375
+ [83] Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, and Bo Zhao. Real-fake: Effective training data synthesis through distribution matching. arXiv preprint arXiv:2310.10402, 2023. 7, 8
376
+ [84] Dan Zeng, Zhiyuan Lin, Xiao Yan, Yuting Liu, Fei Wang, and Bo Tang. Face2exp: Combating data biases for facial expression recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20291-20300, 2022. 3
377
+ [85] Sixian Zhang, Bohan Wang, Junqiang Wu, Yan Li, Tingting Gao, Di Zhang, and Zhongyuan Wang. Learning multidimensional human preference for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8018-8027, 2024. 6
378
+ [86] Yuhang Zhang, Chengrui Wang, and Weihong Deng. Relative uncertainty learning for facial expression recognition. Advances in Neural Information Processing Systems, 34: 17616-17627, 2021. 1
379
+ [87] Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, Weiming Dong, and Changsheng Xu. Inversion-based style transfer with diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10146-10156, 2023. 4
380
+
381
+ [88] Yuhang Zhang, Yaqi Li, Xuannan Liu, Weihong Deng, et al. Leave no stone unturned: mine extra knowledge for imbalanced facial expression recognition. Advances in Neural Information Processing Systems, 36, 2024. 4
382
+ [89] Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, and Fang Wen. General facial representation learning in a visual-linguistic manner. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 18697–18709, 2022. 6
383
+ [90] Jieming Zhou, Tong Zhang, Zeeshan Hayder, Lars Petersson, and Mehrtash Harandi. Diff3dhpe: A diffusion model for 3d human pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2092-2102, 2023. 3
384
+ [91] Qihao Zhu and Jianxi Luo. Toward artificial empathy for human-centered design: A framework. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, page V03BT03A072. American Society of Mechanical Engineers, 2023. 1
385
+ [92] Xinyi Zou, Yan Yan, Jing-Hao Xue, Si Chen, and Hanzi Wang. Learn-to-decompose: cascaded decomposition network for cross-domain few-shot facial expression recognition. In European Conference on Computer Vision, pages 683-700. Springer, 2022. 7
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6badc8b40ce7619a02024b189ba58af737b94d800522a54971b1edc6ba98752d
3
+ size 479027
2025/SynFER_ Towards Boosting Facial Expression Recognition with Synthetic Data/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_content_list.json ADDED
@@ -0,0 +1,1813 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "SynTag: Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 109,
8
+ 128,
9
+ 887,
10
+ 176
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Han Fang<sup>1</sup> Kejiang Chen<sup>2,*</sup> Zehua Ma<sup>2</sup> Jiajun Deng<sup>1</sup>",
17
+ "bbox": [
18
+ 259,
19
+ 202,
20
+ 733,
21
+ 220
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Yicong Li<sup>1</sup> Weiming Zhang<sup>2</sup> Ee-Chien Chang<sup>1,*</sup>",
28
+ "bbox": [
29
+ 294,
30
+ 222,
31
+ 712,
32
+ 239
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "$^{1}$ National University of Singapore $^{2}$ University of Science and Technology of China",
39
+ "bbox": [
40
+ 161,
41
+ 239,
42
+ 834,
43
+ 257
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "{fanghan, dcscec}@nus.edu.sg chenkj@ustc.edu.cn",
50
+ "bbox": [
51
+ 282,
52
+ 258,
53
+ 709,
54
+ 273
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "Abstract",
61
+ "text_level": 1,
62
+ "bbox": [
63
+ 246,
64
+ 308,
65
+ 326,
66
+ 324
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "Robustness is significant for generative image watermarking, typically achieved by injecting distortion-invariant watermark features. The leading paradigm, i.e., inversion-based framework, excels against non-geometric distortions but struggles with geometric ones. To address this, we propose SynTag, a synchronization tag injection-based method that enhances geometric robustness in inversion-based schemes. Due to the complexity of geometric distortions, finding universally geometric-invariant features is challenging, and it is not clear whether such invariant representation exists. Therefore, instead of seeking invariant representations, we embed a sensitive template feature alongside the watermarking features. This template evolves with geometric distortions, allowing us to reconstruct the distortion trajectory for correction before extraction. Focusing on latent diffusion models, we fine-tune the VAE decoder to inject the invisible SynTag feature, pairing it with a prediction network for extraction and correction. Additionally, we introduce a dither compensation mechanism to further improve correction accuracy. SynTag is highly compatible with existing inversion-based methods. Extensive experiments demonstrate a significant boost in geometric distortion robustness while maintaining resilience against non-geometric distortions.",
73
+ "bbox": [
74
+ 88,
75
+ 340,
76
+ 483,
77
+ 704
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "1. Introduction",
84
+ "text_level": 1,
85
+ "bbox": [
86
+ 89,
87
+ 731,
88
+ 220,
89
+ 747
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "The powerful generative capability of latent diffusion models (LDMs) [26] presents potential disinformation risks, as malicious users could misuse them to create realistic yet misleading images, spreading false information. This makes the detectability and traceability of generated images a pressing priority. Generative image watermarking [13, 25, 31, 33, 34] is a crucial technique to address this need. By proactively watermarking every output of the gen",
96
+ "bbox": [
97
+ 89,
98
+ 756,
99
+ 482,
100
+ 878
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "image",
106
+ "img_path": "images/faae5532437c86362577ac32c4af7e4ab0c31499d2d7794727932658c839fb06.jpg",
107
+ "image_caption": [
108
+ "Figure 1. An intuitive performance comparison between our proposed SynTag and state-of-the-art methods from three typical paradigms, illustrating extraction accuracy under non-geometric and geometric distortions."
109
+ ],
110
+ "image_footnote": [],
111
+ "bbox": [
112
+ 514,
113
+ 308,
114
+ 901,
115
+ 488
116
+ ],
117
+ "page_idx": 0
118
+ },
119
+ {
120
+ "type": "text",
121
+ "text": "erative model, the identification of generated images and their uses are achieved. For generative image watermarking, robustness is a key property, ensuring that watermarks remain extractable even if the image undergoes distortions.",
122
+ "bbox": [
123
+ 511,
124
+ 547,
125
+ 906,
126
+ 609
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "text",
132
+ "text": "Common generative image watermarking frameworks include the posthoc-based framework [6], fine-tune-based framework [13] and the inversion-based framework [31]. Among them, the inversion-based scheme demonstrates the strongest robustness to non-geometric distortions, such as JPEG, Gaussian noise, etc. However, we find that such kind of framework struggles with geometric distortions (e.g., rotation, scaling), as indicated in Fig. 1. Considering geometric distortions are common in practical applications, ensuring geometric robustness is a significant step to push the technique from lab research to practical application.",
133
+ "bbox": [
134
+ 511,
135
+ 611,
136
+ 908,
137
+ 777
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "text",
143
+ "text": "Traditionally, robustness is achieved by constructing distortion-invariant features for watermark embedding. Empirical evidence suggests that invariant features can be effectively trained or engineered to withstand nongeometric distortions, which primarily affect pixel-level details [13, 25, 31, 34]. However, geometric distortions introduce structural transformations that cause pixel misalignment and desynchronization effects, making it challenging",
144
+ "bbox": [
145
+ 511,
146
+ 779,
147
+ 908,
148
+ 902
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "header",
154
+ "text": "CVF",
155
+ "bbox": [
156
+ 106,
157
+ 2,
158
+ 181,
159
+ 42
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "header",
165
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
166
+ "bbox": [
167
+ 238,
168
+ 0,
169
+ 807,
170
+ 46
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "page_footnote",
176
+ "text": "*Corresponding author.",
177
+ "bbox": [
178
+ 109,
179
+ 887,
180
+ 235,
181
+ 898
182
+ ],
183
+ "page_idx": 0
184
+ },
185
+ {
186
+ "type": "page_number",
187
+ "text": "15416",
188
+ "bbox": [
189
+ 480,
190
+ 944,
191
+ 519,
192
+ 955
193
+ ],
194
+ "page_idx": 0
195
+ },
196
+ {
197
+ "type": "text",
198
+ "text": "to establish universally geometric-invariant features. In extreme cases, such a feature may not exist.",
199
+ "bbox": [
200
+ 89,
201
+ 90,
202
+ 482,
203
+ 121
204
+ ],
205
+ "page_idx": 1
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "To address these limitations, we propose SynTag, a synchronization tag injection-based method that fundamentally rethinks how to achieve geometric robustness in inversion-based frameworks. Unlike conventional approaches that attempt to construct geometric-invariant features, we adopt an alternative strategy by injecting geometric-sensitive template-like features. Rather than resisting geometric transformations, these features evolve dynamically in response to distortions. This shift in perspective allows us to leverage the transformation itself as a cue for correction, rather than treating it as a disruption. By analyzing the evolved template, we can accurately estimate the geometric distortion trajectory and reverse the transformation prior to watermark extraction.",
210
+ "bbox": [
211
+ 89,
212
+ 132,
213
+ 483,
214
+ 343
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "Specifically, to tightly couple feature injection with the image generation process, we fine-tune the VAE decoder to integrate SynTag seamlessly. In addition to standard latent decoding, the VAE decoder is trained to inject an imperceptible feature that preserves visual consistency with and without SynTag injection. Simultaneously, we introduce a SynTag predictor that estimates the geometric distortion trajectory. Since common geometric distortions can be described by a homography transformation [21], the predictor is trained to estimate the corresponding transformation parameters. This enables precise geometric correction prior to watermark extraction.",
221
+ "bbox": [
222
+ 89,
223
+ 354,
224
+ 483,
225
+ 537
226
+ ],
227
+ "page_idx": 1
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "Furthermore, to refine correction accuracy, we introduce a fine-grained dither compensation mechanism that performs a local exhaustive search over both pixel and latent domains. This compensates for minor correction biases, further improving watermark extraction fidelity.",
232
+ "bbox": [
233
+ 89,
234
+ 547,
235
+ 483,
236
+ 625
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "text",
242
+ "text": "In summary, we make the following contributions:",
243
+ "bbox": [
244
+ 109,
245
+ 633,
246
+ 442,
247
+ 648
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "list",
253
+ "sub_type": "text",
254
+ "list_items": [
255
+ "- We propose SynTag, a synchronization tag injection-based solution to enhance the geometric distortion robustness of inversion-based framework. By fine-tuning the VAE decoder with a prediction network, we embed a template-like feature that effectively aids in geometric distortion correction.",
256
+ "- We introduce dither compensation that complements the SynTag injection-based approach, further minimizing correcting errors and improving the accuracy of watermark extraction.",
257
+ "- Extensive experiments demonstrate that SynTag integrates seamlessly with state-of-the-art inversion-based frameworks, significantly improving true positive rates in watermark detection and bit accuracy in watermark extraction under geometric distortions."
258
+ ],
259
+ "bbox": [
260
+ 89,
261
+ 674,
262
+ 482,
263
+ 900
264
+ ],
265
+ "page_idx": 1
266
+ },
267
+ {
268
+ "type": "text",
269
+ "text": "2. Related Work",
270
+ "text_level": 1,
271
+ "bbox": [
272
+ 513,
273
+ 89,
274
+ 653,
275
+ 104
276
+ ],
277
+ "page_idx": 1
278
+ },
279
+ {
280
+ "type": "text",
281
+ "text": "2.1. Traditional image watermarking",
282
+ "text_level": 1,
283
+ "bbox": [
284
+ 511,
285
+ 114,
286
+ 802,
287
+ 132
288
+ ],
289
+ "page_idx": 1
290
+ },
291
+ {
292
+ "type": "text",
293
+ "text": "Image watermarking plays an important role in copyright protection and leakage tracing. To ensure both fidelity and robustness, traditional image watermarking schemes often embed the watermark into the transform domain [10, 18, 19]. Recently, DNN-based image watermarking has been widely studied, where the commonly used framework is an \"encoder-noise layer-decoder\" architecture. By training with different distortions in the noise layer, the whole system can guarantee different robustness. Common noise layers include JPEG compression simulation layer [16, 37], print-camera layer [29], screen-camera layer [11, 32].",
294
+ "bbox": [
295
+ 511,
296
+ 137,
297
+ 906,
298
+ 305
299
+ ],
300
+ "page_idx": 1
301
+ },
302
+ {
303
+ "type": "text",
304
+ "text": "2.2. Latent diffusion models",
305
+ "text_level": 1,
306
+ "bbox": [
307
+ 511,
308
+ 315,
309
+ 732,
310
+ 330
311
+ ],
312
+ "page_idx": 1
313
+ },
314
+ {
315
+ "type": "text",
316
+ "text": "Diffusion models have gained popularity across various fields [5, 9, 27] for their strong generative capabilities. In practice, latent diffusion models (LDMs), which perform diffusion in the latent space of a pre-trained VAE, are especially prevalent. Large-scale LDMs like Stable Diffusion have demonstrated impressive text-guided image generation ability. The DDIM sampling algorithm [28] is commonly used, as it requires significantly fewer steps than DDPM [15]. The success of diffusion models has spurred interest in watermarking techniques specifically tailored for generated images. While traditional watermarking schemes offer straightforward solutions, recent research has increasingly focused on embedding watermarks directly within the model to eliminate the need for post-processing.",
317
+ "bbox": [
318
+ 511,
319
+ 338,
320
+ 906,
321
+ 550
322
+ ],
323
+ "page_idx": 1
324
+ },
325
+ {
326
+ "type": "text",
327
+ "text": "2.3. Generative image watermarking for LDMs",
328
+ "text_level": 1,
329
+ "bbox": [
330
+ 511,
331
+ 560,
332
+ 879,
333
+ 575
334
+ ],
335
+ "page_idx": 1
336
+ },
337
+ {
338
+ "type": "text",
339
+ "text": "Current generative image watermarking schemes for LDMs (particularly, the technique watermarking the output of LDMs) can be divided into three main aspects: 1). Post-hoc methods: Traditional image watermarking techniques [8, 12, 16, 23] are straightforward post-hoc methods for watermarking LDMs, which are done by concatenating a watermarking procedure after image generation. However, this solution requires additional processing and doesn't integrate the watermarking process well into the image generation process. 2). Fine-tune-based methods: these methods primarily fine-tune the VAE decoder in LDMs with noise layers and a pre-trained watermark extractor to ensure the VAE decoder can both produce high-quality images and embed watermarks. Typical works include Stable Signature [13] and LaWa [25]. 3). Inversion-based methods: these approaches embed a watermark into the initial latent of the diffusion process. Starting from a watermarked latent, LDMs can generate images with an implicit watermark. For extraction, an inversion process is applied to invert the watermarked latent, enabling further extraction. Wen et. al. [31] introduced a method injecting a tree-ring",
340
+ "bbox": [
341
+ 511,
342
+ 583,
343
+ 908,
344
+ 902
345
+ ],
346
+ "page_idx": 1
347
+ },
348
+ {
349
+ "type": "page_number",
350
+ "text": "15417",
351
+ "bbox": [
352
+ 480,
353
+ 944,
354
+ 517,
355
+ 955
356
+ ],
357
+ "page_idx": 1
358
+ },
359
+ {
360
+ "type": "text",
361
+ "text": "watermark into the starting latent's Fourier domain, while Yang et. al. [34] proposed a distribution-preserving sampling approach to generate a watermarked latent directly. Although inversion-based methods exhibit strong robustness against non-geometric distortions, their resilience to geometric distortions remains limited.",
362
+ "bbox": [
363
+ 89,
364
+ 90,
365
+ 483,
366
+ 181
367
+ ],
368
+ "page_idx": 2
369
+ },
370
+ {
371
+ "type": "text",
372
+ "text": "3. Preliminary",
373
+ "text_level": 1,
374
+ "bbox": [
375
+ 89,
376
+ 193,
377
+ 217,
378
+ 209
379
+ ],
380
+ "page_idx": 2
381
+ },
382
+ {
383
+ "type": "text",
384
+ "text": "3.1. Denoising diffusion implicit model (DDIM)",
385
+ "text_level": 1,
386
+ "bbox": [
387
+ 89,
388
+ 217,
389
+ 455,
390
+ 234
391
+ ],
392
+ "page_idx": 2
393
+ },
394
+ {
395
+ "type": "text",
396
+ "text": "DDIM is the commonly used sampling method in latent diffusion models. For denoising schedule $\\{\\alpha_{t}\\}_{t = 0}^{T}$ , the sampling of image/latent $x_{t - 1}$ in step $t$ can be represented by:",
397
+ "bbox": [
398
+ 89,
399
+ 239,
400
+ 483,
401
+ 286
402
+ ],
403
+ "page_idx": 2
404
+ },
405
+ {
406
+ "type": "equation",
407
+ "text": "\n$$\nx _ {t - 1} = \\sqrt {\\frac {\\alpha_ {t - 1}}{\\alpha_ {t}}} x _ {t} - (\\sqrt {\\frac {\\alpha_ {t - 1} (1 - \\alpha_ {t})}{\\alpha_ {t}}} + \\sqrt {1 - \\alpha_ {t - 1}}) \\varepsilon_ {\\theta} \\left(x _ {t}, t\\right) \\tag {1}\n$$\n",
408
+ "text_format": "latex",
409
+ "bbox": [
410
+ 89,
411
+ 304,
412
+ 501,
413
+ 354
414
+ ],
415
+ "page_idx": 2
416
+ },
417
+ {
418
+ "type": "text",
419
+ "text": "where $\\varepsilon_{\\theta}(x_t,t)$ is the noise estimated in step $t$ . To simplify the expression, we denote $a_{t} = \\sqrt{\\alpha_{t - 1} / \\alpha_{t}}$ and $b_{t} = -\\sqrt{\\alpha_{t - 1}(1 - \\alpha_{t}) / \\alpha_{t}} +\\sqrt{1 - \\alpha_{t - 1}}$ . A remarkable property of DDIM sampling is that the denoising process is approximately invertible, where $x_{t}$ can be estimated as follows:",
420
+ "bbox": [
421
+ 89,
422
+ 356,
423
+ 483,
424
+ 446
425
+ ],
426
+ "page_idx": 2
427
+ },
428
+ {
429
+ "type": "equation",
430
+ "text": "\n$$\nx _ {t} = \\frac {x _ {t - 1} - b _ {t} \\varepsilon_ {\\theta} (x _ {t} , t)}{a _ {t}} \\approx \\frac {x _ {t - 1} - b _ {t} \\varepsilon_ {\\theta} (x _ {t - 1} , t)}{a _ {t}}\n$$\n",
431
+ "text_format": "latex",
432
+ "bbox": [
433
+ 117,
434
+ 450,
435
+ 454,
436
+ 484
437
+ ],
438
+ "page_idx": 2
439
+ },
440
+ {
441
+ "type": "text",
442
+ "text": "Such an estimation relies on the assumption that $\\varepsilon_{\\theta}(x_t,t)\\approx \\varepsilon_{\\theta}(x_{t - 1},t)$ [30]. The estimation process from $x_{t - 1}$ to $x_{t}$ is called DDIM inversion.",
443
+ "bbox": [
444
+ 89,
445
+ 488,
446
+ 483,
447
+ 532
448
+ ],
449
+ "page_idx": 2
450
+ },
451
+ {
452
+ "type": "text",
453
+ "text": "3.2. Homography transformation",
454
+ "text_level": 1,
455
+ "bbox": [
456
+ 89,
457
+ 540,
458
+ 349,
459
+ 556
460
+ ],
461
+ "page_idx": 2
462
+ },
463
+ {
464
+ "type": "text",
465
+ "text": "Homography transformation is defined as the mapping between two planar projections of an image. The homography matrix $H$ is often utilized to represent the homographic transformation relationship between two images. It is defined as:",
466
+ "bbox": [
467
+ 89,
468
+ 561,
469
+ 483,
470
+ 635
471
+ ],
472
+ "page_idx": 2
473
+ },
474
+ {
475
+ "type": "equation",
476
+ "text": "\n$$\nH = \\left[ \\begin{array}{c c c} h _ {1 1} & h _ {1 2} & h _ {1 3} \\\\ h _ {2 1} & h _ {2 2} & h _ {2 3} \\\\ h _ {3 1} & h _ {3 2} & h _ {3 3} \\end{array} \\right]\n$$\n",
477
+ "text_format": "latex",
478
+ "bbox": [
479
+ 196,
480
+ 633,
481
+ 375,
482
+ 689
483
+ ],
484
+ "page_idx": 2
485
+ },
486
+ {
487
+ "type": "text",
488
+ "text": "where $[h_{11}, h_{12}, h_{21}, h_{22}]$ , $[h_{13}, h_{23}]$ and $[h_{31}, h_{32}]$ represent the affine transformation, translation transformation, and perspective transformation between images, respectively. By setting appropriate parameters in $H$ , we can perform different geometric transformations (e.g. rotation, translation, scaling, shear transform) [21]. In this paper, we utilized homography transformation to correct geometric distortions in the generated images.",
489
+ "bbox": [
490
+ 89,
491
+ 691,
492
+ 482,
493
+ 811
494
+ ],
495
+ "page_idx": 2
496
+ },
497
+ {
498
+ "type": "text",
499
+ "text": "Specifically, for an image with coordinates $(u,v)$ , the transformed coordinates $(u^{\\prime},v^{\\prime})$ can be calculated as:",
500
+ "bbox": [
501
+ 89,
502
+ 811,
503
+ 483,
504
+ 843
505
+ ],
506
+ "page_idx": 2
507
+ },
508
+ {
509
+ "type": "equation",
510
+ "text": "\n$$\n\\left[ \\begin{array}{l} u ^ {\\prime} \\\\ v ^ {\\prime} \\\\ 1 \\end{array} \\right] = H \\left[ \\begin{array}{l} u \\\\ v \\\\ 1 \\end{array} \\right] = \\left[ \\begin{array}{l l l} h _ {1 1} & h _ {1 2} & h _ {1 3} \\\\ h _ {2 1} & h _ {2 2} & h _ {2 3} \\\\ h _ {3 1} & h _ {3 2} & h _ {3 3} \\end{array} \\right] \\left[ \\begin{array}{l} u \\\\ v \\\\ 1 \\end{array} \\right] \\tag {2}\n$$\n",
511
+ "text_format": "latex",
512
+ "bbox": [
513
+ 101,
514
+ 847,
515
+ 483,
516
+ 905
517
+ ],
518
+ "page_idx": 2
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "The homography matrix $H$ is a $3 \\times 3$ homogeneous matrix, where the final element $h_{33}$ is often normalized to 1, leaving it with 8 degrees of freedom. In other words, to solve the homography matrix, 8 corresponding coordinate points are required. In this paper, we fix the 4 coordinate points before transformation with $(1,1), (1,A), (B,1)$ and $(B,A)$ , where $A$ and $B$ represent the image dimensions. Therefore, we only have to determine the 4 transformed coordinate points, denoted as $\\mathbb{P} = \\{P_1, P_2, P_3, P_4\\}$ to construct a homography transform $\\mathcal{T}_{\\mathbb{P}}$ for geometric distortion correction.",
523
+ "bbox": [
524
+ 511,
525
+ 90,
526
+ 906,
527
+ 243
528
+ ],
529
+ "page_idx": 2
530
+ },
531
+ {
532
+ "type": "text",
533
+ "text": "4. Proposed Methods",
534
+ "text_level": 1,
535
+ "bbox": [
536
+ 511,
537
+ 255,
538
+ 694,
539
+ 272
540
+ ],
541
+ "page_idx": 2
542
+ },
543
+ {
544
+ "type": "text",
545
+ "text": "4.1. Application scenarios.",
546
+ "text_level": 1,
547
+ "bbox": [
548
+ 511,
549
+ 280,
550
+ 718,
551
+ 296
552
+ ],
553
+ "page_idx": 2
554
+ },
555
+ {
556
+ "type": "text",
557
+ "text": "There are two typical scenarios of generative image watermarking: generated image detection and source generative model tracing.",
558
+ "bbox": [
559
+ 511,
560
+ 303,
561
+ 905,
562
+ 348
563
+ ],
564
+ "page_idx": 2
565
+ },
566
+ {
567
+ "type": "text",
568
+ "text": "Detection. In the detection scenario, the watermark serves as a proactive method to determine whether an image is generated by the model or is a real image. For a generative model with a watermark sequence $s$ , every output image from the model should contain this watermark $s$ . During detection, given a suspect image, if the extracted watermark $s'$ satisfies $S(s', s) \\geq \\tau$ (where $S$ is the similarity evaluation function and $\\tau$ is the threshold), the image is regarded as generated.",
569
+ "bbox": [
570
+ 511,
571
+ 348,
572
+ 906,
573
+ 483
574
+ ],
575
+ "page_idx": 2
576
+ },
577
+ {
578
+ "type": "text",
579
+ "text": "Tracing. In the tracing scenario, the watermark is used to identify the source generative model of the suspect image. Given $n$ generative models with different watermarks $\\mathbb{M} = \\{s_1, s_2, \\ldots, s_n\\}$ , and a suspect image with an extracted watermark $s'$ , the source model is determined by finding the one with highest similarity: arg max $S(s', s_i)$ .",
580
+ "bbox": [
581
+ 511,
582
+ 484,
583
+ 906,
584
+ 575
585
+ ],
586
+ "page_idx": 2
587
+ },
588
+ {
589
+ "type": "text",
590
+ "text": "i",
591
+ "bbox": [
592
+ 723,
593
+ 575,
594
+ 733,
595
+ 583
596
+ ],
597
+ "page_idx": 2
598
+ },
599
+ {
600
+ "type": "text",
601
+ "text": "4.2. Overview",
602
+ "text_level": 1,
603
+ "bbox": [
604
+ 511,
605
+ 590,
606
+ 624,
607
+ 606
608
+ ],
609
+ "page_idx": 2
610
+ },
611
+ {
612
+ "type": "text",
613
+ "text": "The proposed framework consists of three main stages, as illustrated in Fig. 2: SynTag initialization stage, a trainable copy of existing the VAE decoder $\\mathcal{D}_{\\Delta}$ , named SynTag decoder $\\mathcal{D}_{Syn}$ , is fine-tuned alongside a prediction network, SynTag predictor $\\mathcal{P}_{Syn}$ . This stage enables the training of injection of SynTag features and the estimation of geometric distortion correction parameters; Injection stage, a watermarked starting latent is first generated using an inversion-based mechanism applied to the given watermark sequence $s$ . This latent undergoes denoising via the latent diffusion model (LDM) and is subsequently decoded using the fine-tuned $\\mathcal{D}_{Syn}$ , which injects both the watermark and the SynTag features. Extraction stage, the distorted image is first corrected using the SynTag predictor $\\mathcal{P}_{Syn}$ . A dither compensation operation is then applied to generate multiple candidate latents, facilitating the extraction process. Following DDIM inversion and corresponding watermark extraction, multiple candidate watermark messages are obtained. The extracted messages are then compared against",
614
+ "bbox": [
615
+ 511,
616
+ 613,
617
+ 906,
618
+ 900
619
+ ],
620
+ "page_idx": 2
621
+ },
622
+ {
623
+ "type": "page_number",
624
+ "text": "15418",
625
+ "bbox": [
626
+ 480,
627
+ 944,
628
+ 519,
629
+ 955
630
+ ],
631
+ "page_idx": 2
632
+ },
633
+ {
634
+ "type": "image",
635
+ "img_path": "images/051feab3a9a8cfc1f398e37f833cc1b53c8ab58062f7f8b6db55b53cfa586da0.jpg",
636
+ "image_caption": [
637
+ "Figure 2. The overview of the proposed SynTag, which mainly contains three stages: SynTag initialization stage, which finetunes the VAE decoder with a prediction network; Injection stage, which generates the final watermarked image according to the given watermark message; Extraction stage, which extracts the watermark message."
638
+ ],
639
+ "image_footnote": [],
640
+ "bbox": [
641
+ 96,
642
+ 88,
643
+ 906,
644
+ 305
645
+ ],
646
+ "page_idx": 3
647
+ },
648
+ {
649
+ "type": "text",
650
+ "text": "the embedded watermark. If any extracted message exhibits a similarity greater than a predefined threshold $\\tau$ with $s$ , the tested image is identified as containing the watermark.",
651
+ "bbox": [
652
+ 89,
653
+ 356,
654
+ 482,
655
+ 402
656
+ ],
657
+ "page_idx": 3
658
+ },
659
+ {
660
+ "type": "text",
661
+ "text": "4.3. SynTag initialization stage",
662
+ "text_level": 1,
663
+ "bbox": [
664
+ 89,
665
+ 417,
666
+ 328,
667
+ 434
668
+ ],
669
+ "page_idx": 3
670
+ },
671
+ {
672
+ "type": "text",
673
+ "text": "The initialization process is detailed in Fig. 2. Given an input image $I_O$ , it is first encoded by the pre-trained VAE encoder $\\mathcal{E}_{\\Delta}$ , as used in standard LDMs, to obtain the latent representation $z_O$ , where $z_O = \\mathcal{E}_{\\Delta}(I_O)$ . This latent is then passed through the pre-trained VAE decoder $\\mathcal{D}_{\\Delta}$ to reconstruct the image $I_R$ . Simultaneously, $z_O$ is also processed by the trainable SynTag decoder $\\mathcal{D}_{Syn}$ , generating the injected image $I_{Syn} = \\mathcal{D}_{Syn}(z_O)$ . The injected image $I_{Syn}$ is then subjected to geometric distortions (e.g., rotation, scaling, and translation) via a noise layer $\\mathcal{N}_G$ , producing the distorted image $I_D$ . Given the applied distortion, the corresponding ground truth points $\\mathbb{P}_{gt}$ are computed, which define the homography transformation parameters for correction via Eq. 2. To estimate the correction parameters, $I_D$ is processed through the SynTag predictor $\\mathcal{P}_{Syn}$ , yielding the predicted correction points: $\\mathbb{P}_p = \\mathcal{P}_{Syn}(I_D)$ . To maintain visual consistency between $I_{Syn}$ and $I_R$ , we introduce a reconstruction loss $\\mathcal{L}_R$ , adapted from Stable Signature [13]:",
674
+ "bbox": [
675
+ 89,
676
+ 441,
677
+ 483,
678
+ 729
679
+ ],
680
+ "page_idx": 3
681
+ },
682
+ {
683
+ "type": "equation",
684
+ "text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {R} \\left(I _ {\\text {S y n}}, I _ {R}\\right) = \\lambda_ {1} \\mathcal {L} _ {\\text {M S E}} \\left(I _ {\\text {S y n}}, I _ {R}\\right) + \\lambda_ {2} \\mathcal {L} _ {\\text {V G G}} \\left(I _ {\\text {S y n}}, I _ {R}\\right) \\\\ + \\lambda_ {3} \\mathcal {L} _ {S S I M} \\left(I _ {\\text {S y n}}, I _ {R}\\right) \\tag {3} \\\\ \\end{array}\n$$\n",
685
+ "text_format": "latex",
686
+ "bbox": [
687
+ 89,
688
+ 744,
689
+ 486,
690
+ 792
691
+ ],
692
+ "page_idx": 3
693
+ },
694
+ {
695
+ "type": "text",
696
+ "text": "where $\\mathcal{L}_{MSE},\\mathcal{L}_{VGG}$ , and $\\mathcal{L}_{SSIM}$ denote the mean squared error (MSE) loss, perceptual VGG loss [17], and structural similarity (SSIM) loss, respectively, with $\\lambda_{1}$ $\\lambda_{2}$ ,and $\\lambda_{3}$ being weighting coefficients. For training the SynTag predictor $\\mathcal{P}_{Syn}$ , we employ an MSE loss function:",
697
+ "bbox": [
698
+ 89,
699
+ 792,
700
+ 483,
701
+ 869
702
+ ],
703
+ "page_idx": 3
704
+ },
705
+ {
706
+ "type": "equation",
707
+ "text": "\n$$\n\\mathcal {L} _ {P} = \\mathcal {L} _ {M S E} \\left(\\mathbb {P} _ {p}, \\mathbb {P} _ {g t}\\right) = \\mathcal {L} _ {M S E} \\left(\\mathcal {P} _ {\\text {S y n}} \\left(I _ {D}\\right), \\mathbb {P} _ {g t}\\right)\n$$\n",
708
+ "text_format": "latex",
709
+ "bbox": [
710
+ 120,
711
+ 883,
712
+ 450,
713
+ 902
714
+ ],
715
+ "page_idx": 3
716
+ },
717
+ {
718
+ "type": "text",
719
+ "text": "The decoder $\\mathcal{D}_{Syn}$ and predictor $\\mathcal{P}_{Syn}$ are jointly optimized in an end-to-end manner. After training, the well-fine-tuned SynTag decoder $\\mathcal{D}_{Syn}$ replaces the original VAE decoder during the injection stage.",
720
+ "bbox": [
721
+ 511,
722
+ 356,
723
+ 906,
724
+ 417
725
+ ],
726
+ "page_idx": 3
727
+ },
728
+ {
729
+ "type": "text",
730
+ "text": "4.4. Injection stage",
731
+ "text_level": 1,
732
+ "bbox": [
733
+ 511,
734
+ 424,
735
+ 663,
736
+ 441
737
+ ],
738
+ "page_idx": 3
739
+ },
740
+ {
741
+ "type": "text",
742
+ "text": "During the injection stage, for a given $l$ -bit watermark sequence $s \\in \\{0,1\\}$ , a watermark injection mechanism $\\mathcal{F}$ is first applied to generate the watermarked starting latent: $z_T^s = \\mathcal{F}(s)$ , where $\\mathcal{F}$ represents the embedding mechanism of any inversion-based scheme (e.g., the distribution-preserving sampling method proposed by [34]). The standard diffusion process in LDM is then performed for $T$ steps, yielding the diffused latent $z_0^s$ . Finally, $z_0^s$ is processed through the fine-tuned SynTag decoder $\\mathcal{D}_{Syn}$ , which injects SynTag features to produce the watermarked image $I_s$ . Notably, in this stage, $\\mathcal{D}_{Syn}$ remains fixed, and its parameters are not updated.",
743
+ "bbox": [
744
+ 511,
745
+ 446,
746
+ 906,
747
+ 628
748
+ ],
749
+ "page_idx": 3
750
+ },
751
+ {
752
+ "type": "text",
753
+ "text": "4.5. Extraction stage",
754
+ "text_level": 1,
755
+ "bbox": [
756
+ 511,
757
+ 635,
758
+ 674,
759
+ 651
760
+ ],
761
+ "page_idx": 3
762
+ },
763
+ {
764
+ "type": "text",
765
+ "text": "To recover the watermark from a distorted image $I_{D}$ , the trained SynTag predictor $\\mathcal{P}_{\\text{Syn}}$ first estimates the correction parameters: $\\mathbb{P}_p = \\mathcal{P}_{\\text{Syn}}(I_D)$ . Based on $\\mathbb{P}_p$ , the homography transformation $\\mathcal{T}_{\\mathbb{P}_p}$ is applied to correct the distortion: $I_C = \\mathcal{T}_{\\mathbb{P}_p}(I_D)$ . To further refine correction accuracy, we introduce a dither compensation mechanism, which operates at two levels:",
766
+ "bbox": [
767
+ 511,
768
+ 657,
769
+ 905,
770
+ 761
771
+ ],
772
+ "page_idx": 3
773
+ },
774
+ {
775
+ "type": "text",
776
+ "text": "Pixel-level compensation $\\mathbb{C}_p$ . This involves applying small-scale homography transformations to $I_{C}$ , such as scaling $(0.9 - 1.1\\times)$ and rotation $(\\pm 3^{\\circ})$ . If $n$ transformations are applied, a set of $n$ corrected images $\\mathbb{I}_p = I_p^1,I_p^2,\\ldots ,I_p^n$ is obtained: $I_p^i = \\mathcal{T}^i (I_C)$ . For each $I_p^i$ , the pre-trained VAE encoder $\\mathcal{E}_{\\Delta}$ computes the latent representation, followed by DDIM inversion to produce the inverted latent $\\hat{z}_p^i$ .",
777
+ "bbox": [
778
+ 511,
779
+ 762,
780
+ 905,
781
+ 871
782
+ ],
783
+ "page_idx": 3
784
+ },
785
+ {
786
+ "type": "text",
787
+ "text": "Latent-level compensation $\\mathbb{C}_l$ . Latent-level compensation includes outer padding and re-sampling operations. Given",
788
+ "bbox": [
789
+ 511,
790
+ 871,
791
+ 905,
792
+ 900
793
+ ],
794
+ "page_idx": 3
795
+ },
796
+ {
797
+ "type": "page_number",
798
+ "text": "15419",
799
+ "bbox": [
800
+ 480,
801
+ 944,
802
+ 517,
803
+ 955
804
+ ],
805
+ "page_idx": 3
806
+ },
807
+ {
808
+ "type": "image",
809
+ "img_path": "images/e9b6632cc607ff588427d0648a056994ab4cafcc6bd9d1cc8bc03e04e3a3da24.jpg",
810
+ "image_caption": [
811
+ "Figure 3. The watermarked images and the differences between watermarked/non-watermarked images with different methods."
812
+ ],
813
+ "image_footnote": [],
814
+ "bbox": [
815
+ 133,
816
+ 89,
817
+ 859,
818
+ 282
819
+ ],
820
+ "page_idx": 4
821
+ },
822
+ {
823
+ "type": "text",
824
+ "text": "an inverted latent $\\hat{z}_p^i$ of size $C\\times H\\times W$ , we apply zeropadding to obtain a padded latent $\\bar{z}_l^i$ of size $C\\times (H + 2r)\\times$ $(W + 2r)$ . A sliding window with stride 1 resamples $\\bar{z}_l^i$ , generating $(2r + 1)^2$ latent variations $\\bar{z}_l^j$ . Each $\\bar{z}_l^j$ undergoes the extraction mechanism $\\mathcal{F}^{-1}$ to recover a potential watermark sequence $\\bar{s}_j$ . The extracted watermark candidates form a set $\\mathbb{S}_{ex} = \\{\\bar{s}_1,\\bar{s}_2,\\dots \\}$ . The similarity between each extracted sequence and the original watermark $s$ is evaluated. If $S(\\bar{s}_i,s)\\geq \\tau$ , where $S$ represents the similarity measurement, the image $I_{D}$ is identified as containing watermark $s$ .",
825
+ "bbox": [
826
+ 89,
827
+ 310,
828
+ 483,
829
+ 478
830
+ ],
831
+ "page_idx": 4
832
+ },
833
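+ Taken together, the homography correction and the pixel-level dithers can be sketched as follows (an OpenCV illustration under assumed conventions: the predictor is taken to output the 4 transformed corner positions in pixel coordinates, and all function names are ours, not the paper's API):
+ ```python
+ import cv2
+ import numpy as np
+
+ def correct_distortion(img_d, pred_points, size=512):
+     # T_{P_p}: warp the distorted image I_D back to the canonical frame
+     # using the 4 corner positions P_p estimated by the SynTag predictor.
+     src = np.float32(pred_points)                    # corners found in I_D
+     dst = np.float32([[0, 0], [0, size - 1],
+                       [size - 1, 0], [size - 1, size - 1]])
+     H = cv2.getPerspectiveTransform(src, dst)
+     return cv2.warpPerspective(img_d, H, (size, size))   # corrected I_C
+
+ def pixel_compensation(img_c, angles=(-3, 3), scales=(0.9, 1.1)):
+     # C_p: n = 4 small dithers of I_C (rotation +/-3 deg, scaling 0.9x/1.1x);
+     # each candidate is then VAE-encoded and DDIM-inverted.
+     h, w = img_c.shape[:2]
+     center = (w / 2, h / 2)
+     out = []
+     for a in angles:
+         out.append(cv2.warpAffine(img_c, cv2.getRotationMatrix2D(center, a, 1.0), (w, h)))
+     for s in scales:
+         out.append(cv2.warpAffine(img_c, cv2.getRotationMatrix2D(center, 0, s), (w, h)))
+     return out                                       # {I_p^1, ..., I_p^n}
+ ```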
+ {
834
+ "type": "text",
835
+ "text": "Notably, extraction can be performed without prior knowledge of $s$ by utilizing error correction and detection codes (details provided in supplementary materials).",
836
+ "bbox": [
837
+ 89,
838
+ 481,
839
+ 483,
840
+ 526
841
+ ],
842
+ "page_idx": 4
843
+ },
844
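+ The latent-level search and the final decision then reduce to a few lines (a simplified PyTorch sketch; `extract_bits` stands in for the scheme-specific extraction mechanism $\mathcal{F}^{-1}$):
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def latent_compensation(z_hat, r=2):
+     # C_l: zero-pad the C x H x W inverted latent by r on each side, then
+     # slide an H x W window with stride 1 -> (2r + 1)^2 shifted variations.
+     C, H, W = z_hat.shape
+     z_pad = F.pad(z_hat, (r, r, r, r))       # pads the last two dimensions
+     return [z_pad[:, dy:dy + H, dx:dx + W]
+             for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
+
+ def is_watermarked(variations, s, extract_bits, tau=0.78125):
+     # I_D carries watermark s if any candidate s_bar reaches S(s_bar, s) >= tau,
+     # with S taken here as the fraction of matching bits.
+     return any((extract_bits(z) == s).float().mean() >= tau
+                for z in variations)
+ ```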
+ {
845
+ "type": "text",
846
+ "text": "5. Experimental Results and Analysis",
847
+ "text_level": 1,
848
+ "bbox": [
849
+ 89,
850
+ 542,
851
+ 408,
852
+ 559
853
+ ],
854
+ "page_idx": 4
855
+ },
856
+ {
857
+ "type": "text",
858
+ "text": "5.1. Implementation details",
859
+ "text_level": 1,
860
+ "bbox": [
861
+ 89,
862
+ 568,
863
+ 303,
864
+ 584
865
+ ],
866
+ "page_idx": 4
867
+ },
868
+ {
869
+ "type": "text",
870
+ "text": "5.1.1. Experimental settings.",
871
+ "text_level": 1,
872
+ "bbox": [
873
+ 89,
874
+ 590,
875
+ 292,
876
+ 607
877
+ ],
878
+ "page_idx": 4
879
+ },
880
+ {
881
+ "type": "text",
882
+ "text": "In this paper, we focus on text-to-image latent diffusion models and use Stable Diffusion with version 1.4 and 2.1 [26] (SD-v1.4/v2.1) provided by Hugging Face for experiments. The generated images are with size $512 \\times 512 \\times 3$ , and the latent size is $4 \\times 64 \\times 64$ .",
883
+ "bbox": [
884
+ 89,
885
+ 611,
886
+ 482,
887
+ 686
888
+ ],
889
+ "page_idx": 4
890
+ },
891
+ {
892
+ "type": "text",
893
+ "text": "In the initialization stage, MS COCO [20] are used as the training datasets in fine-tuning $\\mathcal{D}_{\\theta}$ and $\\mathcal{P}_{Syn}$ . The geometric distortions set in the noise layer include \"rotation with $-45^{\\circ} \\sim 45^{\\circ}$ , translation with $10 \\sim 75$ pixels, scale with factor $0.5 \\sim 2$ (combining with padding/cropping) and shear mapping $5 \\sim 10$ pixels\". Additionally, after applying the geometric distortions, non-geometric distortions such as Gaussian noise, median filtering, and JPEG compression are also introduced to simulate a combined noise.",
894
+ "bbox": [
895
+ 89,
896
+ 688,
897
+ 482,
898
+ 823
899
+ ],
900
+ "page_idx": 4
901
+ },
902
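+ As a rough sketch of such a noise layer (illustrative only: torchvision's `affine` expresses shear in degrees rather than pixels, and the real layer additionally composes the non-geometric distortions listed above):
+ ```python
+ import random
+ import torchvision.transforms.functional as TF
+
+ def random_geometric_distortion(img):
+     # img: a (C, H, W) image tensor; sample one distortion per call,
+     # mirroring the quoted training ranges.
+     angle = random.uniform(-45, 45)
+     tx = random.choice([-1, 1]) * random.randint(10, 75)
+     ty = random.choice([-1, 1]) * random.randint(10, 75)
+     scale = random.uniform(0.5, 2.0)
+     shear = random.uniform(5, 10)
+     return TF.affine(img, angle=angle, translate=[tx, ty],
+                      scale=scale, shear=[shear])
+ ```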
+ {
903
+ "type": "text",
904
+ "text": "In the watermarked image generation stage, we combine SynTag with the existing two representative inversion-based frameworks GauShad [34] and TreeRings [31]. For GauShad-SynTag, $l$ is set as 64. For Tree-Rings-SynTag, we only change the VAE decoder to $\\mathcal{D}_{\\text{Syn}}$ . During in-",
905
+ "bbox": [
906
+ 89,
907
+ 825,
908
+ 483,
909
+ 901
910
+ ],
911
+ "page_idx": 4
912
+ },
913
+ {
914
+ "type": "text",
915
+ "text": "ference, we employ prompts from the Stable-Diffusion Prompt[2]. The sampling step and guidance scale are set to 50 and 7.5.",
916
+ "bbox": [
917
+ 511,
918
+ 310,
919
+ 905,
920
+ 354
921
+ ],
922
+ "page_idx": 4
923
+ },
924
+ {
925
+ "type": "text",
926
+ "text": "In the extraction stage, we conduct DDIM inversion with $\\emptyset$ -condition and 20 steps. We set 4 operations in the pixel-level compensation: rotation with $\\pm 3^{\\circ}$ and scaling with factors 0.9 and 1.1. The padding radius $r$ is set as 2. The selection of the threshold $\\tau$ is based on the requirement of a fixed false positive rate (FPR). The calculation of $\\tau$ is the same as introduced in Stable Signature [13]. In this paper, we set the FPR to be $10^{-6}$ , so the value of $\\tau$ is set to 0.78125. All experiments are performed using PyTorch 1.12.1 and a single NVIDIA-A40 GPU.",
927
+ "bbox": [
928
+ 511,
929
+ 357,
930
+ 906,
931
+ 507
932
+ ],
933
+ "page_idx": 4
934
+ },
935
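+ For intuition, this style of threshold selection can be sketched with a binomial tail bound. The sketch assumes a single similarity test per image; the repeated tests introduced by dither compensation, and the exact formula used by Stable Signature [13], are not reflected here:
+ ```python
+ from math import comb
+
+ def tau_for_fpr(l=64, fpr=1e-6):
+     # Bits of a non-watermarked image match with probability 1/2, so the
+     # match count is Binomial(l, 1/2): return the smallest k / l whose
+     # upper-tail probability stays below the target FPR.
+     for k in range(l + 1):
+         if sum(comb(l, i) for i in range(k, l + 1)) / 2 ** l <= fpr:
+             return k / l
+     return 1.0
+ ```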
+ {
936
+ "type": "text",
937
+ "text": "5.1.2. Evaluation metrics.",
938
+ "text_level": 1,
939
+ "bbox": [
940
+ 511,
941
+ 518,
942
+ 694,
943
+ 532
944
+ ],
945
+ "page_idx": 4
946
+ },
947
+ {
948
+ "type": "text",
949
+ "text": "To assess the robustness, we adopt the evaluation settings from [34], encompassing both the detection and tracing scenarios. In the detection scenario, we measure the true positive rate (TPR) corresponding to the fixed false positive rate (FPR). For tracing, we utilize extraction bit accuracy as the metric. We evaluate the quality of watermarked images using FID [14], comparing them with non-watermarked images, and CLIP-Score [24] (larger is better). All results are obtained from the testing on 50 watermarked images generated with randomly sampled prompts from [2].",
950
+ "bbox": [
951
+ 511,
952
+ 537,
953
+ 905,
954
+ 689
955
+ ],
956
+ "page_idx": 4
957
+ },
958
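+ Both metrics are straightforward to compute once the extracted sequences are available; a small sketch (names are ours):
+ ```python
+ import numpy as np
+
+ def bit_accuracy(s_pred, s_true):
+     # Tracing metric: fraction of correctly recovered watermark bits.
+     return float(np.mean(np.array(s_pred) == np.array(s_true)))
+
+ def tpr(accuracies, tau):
+     # Detection metric: share of watermarked images whose extraction
+     # clears the threshold tau chosen for the fixed FPR.
+     return float(np.mean(np.array(accuracies) >= tau))
+ ```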
+ {
959
+ "type": "text",
960
+ "text": "5.1.3. Baseline and benchmark.",
961
+ "text_level": 1,
962
+ "bbox": [
963
+ 511,
964
+ 699,
965
+ 735,
966
+ 713
967
+ ],
968
+ "page_idx": 4
969
+ },
970
+ {
971
+ "type": "text",
972
+ "text": "We compare the performance with 9 state-of-the-art watermarking frameworks: 5 post-hoc-based frameworks including 2 methods official used by Stable Diffusion, namely DwtDctSvd [8] and RivaGAN [35], and 3 deep-learning-based frameworks MBRS [16], StegaStamp [29] and RoSteALS [6]; 2 fine-tune-based method, Stable Signature [13] and LaWa [25]; and 2 inversion-based methods, TreeRings [31] and Gaussian Shading (GauShad) [34]. Note that TreeRings only embeds a 1-bit watermark, so we only evaluate the true positive rate (TPR) for TreeRings. Details of the training and implementation of the compared methods are provided in the supplementary materials.",
973
+ "bbox": [
974
+ 511,
975
+ 719,
976
+ 906,
977
+ 900
978
+ ],
979
+ "page_idx": 4
980
+ },
981
+ {
982
+ "type": "page_number",
983
+ "text": "15420",
984
+ "bbox": [
985
+ 480,
986
+ 944,
987
+ 517,
988
+ 955
989
+ ],
990
+ "page_idx": 4
991
+ },
992
+ {
993
+ "type": "table",
994
+ "img_path": "images/651901b671cab838d348e4a5fb3b6dac38c7565f5e2a4bb1c5a99158a543aeac.jpg",
995
+ "table_caption": [
996
+ "Table 1. The overall robustness and visual quality performance evaluation with different methods. Results show in the form of SD-v1.4/v2.1."
997
+ ],
998
+ "table_footnote": [],
999
+ "table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"2\">Geometric</td><td colspan=\"2\">Non-geometric</td><td rowspan=\"2\">FID</td><td rowspan=\"2\">CLIP-Score</td></tr><tr><td>TPR</td><td>Bit Acc.</td><td>TPR</td><td>Bit Acc.</td></tr><tr><td>Stable Diffusion</td><td>-</td><td>-</td><td>-</td><td>-</td><td>25.28±.17</td><td>0.3628±.0006</td></tr><tr><td>DwtDctSvd[8]</td><td>0.004/0.008</td><td>0.539/0.533</td><td>0.000/0.000</td><td>0.524/0.522</td><td>24.75±.21</td><td>0.3610±.0008</td></tr><tr><td>RivaGAN[35]</td><td>0.788/0.792</td><td>0.863/0.866</td><td>0.483/0.487</td><td>0.779/0.781</td><td>24.84±.36</td><td>0.3611±.0009</td></tr><tr><td>MBRS[16]</td><td>0.744/0.744</td><td>0.850/0.850</td><td>0.346/0.353</td><td>0.707/0.719</td><td>24.98±.26</td><td>0.3601±.0011</td></tr><tr><td>StegaStamp[29]</td><td>0.240/0.236</td><td>0.645/0.648</td><td>0.897/0.900</td><td>0.899/0.899</td><td>25.17±.16</td><td>0.3612±.0010</td></tr><tr><td>RoSteALS[6]</td><td>0.240/0.236</td><td>0.563/0.564</td><td>0.917/0.920</td><td>0.870/0.869</td><td>24.37±.18</td><td>0.3534±.0004</td></tr><tr><td>Stable Signature[13]</td><td>0.760/0.756</td><td>0.857/0.859</td><td>0.467/0.470</td><td>0.777/0.779</td><td>25.25±.35</td><td>0.3627±.0009</td></tr><tr><td>LaWa[25]</td><td>0.764/0.760</td><td>0.845/0.852</td><td>0.926/0.930</td><td>0.914/0.916</td><td>25.21±.12</td><td>0.3609±.0008</td></tr><tr><td>TreeRings[31]</td><td>0.548/0.552</td><td>-</td><td>0.947/0.943</td><td>-</td><td>25.13±.32</td><td>0.3628±.0007</td></tr><tr><td>GauShad[34]</td><td>0.016/0.020</td><td>0.633/0.635</td><td>0.963/0.963</td><td>0.936/0.937</td><td>25.23±.24</td><td>0.3629±.0006</td></tr><tr><td>TreeRings-SynTag</td><td>0.928/0.932</td><td>-</td><td>0.950/0.949</td><td>-</td><td>25.22±.21</td><td>0.3613±.0004</td></tr><tr><td>GauShad-SynTag</td><td>0.980/0.988</td><td>0.938/0.940</td><td>0.967/0.970</td><td>0.938/0.939</td><td>25.21±.17</td><td>0.3615±.0006</td></tr></table>",
1000
+ "bbox": [
1001
+ 156,
1002
+ 107,
1003
+ 836,
1004
+ 337
1005
+ ],
1006
+ "page_idx": 5
1007
+ },
1008
+ {
1009
+ "type": "text",
1010
+ "text": "5.2. Comparison to baselines",
1011
+ "text_level": 1,
1012
+ "bbox": [
1013
+ 89,
1014
+ 349,
1015
+ 313,
1016
+ 364
1017
+ ],
1018
+ "page_idx": 5
1019
+ },
1020
+ {
1021
+ "type": "text",
1022
+ "text": "In this section, we measure the robustness and visual quality performance of GauShad-SynTag and TreeRings-SynTag with baseline methods. For visual quality evaluation, in addition to calculating the FID and CLIP-score, we also provide one example for subjective evaluation, as shown in Fig. 3. For robustness evaluation, we mainly test two kinds of distortions including 5 geometric distortions (Rotation $30^{\\circ}$ , Translation 50 pixels, Scale 0.75x&pad, Scale 1.25x&crop, shear mapping 5 pixels) and 6 non-geometric-distortions (JPEG compression, QF=15, Gaussian noise, $\\sigma = 0.05$ , Median filtering, $k = 11$ , Dropout, $r = 30\\%$ , Cropout, $r = 70\\%$ , Brightness, factor = 6). The appearance of the distortion can be found in supplementary materials.",
1023
+ "bbox": [
1024
+ 89,
1025
+ 371,
1026
+ 482,
1027
+ 568
1028
+ ],
1029
+ "page_idx": 5
1030
+ },
1031
+ {
1032
+ "type": "text",
1033
+ "text": "As we can see in Table 1, the FID and CLIP-score values of GauShad-Syn and TreeRings-SynTag are at the same level as that calculated with the raw images generated with Stable Diffusion, indicating that the embedding of the watermark will substantially affect the distribution and semantic information of the image. From Fig. 3 we can see that images generated with SynTag look natural since the watermark is located where the eye is not sensitive.",
1034
+ "bbox": [
1035
+ 89,
1036
+ 568,
1037
+ 482,
1038
+ 688
1039
+ ],
1040
+ "page_idx": 5
1041
+ },
1042
+ {
1043
+ "type": "text",
1044
+ "text": "Regarding the robustness evaluation, SynTag significantly improves the geometric robustness of inversion-based framework, and GauShad-SynTag maintains the highest TPR and bit accuracy across both geometric and non-geometric distortions. For geometric distortions, compared to the most competitive state-of-the-art method, GauShad-SynTag achieves $19\\%$ higher overall true positive rate and $8\\%$ higher bit accuracy. Besides, for non-geometric distortions, SynTag maintains the robustness of the original methods. Detailed results can be found in the supplementary materials.",
1045
+ "bbox": [
1046
+ 89,
1047
+ 688,
1048
+ 482,
1049
+ 854
1050
+ ],
1051
+ "page_idx": 5
1052
+ },
1053
+ {
1054
+ "type": "text",
1055
+ "text": "To further highlight the advantages of SynTag over the inversion-based schemes in geometric distortions, additional experiments were conducted, including rotation from",
1056
+ "bbox": [
1057
+ 89,
1058
+ 854,
1059
+ 482,
1060
+ 900
1061
+ ],
1062
+ "page_idx": 5
1063
+ },
1064
+ {
1065
+ "type": "text",
1066
+ "text": "$-30^{\\circ}$ to $30^{\\circ}$ , scale factors from 0.7 to 1.75, translations with 20 to 75 pixels, and shear mapping from 5 to 10 pixels. The results are shown in Fig. 4(a) to Fig. 4(h).",
1067
+ "bbox": [
1068
+ 511,
1069
+ 349,
1070
+ 903,
1071
+ 396
1072
+ ],
1073
+ "page_idx": 5
1074
+ },
1075
+ {
1076
+ "type": "text",
1077
+ "text": "It can be observed that SynTag greatly improves the geometric robustness for all settings, the frameworks with SynTag achieve higher TPR and bit accuracy. Furthermore, the advantages of SynTag are particularly evident under stronger distortions.",
1078
+ "bbox": [
1079
+ 511,
1080
+ 397,
1081
+ 903,
1082
+ 472
1083
+ ],
1084
+ "page_idx": 5
1085
+ },
1086
+ {
1087
+ "type": "text",
1088
+ "text": "5.3. Adaptive attacks",
1089
+ "text_level": 1,
1090
+ "bbox": [
1091
+ 511,
1092
+ 487,
1093
+ 679,
1094
+ 503
1095
+ ],
1096
+ "page_idx": 5
1097
+ },
1098
+ {
1099
+ "type": "text",
1100
+ "text": "In this paper, we follow the settings of GauShad [34], which mainly investigate two adaptive attacks: 1). Reconstruction attack, as proposed by [36], refers to the operation that utilizes an auto-encoder to compress and subsequently decompress the watermarked images. 2) Purification attack, as proposed by [22], refers to a procedure that adds noise on the image/latent and then denoises with diffusion models. For reconstruction attacks, we employ four widely used auto-encoders \"Cheng\" [7], \"Bmshj\" [4], VQ-VAE and KL-VAE [26]. For purification attacks, we add noise with strengths 0.1 to 0.7 and then perform a diffusion denoising process with 100 steps to generate the purified images. We tested the performance with GauShad-SynTag. The PSNR of the attacked images and the extracted results are shown in Table 2.",
1101
+ "bbox": [
1102
+ 511,
1103
+ 511,
1104
+ 906,
1105
+ 737
1106
+ ],
1107
+ "page_idx": 5
1108
+ },
1109
+ {
1110
+ "type": "table",
1111
+ "img_path": "images/3b4bdd810b7efb0d2da7f4a6826e0f6a6c719afa2a470c8eb14970d74a4fa664.jpg",
1112
+ "table_caption": [
1113
+ "Table 2. Adaptive attacks on GauShad-SynTag."
1114
+ ],
1115
+ "table_footnote": [],
1116
+ "table_body": "<table><tr><td rowspan=\"2\">Attacks</td><td colspan=\"4\">Reconstruction Attack</td></tr><tr><td>Cheng</td><td>Bmshj</td><td>VQ-VAE</td><td>KL-VAE</td></tr><tr><td>PSNR(dB)</td><td>35.13</td><td>37.92</td><td>30.16</td><td>30.21</td></tr><tr><td>TPR</td><td>0.980</td><td>0.920</td><td>0.980</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.992</td><td>0.989</td><td>0.966</td><td>0.973</td></tr><tr><td rowspan=\"2\">Attacks</td><td colspan=\"4\">Purification Attack</td></tr><tr><td>f = 0.1</td><td>0.3</td><td>0.5</td><td>0.7</td></tr><tr><td>PSNR(dB)</td><td>24.12</td><td>20.59</td><td>19.17</td><td>15.44</td></tr><tr><td>TPR</td><td>0.980</td><td>0.980</td><td>0.580</td><td>0.000</td></tr><tr><td>Bit Acc.</td><td>0.974</td><td>0.903</td><td>0.808</td><td>0.726</td></tr></table>",
1117
+ "bbox": [
1118
+ 555,
1119
+ 763,
1120
+ 864,
1121
+ 910
1122
+ ],
1123
+ "page_idx": 5
1124
+ },
1125
+ {
1126
+ "type": "page_number",
1127
+ "text": "15421",
1128
+ "bbox": [
1129
+ 480,
1130
+ 944,
1131
+ 517,
1132
+ 955
1133
+ ],
1134
+ "page_idx": 5
1135
+ },
1136
+ {
1137
+ "type": "image",
1138
+ "img_path": "images/e4ffeaa24b8114fa66523ed58c010514cdbdc8111f5dfad7ac73a33433949f1f.jpg",
1139
+ "image_caption": [
1140
+ "(a) Rotation with $-30^{\\circ} \\sim 30^{\\circ}$ ."
1141
+ ],
1142
+ "image_footnote": [],
1143
+ "bbox": [
1144
+ 106,
1145
+ 93,
1146
+ 292,
1147
+ 178
1148
+ ],
1149
+ "page_idx": 6
1150
+ },
1151
+ {
1152
+ "type": "image",
1153
+ "img_path": "images/0bb37736fa2c72421f8750985f584db1f05d438dc067be6b6497f1d8fb712728.jpg",
1154
+ "image_caption": [
1155
+ "(b) Scale $0.7\\mathrm{x}\\sim 1.75\\mathrm{x}$"
1156
+ ],
1157
+ "image_footnote": [],
1158
+ "bbox": [
1159
+ 310,
1160
+ 93,
1161
+ 498,
1162
+ 178
1163
+ ],
1164
+ "page_idx": 6
1165
+ },
1166
+ {
1167
+ "type": "image",
1168
+ "img_path": "images/556bf4020c9c0a328cc041f23dca5b6bcf5cf1d2c16cf61ece30e59e0be93b58.jpg",
1169
+ "image_caption": [
1170
+ "(c) Translation $20\\sim 75$ pixels."
1171
+ ],
1172
+ "image_footnote": [],
1173
+ "bbox": [
1174
+ 519,
1175
+ 93,
1176
+ 705,
1177
+ 178
1178
+ ],
1179
+ "page_idx": 6
1180
+ },
1181
+ {
1182
+ "type": "image",
1183
+ "img_path": "images/eabfa315821caf20f2064640b673c1fdc60c08ec393962628179f5350fbc08ff.jpg",
1184
+ "image_caption": [
1185
+ "(d) Shear mapping $5\\sim 10$ pixels."
1186
+ ],
1187
+ "image_footnote": [],
1188
+ "bbox": [
1189
+ 725,
1190
+ 93,
1191
+ 911,
1192
+ 178
1193
+ ],
1194
+ "page_idx": 6
1195
+ },
1196
+ {
1197
+ "type": "image",
1198
+ "img_path": "images/8e85ad3b47a3564a6fc59e18cac3ae863ff39137f7de2017b17b11b7de979dff.jpg",
1199
+ "image_caption": [
1200
+ "(e) Rotation with $-30^{\\circ} \\sim 30^{\\circ}$ ."
1201
+ ],
1202
+ "image_footnote": [],
1203
+ "bbox": [
1204
+ 106,
1205
+ 202,
1206
+ 292,
1207
+ 284
1208
+ ],
1209
+ "page_idx": 6
1210
+ },
1211
+ {
1212
+ "type": "image",
1213
+ "img_path": "images/61fa7c7abef123a69ce92ee06e9b2f54ed98c40690f1f879f4c957b12dc1d8d5.jpg",
1214
+ "image_caption": [
1215
+ "(f) Scale $0.7\\mathrm{x}\\sim 1.75\\mathrm{x}$"
1216
+ ],
1217
+ "image_footnote": [],
1218
+ "bbox": [
1219
+ 312,
1220
+ 202,
1221
+ 498,
1222
+ 284
1223
+ ],
1224
+ "page_idx": 6
1225
+ },
1226
+ {
1227
+ "type": "image",
1228
+ "img_path": "images/7fb52e45eaf67dcf9ff8c284cbb50ccd24971b1a8f96c03e1b2dc930e0a4e19a.jpg",
1229
+ "image_caption": [
1230
+ "(g) Translation $20\\sim 75$ pixels."
1231
+ ],
1232
+ "image_footnote": [],
1233
+ "bbox": [
1234
+ 519,
1235
+ 202,
1236
+ 705,
1237
+ 282
1238
+ ],
1239
+ "page_idx": 6
1240
+ },
1241
+ {
1242
+ "type": "image",
1243
+ "img_path": "images/b6e1c7626e44b883aa37cb8d2e8e3ef3f1c95602ec36bb53f3616499ad6ee3f3.jpg",
1244
+ "image_caption": [
1245
+ "(h) Shear mapping $5\\sim 10$ pixels."
1246
+ ],
1247
+ "image_footnote": [],
1248
+ "bbox": [
1249
+ 725,
1250
+ 202,
1251
+ 911,
1252
+ 282
1253
+ ],
1254
+ "page_idx": 6
1255
+ },
1256
+ {
1257
+ "type": "text",
1258
+ "text": "The results in Table 2 highlight the robustness of the watermark extraction process against reconstruction attacks, with TPR above 0.92 and bit accuracy exceeding 0.96 across all auto-encoders. For purification attacks, GauShadSynTag also demonstrates considerable robustness. When the attack strength is below 0.3, both bit accuracy and TPR remain high. However, as the attack strength increases, performance gradually declines. But we should highlight that at $s = 0.5$ , while robustness decreases, the PSNR of the attacked images drops sharply to 19dB, indicating that the purified images differ significantly from the original watermarked images. Additional visual results of purificationattack are included in the supplementary materials.",
1259
+ "bbox": [
1260
+ 89,
1261
+ 335,
1262
+ 483,
1263
+ 532
1264
+ ],
1265
+ "page_idx": 6
1266
+ },
1267
+ {
1268
+ "type": "text",
1269
+ "text": "5.4. False positive detection of dither compensation",
1270
+ "text_level": 1,
1271
+ "bbox": [
1272
+ 89,
1273
+ 544,
1274
+ 483,
1275
+ 559
1276
+ ],
1277
+ "page_idx": 6
1278
+ },
1279
+ {
1280
+ "type": "text",
1281
+ "text": "Since the dither compensation involves a small range of exhaustive searching, which may potentially enhance the false positive detection—identifying non-watermarked images as having a watermark. Therefore, in this section, we explore the effect of this operation on false positive detection. We use GauShad-SynTag as the backbone, fix one watermark message $s$ and generate 1000 watermarked images, then we distort them with rotation $-30^{\\circ} \\sim 30^{\\circ}$ , scaling $0.75x \\sim 1.25x$ , JPEG compression (QF=15) and Gaussian noise $(\\sigma = 0.05)$ . The highest bit accuracy of the watermark extraction for each image was recorded. A similar operation was performed on 1000 non-watermarked images. The distribution of bit accuracy for both the watermarked and non-watermarked images is shown in Fig. 5(a) and Fig. 5(b), respectively.",
1282
+ "bbox": [
1283
+ 88,
1284
+ 566,
1285
+ 482,
1286
+ 792
1287
+ ],
1288
+ "page_idx": 6
1289
+ },
1290
+ {
1291
+ "type": "text",
1292
+ "text": "It can be seen that the distribution of the highest bit accuracy with non-watermarked images has distinct differences with watermarked images. For non-watermarked images, even with dither compensation, the highest bit accuracy is less than 0.8, but for watermarked images, almost all of them maintain over 0.8 bit accuracy. Such differences indicate that by selecting an appropriate threshold",
1293
+ "bbox": [
1294
+ 88,
1295
+ 795,
1296
+ 483,
1297
+ 901
1298
+ ],
1299
+ "page_idx": 6
1300
+ },
1301
+ {
1302
+ "type": "image",
1303
+ "img_path": "images/87d36fcb84fc928afbe4e443a7dd70a0fc3d27f1fd212b3256d24f506f5b1fc9.jpg",
1304
+ "image_caption": [
1305
+ "Figure 4. The TPR and bit acc. comparison with/without SynTag, (a)~(d) indicates the results with GauShad, (e)~(h) indicates the results with TreeRings."
1306
+ ],
1307
+ "image_footnote": [],
1308
+ "bbox": [
1309
+ 529,
1310
+ 329,
1311
+ 702,
1312
+ 406
1313
+ ],
1314
+ "page_idx": 6
1315
+ },
1316
+ {
1317
+ "type": "image",
1318
+ "img_path": "images/c5e431038a0886be1366ef411b4b69fb1588b0d19eea312d53873c41554a1d76.jpg",
1319
+ "image_caption": [
1320
+ "(c) JPEG with $QF = 15$ ."
1321
+ ],
1322
+ "image_footnote": [],
1323
+ "bbox": [
1324
+ 531,
1325
+ 411,
1326
+ 702,
1327
+ 505
1328
+ ],
1329
+ "page_idx": 6
1330
+ },
1331
+ {
1332
+ "type": "image",
1333
+ "img_path": "images/9c7f0e5f6401f50aa95090921f5354bd9f98b2a8c96a161fd3bc12b8e27bc42a.jpg",
1334
+ "image_caption": [
1335
+ "(b) Scale $0.75\\mathrm{x}\\sim 1.25\\mathrm{x}$"
1336
+ ],
1337
+ "image_footnote": [],
1338
+ "bbox": [
1339
+ 715,
1340
+ 329,
1341
+ 887,
1342
+ 406
1343
+ ],
1344
+ "page_idx": 6
1345
+ },
1346
+ {
1347
+ "type": "image",
1348
+ "img_path": "images/84e29a00706ad1562155a64093054b903516f826f2acded2dc1c627bf048b103.jpg",
1349
+ "image_caption": [
1350
+ "(d) Gaussian noise with $\\sigma = 0.05$",
1351
+ "Figure 5. The distribution of highest bit accuracy under different distortions extracted from images with and without watermark."
1352
+ ],
1353
+ "image_footnote": [],
1354
+ "bbox": [
1355
+ 715,
1356
+ 426,
1357
+ 887,
1358
+ 505
1359
+ ],
1360
+ "page_idx": 6
1361
+ },
1362
+ {
1363
+ "type": "text",
1364
+ "text": "(e.g. $\\tau = 0.8$ ), we can control the false positive detection rate of non-watermarked images at a very low level. As a result, the dither compensation operation does not lead to significant false positive detections.",
1365
+ "bbox": [
1366
+ 511,
1367
+ 574,
1368
+ 906,
1369
+ 636
1370
+ ],
1371
+ "page_idx": 6
1372
+ },
1373
+ {
1374
+ "type": "text",
1375
+ "text": "5.5. Generalizable experiments",
1376
+ "text_level": 1,
1377
+ "bbox": [
1378
+ 511,
1379
+ 646,
1380
+ 754,
1381
+ 662
1382
+ ],
1383
+ "page_idx": 6
1384
+ },
1385
+ {
1386
+ "type": "text",
1387
+ "text": "5.5.1. Adaptability on different generation settings",
1388
+ "text_level": 1,
1389
+ "bbox": [
1390
+ 511,
1391
+ 669,
1392
+ 867,
1393
+ 684
1394
+ ],
1395
+ "page_idx": 6
1396
+ },
1397
+ {
1398
+ "type": "text",
1399
+ "text": "To validate the adaptability of SynTag, we vary the sampling methods, sampling steps, guidance scale, and different prompt sets in the generation process, then we conduct the extraction experiments. For sampling methods, we utilize three commonly used samplers based on ODE solvers (DDIM, UniPC, PNDM). Sampling steps are varied from 20 to 100, and guidance scale values tested are 3, 7.5 and 11. The prompt sets we utilized are open-sourced in Hugging Face, denoted as $P^1$ to $P^3$ [1-3]. The default settings for the guidance scale, sampling steps, and sampling methods are 7.5, 50, DDIM with prompt sets [2] respectively. For each experiment, only the tested settings are varied. After generation, we test the detection TPR and bit accuracy results with 4 geometric distortions (rotation $30^{\\circ}$ , scale $1.25\\mathrm{x}$ ,",
1400
+ "bbox": [
1401
+ 511,
1402
+ 688,
1403
+ 906,
1404
+ 900
1405
+ ],
1406
+ "page_idx": 6
1407
+ },
1408
+ {
1409
+ "type": "page_number",
1410
+ "text": "15422",
1411
+ "bbox": [
1412
+ 480,
1413
+ 944,
1414
+ 517,
1415
+ 955
1416
+ ],
1417
+ "page_idx": 6
1418
+ },
1419
+ {
1420
+ "type": "table",
1421
+ "img_path": "images/3abbc3d31a0055f1b6c824c9bc6894e4764758a05627eb8969fd8db68f17c7a1.jpg",
1422
+ "table_caption": [
1423
+ "Table 3. Adaptability of SynTag with different generation settings."
1424
+ ],
1425
+ "table_footnote": [],
1426
+ "table_body": "<table><tr><td rowspan=\"2\">Settings \nParameters</td><td colspan=\"3\">Sampling Methods</td><td colspan=\"3\">Sampling Steps</td></tr><tr><td>DDIM</td><td>UniPC</td><td>PNDM</td><td>20</td><td>50</td><td>100</td></tr><tr><td>TPR</td><td>0.990</td><td>0.960</td><td>0.970</td><td>0.980</td><td>0.990</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.935</td><td>0.927</td><td>0.928</td><td>0.934</td><td>0.935</td><td>0.934</td></tr><tr><td rowspan=\"2\">Settings \nParameters</td><td colspan=\"3\">Guidance Scale</td><td colspan=\"3\">Prompt Sets</td></tr><tr><td>3</td><td>7.5</td><td>11</td><td>P1[2]</td><td>P2[3]</td><td>P3[1]</td></tr><tr><td>TPR</td><td>1.000</td><td>0.990</td><td>0.980</td><td>0.990</td><td>0.965</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.956</td><td>0.935</td><td>0.928</td><td>0.935</td><td>0.924</td><td>0.927</td></tr></table>",
1427
+ "bbox": [
1428
+ 98,
1429
+ 113,
1430
+ 480,
1431
+ 237
1432
+ ],
1433
+ "page_idx": 7
1434
+ },
1435
+ {
1436
+ "type": "text",
1437
+ "text": "translation-50 and shear-5), results are shown in Table 3.",
1438
+ "bbox": [
1439
+ 89,
1440
+ 252,
1441
+ 464,
1442
+ 266
1443
+ ],
1444
+ "page_idx": 7
1445
+ },
1446
+ {
1447
+ "type": "text",
1448
+ "text": "It can be seen that in all cases, the detection TPR and bit accuracy are stayed at a high level, which is over 0.96 for TPR and 0.92 for bit accuracy. This superior performance underscores the remarkable adaptability of GauShad-SynTag in different generation conditions.",
1449
+ "bbox": [
1450
+ 89,
1451
+ 267,
1452
+ 483,
1453
+ 342
1454
+ ],
1455
+ "page_idx": 7
1456
+ },
1457
+ {
1458
+ "type": "text",
1459
+ "text": "5.5.2. Robustness of combined distortions",
1460
+ "text_level": 1,
1461
+ "bbox": [
1462
+ 89,
1463
+ 351,
1464
+ 383,
1465
+ 364
1466
+ ],
1467
+ "page_idx": 7
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "text": "We use the combined distortion of both geometric distortions and geometric distortion plus non-geometric distortions for illustration. In detail, we select 10 distortions, as shown in Table 4, where $R, S, T$ indicate the rotation, scaling and translation respectively. \"JPEG, GauN and MedF\" indicates the JPEG compression with QF=75, Gaussian noise with $\\sigma = 0.01$ and Medain filter with $w = 5 \\times 5$ .",
1472
+ "bbox": [
1473
+ 89,
1474
+ 369,
1475
+ 483,
1476
+ 476
1477
+ ],
1478
+ "page_idx": 7
1479
+ },
1480
+ {
1481
+ "type": "table",
1482
+ "img_path": "images/e5ed2b82a2ea301b4ff6bea4378c197836d5cf250018d3bc9657f60017b5cd6c.jpg",
1483
+ "table_caption": [
1484
+ "Table 4. Robustness of GauShad-SynTag against combined distortions."
1485
+ ],
1486
+ "table_footnote": [],
1487
+ "table_body": "<table><tr><td>Distortions</td><td>R-15° S-1.25x</td><td>R-15° S-0.9x</td><td>R-15° T-25</td><td>S-1.25x T-25</td><td>S-0.9x T-25</td></tr><tr><td>TPR</td><td>0.96</td><td>1.00</td><td>0.98</td><td>0.94</td><td>1.00</td></tr><tr><td>Bit Acc.</td><td>0.941</td><td>0.937</td><td>0.922</td><td>0.928</td><td>0.960</td></tr><tr><td>Distortions</td><td>R-15° JPEG</td><td>R-15° GauN</td><td>R-15° MedF</td><td>T-25 JPEG</td><td>S-1.25x JPEG</td></tr><tr><td>TPR</td><td>0.94</td><td>0.94</td><td>0.92</td><td>0.94</td><td>0.92</td></tr><tr><td>Bit Acc.</td><td>0.919</td><td>0.904</td><td>0.902</td><td>0.918</td><td>0.901</td></tr></table>",
1488
+ "bbox": [
1489
+ 120,
1490
+ 500,
1491
+ 452,
1492
+ 617
1493
+ ],
1494
+ "page_idx": 7
1495
+ },
1496
+ {
1497
+ "type": "text",
1498
+ "text": "For all the combined noise, GauShad-SynTag achieves high TPR ( $\\geq 0.92$ ) and high extraction accuracy ( $\\geq 0.90$ ), indicating strong robustness against combined noise.",
1499
+ "bbox": [
1500
+ 89,
1501
+ 621,
1502
+ 482,
1503
+ 667
1504
+ ],
1505
+ "page_idx": 7
1506
+ },
1507
+ {
1508
+ "type": "text",
1509
+ "text": "5.6. Ablation Study",
1510
+ "text_level": 1,
1511
+ "bbox": [
1512
+ 89,
1513
+ 676,
1514
+ 243,
1515
+ 693
1516
+ ],
1517
+ "page_idx": 7
1518
+ },
1519
+ {
1520
+ "type": "text",
1521
+ "text": "5.6.1. Necessity of SynTag feature injection",
1522
+ "text_level": 1,
1523
+ "bbox": [
1524
+ 89,
1525
+ 699,
1526
+ 393,
1527
+ 715
1528
+ ],
1529
+ "page_idx": 7
1530
+ },
1531
+ {
1532
+ "type": "text",
1533
+ "text": "To validate the necessity of the SynTag feature, we also train a passive restoration network: instead of actively embedding the feature, we only train the prediction network itself, namely $\\mathcal{P}_{-}$ , to perform the geometric distortion trajectory prediction and thus correct the distortion. The experimental results with (rotation $R - 30^{\\circ}$ , scaling $S - 0.75\\mathrm{x} / 1.25\\mathrm{x}$ , translation $T - 50$ , and shear $Sh - 5$ ) are shown in Table 5 and Fig. 6.",
1534
+ "bbox": [
1535
+ 89,
1536
+ 719,
1537
+ 482,
1538
+ 838
1539
+ ],
1540
+ "page_idx": 7
1541
+ },
1542
+ {
1543
+ "type": "text",
1544
+ "text": "It can be seen that only training $\\mathcal{P}_{-}$ is not effective for distortion correction, and the restoration results are far from the SynTag. We summarize the reason as that the geometric distortion are quite complex, making it challenging to",
1545
+ "bbox": [
1546
+ 89,
1547
+ 839,
1548
+ 483,
1549
+ 901
1550
+ ],
1551
+ "page_idx": 7
1552
+ },
1553
+ {
1554
+ "type": "image",
1555
+ "img_path": "images/994b38e70c6c870656823253e52388d284f5ed70bf639228cf69a2091eadc4a8.jpg",
1556
+ "image_caption": [
1557
+ "Figure 6. The correction performance of SynTag $(\\mathcal{D}_{Syn} + \\mathcal{P}_{Syn})$ and only with passive restoration network $\\mathcal{P}_{-}$ ."
1558
+ ],
1559
+ "image_footnote": [],
1560
+ "bbox": [
1561
+ 544,
1562
+ 90,
1563
+ 872,
1564
+ 205
1565
+ ],
1566
+ "page_idx": 7
1567
+ },
1568
+ {
1569
+ "type": "table",
1570
+ "img_path": "images/f36eea24565c0bc1570de2b0d65df3831f6c56fc4fa8a65ea7b63f6afd019d28.jpg",
1571
+ "table_caption": [
1572
+ "Table 5. Robustness comparison of correction with only SynTag and $\\mathcal{P}$ _."
1573
+ ],
1574
+ "table_footnote": [],
1575
+ "table_body": "<table><tr><td>Distortion</td><td>R-30°</td><td>S-0.75x</td><td>S-1.25x</td><td>T-50</td><td>Sh-5</td><td>Ave</td></tr><tr><td>P_</td><td>0.711</td><td>0.737</td><td>0.720</td><td>0.761</td><td>0.778</td><td>0.730</td></tr><tr><td>DSyn+PSyn</td><td>0.941</td><td>0.949</td><td>0.934</td><td>0.963</td><td>0.902</td><td>0.938</td></tr></table>",
1576
+ "bbox": [
1577
+ 527,
1578
+ 262,
1579
+ 890,
1580
+ 311
1581
+ ],
1582
+ "page_idx": 7
1583
+ },
1584
+ {
1585
+ "type": "text",
1586
+ "text": "perform distortion correction without any assistant features. The extraction result also indicates the necessity of SynTag as the extraction accuracy of SynTag is significantly higher.",
1587
+ "bbox": [
1588
+ 511,
1589
+ 321,
1590
+ 903,
1591
+ 367
1592
+ ],
1593
+ "page_idx": 7
1594
+ },
1595
+ {
1596
+ "type": "text",
1597
+ "text": "5.6.2. Improvements of correction modules",
1598
+ "text_level": 1,
1599
+ "bbox": [
1600
+ 511,
1601
+ 373,
1602
+ 815,
1603
+ 387
1604
+ ],
1605
+ "page_idx": 7
1606
+ },
1607
+ {
1608
+ "type": "text",
1609
+ "text": "To investigate the improvements for each correction module $(\\mathcal{P}_{Syn},\\mathbb{C}_p$ and $\\mathbb{C}_l)$ , we conduct the ablation study on 4 geometric distortions (rotation $30^{\\circ}$ , scale 1.25x, translation-50 and shear-5). In the extraction stage, we test the results without $\\mathcal{P}_{Syn}$ (denote as $w / o\\mathcal{P}_{Syn}$ ), with $\\mathcal{P}_{Syn}$ , $\\mathcal{P}_{Syn} + \\mathbb{C}_p$ , $\\mathcal{P}_{Syn} + \\mathbb{C}_l$ and $\\mathcal{P}_{Syn} + \\mathbb{C}_p + \\mathbb{C}_l$ , respectively. The average TPRs and bit accuracy are shown in Table 6.",
1610
+ "bbox": [
1611
+ 511,
1612
+ 392,
1613
+ 905,
1614
+ 498
1615
+ ],
1616
+ "page_idx": 7
1617
+ },
1618
+ {
1619
+ "type": "table",
1620
+ "img_path": "images/81d3f5569685fa4d5196e9569e04a6b5b5935a4fdf1518bd70a845a18df3cd98.jpg",
1621
+ "table_caption": [
1622
+ "Table 6. Ablation study on extraction modules."
1623
+ ],
1624
+ "table_footnote": [],
1625
+ "table_body": "<table><tr><td>Modules</td><td>w/o PSyn</td><td>PSyn</td><td>PSyn+Cp</td><td>PSyn+Cl</td><td>PSyn+Cp+Cl</td></tr><tr><td>TPR</td><td>0.016</td><td>0.763</td><td>0.795</td><td>0.930</td><td>0.990</td></tr><tr><td>Bit Acc.</td><td>0.633</td><td>0.819</td><td>0.865</td><td>0.915</td><td>0.935</td></tr></table>",
1626
+ "bbox": [
1627
+ 527,
1628
+ 521,
1629
+ 890,
1630
+ 568
1631
+ ],
1632
+ "page_idx": 7
1633
+ },
1634
+ {
1635
+ "type": "text",
1636
+ "text": "It can be seen that the existence of $\\mathcal{P}_{\\mathrm{Syn}}$ significantly improves the geometric distortion robustness, where the detection TPR increases from 0.016 to 0.763, and bit accuracy increases from 0.633 to 0.819. Besides, with the help of $\\mathbb{C}_p$ and $\\mathbb{C}_l$ , the performance is further improved, and compared with $\\mathbb{C}_p$ , $\\mathbb{C}_l$ contributes more in improvements.",
1637
+ "bbox": [
1638
+ 511,
1639
+ 575,
1640
+ 905,
1641
+ 667
1642
+ ],
1643
+ "page_idx": 7
1644
+ },
1645
+ {
1646
+ "type": "text",
1647
+ "text": "6. Conclusion",
1648
+ "text_level": 1,
1649
+ "bbox": [
1650
+ 511,
1651
+ 679,
1652
+ 633,
1653
+ 694
1654
+ ],
1655
+ "page_idx": 7
1656
+ },
1657
+ {
1658
+ "type": "text",
1659
+ "text": "In this paper, we introduce SynTag, a synchronization tag injection-based method to enhance the geometric robustness of inversion-based generative image watermarking. Unlike previous approaches that focus on directly constructing distortion-invariant watermarking features, SynTag embeds a template-like feature that evolved with geometric distortion, allowing further distortion correction. Additionally, we propose a dither compensation mechanism to further enhance the accuracy of the correction process. Experimental results demonstrate that SynTag can successfully compete with inversion-based frameworks and offers strong robustness against both geometric and non-geometric distortions.",
1660
+ "bbox": [
1661
+ 511,
1662
+ 704,
1663
+ 906,
1664
+ 898
1665
+ ],
1666
+ "page_idx": 7
1667
+ },
1668
+ {
1669
+ "type": "page_number",
1670
+ "text": "15423",
1671
+ "bbox": [
1672
+ 480,
1673
+ 944,
1674
+ 517,
1675
+ 955
1676
+ ],
1677
+ "page_idx": 7
1678
+ },
1679
+ {
1680
+ "type": "text",
1681
+ "text": "Acknowledgement. This research is supported by the National Research Foundation, Singapore, through the National Cybersecurity R&D Lab at the National University of Singapore under its National Cybersecurity R&D Programme (Award No. NCR25-NCL P3-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore, and National Cybersecurity R&D Lab at the National University of Singapore.",
1682
+ "bbox": [
1683
+ 89,
1684
+ 90,
1685
+ 485,
1686
+ 243
1687
+ ],
1688
+ "page_idx": 8
1689
+ },
1690
+ {
1691
+ "type": "text",
1692
+ "text": "References",
1693
+ "text_level": 1,
1694
+ "bbox": [
1695
+ 91,
1696
+ 255,
1697
+ 187,
1698
+ 271
1699
+ ],
1700
+ "page_idx": 8
1701
+ },
1702
+ {
1703
+ "type": "list",
1704
+ "sub_type": "ref_text",
1705
+ "list_items": [
1706
+ "[1] Daspartho-stable-diffusion-prompts. https://huggingface.co/datasets/daspartho/stable-diffusion-prompts. Accessed: Nov. 2024. 7,8",
1707
+ "[2] Gustavosta-stable-diffusion-prompt. https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts. Accessed: Nov. 2024. 5,7,8",
1708
+ "[3] Isidentical-stable-diffusion-prompt. https://huggingface.co/datasets/isidentical/random-stable-diffusion-prompts. Accessed: Nov. 2024. 7, 8",
1709
+ "[4] Johannes Balle, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018. 6",
1710
+ "[5] Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise. arXiv preprint arXiv:2208.09392, 2022. 2",
1711
+ "[6] Tu Bui, Shruti Agarwal, Ning Yu, and John Collomosse. Rosteals: Robust steganography using autoencoder latent space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 933-942, 2023. 1, 5, 6",
1712
+ "[7] Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto. Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7939-7948, 2020. 6",
1713
+ "[8] Ingemar Cox, Matthew Miller, Jeffrey Bloom, Jessica Fridrich, and Ton Kalker. Digital watermarking and steganography. Morgan Kaufmann, 2007. 2, 5, 6",
1714
+ "[9] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. 2",
1715
+ "[10] Han Fang, Weiming Zhang, Hang Zhou, Hao Cui, and Nenghai Yu. Screen-shooting resilient watermarking. IEEE Transactions on Information Forensics and Security, 14(6):1403-1418, 2018. 2",
1716
+ "[11] Han Fang, Zhaoyang Jia, Zehua Ma, Ee-Chien Chang, and Weiming Zhang. Pimog: An effective screen-shooting noise-layer simulation for deep-learning-based watermarking net-"
1717
+ ],
1718
+ "bbox": [
1719
+ 93,
1720
+ 280,
1721
+ 483,
1722
+ 901
1723
+ ],
1724
+ "page_idx": 8
1725
+ },
1726
+ {
1727
+ "type": "list",
1728
+ "sub_type": "ref_text",
1729
+ "list_items": [
1730
+ "work. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2267-2275, 2022. 2",
1731
+ "[12] Han Fang, Yupeng Qiu, Kejiang Chen, Jiyi Zhang, Weiming Zhang, and Ee-Chien Chang. Flow-based robust watermarking with invertible noise layer for black-box distortions. In Proceedings of the AAAI conference on artificial intelligence, pages 5054-5061, 2023. 2",
1732
+ "[13] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023. 1, 2, 4, 5, 6",
1733
+ "[14] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017. 5",
1734
+ "[15] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2",
1735
+ "[16] Zhaoyang Jia, Han Fang, and Weiming Zhang. Mbrs: Enhancing robustness of cnn-based watermarking by minibatch of real and simulatedJPEG compression. In Proceedings of the 29th ACM International Conference on Multimedia, pages 41-49, 2021. 2, 5, 6",
1736
+ "[17] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016. 4",
1737
+ "[18] Xiangui Kang, Jiwu Huang, Yun Q Shi, and Yan Lin. A dwt-dft composite watermarking scheme robust to both affine transform and jpeg compression. IEEE Transactions on Circuits and Systems for Video Technology, 13(8):776-786, 2003. 2",
1738
+ "[19] Xiangui Kang, Jiwu Huang, and Wenjun Zeng. Efficient general print-scanning resilient data hiding based on uniform log-polar mapping. IEEE Transactions on Information Forensics and Security, 5(1):1-12, 2010. 2",
1739
+ "[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, pages 740-755. Springer, 2014. 5",
1740
+ "[21] Yinhui Luo, Xingyi Wang, Yanhao Liao, Qiang Fu, Chang Shu, Yuezhou Wu, and Yuanqing He. A review of homography estimation: Advances and challenges. *Electronics*, 12 (24):4977, 2023. 2, 3",
1741
+ "[22] Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Animashree Anandkumar. Diffusion models for adversarial purification. In International Conference on Machine Learning, pages 16805-16827. PMLR, 2022. 6",
1742
+ "[23] Joseph JK O'Ruanaidh and Thierry Pun. Rotation, scale and translation invariant digital image watermarking. In Proceedings of International Conference on Image Processing, pages 536-539. IEEE, 1997. 2",
1743
+ "[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,"
1744
+ ],
1745
+ "bbox": [
1746
+ 516,
1747
+ 92,
1748
+ 903,
1749
+ 901
1750
+ ],
1751
+ "page_idx": 8
1752
+ },
1753
+ {
1754
+ "type": "page_number",
1755
+ "text": "15424",
1756
+ "bbox": [
1757
+ 480,
1758
+ 944,
1759
+ 519,
1760
+ 955
1761
+ ],
1762
+ "page_idx": 8
1763
+ },
1764
+ {
1765
+ "type": "list",
1766
+ "sub_type": "ref_text",
1767
+ "list_items": [
1768
+ "Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 5",
1769
+ "[25] Ahmad Rezaei, Mohammad Akbari, Saeed Ranjbar Alvar, Arezou Fatemi, and Yong Zhang. Lawa: Using latent space for in-generation image watermarking. arXiv preprint arXiv:2408.05868, 2024. 1, 2, 5, 6",
1770
+ "[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 1, 5, 6",
1771
+ "[27] Chitwan Sahara, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4713-4726, 2022. 2",
1772
+ "[28] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 2",
1773
+ "[29] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. arXiv preprint arXiv:1904.05343, 2019. 2, 5, 6",
1774
+ "[30] Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact diffusion inversion via coupled transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22532-22541, 2023. 3",
1775
+ "[31] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust. arXiv preprint arXiv:2305.20030, 2023. 1, 2, 5, 6",
1776
+ "[32] Eric Wengrowski and Kristin Dana. Light field messaging with deep photographic steganography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1515-1524. Computer Vision Foundation / IEEE, 2019. 2",
1777
+ "[33] Cheng Xiong, Chuan Qin, Guorui Feng, and Xinpeng Zhang. Flexible and secure watermarking for latent diffusion model. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1668-1676, 2023. 1",
1778
+ "[34] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12162-12171, 2024. 1, 3, 4, 5, 6",
1779
+ "[35] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019. 5, 6",
1780
+ "[36] Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. arXiv preprint arXiv:2306.01953, 2023. 6",
1781
+ "[37] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of"
1782
+ ],
1783
+ "bbox": [
1784
+ 91,
1785
+ 90,
1786
+ 483,
1787
+ 902
1788
+ ],
1789
+ "page_idx": 9
1790
+ },
1791
+ {
1792
+ "type": "text",
1793
+ "text": "the European Conference on Computer Vision, pages 657-672, 2018. 2",
1794
+ "bbox": [
1795
+ 545,
1796
+ 90,
1797
+ 903,
1798
+ 119
1799
+ ],
1800
+ "page_idx": 9
1801
+ },
1802
+ {
1803
+ "type": "page_number",
1804
+ "text": "15425",
1805
+ "bbox": [
1806
+ 480,
1807
+ 944,
1808
+ 517,
1809
+ 955
1810
+ ],
1811
+ "page_idx": 9
1812
+ }
1813
+ ]
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/94eed023-e594-46a7-a0d4-c4c09a3fe5b4_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14b7c3b6f4d28cc79111977d072fc0038b8b8e976daddeda5539f34974bb7881
3
+ size 15092921
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/full.md ADDED
@@ -0,0 +1,350 @@
1
+ # SynTag: Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking
2
+
3
+ Han Fang<sup>1</sup> Kejiang Chen<sup>2,*</sup> Zehua Ma<sup>2</sup> Jiajun Deng<sup>1</sup>
4
+
5
+ Yicong Li<sup>1</sup> Weiming Zhang<sup>2</sup> Ee-Chien Chang<sup>1,*</sup>
6
+
7
+ $^{1}$ National University of Singapore $^{2}$ University of Science and Technology of China
8
+
9
+ {fanghan, dcscec}@nus.edu.sg chenkj@ustc.edu.cn
10
+
11
+ # Abstract
12
+
13
+ Robustness is significant for generative image watermarking, typically achieved by injecting distortion-invariant watermark features. The leading paradigm, i.e., inversion-based framework, excels against non-geometric distortions but struggles with geometric ones. To address this, we propose SynTag, a synchronization tag injection-based method that enhances geometric robustness in inversion-based schemes. Due to the complexity of geometric distortions, finding universally geometric-invariant features is challenging, and it is not clear whether such invariant representation exists. Therefore, instead of seeking invariant representations, we embed a sensitive template feature alongside the watermarking features. This template evolves with geometric distortions, allowing us to reconstruct the distortion trajectory for correction before extraction. Focusing on latent diffusion models, we fine-tune the VAE decoder to inject the invisible SynTag feature, pairing it with a prediction network for extraction and correction. Additionally, we introduce a dither compensation mechanism to further improve correction accuracy. SynTag is highly compatible with existing inversion-based methods. Extensive experiments demonstrate a significant boost in geometric distortion robustness while maintaining resilience against non-geometric distortions.
14
+
15
+ # 1. Introduction
16
+
17
+ The powerful generative capability of latent diffusion models (LDMs) [26] presents potential disinformation risks, as malicious users could misuse them to create realistic yet misleading images, spreading false information. This makes the detectability and traceability of generated images a pressing priority. Generative image watermarking [13, 25, 31, 33, 34] is a crucial technique to address this need. By proactively watermarking every output of the generative model,
18
+
19
+ ![](images/faae5532437c86362577ac32c4af7e4ab0c31499d2d7794727932658c839fb06.jpg)
20
+ Figure 1. An intuitive performance comparison between our proposed SynTag and state-of-the-art methods from three typical paradigms, illustrating extraction accuracy under non-geometric and geometric distortions.
21
+
22
+ the identification of generated images and their uses can be achieved. For generative image watermarking, robustness is a key property, ensuring that watermarks remain extractable even if the image undergoes distortions.
23
+
24
+ Common generative image watermarking frameworks include the posthoc-based framework [6], fine-tune-based framework [13] and the inversion-based framework [31]. Among them, the inversion-based scheme demonstrates the strongest robustness to non-geometric distortions, such as JPEG, Gaussian noise, etc. However, we find that such kind of framework struggles with geometric distortions (e.g., rotation, scaling), as indicated in Fig. 1. Considering geometric distortions are common in practical applications, ensuring geometric robustness is a significant step to push the technique from lab research to practical application.
25
+
26
+ Traditionally, robustness is achieved by constructing distortion-invariant features for watermark embedding. Empirical evidence suggests that invariant features can be effectively trained or engineered to withstand nongeometric distortions, which primarily affect pixel-level details [13, 25, 31, 34]. However, geometric distortions introduce structural transformations that cause pixel misalignment and desynchronization effects, making it challenging
27
+
28
+ to establish universally geometric-invariant features. In extreme cases, such a feature may not exist.
29
+
30
+ To address these limitations, we propose SynTag, a synchronization tag injection-based method that fundamentally rethinks how to achieve geometric robustness in inversion-based frameworks. Unlike conventional approaches that attempt to construct geometric-invariant features, we adopt an alternative strategy by injecting geometric-sensitive template-like features. Rather than resisting geometric transformations, these features evolve dynamically in response to distortions. This shift in perspective allows us to leverage the transformation itself as a cue for correction, rather than treating it as a disruption. By analyzing the evolved template, we can accurately estimate the geometric distortion trajectory and reverse the transformation prior to watermark extraction.
31
+
32
+ Specifically, to tightly couple feature injection with the image generation process, we fine-tune the VAE decoder to integrate SynTag seamlessly. In addition to standard latent decoding, the VAE decoder is trained to inject an imperceptible feature that preserves visual consistency with and without SynTag injection. Simultaneously, we introduce a SynTag predictor that estimates the geometric distortion trajectory. Since common geometric distortions can be described by a homography transformation [21], the predictor is trained to estimate the corresponding transformation parameters. This enables precise geometric correction prior to watermark extraction.
33
+
34
+ Furthermore, to refine correction accuracy, we introduce a fine-grained dither compensation mechanism that performs a local exhaustive search over both pixel and latent domains. This compensates for minor correction biases, further improving watermark extraction fidelity.
35
+
36
+ In summary, we make the following contributions:
37
+
38
+ - We propose SynTag, a synchronization tag injection-based solution to enhance the geometric distortion robustness of inversion-based framework. By fine-tuning the VAE decoder with a prediction network, we embed a template-like feature that effectively aids in geometric distortion correction.
39
+ - We introduce dither compensation that complements the SynTag injection-based approach, further minimizing correcting errors and improving the accuracy of watermark extraction.
40
+ - Extensive experiments demonstrate that SynTag integrates seamlessly with state-of-the-art inversion-based frameworks, significantly improving true positive rates in watermark detection and bit accuracy in watermark extraction under geometric distortions.
41
+
42
+ # 2. Related Work
43
+
44
+ # 2.1. Traditional image watermarking
45
+
46
+ Image watermarking plays an important role in copyright protection and leakage tracing. To ensure both fidelity and robustness, traditional image watermarking schemes often embed the watermark into the transform domain [10, 18, 19]. Recently, DNN-based image watermarking has been widely studied, where the commonly used framework is an "encoder-noise layer-decoder" architecture. By training with different distortions in the noise layer, the whole system can guarantee different robustness. Common noise layers include JPEG compression simulation layer [16, 37], print-camera layer [29], screen-camera layer [11, 32].
47
+
48
+ # 2.2. Latent diffusion models
49
+
50
+ Diffusion models have gained popularity across various fields [5, 9, 27] for their strong generative capabilities. In practice, latent diffusion models (LDMs), which perform diffusion in the latent space of a pre-trained VAE, are especially prevalent. Large-scale LDMs like Stable Diffusion have demonstrated impressive text-guided image generation ability. The DDIM sampling algorithm [28] is commonly used, as it requires significantly fewer steps than DDPM [15]. The success of diffusion models has spurred interest in watermarking techniques specifically tailored for generated images. While traditional watermarking schemes offer straightforward solutions, recent research has increasingly focused on embedding watermarks directly within the model to eliminate the need for post-processing.
51
+
52
+ # 2.3. Generative image watermarking for LDMs
53
+
54
+ Current generative image watermarking schemes for LDMs (in particular, techniques watermarking the output of LDMs) can be divided into three main categories. 1) Post-hoc methods: traditional image watermarking techniques [8, 12, 16, 23] are straightforward post-hoc methods for watermarking LDMs, applied by concatenating a watermarking procedure after image generation. However, this solution requires additional processing and does not integrate the watermarking process well into image generation. 2) Fine-tune-based methods: these methods primarily fine-tune the VAE decoder of the LDM with noise layers and a pre-trained watermark extractor, so that the VAE decoder can both produce high-quality images and embed watermarks. Typical works include Stable Signature [13] and LaWa [25]. 3) Inversion-based methods: these approaches embed a watermark into the initial latent of the diffusion process. Starting from a watermarked latent, LDMs generate images carrying an implicit watermark. For extraction, an inversion process is applied to recover the watermarked latent, enabling further extraction. Wen et al. [31] introduced a method injecting a tree-ring
55
+
56
+ watermark into the starting latent's Fourier domain, while Yang et al. [34] proposed a distribution-preserving sampling approach to generate a watermarked latent directly. Although inversion-based methods exhibit strong robustness against non-geometric distortions, their resilience to geometric distortions remains limited.
57
+
58
+ # 3. Preliminary
59
+
60
+ # 3.1. Denoising diffusion implicit model (DDIM)
61
+
62
+ DDIM is the commonly used sampling method in latent diffusion models. For a denoising schedule $\{\alpha_{t}\}_{t=0}^{T}$, the sampling of the image/latent $x_{t-1}$ in step $t$ can be represented by:
63
+
64
+ $$
65
+ x_{t-1} = \sqrt{\frac{\alpha_{t-1}}{\alpha_{t}}}\, x_{t} + \left(\sqrt{1 - \alpha_{t-1}} - \sqrt{\frac{\alpha_{t-1}(1 - \alpha_{t})}{\alpha_{t}}}\right) \varepsilon_{\theta}\left(x_{t}, t\right) \tag{1}
66
+ $$
67
+
68
+ where $\varepsilon_{\theta}(x_t,t)$ is the noise estimated in step $t$. To simplify the expression, we denote $a_{t} = \sqrt{\alpha_{t-1} / \alpha_{t}}$ and $b_{t} = \sqrt{1 - \alpha_{t-1}} - \sqrt{\alpha_{t-1}(1 - \alpha_{t}) / \alpha_{t}}$, so that Eq. (1) reads $x_{t-1} = a_t x_t + b_t \varepsilon_{\theta}(x_t, t)$. A remarkable property of DDIM sampling is that the denoising process is approximately invertible, where $x_{t}$ can be estimated as follows:
69
+
70
+ $$
71
+ x_{t} = \frac{x_{t-1} - b_{t}\,\varepsilon_{\theta}(x_{t}, t)}{a_{t}} \approx \frac{x_{t-1} - b_{t}\,\varepsilon_{\theta}(x_{t-1}, t)}{a_{t}}
72
+ $$
73
+
74
+ Such an estimation relies on the assumption that $\varepsilon_{\theta}(x_t,t)\approx \varepsilon_{\theta}(x_{t - 1},t)$ [30]. The estimation process from $x_{t - 1}$ to $x_{t}$ is called DDIM inversion.
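+
+ For concreteness, the two updates above can be written as a minimal sketch, assuming only a noise-prediction callable `eps(x, t)` and a tensor `alphas` holding the cumulative schedule $\{\alpha_t\}$ (both names are hypothetical):
+
+ ```python
+ import torch
+
+ def ddim_step(x_t, t, eps, alphas):
+     """One DDIM denoising step (Eq. 1): x_{t-1} = a_t * x_t + b_t * eps(x_t, t)."""
+     a_t = torch.sqrt(alphas[t - 1] / alphas[t])
+     b_t = (-torch.sqrt(alphas[t - 1] * (1 - alphas[t]) / alphas[t])
+            + torch.sqrt(1 - alphas[t - 1]))
+     return a_t * x_t + b_t * eps(x_t, t)
+
+ def ddim_inversion_step(x_prev, t, eps, alphas):
+     """Approximate inversion of one step, assuming eps(x_t, t) ~ eps(x_{t-1}, t)."""
+     a_t = torch.sqrt(alphas[t - 1] / alphas[t])
+     b_t = (-torch.sqrt(alphas[t - 1] * (1 - alphas[t]) / alphas[t])
+            + torch.sqrt(1 - alphas[t - 1]))
+     return (x_prev - b_t * eps(x_prev, t)) / a_t
+ ```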
75
+
76
+ # 3.2. Homography transformation
77
+
78
+ Homography transformation is defined as the mapping between two planar projections of an image. The homography matrix $H$ is often utilized to represent the homographic transformation relationship between two images. It is defined as:
79
+
80
+ $$
81
+ H = \left[\begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{array}\right]
82
+ $$
83
+
84
+ where $[h_{11}, h_{12}, h_{21}, h_{22}]$, $[h_{13}, h_{23}]$ and $[h_{31}, h_{32}]$ represent the affine transformation, translation transformation, and perspective transformation between images, respectively. By setting appropriate parameters in $H$, we can perform different geometric transformations (e.g., rotation, translation, scaling, shear transform) [21]. In this paper, we utilize homography transformation to correct geometric distortions in the generated images.
85
+
86
+ Specifically, for an image with coordinates $(u,v)$ , the transformed coordinates $(u^{\prime},v^{\prime})$ can be calculated as:
87
+
88
+ $$
89
+ \left[\begin{array}{c} u' \\ v' \\ 1 \end{array}\right] = H \left[\begin{array}{c} u \\ v \\ 1 \end{array}\right] = \left[\begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{array}\right] \left[\begin{array}{c} u \\ v \\ 1 \end{array}\right] \tag{2}
90
+ $$
91
+
92
+ The homography matrix $H$ is a $3 \times 3$ homogeneous matrix whose final element $h_{33}$ is often normalized to 1, leaving it with 8 degrees of freedom. In other words, solving for the homography matrix requires four pairs of corresponding points (eight coordinates in total). In this paper, we fix the 4 coordinate points before transformation at $(1,1), (1,A), (B,1)$ and $(B,A)$, where $A$ and $B$ represent the image dimensions. Therefore, we only have to determine the 4 transformed coordinate points, denoted as $\mathbb{P} = \{P_1, P_2, P_3, P_4\}$, to construct a homography transform $\mathcal{T}_{\mathbb{P}}$ for geometric distortion correction.
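+
+ As an illustration, the correction transform $\mathcal{T}_{\mathbb{P}}$ can be built with OpenCV from the four fixed corners and the four predicted points; this is a minimal sketch, where the point ordering and the 0-/1-based indexing convention are assumptions:
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def correct_geometry(img, pred_points):
+     """Warp a distorted image back using the homography that maps the four
+     predicted corner locations P1..P4 onto the fixed canonical corners."""
+     A, B = img.shape[0], img.shape[1]               # image height (A) and width (B)
+     corners = np.float32([[1, 1], [1, A], [B, 1], [B, A]])
+     pred = np.float32(pred_points)                  # output of the SynTag predictor
+     H = cv2.getPerspectiveTransform(pred, corners)  # solve the 8-DoF homography
+     return cv2.warpPerspective(img, H, (B, A))      # apply T_P to correct I_D
+ ```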
93
+
94
+ # 4. Proposed Methods
95
+
96
+ # 4.1. Application scenarios.
97
+
98
+ There are two typical scenarios of generative image watermarking: generated image detection and source generative model tracing.
99
+
100
+ Detection. In the detection scenario, the watermark serves as a proactive method to determine whether an image is generated by the model or is a real image. For a generative model with a watermark sequence $s$ , every output image from the model should contain this watermark $s$ . During detection, given a suspect image, if the extracted watermark $s'$ satisfies $S(s', s) \geq \tau$ (where $S$ is the similarity evaluation function and $\tau$ is the threshold), the image is regarded as generated.
101
+
102
+ Tracing. In the tracing scenario, the watermark is used to identify the source generative model of the suspect image. Given $n$ generative models with different watermarks $\mathbb{M} = \{s_1, s_2, \ldots, s_n\}$, and a suspect image with an extracted watermark $s'$, the source model is determined by finding the one with the highest similarity: $\arg\max_i S(s', s_i)$.
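+
+ Both decision rules reduce to a few lines once a similarity function is fixed; below is a minimal sketch using bit accuracy as $S$ (the threshold value is the one reported in Section 5.1):
+
+ ```python
+ import numpy as np
+
+ def bit_similarity(s_ext, s_ref):
+     """S(s', s): fraction of matching bits of two equal-length binary sequences."""
+     return float(np.mean(np.asarray(s_ext) == np.asarray(s_ref)))
+
+ def detect(s_ext, s, tau=0.78125):
+     """Detection: flag the image as generated if S(s', s) >= tau."""
+     return bit_similarity(s_ext, s) >= tau
+
+ def trace(s_ext, marks):
+     """Tracing: return the index i maximizing S(s', s_i) over the model set M."""
+     return int(np.argmax([bit_similarity(s_ext, s_i) for s_i in marks]))
+ ```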
105
+
106
+ # 4.2. Overview
107
+
108
+ The proposed framework consists of three main stages, as illustrated in Fig. 2. In the SynTag initialization stage, a trainable copy of the existing VAE decoder $\mathcal{D}_{\Delta}$, named the SynTag decoder $\mathcal{D}_{Syn}$, is fine-tuned alongside a prediction network, the SynTag predictor $\mathcal{P}_{Syn}$. This stage trains both the injection of SynTag features and the estimation of geometric distortion correction parameters. In the injection stage, a watermarked starting latent is first generated by applying an inversion-based mechanism to the given watermark sequence $s$. This latent undergoes denoising via the latent diffusion model (LDM) and is subsequently decoded using the fine-tuned $\mathcal{D}_{Syn}$, which injects both the watermark and the SynTag features. In the extraction stage, the distorted image is first corrected using the SynTag predictor $\mathcal{P}_{Syn}$. A dither compensation operation is then applied to generate multiple candidate latents, facilitating the extraction process. Following DDIM inversion and the corresponding watermark extraction, multiple candidate watermark messages are obtained. The extracted messages are then compared against
109
+
110
+ ![](images/051feab3a9a8cfc1f398e37f833cc1b53c8ab58062f7f8b6db55b53cfa586da0.jpg)
111
+ Figure 2. The overview of the proposed SynTag, which mainly contains three stages: the SynTag initialization stage, which fine-tunes the VAE decoder together with a prediction network; the injection stage, which generates the final watermarked image according to the given watermark message; and the extraction stage, which extracts the watermark message.
112
+
113
+ the embedded watermark. If any extracted message exhibits a similarity greater than a predefined threshold $\tau$ with $s$ , the tested image is identified as containing the watermark.
114
+
115
+ # 4.3. SynTag initialization stage
116
+
117
+ The initialization process is detailed in Fig. 2. Given an input image $I_O$ , it is first encoded by the pre-trained VAE encoder $\mathcal{E}_{\Delta}$ , as used in standard LDMs, to obtain the latent representation $z_O$ , where $z_O = \mathcal{E}_{\Delta}(I_O)$ . This latent is then passed through the pre-trained VAE decoder $\mathcal{D}_{\Delta}$ to reconstruct the image $I_R$ . Simultaneously, $z_O$ is also processed by the trainable SynTag decoder $\mathcal{D}_{Syn}$ , generating the injected image $I_{Syn} = \mathcal{D}_{Syn}(z_O)$ . The injected image $I_{Syn}$ is then subjected to geometric distortions (e.g., rotation, scaling, and translation) via a noise layer $\mathcal{N}_G$ , producing the distorted image $I_D$ . Given the applied distortion, the corresponding ground truth points $\mathbb{P}_{gt}$ are computed, which define the homography transformation parameters for correction via Eq. 2. To estimate the correction parameters, $I_D$ is processed through the SynTag predictor $\mathcal{P}_{Syn}$ , yielding the predicted correction points: $\mathbb{P}_p = \mathcal{P}_{Syn}(I_D)$ . To maintain visual consistency between $I_{Syn}$ and $I_R$ , we introduce a reconstruction loss $\mathcal{L}_R$ , adapted from Stable Signature [13]:
118
+
119
+ $$
120
+ \mathcal{L}_{R}(I_{Syn}, I_{R}) = \lambda_{1}\mathcal{L}_{MSE}(I_{Syn}, I_{R}) + \lambda_{2}\mathcal{L}_{VGG}(I_{Syn}, I_{R}) + \lambda_{3}\mathcal{L}_{SSIM}(I_{Syn}, I_{R}) \tag{3}
121
+ $$
122
+
123
+ where $\mathcal{L}_{MSE}$, $\mathcal{L}_{VGG}$, and $\mathcal{L}_{SSIM}$ denote the mean squared error (MSE) loss, perceptual VGG loss [17], and structural similarity (SSIM) loss, respectively, with $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ being weighting coefficients. For training the SynTag predictor $\mathcal{P}_{Syn}$, we employ an MSE loss function:
124
+
125
+ $$
126
+ \mathcal{L}_{P} = \mathcal{L}_{MSE}\left(\mathbb{P}_{p}, \mathbb{P}_{gt}\right) = \mathcal{L}_{MSE}\left(\mathcal{P}_{Syn}(I_{D}), \mathbb{P}_{gt}\right)
127
+ $$
128
+
129
+ The decoder $\mathcal{D}_{Syn}$ and predictor $\mathcal{P}_{Syn}$ are jointly optimized in an end-to-end manner. After training, the fine-tuned SynTag decoder $\mathcal{D}_{Syn}$ replaces the original VAE decoder during the injection stage.
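+
+ A minimal sketch of one such joint optimization step is given below; the networks and the three loss components are passed in as callables, and both the loss weights and the unweighted sum of $\mathcal{L}_R$ and $\mathcal{L}_P$ are illustrative assumptions:
+
+ ```python
+ import torch
+
+ def init_stage_step(I_O, enc, dec_ref, dec_syn, pred_syn, noise_G,
+                     l_mse, l_vgg, l_ssim, lambdas=(1.0, 0.1, 0.1)):
+     """One end-to-end training step for D_Syn and P_Syn."""
+     with torch.no_grad():
+         z_O = enc(I_O)            # frozen VAE encoder
+         I_R = dec_ref(z_O)        # frozen original decoder: reconstruction target
+     I_syn = dec_syn(z_O)          # trainable SynTag decoder
+     I_D, P_gt = noise_G(I_syn)    # geometric noise layer and ground-truth points
+     P_p = pred_syn(I_D)           # predicted correction points
+     L_R = (lambdas[0] * l_mse(I_syn, I_R)
+            + lambdas[1] * l_vgg(I_syn, I_R)
+            + lambdas[2] * l_ssim(I_syn, I_R))   # Eq. (3)
+     L_P = torch.mean((P_p - P_gt) ** 2)          # MSE on correction points
+     return L_R + L_P
+ ```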
130
+
131
+ # 4.4. Injection stage
132
+
133
+ During the injection stage, for a given $l$-bit watermark sequence $s \in \{0,1\}^l$, a watermark injection mechanism $\mathcal{F}$ is first applied to generate the watermarked starting latent: $z_T^s = \mathcal{F}(s)$, where $\mathcal{F}$ represents the embedding mechanism of any inversion-based scheme (e.g., the distribution-preserving sampling method proposed by [34]). The standard LDM denoising process is then performed for $T$ steps, yielding the denoised latent $z_0^s$. Finally, $z_0^s$ is passed through the fine-tuned SynTag decoder $\mathcal{D}_{Syn}$, which injects the SynTag features to produce the watermarked image $I_s$. Notably, in this stage, $\mathcal{D}_{Syn}$ remains fixed, and its parameters are not updated.
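+
+ The whole injection pipeline is therefore a drop-in change to standard sampling; a minimal sketch, with all callables hypothetical:
+
+ ```python
+ def inject(s, F, ldm_denoise, dec_syn, T=50):
+     """Watermarked latent -> T frozen LDM denoising steps -> SynTag decoder."""
+     z = F(s)                    # inversion-based watermarked starting latent z_T^s
+     for t in range(T, 0, -1):   # standard sampling; the LDM itself is unchanged
+         z = ldm_denoise(z, t)
+     return dec_syn(z)           # decode z_0^s with SynTag features injected
+ ```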
134
+
135
+ # 4.5. Extraction stage
136
+
137
+ To recover the watermark from a distorted image $I_{D}$ , the trained SynTag predictor $\mathcal{P}_{\text{Syn}}$ first estimates the correction parameters: $\mathbb{P}_p = \mathcal{P}_{\text{Syn}}(I_D)$ . Based on $\mathbb{P}_p$ , the homography transformation $\mathcal{T}_{\mathbb{P}_p}$ is applied to correct the distortion: $I_C = \mathcal{T}_{\mathbb{P}_p}(I_D)$ . To further refine correction accuracy, we introduce a dither compensation mechanism, which operates at two levels:
138
+
139
+ Pixel-level compensation $\mathbb{C}_p$. This involves applying small-scale homography transformations to $I_{C}$, such as scaling with factors $0.9 \sim 1.1$ and rotation within $\pm 3^{\circ}$. If $n$ transformations are applied, a set of $n$ corrected images $\mathbb{I}_p = \{I_p^1, I_p^2, \ldots, I_p^n\}$ is obtained, where $I_p^i = \mathcal{T}^i(I_C)$. For each $I_p^i$, the pre-trained VAE encoder $\mathcal{E}_{\Delta}$ computes the latent representation, followed by DDIM inversion to produce the inverted latent $\hat{z}_p^i$.
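+
+ A sketch of this pixel-level variant generation, assuming the four operations used in Section 5.1 (torchvision's functional transforms are used here purely for illustration):
+
+ ```python
+ import torchvision.transforms.functional as TF
+
+ def pixel_dither(I_C, angles=(-3.0, 3.0), scales=(0.9, 1.1)):
+     """Small-scale transformations of the corrected image I_C (C, H, W)."""
+     variants = [TF.rotate(I_C, a) for a in angles]
+     variants += [TF.affine(I_C, angle=0.0, translate=[0, 0],
+                            scale=s, shear=[0.0]) for s in scales]
+     return variants  # each variant is encoded and DDIM-inverted afterwards
+ ```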
140
+
141
+ Latent-level compensation $\mathbb{C}_l$ . Latent-level compensation includes outer padding and re-sampling operations. Given
142
+
143
+ ![](images/e9b6632cc607ff588427d0648a056994ab4cafcc6bd9d1cc8bc03e04e3a3da24.jpg)
144
+ Figure 3. The watermarked images and the differences between watermarked/non-watermarked images with different methods.
145
+
146
+ an inverted latent $\hat{z}_p^i$ of size $C\times H\times W$, we apply zero-padding to obtain a padded latent $\bar{z}_l^i$ of size $C\times (H + 2r)\times (W + 2r)$. A sliding window with stride 1 resamples $\bar{z}_l^i$, generating $(2r + 1)^2$ latent variations $\bar{z}_l^j$. Each $\bar{z}_l^j$ undergoes the extraction mechanism $\mathcal{F}^{-1}$ to recover a potential watermark sequence $\bar{s}_j$. The extracted watermark candidates form a set $\mathbb{S}_{ex} = \{\bar{s}_1,\bar{s}_2,\dots\}$. The similarity between each extracted sequence and the original watermark $s$ is evaluated. If $S(\bar{s}_i,s)\geq \tau$, where $S$ represents the similarity measurement, the image $I_{D}$ is identified as containing watermark $s$.
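+
+ The latent-level resampling is a simple padded sliding window; a minimal sketch:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def latent_dither(z_hat, r=2):
+     """Zero-pad an inverted latent (C, H, W) by r on each side, then take all
+     (2r+1)^2 H x W windows with stride 1 as candidate latents."""
+     C, H, W = z_hat.shape
+     z_pad = F.pad(z_hat, (r, r, r, r))      # pads the last two dimensions
+     variants = [z_pad[:, dy:dy + H, dx:dx + W]
+                 for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
+     return torch.stack(variants)            # ((2r+1)^2, C, H, W)
+ ```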
147
+
148
+ Notably, extraction can be performed without prior knowledge of $s$ by utilizing error correction and detection codes (details provided in supplementary materials).
149
+
150
+ # 5. Experimental Results and Analysis
151
+
152
+ # 5.1. Implementation details
153
+
154
+ # 5.1.1. Experimental settings.
155
+
156
+ In this paper, we focus on text-to-image latent diffusion models and use Stable Diffusion with version 1.4 and 2.1 [26] (SD-v1.4/v2.1) provided by Hugging Face for experiments. The generated images are with size $512 \times 512 \times 3$ , and the latent size is $4 \times 64 \times 64$ .
157
+
158
+ In the initialization stage, MS COCO [20] is used as the training dataset for fine-tuning $\mathcal{D}_{Syn}$ and $\mathcal{P}_{Syn}$. The geometric distortions set in the noise layer include rotation of $-45^{\circ} \sim 45^{\circ}$, translation of $10 \sim 75$ pixels, scaling with factor $0.5 \sim 2$ (combined with padding/cropping), and shear mapping of $5 \sim 10$ pixels. Additionally, after applying the geometric distortions, non-geometric distortions such as Gaussian noise, median filtering, and JPEG compression are also introduced to simulate combined noise.
159
+
160
+ In the watermarked image generation stage, we combine SynTag with two existing representative inversion-based frameworks, GauShad [34] and TreeRings [31]. For GauShad-SynTag, $l$ is set to 64. For TreeRings-SynTag, we only change the VAE decoder to $\mathcal{D}_{\text{Syn}}$. During in-
161
+
162
+ ference, we employ prompts from the Stable Diffusion prompt dataset [2]. The number of sampling steps and the guidance scale are set to 50 and 7.5, respectively.
163
+
164
+ In the extraction stage, we conduct DDIM inversion with the $\emptyset$-condition and 20 steps. We set 4 operations in the pixel-level compensation: rotation with $\pm 3^{\circ}$ and scaling with factors 0.9 and 1.1. The padding radius $r$ is set to 2. The threshold $\tau$ is selected to satisfy a fixed false positive rate (FPR), computed in the same way as in Stable Signature [13]. In this paper, we set the FPR to $10^{-6}$, so $\tau$ is set to 0.78125. All experiments are performed using PyTorch 1.12.1 and a single NVIDIA A40 GPU.
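+
+ Following that Stable-Signature-style computation, the threshold for a target FPR can be derived from the binomial tail of random $l$-bit guesses; this sketch illustrates the standard scheme, though the exact FPR convention may differ slightly from the paper's:
+
+ ```python
+ from scipy.stats import binom
+
+ def threshold_for_fpr(l=64, fpr=1e-6):
+     """Smallest tau = k/l such that P(X >= k) <= fpr for X ~ Binomial(l, 1/2),
+     i.e., a random guess matches at least tau*l bits with probability <= fpr."""
+     for k in range(l + 1):
+         if binom.sf(k - 1, l, 0.5) <= fpr:  # sf(k-1) = P(X >= k)
+             return k / l
+     return 1.0
+ ```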
165
+
166
+ # 5.1.2. Evaluation metrics.
167
+
168
+ To assess robustness, we adopt the evaluation settings from [34], encompassing both the detection and tracing scenarios. In the detection scenario, we measure the true positive rate (TPR) at a fixed false positive rate (FPR). For tracing, we use extraction bit accuracy as the metric. We evaluate the quality of watermarked images using FID [14] (compared against non-watermarked images) and CLIP-Score [24] (larger is better). All results are obtained from testing on 50 watermarked images generated with randomly sampled prompts from [2].
169
+
170
+ # 5.1.3. Baseline and benchmark.
171
+
172
+ We compare performance with 9 state-of-the-art watermarking frameworks: 5 post-hoc frameworks, including 2 methods officially used by Stable Diffusion, namely DwtDctSvd [8] and RivaGAN [35], and 3 deep-learning-based frameworks, MBRS [16], StegaStamp [29] and RoSteALS [6]; 2 fine-tune-based methods, Stable Signature [13] and LaWa [25]; and 2 inversion-based methods, TreeRings [31] and Gaussian Shading (GauShad) [34]. Note that TreeRings only embeds a 1-bit watermark, so we only evaluate the true positive rate (TPR) for TreeRings. Details of the training and implementation of the compared methods are provided in the supplementary materials.
173
+
174
+ Table 1. The overall robustness and visual quality performance evaluation with different methods. Results show in the form of SD-v1.4/v2.1.
175
+
176
+ <table><tr><td rowspan="2">Methods</td><td colspan="2">Geometric</td><td colspan="2">Non-geometric</td><td rowspan="2">FID</td><td rowspan="2">CLIP-Score</td></tr><tr><td>TPR</td><td>Bit Acc.</td><td>TPR</td><td>Bit Acc.</td></tr><tr><td>Stable Diffusion</td><td>-</td><td>-</td><td>-</td><td>-</td><td>25.28±.17</td><td>0.3628±.0006</td></tr><tr><td>DwtDctSvd[8]</td><td>0.004/0.008</td><td>0.539/0.533</td><td>0.000/0.000</td><td>0.524/0.522</td><td>24.75±.21</td><td>0.3610±.0008</td></tr><tr><td>RivaGAN[35]</td><td>0.788/0.792</td><td>0.863/0.866</td><td>0.483/0.487</td><td>0.779/0.781</td><td>24.84±.36</td><td>0.3611±.0009</td></tr><tr><td>MBRS[16]</td><td>0.744/0.744</td><td>0.850/0.850</td><td>0.346/0.353</td><td>0.707/0.719</td><td>24.98±.26</td><td>0.3601±.0011</td></tr><tr><td>StegaStamp[29]</td><td>0.240/0.236</td><td>0.645/0.648</td><td>0.897/0.900</td><td>0.899/0.899</td><td>25.17±.16</td><td>0.3612±.0010</td></tr><tr><td>RoSteALS[6]</td><td>0.240/0.236</td><td>0.563/0.564</td><td>0.917/0.920</td><td>0.870/0.869</td><td>24.37±.18</td><td>0.3534±.0004</td></tr><tr><td>Stable Signature[13]</td><td>0.760/0.756</td><td>0.857/0.859</td><td>0.467/0.470</td><td>0.777/0.779</td><td>25.25±.35</td><td>0.3627±.0009</td></tr><tr><td>LaWa[25]</td><td>0.764/0.760</td><td>0.845/0.852</td><td>0.926/0.930</td><td>0.914/0.916</td><td>25.21±.12</td><td>0.3609±.0008</td></tr><tr><td>TreeRings[31]</td><td>0.548/0.552</td><td>-</td><td>0.947/0.943</td><td>-</td><td>25.13±.32</td><td>0.3628±.0007</td></tr><tr><td>GauShad[34]</td><td>0.016/0.020</td><td>0.633/0.635</td><td>0.963/0.963</td><td>0.936/0.937</td><td>25.23±.24</td><td>0.3629±.0006</td></tr><tr><td>TreeRings-SynTag</td><td>0.928/0.932</td><td>-</td><td>0.950/0.949</td><td>-</td><td>25.22±.21</td><td>0.3613±.0004</td></tr><tr><td>GauShad-SynTag</td><td>0.980/0.988</td><td>0.938/0.940</td><td>0.967/0.970</td><td>0.938/0.939</td><td>25.21±.17</td><td>0.3615±.0006</td></tr></table>
177
+
178
+ # 5.2. Comparison to baselines
179
+
180
+ In this section, we compare the robustness and visual quality of GauShad-SynTag and TreeRings-SynTag against the baseline methods. For visual quality evaluation, in addition to calculating the FID and CLIP-Score, we also provide one example for subjective evaluation, as shown in Fig. 3. For robustness evaluation, we mainly test two kinds of distortions: 5 geometric distortions (rotation $30^{\circ}$; translation 50 pixels; scale 0.75x & pad; scale 1.25x & crop; shear mapping 5 pixels) and 6 non-geometric distortions (JPEG compression, QF=15; Gaussian noise, $\sigma = 0.05$; median filtering, $k = 11$; dropout, $r = 30\%$; cropout, $r = 70\%$; brightness, factor = 6). Visual examples of the distortions can be found in the supplementary materials.
181
+
182
+ As we can see in Table 1, the FID and CLIP-Score values of GauShad-SynTag and TreeRings-SynTag are at the same level as those calculated with the raw images generated by Stable Diffusion, indicating that embedding the watermark does not substantially affect the distribution or semantic information of the images. From Fig. 3 we can see that images generated with SynTag look natural, since the watermark is located where the eye is not sensitive.
183
+
184
+ Regarding the robustness evaluation, SynTag significantly improves the geometric robustness of inversion-based frameworks, and GauShad-SynTag maintains the highest TPR and bit accuracy across both geometric and non-geometric distortions. For geometric distortions, compared to the most competitive state-of-the-art method, GauShad-SynTag achieves a $19\%$ higher overall true positive rate and $8\%$ higher bit accuracy. Moreover, for non-geometric distortions, SynTag maintains the robustness of the original methods. Detailed results can be found in the supplementary materials.
185
+
186
+ To further highlight the advantages of SynTag over the original inversion-based schemes under geometric distortions, additional experiments were conducted, including rotation from
187
+
188
+ $-30^{\circ}$ to $30^{\circ}$ , scale factors from 0.7 to 1.75, translations with 20 to 75 pixels, and shear mapping from 5 to 10 pixels. The results are shown in Fig. 4(a) to Fig. 4(h).
189
+
190
+ It can be observed that SynTag greatly improves geometric robustness in all settings: the frameworks with SynTag achieve higher TPR and bit accuracy. Furthermore, the advantages of SynTag are particularly evident under stronger distortions.
191
+
192
+ # 5.3. Adaptive attacks
193
+
194
+ In this paper, we follow the settings of GauShad [34], which mainly investigate two adaptive attacks. 1) Reconstruction attack, as proposed by [36], refers to the operation that utilizes an auto-encoder to compress and subsequently decompress the watermarked images. 2) Purification attack, as proposed by [22], refers to a procedure that adds noise to the image/latent and then denoises it with diffusion models. For reconstruction attacks, we employ four widely used auto-encoders: "Cheng" [7], "Bmshj" [4], VQ-VAE, and KL-VAE [26]. For purification attacks, we add noise with strengths 0.1 to 0.7 and then perform a diffusion denoising process with 100 steps to generate the purified images. We test the performance with GauShad-SynTag. The PSNR of the attacked images and the extraction results are shown in Table 2.
195
+
196
+ Table 2. Adaptive attacks on GauShad-SynTag.
197
+
198
+ <table><tr><td rowspan="2">Attacks</td><td colspan="4">Reconstruction Attack</td></tr><tr><td>Cheng</td><td>Bmshj</td><td>VQ-VAE</td><td>KL-VAE</td></tr><tr><td>PSNR(dB)</td><td>35.13</td><td>37.92</td><td>30.16</td><td>30.21</td></tr><tr><td>TPR</td><td>0.980</td><td>0.920</td><td>0.980</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.992</td><td>0.989</td><td>0.966</td><td>0.973</td></tr><tr><td rowspan="2">Attacks</td><td colspan="4">Purification Attack</td></tr><tr><td>f = 0.1</td><td>0.3</td><td>0.5</td><td>0.7</td></tr><tr><td>PSNR(dB)</td><td>24.12</td><td>20.59</td><td>19.17</td><td>15.44</td></tr><tr><td>TPR</td><td>0.980</td><td>0.980</td><td>0.580</td><td>0.000</td></tr><tr><td>Bit Acc.</td><td>0.974</td><td>0.903</td><td>0.808</td><td>0.726</td></tr></table>
199
+
200
+ ![](images/e4ffeaa24b8114fa66523ed58c010514cdbdc8111f5dfad7ac73a33433949f1f.jpg)
201
+ (a) Rotation with $-30^{\circ} \sim 30^{\circ}$ .
202
+
203
+ ![](images/0bb37736fa2c72421f8750985f584db1f05d438dc067be6b6497f1d8fb712728.jpg)
204
+ (b) Scale $0.7\mathrm{x}\sim 1.75\mathrm{x}$
205
+
206
+ ![](images/556bf4020c9c0a328cc041f23dca5b6bcf5cf1d2c16cf61ece30e59e0be93b58.jpg)
207
+ (c) Translation $20\sim 75$ pixels.
208
+
209
+ ![](images/eabfa315821caf20f2064640b673c1fdc60c08ec393962628179f5350fbc08ff.jpg)
210
+ (d) Shear mapping $5\sim 10$ pixels.
211
+
212
+ ![](images/8e85ad3b47a3564a6fc59e18cac3ae863ff39137f7de2017b17b11b7de979dff.jpg)
213
+ (e) Rotation with $-30^{\circ} \sim 30^{\circ}$ .
214
+
215
+ ![](images/61fa7c7abef123a69ce92ee06e9b2f54ed98c40690f1f879f4c957b12dc1d8d5.jpg)
216
+ (f) Scale $0.7\mathrm{x}\sim 1.75\mathrm{x}$
217
+
218
+ ![](images/7fb52e45eaf67dcf9ff8c284cbb50ccd24971b1a8f96c03e1b2dc930e0a4e19a.jpg)
219
+ (g) Translation $20\sim 75$ pixels.
220
+
221
+ ![](images/b6e1c7626e44b883aa37cb8d2e8e3ef3f1c95602ec36bb53f3616499ad6ee3f3.jpg)
222
+ (h) Shear mapping $5\sim 10$ pixels.
223
+
224
+ The results in Table 2 highlight the robustness of the watermark extraction process against reconstruction attacks, with TPR above 0.92 and bit accuracy exceeding 0.96 across all auto-encoders. For purification attacks, GauShad-SynTag also demonstrates considerable robustness. When the attack strength is below 0.3, both bit accuracy and TPR remain high. However, as the attack strength increases, performance gradually declines. We should highlight that at strength $f = 0.5$, while robustness decreases, the PSNR of the attacked images drops sharply to 19 dB, indicating that the purified images differ significantly from the original watermarked images. Additional visual results of purification attacks are included in the supplementary materials.
225
+
226
+ # 5.4. False positive detection of dither compensation
227
+
228
+ The dither compensation involves a small-range exhaustive search, which may potentially increase false positive detections (identifying non-watermarked images as watermarked). Therefore, in this section, we explore the effect of this operation on false positive detection. We use GauShad-SynTag as the backbone, fix one watermark message $s$, and generate 1000 watermarked images, which we then distort with rotation $-30^{\circ} \sim 30^{\circ}$, scaling $0.75x \sim 1.25x$, JPEG compression (QF=15) and Gaussian noise $(\sigma = 0.05)$. The highest bit accuracy of the watermark extraction for each image is recorded. The same operation is performed on 1000 non-watermarked images. The distributions of bit accuracy for the watermarked and non-watermarked images are shown in Fig. 5(a) and Fig. 5(b), respectively.
229
+
230
+ It can be seen that the distribution of the highest bit accuracy for non-watermarked images is distinctly different from that of watermarked images. For non-watermarked images, even with dither compensation, the highest bit accuracy is less than 0.8, whereas for watermarked images, almost all maintain over 0.8 bit accuracy. Such differences indicate that by selecting an appropriate threshold
231
+
232
+ ![](images/87d36fcb84fc928afbe4e443a7dd70a0fc3d27f1fd212b3256d24f506f5b1fc9.jpg)
233
+ Figure 4. The TPR and bit accuracy comparison with/without SynTag; (a)-(d) show the results with GauShad, and (e)-(h) show the results with TreeRings.
234
+
235
+ ![](images/9c7f0e5f6401f50aa95090921f5354bd9f98b2a8c96a161fd3bc12b8e27bc42a.jpg)
+ (b) Scale $0.75\mathrm{x}\sim 1.25\mathrm{x}$
+
+ ![](images/c5e431038a0886be1366ef411b4b69fb1588b0d19eea312d53873c41554a1d76.jpg)
+ (c) JPEG with $QF = 15$ .
240
+
241
+ ![](images/84e29a00706ad1562155a64093054b903516f826f2acded2dc1c627bf048b103.jpg)
242
+ (d) Gaussian noise with $\sigma = 0.05$
243
+ Figure 5. The distribution of highest bit accuracy under different distortions extracted from images with and without watermark.
244
+
245
+ (e.g. $\tau = 0.8$ ), we can control the false positive detection rate of non-watermarked images at a very low level. As a result, the dither compensation operation does not lead to significant false positive detections.
246
+
247
+ # 5.5. Generalizable experiments
248
+
249
+ # 5.5.1. Adaptability on different generation settings
250
+
251
+ To validate the adaptability of SynTag, we vary the sampling method, number of sampling steps, guidance scale, and prompt set in the generation process, and then conduct the extraction experiments. For sampling methods, we utilize three commonly used samplers based on ODE solvers (DDIM, UniPC, PNDM). Sampling steps are varied from 20 to 100, and the guidance scale values tested are 3, 7.5 and 11. The prompt sets we utilize are open-sourced on Hugging Face, denoted as $P^1$ to $P^3$ [1-3]. The default settings for the guidance scale, sampling steps, and sampling method are 7.5, 50, and DDIM, with prompt set [2], respectively. For each experiment, only the tested setting is varied. After generation, we test the detection TPR and bit accuracy under 4 geometric distortions (rotation $30^{\circ}$, scale $1.25\mathrm{x}$,
252
+
253
+ Table 3. Adaptability of SynTag with different generation settings.
254
+
255
+ <table><tr><td rowspan="2">Settings / Parameters</td><td colspan="3">Sampling Methods</td><td colspan="3">Sampling Steps</td></tr><tr><td>DDIM</td><td>UniPC</td><td>PNDM</td><td>20</td><td>50</td><td>100</td></tr><tr><td>TPR</td><td>0.990</td><td>0.960</td><td>0.970</td><td>0.980</td><td>0.990</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.935</td><td>0.927</td><td>0.928</td><td>0.934</td><td>0.935</td><td>0.934</td></tr><tr><td rowspan="2">Settings / Parameters</td><td colspan="3">Guidance Scale</td><td colspan="3">Prompt Sets</td></tr><tr><td>3</td><td>7.5</td><td>11</td><td>P1[2]</td><td>P2[3]</td><td>P3[1]</td></tr><tr><td>TPR</td><td>1.000</td><td>0.990</td><td>0.980</td><td>0.990</td><td>0.965</td><td>0.980</td></tr><tr><td>Bit Acc.</td><td>0.956</td><td>0.935</td><td>0.928</td><td>0.935</td><td>0.924</td><td>0.927</td></tr></table>
258
+
259
+ translation-50, and shear-5); the results are shown in Table 3.
260
+
261
+ It can be seen that in all cases, the detection TPR and bit accuracy stay at a high level, over 0.96 for TPR and 0.92 for bit accuracy. This performance underscores the remarkable adaptability of GauShad-SynTag under different generation conditions.
262
+
263
+ # 5.5.2. Robustness of combined distortions
264
+
265
+ We illustrate with combined distortions, covering both combinations of two geometric distortions and combinations of a geometric with a non-geometric distortion. In detail, we select 10 combined distortions, as shown in Table 4, where $R, S, T$ denote rotation, scaling and translation, respectively. "JPEG, GauN and MedF" denote JPEG compression with QF=75, Gaussian noise with $\sigma = 0.01$ and median filtering with $w = 5 \times 5$.
266
+
267
+ Table 4. Robustness of GauShad-SynTag against combined distortions.
268
+
269
+ <table><tr><td>Distortions</td><td>R-15° S-1.25x</td><td>R-15° S-0.9x</td><td>R-15° T-25</td><td>S-1.25x T-25</td><td>S-0.9x T-25</td></tr><tr><td>TPR</td><td>0.96</td><td>1.00</td><td>0.98</td><td>0.94</td><td>1.00</td></tr><tr><td>Bit Acc.</td><td>0.941</td><td>0.937</td><td>0.922</td><td>0.928</td><td>0.960</td></tr><tr><td>Distortions</td><td>R-15° JPEG</td><td>R-15° GauN</td><td>R-15° MedF</td><td>T-25 JPEG</td><td>S-1.25x JPEG</td></tr><tr><td>TPR</td><td>0.94</td><td>0.94</td><td>0.92</td><td>0.94</td><td>0.92</td></tr><tr><td>Bit Acc.</td><td>0.919</td><td>0.904</td><td>0.902</td><td>0.918</td><td>0.901</td></tr></table>
270
+
271
+ For all the combined distortions, GauShad-SynTag achieves high TPR ($\geq 0.92$) and high extraction accuracy ($\geq 0.90$), indicating strong robustness against combined noise.
272
+
273
+ # 5.6. Ablation Study
274
+
275
+ # 5.6.1. Necessity of SynTag feature injection
276
+
277
+ To validate the necessity of the SynTag feature, we also train a passive restoration network: instead of actively embedding the feature, we only train the prediction network itself, denoted $\mathcal{P}_{-}$, to predict the geometric distortion trajectory and thus correct the distortion. The experimental results under rotation ($R$-$30^{\circ}$), scaling ($S$-0.75x/1.25x), translation ($T$-50), and shear ($Sh$-5) are shown in Table 5 and Fig. 6.
278
+
279
+ It can be seen that training only $\mathcal{P}_{-}$ is not effective for distortion correction, and the restoration results are far from those of SynTag. We attribute this to the fact that the geometric distortions are quite complex, making it challenging to
280
+
281
+ ![](images/994b38e70c6c870656823253e52388d284f5ed70bf639228cf69a2091eadc4a8.jpg)
282
+ Figure 6. The correction performance of SynTag $(\mathcal{D}_{Syn} + \mathcal{P}_{Syn})$ and only with passive restoration network $\mathcal{P}_{-}$ .
283
+
284
+ Table 5. Robustness comparison of correction with SynTag and with only $\mathcal{P}_{-}$.
285
+
286
+ <table><tr><td>Distortion</td><td>R-30°</td><td>S-0.75x</td><td>S-1.25x</td><td>T-50</td><td>Sh-5</td><td>Ave</td></tr><tr><td>P_</td><td>0.711</td><td>0.737</td><td>0.720</td><td>0.761</td><td>0.778</td><td>0.730</td></tr><tr><td>DSyn+PSyn</td><td>0.941</td><td>0.949</td><td>0.934</td><td>0.963</td><td>0.902</td><td>0.938</td></tr></table>
287
+
288
+ perform distortion correction without any assisting features. The extraction results also indicate the necessity of SynTag, as its extraction accuracy is significantly higher.
289
+
290
+ # 5.6.2. Improvements of correction modules
291
+
292
+ To investigate the improvement from each correction module ($\mathcal{P}_{Syn}$, $\mathbb{C}_p$ and $\mathbb{C}_l$), we conduct an ablation study on 4 geometric distortions (rotation $30^{\circ}$, scale 1.25x, translation-50 and shear-5). In the extraction stage, we test the results without $\mathcal{P}_{Syn}$ (denoted as w/o $\mathcal{P}_{Syn}$), with $\mathcal{P}_{Syn}$, $\mathcal{P}_{Syn} + \mathbb{C}_p$, $\mathcal{P}_{Syn} + \mathbb{C}_l$, and $\mathcal{P}_{Syn} + \mathbb{C}_p + \mathbb{C}_l$, respectively. The average TPR and bit accuracy are shown in Table 6.
293
+
294
+ Table 6. Ablation study on extraction modules.
295
+
296
+ <table><tr><td>Modules</td><td>w/o PSyn</td><td>PSyn</td><td>PSyn+Cp</td><td>PSyn+Cl</td><td>PSyn+Cp+Cl</td></tr><tr><td>TPR</td><td>0.016</td><td>0.763</td><td>0.795</td><td>0.930</td><td>0.990</td></tr><tr><td>Bit Acc.</td><td>0.633</td><td>0.819</td><td>0.865</td><td>0.915</td><td>0.935</td></tr></table>
297
+
298
+ It can be seen that $\mathcal{P}_{\mathrm{Syn}}$ significantly improves robustness to geometric distortions: the detection TPR increases from 0.016 to 0.763, and bit accuracy increases from 0.633 to 0.819. Besides, with the help of $\mathbb{C}_p$ and $\mathbb{C}_l$, the performance is further improved, and compared with $\mathbb{C}_p$, $\mathbb{C}_l$ contributes more to the improvement.
299
+
300
+ # 6. Conclusion
301
+
302
+ In this paper, we introduce SynTag, a synchronization-tag injection-based method to enhance the geometric robustness of inversion-based generative image watermarking. Unlike previous approaches that focus on directly constructing distortion-invariant watermarking features, SynTag embeds a template-like feature that evolves with geometric distortions, enabling subsequent distortion correction. Additionally, we propose a dither compensation mechanism to further enhance the accuracy of the correction process. Experimental results demonstrate that SynTag can be seamlessly integrated with inversion-based frameworks and offers strong robustness against both geometric and non-geometric distortions.
303
+
304
+ Acknowledgement. This research is supported by the National Research Foundation, Singapore, through the National Cybersecurity R&D Lab at the National University of Singapore under its National Cybersecurity R&D Programme (Award No. NCR25-NCL P3-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore, and National Cybersecurity R&D Lab at the National University of Singapore.
305
+
306
+ # References
307
+
308
+ [1] Daspartho-stable-diffusion-prompts. https://huggingface.co/datasets/daspartho/stable-diffusion-prompts. Accessed: Nov. 2024. 7,8
309
+ [2] Gustavosta-stable-diffusion-prompt. https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts. Accessed: Nov. 2024. 5,7,8
310
+ [3] Isidentical-stable-diffusion-prompt. https://huggingface.co/datasets/isidentical/random-stable-diffusion-prompts. Accessed: Nov. 2024. 7, 8
311
[4] Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018. 6
312
+ [5] Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise. arXiv preprint arXiv:2208.09392, 2022. 2
313
+ [6] Tu Bui, Shruti Agarwal, Ning Yu, and John Collomosse. Rosteals: Robust steganography using autoencoder latent space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 933-942, 2023. 1, 5, 6
314
+ [7] Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto. Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7939-7948, 2020. 6
315
+ [8] Ingemar Cox, Matthew Miller, Jeffrey Bloom, Jessica Fridrich, and Ton Kalker. Digital watermarking and steganography. Morgan Kaufmann, 2007. 2, 5, 6
316
+ [9] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. 2
317
+ [10] Han Fang, Weiming Zhang, Hang Zhou, Hao Cui, and Nenghai Yu. Screen-shooting resilient watermarking. IEEE Transactions on Information Forensics and Security, 14(6):1403-1418, 2018. 2
318
+ [11] Han Fang, Zhaoyang Jia, Zehua Ma, Ee-Chien Chang, and Weiming Zhang. Pimog: An effective screen-shooting noise-layer simulation for deep-learning-based watermarking net-
319
+
320
+ work. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2267-2275, 2022. 2
321
+ [12] Han Fang, Yupeng Qiu, Kejiang Chen, Jiyi Zhang, Weiming Zhang, and Ee-Chien Chang. Flow-based robust watermarking with invertible noise layer for black-box distortions. In Proceedings of the AAAI conference on artificial intelligence, pages 5054-5061, 2023. 2
322
+ [13] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023. 1, 2, 4, 5, 6
323
+ [14] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017. 5
324
+ [15] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 2
325
[16] Zhaoyang Jia, Han Fang, and Weiming Zhang. Mbrs: Enhancing robustness of dnn-based watermarking by mini-batch of real and simulated jpeg compression. In Proceedings of the 29th ACM International Conference on Multimedia, pages 41-49, 2021. 2, 5, 6
326
+ [17] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, pages 694–711. Springer, 2016. 4
327
+ [18] Xiangui Kang, Jiwu Huang, Yun Q Shi, and Yan Lin. A dwt-dft composite watermarking scheme robust to both affine transform and jpeg compression. IEEE Transactions on Circuits and Systems for Video Technology, 13(8):776-786, 2003. 2
328
+ [19] Xiangui Kang, Jiwu Huang, and Wenjun Zeng. Efficient general print-scanning resilient data hiding based on uniform log-polar mapping. IEEE Transactions on Information Forensics and Security, 5(1):1-12, 2010. 2
329
[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, pages 740-755. Springer, 2014. 5
330
[21] Yinhui Luo, Xingyi Wang, Yanhao Liao, Qiang Fu, Chang Shu, Yuezhou Wu, and Yuanqing He. A review of homography estimation: Advances and challenges. Electronics, 12(24):4977, 2023. 2, 3
331
+ [22] Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Animashree Anandkumar. Diffusion models for adversarial purification. In International Conference on Machine Learning, pages 16805-16827. PMLR, 2022. 6
332
+ [23] Joseph JK O'Ruanaidh and Thierry Pun. Rotation, scale and translation invariant digital image watermarking. In Proceedings of International Conference on Image Processing, pages 536-539. IEEE, 1997. 2
333
+ [24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,
334
+
335
+ Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 5
336
+ [25] Ahmad Rezaei, Mohammad Akbari, Saeed Ranjbar Alvar, Arezou Fatemi, and Yong Zhang. Lawa: Using latent space for in-generation image watermarking. arXiv preprint arXiv:2408.05868, 2024. 1, 2, 5, 6
337
+ [26] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022. 1, 5, 6
338
[27] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4713-4726, 2022. 2
339
+ [28] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 2
340
+ [29] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. arXiv preprint arXiv:1904.05343, 2019. 2, 5, 6
341
+ [30] Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact diffusion inversion via coupled transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22532-22541, 2023. 3
342
+ [31] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust. arXiv preprint arXiv:2305.20030, 2023. 1, 2, 5, 6
343
+ [32] Eric Wengrowski and Kristin Dana. Light field messaging with deep photographic steganography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1515-1524. Computer Vision Foundation / IEEE, 2019. 2
344
+ [33] Cheng Xiong, Chuan Qin, Guorui Feng, and Xinpeng Zhang. Flexible and secure watermarking for latent diffusion model. In Proceedings of the 31st ACM International Conference on Multimedia, pages 1668-1676, 2023. 1
345
+ [34] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12162-12171, 2024. 1, 3, 4, 5, 6
346
+ [35] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019. 5, 6
347
+ [36] Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. arXiv preprint arXiv:2306.01953, 2023. 6
348
+ [37] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of
349
+
350
+ the European Conference on Computer Vision, pages 657-672, 2018. 2
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:187bcd466063cc77b76c3290ee288073fcda6491a87218d63b1e3887dd0040fc
3
+ size 692189
2025/SynTag_ Enhancing the Geometric Robustness of Inversion-based Generative Image Watermarking/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/144abf90-8197-45c7-bc2f-75f04512f099_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:47b040eae9a761ac1b4a8692822bed6eab401b9d0ae8f7b4b1b3cb4c42fd1bb1
3
+ size 16187443
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/full.md ADDED
@@ -0,0 +1,386 @@
1
+ # SyncDiff: Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis
2
+
3
+ Wenkun He $^{1,2}$ , Yun Liu $^{1,2,3}$ , Ruitao Liu $^{1}$ , Li Yi $^{\dagger,1,2,3}$
4
+
5
+ $^{1}$ Tsinghua University $^{2}$ Shanghai Qi Zhi Institute $^{3}$ Shanghai Artificial Intelligence Laboratory https://syncdiff.github.io/
6
+
7
+ ![](images/1552df0d895d8f84df62cad8656195f32d3758dc646da0f98c2658d254d40162.jpg)
8
+ Figure 1. SyncDiff is a unified framework synthesizing synchronized multi-body interaction motions with any number of hands, humans, and rigid objects. In SyncDiff, we introduce two novel multi-body motion synchronization mechanisms, namely the alignment scores for training and explicit synchronization strategy in inference. With these mechanisms, the synthesized results can effectively prevent interpenetration, contact loss, or asynchronous human-object interactions in various scenarios, as shown in the above figure.
9
+
10
+ # Abstract
11
+
12
+ Synthesizing realistic human-object interaction motions is a critical problem in VR/AR and human animation. Unlike the commonly studied scenarios involving a single human or hand interacting with one object, we address a more generic multi-body setting with arbitrary numbers of humans, hands, and objects. The high correlations and mutual influences among bodies lead to two major challenges, for which we propose solutions. First, to satisfy the high demands for synchronization of different body motions, we mathematically derive a new set of alignment scores during the training process, and use maximum likelihood sampling on a dynamic graphical model for explicit synchronization during inference. Second, the high-frequency interactions between objects are often overshadowed by the large-scale low-frequency movements. To address this, we introduce frequency decomposition and explicitly represent high-frequency components in the frequency domain. Extensive experiments across five datasets with various multi-body configurations demonstrate the superiority of SyncDiff over existing state-of-the-art motion synthesis methods.
13
+
14
+ # 1. Introduction
15
+
16
+ In daily life, humans frequently manipulate objects to complete tasks, often using both hands or collaborating with others. For instance, an individual might use both hands to set up a fallen chair, hold a brush in one hand to scrub a bowl held in the other, or work with another person to lift and position a heavy object. These are examples of multi-body human-object interactions (HOI) [13, 41, 42], where "body" can refer to objects, hands, or humans. Synthesizing such interactions has prominent applications in VR/AR, human animation, and robot learning [91].
17
+
18
+ The primary challenge in multi-body HOI synthesis is capturing the complex joint distribution of body motions and ensuring that the individual body motions are not only synchronized but also mutually aligned with meaningful interaction semantics. This challenge intensifies as the number of bodies increases, introducing high-order motion relationships. Existing works only focus on specific configurations with limited body numbers, such as a single hand interacting with an object [91], a person engaging with an object [42, 79], or two hands manipulating a single item [13, 70], where motion trajectories are often simple and carry limited semantics. These works introduce configuration-specific priors and are restricted to particular setups, leading to a strong demand for a generic method
19
+
20
+ that can handle the full range of multi-body HOI synthesis problems in a unified manner, without restrictions on body numbers and with more complex motion distributions.
21
+
22
+ A straightforward method to promote synchronization is treating the motions of all individual bodies as a high-dimensional representation and using a single diffusion model to depict its distribution [13]. However, such a diffusion model only estimates data sample scores and thus only implicitly depicts the correlations across individual motions. First, to further promote the alignment between individual body motions with a limited amount of data, and analogous to the data sample scores guiding data denoising and reconstruction, we need a set of alignment scores to promote motion synchronization. Second, with estimated sample and alignment scores, at inference time we can jointly optimize sample and alignment likelihoods by formulating multi-body HOI synthesis as a motion synchronization problem within a graphical model, where nodes represent individual body motions and edges capture the relative motions between body pairs.
23
+
24
+ In practice, we often observe that high-frequency, small-amplitude vibrations carrying semantics are overshadowed by low-frequency, large-scale movements (e.g., a brush merely contacts the teapot without effective friction between them). To solve this, we propose frequency decomposition, which factorizes all individual and relative motions into low- and high-frequency components and supervises them in the time domain and frequency domain separately.
25
+
26
+ To realize the ideas above, we design Synchronized Motion Diffusion (SyncDiff) for generic multi-body HOI synthesis. SyncDiff is a diffusion model defined on the graphical model above, operating on a high-order motion representation consisting of all individual motions and relative motions; it is the first unified framework for multi-body HOI synthesis with any body count. Our detailed contributions include: 1) For better synchronization, we derive a set of alignment scores and the corresponding loss term for training. 2) At inference time, by leveraging both sample and alignment scores, we discover a simple explicit synchronization strategy, which is equivalent to maximum likelihood inference with a mathematical justification. 3) We decompose all individual and relative motions based on frequency, to better model high-frequency components with semantics. 4) Extensive experiments on five datasets demonstrate the superior human-object contact quality of our method, and the semantic recognition accuracy surpasses state-of-the-art methods by an average of over $15\%$.
27
+
28
+ # 2. Related Works
29
+
30
+ # 2.1. Human-object Interaction Motion Synthesis
31
+
32
+ Synthesizing interactive motions between humans and objects has been an emerging research field supported by extensive HOI datasets [2-4, 6, 21, 26, 39, 40, 47, 48,
33
+
34
+ 76, 95, 100, 103, 104, 114]. A branch of works [27, 56, 88, 89, 116] aims at synthesizing human bodies interacting with objects, while others [5, 23, 34, 52, 61, 63, 91, 108, 119] focus on hand-object interactions for learning more dexterous manipulation behaviors. For synthesizing human-object interaction motions, early works [24, 60, 77, 112, 115] propose to leverage conditional variational auto-encoders [38, 72] to model distributions of human-object interaction motions and enable generalization ability, meanwhile using CLIP [62] to encode language features for text-based control. With the tremendous progress of diffusion models [30], diffusion-based methods [14, 20, 41, 42, 58, 79, 90, 92, 93, 97] have been widely proposed for superior generation qualities. Focusing on multi-person and object collaboration synthesis, recent approaches [15, 44, 58, 69, 78, 96, 113] enhance feature exchange among different entities in diffusion steps for better cross-entity synchronization. For synthesizing hand-object interaction motions, one solution is to train dexterous hand agents [8, 11, 85, 98, 106, 107] using physical simulations [51, 84] and reinforcement learning [25, 68]. Another solution that is similar to ours is fully data-driven. Early methodologies [45, 46, 99, 105, 117, 120] apply representation techniques such as contact map or reference grasp, which better model the contact between hand joints or hand surface points and the relevant local regions of the object. Several recent works [5, 12, 70, 108, 111] have shifted the focus to multi-hand collaboration, while few works have addressed interactions involving more than one object. Compared to the above methods, our framework can handle both human-object and hand-object interactions, allowing any number of bodies in the scene, without the need for any delicately designed structures like contact guidance.
35
+
36
+ # 2.2. Injecting External Knowledge into Diffusion Models
37
+
38
+ Benefitting from high generation qualities, diffusion models [30, 73] are widely adopted in versatile synthesis tasks regarding images [65, 109], videos [31, 32], and motions [79, 110]. To further induce generation results to satisfy specific demands or constraints, injecting additional knowledge (e.g., expected image styles, human-object contact constraints) into diffusion processes is an emerging methodology in recent studies [20, 41, 58, 97]. For this purpose, one paradigm is explicitly improving intermediate denoising results in inference steps using linear fusions/imputations with conditions [10, 37, 50, 79], optimizations with differential energy functions [18, 41, 58, 101], or guidance from learnable networks [22, 35, 83, 97, 102]. Another paradigm is to design additional diffusion branches with novel conditions [29, 94, 118] or representations [20] that comprise the knowledge, encompassing the challenge of fusing them with existing branches.
39
+
40
+ Classifier-free guidance [29, 118] uses a linear combination of results from different branches in inference steps, Xu et al. [94] propose blending and learnable merging strategies, while CG-HOI [20] presents additional cross-attention modules. Our method follows the second paradigm with relative representation designs and transforms the problem into a graphical model, optimizing it with an explicit synchronization strategy.
41
+
42
+ # 2.3. Motion Synchronization
43
+
44
+ Motion Synchronization aims at coordinating motions of different bodies in a multi-body system to satisfy specific inter-body constraints. As technical bases, for specific parameterized closed-form constraints, the solutions in Euler angles, axis-angle, quaternions, and rotation matrices are derived in traditional approaches [1, 7, 9, 33, 36, 55, 67, 121]. To achieve global attitude synchronization under mechanical constraints in $SE(3)$ , several works adopt gradient flow [53, 54], lifting method [81], and matrix decomposition [82] techniques. In the trend of graph-based multi-body modeling, existing approaches present coordination control based on general undirected [64] and directed [49, 86, 87] graphs, while some works [28, 80] further extend to nonlinear configurations and dynamic topologies. These works do not adopt learning-based methods; by contrast, our method uses a generative model to enhance consistency, formulating data-driven motion synchronization.
45
+
46
+ # 2.4. Modeling Motions in Frequency Domain
47
+
48
+ Modeling motions in the frequency domain is a widely used strategy in diverse research areas. Animation approaches [71, 74, 75] leverage frequency domain characterization to generate periodic vibrations like fluttering of clouds, smoke, or leaves. Physical simulation methods [16, 17, 19] simulate physical motions by analyzing the underlying motion dynamics in the frequency domain. Adopting learning-based methods, GID [43] uses diffusion models to capture object motions in the frequency domain, generating complex scenes with realistic periodic motions.
49
+
50
+ # 3. Method
51
+
52
+ In this section, we first formulate the problem (Section 3.1), and describe the motion representation we use (Section 3.2). Then we introduce our algorithm of frequency decomposition (Section 3.3) and model architecture (Section 3.4) in detail. Finally, in Section 3.5 and 3.6, our two synchronization mechanisms will be elaborated.
53
+
54
+ # 3.1. Problem Definition
55
+
56
+ Consider an interaction scenario comprising $n$ articulated skeletons $h_1, h_2, \ldots, h_n$ and $m$ rigid bodies $o_1, o_2, \ldots, o_m$ , which we uniformly define as "body". In our experiments,
57
+
58
+ these skeletons are MANO [66] hands or SMPL-X [57] humans, whose motions can be reconstructed with joint information and intrinsic shape parameter $\beta$ . Details can be found in supp document Section C.2. Our task is to synthesize body motions with known action labels, object categories and geometries, and shape parameters $\beta_{i\in [1,n]}$ of each skeleton. We highlight that our task releases the needs of either predefined object motions [42] or HOI key frames [13] that are used in other existing works.
59
+
60
+ # 3.2. Motion Representation
61
+
62
+ As mentioned in Section 1, in the graphical model, we need individual motions for single bodies on nodes, and relative motions for pairwise bodies on edges.
63
+
64
+ For a single body $\mathbf{b} = h_{i\in [1,n]}$ or $\mathbf{b} = o_{j\in [1,m]}$ , we represent its motion as $x_{\mathbf{b}}$ in the world coordinate system. Here $x_{h_i}\in \mathbb{R}^{N\times 3D}$ represents 3D joint positions, where $N$ is number of frames, and $D$ is number of joints. $x_{o_j} = [\mathbf{t}_{o_j},\mathbf{q}_{o_j}]\in \mathbb{R}^{N\times 7}$ , where $\mathbf{t}_{o_j}\in \mathbb{R}^{N\times 3}$ is its translation, and $\mathbf{q}_{o_j}\in \mathbb{R}^{N\times 4}$ is the quaternion for its orientation.
65
+
66
+ For two bodies $\mathbf{b}_1$ and $\mathbf{b}_2$, we use $x_{\mathbf{b}_2 \to \mathbf{b}_1}$ to denote the relative motion of $\mathbf{b}_2$ in $\mathbf{b}_1$'s coordinate system. We require that $\mathbf{b}_1$ is a rigid body, and omit relative representations between articulated skeletons. If $\mathbf{b}_2$ is also a rigid body, then $x_{\mathbf{b}_2 \to \mathbf{b}_1} = [\mathbf{t}_{\mathbf{b}_2 \to \mathbf{b}_1}, \mathbf{q}_{\mathbf{b}_2 \to \mathbf{b}_1}]$, where $\mathbf{t}_{\mathbf{b}_2 \to \mathbf{b}_1} = \mathbf{q}_{\mathbf{b}_1}^{-1} (\mathbf{t}_{\mathbf{b}_2} - \mathbf{t}_{\mathbf{b}_1})$ and $\mathbf{q}_{\mathbf{b}_2 \to \mathbf{b}_1} = \mathbf{q}_{\mathbf{b}_1}^{-1} \mathbf{q}_{\mathbf{b}_2}$. If $\mathbf{b}_2$ is an articulated skeleton, the position of each of its joints $\mathbf{p}$ under the individual representation becomes $\mathbf{p}'$ in $\mathbf{b}_1$'s coordinate system, where $\mathbf{p}' = \mathbf{q}_{\mathbf{b}_1}^{-1} (\mathbf{p} - \mathbf{t}_{\mathbf{b}_1})$.
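+
+ To make these transforms concrete, here is a minimal NumPy/SciPy sketch; the function names and the (x, y, z, w) quaternion convention are our own assumptions, not the paper's released code.
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation as R
+
+ def rel_rigid(t1, q1, t2, q2):
+     """Relative motion of rigid body b2 in b1's frame, per frame.
+
+     t1, t2: (N, 3) translations; q1, q2: (N, 4) quaternions.
+     """
+     r1_inv = R.from_quat(q1).inv()
+     t_rel = r1_inv.apply(t2 - t1)                 # q1^{-1} (t2 - t1)
+     q_rel = (r1_inv * R.from_quat(q2)).as_quat()  # q1^{-1} q2
+     return t_rel, q_rel
+
+ def rel_joints(p, t1, q1):
+     """Skeleton joints p (N, D, 3) re-expressed in rigid body b1's frame."""
+     r1_inv = R.from_quat(q1).inv()
+     # apply the per-frame inverse rotation to every joint: q1^{-1} (p - t1)
+     return np.stack([r1_inv.apply(p[:, d] - t1) for d in range(p.shape[1])], axis=1)
+ ```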
67
+
68
+ After deriving all individual and relative motion representations, we concatenate all of them together into a high-order representation $x$, including $\{x_{o_{j\in [1,m]}}\}$, $\{x_{h_{i\in [1,n]}}\}$, $\{x_{o_{j_2}\to o_{j_1}}\mid j_1,j_2\in [1,m],j_1\neq j_2\}$, and $\{x_{h_i\to o_j}\mid i\in [1,n],j\in [1,m]\}$. Thus, $x\in \mathbb{R}^{N\times D_{\mathrm{sum}}}$, where $D_{\mathrm{sum}} = 7m + 3Dn + 7m(m - 1) + 3Dmn$. Although $x$ is composed of individual and relative representations, the final synthesized motions only involve individual motions, and the relative ones serve merely as auxiliary components.
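+
+ As a quick sanity check of $D_{\mathrm{sum}}$, consider a TACO-style scene with $n = 2$ hands, $m = 2$ objects, and an assumed $D = 21$ joints per hand: $D_{\mathrm{sum}} = 7\cdot 2 + 3\cdot 21\cdot 2 + 7\cdot 2\cdot 1 + 3\cdot 21\cdot 2\cdot 2 = 14 + 126 + 14 + 252 = 406$.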
69
+
70
+ # 3.3. Frequency Decomposition
71
+
72
+ As the number of bodies increases, action semantics become more dependent on subtle, high-frequency interactions between bodies (e.g., periodic friction between the teapot and brush during the brushing action). Unfortunately, when existing works denoise time-domain trajectories, these high-frequency components are often overshadowed by common low-frequency, large-scale patterns (e.g., objects merely moving and contacting each other, without relative interactions). To better model these high-frequency components, inspired by GID [43], we explicitly represent them in the frequency domain and supervise them independently in the loss terms described in Section 3.5.
73
+
74
+ ![](images/d7e4c7f9e5d7c9e45302ce65670252f4b5db0a859979bf65b1c7163f4962a4fa.jpg)
75
+ Figure 2. Overview of SyncDiff. The light blue boxes show the inference process, with explicit synchronization steps performed every $s$ steps. For denoising steps irrelevant to explicit synchronization (those marked as "$(s - 1)$ times"), the noise level is set to $\sigma_t$. For the calculation from $\hat{x}$ to $\hat{\mu}$, no noise is added. For the calculation of $\hat{x}_{t-1}$ based on $\hat{x}_t$ and $\hat{\mu}_t$ in an explicit synchronization step, the noise level is $\sigma_t'$. Please refer to Section 3.6 for more details. The light green box illustrates the architecture of the denoising backbone (Section 3.4).
76
+
77
+ For a motion sequence $x \in \mathbb{R}^{N \times D_{\mathrm{sum}}}$, we select $N$ frequency bases $\phi_{l \in [0, N-1]} = \frac{2\pi l}{N}$, and then decompose $x$ into components over these bases by the Fast Fourier Transform (FFT):
78
+
79
+ $$
80
+ x_{u \in [0, N - 1]} = \sum_{l = 0}^{N - 1} a_l \cos(u \phi_l) + b_l \sin(u \phi_l), \tag{1}
81
+ $$
82
+
83
+ where $a_{l}, b_{l} \in \mathbb{R}^{D_{\mathrm{sum}}}$ are the coefficients computed by FFT. Note that here $u$ denotes the frame id, which is different from the noise timestep $t$ of the diffusion model introduced later. To prevent networks from overfitting high-frequency noise in mocap datasets, we select a cutoff boundary $L \in [4, \frac{N}{4})$ and directly discard signals with frequencies higher than $\phi_L / 2\pi$. In practice, we select $L = 16$, which strikes a balance between preserving motion fidelity and simplicity (see supp document Section B.3). We then divide the remaining signals into low-frequency components $(x_{\mathrm{dc}})$ and high-frequency components $(x_{\mathrm{ac}})$:
84
+
85
+ $$
86
+ x_{\mathrm{dc}, u} = \sum_{l = -3}^{2} a_l \cos(u \phi_l) + b_l \sin(u \phi_l), \quad \text{and} \tag{2}
87
+ $$
88
+
89
+ $$
90
+ x_{\mathrm{ac}, u} = \sum_{l \in [-L, -4] \cup [3, L - 1]} a_l \cos(u \phi_l) + b_l \sin(u \phi_l),
91
+ $$
92
+
93
+ where $a_{l} = a_{l + N}$ and $b_{l} = b_{l + N}$ for $l < 0$, and $x_{\mathrm{dc}}, x_{\mathrm{ac}} \in \mathbb{R}^{N \times D_{\mathrm{sum}}}$. Additionally, we denote the frequency-domain representation of the high-frequency components as $x_{\mathrm{F}} = [a_3, \ldots, a_{L - 1}, a_{N - L}, \ldots, a_{N - 4}, b_3, \ldots, b_{L - 1}, b_{N - L}, \ldots, b_{N - 4}, z] \in \mathbb{R}^{N \times D_{\mathrm{sum}}}$, where $z$ is a zero mask padding $x_{\mathrm{F}}$ to the same length as $x_{\mathrm{dc}}$. Our model denoises $x_{\mathrm{dc}}$ and $x_{\mathrm{F}}$, and supervises the differences between synthesized and ground-truth values of $x_{\mathrm{dc}}$ and $x_{\mathrm{ac}}$ in the temporal domain.
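+
+ The following NumPy sketch illustrates this split. For simplicity it uses symmetric cutoffs, whereas the index ranges in Eq. (2) are slightly asymmetric, so treat it as an approximation; the band edges are the only assumed parameters.
+
+ ```python
+ import numpy as np
+
+ def freq_split(x, L=16, dc_band=3):
+     """Split motion x (N, D_sum) into low-/high-frequency parts.
+
+     x_dc keeps bins with frequency index <= dc_band, x_ac keeps
+     dc_band < index <= L; bins above L are discarded (cutoff phi_L).
+     """
+     N = x.shape[0]
+     c = np.fft.rfft(x, axis=0)            # one complex bin per frequency l >= 0
+     bins = np.arange(c.shape[0])
+     c_dc = np.where((bins <= dc_band)[:, None], c, 0)
+     c_ac = np.where(((bins > dc_band) & (bins <= L))[:, None], c, 0)
+     x_dc = np.fft.irfft(c_dc, n=N, axis=0)
+     x_ac = np.fft.irfft(c_ac, n=N, axis=0)
+     return x_dc, x_ac
+ ```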
94
+
95
+ # 3.4. Model Architecture
96
+
97
+ We jointly model all individual and relative motions in a high-order representation with a single diffusion model (Figure 2). We adopt the latent diffusion [65] paradigm, where action and object label features are extracted from pretrained CLIP [62], and object geometry features are encoded by BPS [59]. To facilitate batch operations, we pad all trajectories to the same length and use $0/1$ padding masks to indicate the positions that need to be generated. The label/geometry features, padding masks, noise timestep embeddings, and shape parameters $\beta_{i\in [1,n]}$ are concatenated to form the condition vector. The condition vector is replicated and combined with $x_{t,\mathrm{dc}}$ and $x_{t,\mathrm{F}}$, respectively, which are decomposed from the noised $x_{t}$. They are then projected into the latent space and denoised by a transformer-based backbone. Let the predicted results be $\widehat{x_{\mathrm{dc}}}$ and $\widehat{x_{\mathrm{F}}}$, the latter of which is reconstructed as $\widehat{x_{\mathrm{ac}}}$. The final denoised motion sequence is recomposed as $\widehat{x} = \widehat{x_{\mathrm{dc}}} + \widehat{x_{\mathrm{ac}}}$. Please refer to supp document Sections D.2 and D.3 for model hyperparameters.
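+
+ A minimal PyTorch stand-in for this backbone is sketched below; all layer sizes, the per-frame condition replication, and the single-encoder design are illustrative assumptions rather than the paper's exact architecture.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class DenoiserSketch(nn.Module):
+     """Denoises one part of the representation (x_dc or x_F)."""
+     def __init__(self, d_motion, d_cond, d_latent=512, n_layers=8):
+         super().__init__()
+         self.proj_in = nn.Linear(d_motion + d_cond, d_latent)
+         layer = nn.TransformerEncoderLayer(d_latent, nhead=8, batch_first=True)
+         self.backbone = nn.TransformerEncoder(layer, n_layers)
+         self.proj_out = nn.Linear(d_latent, d_motion)
+
+     def forward(self, x_part, cond):
+         # x_part: (B, N, d_motion); cond: (B, d_cond), replicated per frame
+         cond = cond.unsqueeze(1).expand(-1, x_part.shape[1], -1)
+         h = self.proj_in(torch.cat([x_part, cond], dim=-1))
+         return self.proj_out(self.backbone(h))
+ ```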
98
+
99
+ # 3.5. Loss Functions
100
+
101
+ To enhance synchronization, besides data scores, we design a set of alignment scores. The idea is that, for every edge of the graphical model, the alignment score guides the relative motion computed from the two individual motions toward the generated relative motion, thereby achieving synchronization across the entire graphical model. We further derive the corresponding loss term $\mathcal{L}_{\mathrm{align}}$ mathematically.
102
+
103
+ Suppose we need to supervise the final synthesis results $\{\widehat{x_{\mathrm{dc}}},\widehat{x_{\mathrm{ac}}}\}$ , where $\{x_{\mathrm{dc}},x_{\mathrm{ac}}\}$ are ground-truth motions. We need loss functions for both data and alignment scores.
104
+
105
+ For data sample scores, our method is similar to the standard reconstruction loss, except that we supervise $x_{\mathrm{dc}}$ and $x_{\mathrm{ac}}$ separately. We denote $\mathcal{L}_{\mathrm{dc}} = \| \widehat{x_{\mathrm{dc}}} - x_{\mathrm{dc}}\| _2^2$, $\mathcal{L}_{\mathrm{ac}} = \| \widehat{x_{\mathrm{ac}}} - x_{\mathrm{ac}}\| _2^2$, and $\mathcal{L}_{\mathrm{norm}} = \sum_{j = 1}^{m}\| 1 - |\hat{\mathbf{q}}_j|\| _2^2$. The last term encourages the norms of the quaternions representing rigid-body rotations to stay close to 1, where $\hat{x}_{o_j} = [\hat{\mathbf{t}}_j,\hat{\mathbf{q}}_j]$.
106
+
107
+ Define $\operatorname{rel}(x_{\mathbf{a}}, x_{\mathbf{b}})$ as $\mathbf{b}$ 's motion relative to $\mathbf{a}$ . Detailed formulas can be found in supp document Section C.1. For our alignment scores of pairwise bodies, we can derive the corresponding alignment loss term
108
+
109
+ $$
110
+ \begin{aligned} \mathcal{L}_{\mathrm{align}} ={} & \sum_{j_1, j_2 \in [1,m],\, j_1 \neq j_2} \| \hat{x}_{o_{j_2}\to o_{j_1}} - \operatorname{rel}(\hat{x}_{o_{j_1}}, \hat{x}_{o_{j_2}}) \|_2^2 \\ & + \sum_{i \in [1,n],\, j \in [1,m]} \| \hat{x}_{h_i \to o_j} - \operatorname{rel}(\hat{x}_{o_j}, \hat{x}_{h_i}) \|_2^2. \end{aligned} \tag{3}
111
+ $$
112
+
113
+ Finally, the total loss function is calculated as:
114
+
115
+ $$
116
+ \mathcal{L} = \lambda_{\mathrm{dc}} \mathcal{L}_{\mathrm{dc}} + \lambda_{\mathrm{ac}} \mathcal{L}_{\mathrm{ac}} + \lambda_{\mathrm{align}} \mathcal{L}_{\mathrm{align}} + \lambda_{\mathrm{norm}} \mathcal{L}_{\mathrm{norm}}. \tag{4}
117
+ $$
118
+
119
+ Proof details of the equivalence between $\mathcal{L}_{\mathrm{align}}$ and alignment scores are provided in the supp document Section A.2.
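+
+ A compact PyTorch sketch of Eq. (4) follows; the loss weights, mean (rather than sum) reductions, and the edge-dictionary bookkeeping are illustrative assumptions.
+
+ ```python
+ import torch
+
+ def total_loss(pred_dc, gt_dc, pred_ac, gt_ac, pred_obj_quats,
+                pred_rel, rel_from_pred, lambdas=(1.0, 1.0, 1.0, 1.0)):
+     """pred_rel / rel_from_pred: dicts over graph edges; values are tensors."""
+     l_dc = ((pred_dc - gt_dc) ** 2).mean()
+     l_ac = ((pred_ac - gt_ac) ** 2).mean()
+     # keep rigid-body quaternions near unit norm (L_norm)
+     l_norm = ((1.0 - pred_obj_quats.norm(dim=-1)) ** 2).mean()
+     # Eq. (3): generated relative motions vs. relative motions recomputed
+     # from the generated individual motions
+     l_align = sum(((pred_rel[e] - rel_from_pred[e]) ** 2).mean()
+                   for e in pred_rel)
+     w_dc, w_ac, w_align, w_norm = lambdas
+     return w_dc * l_dc + w_ac * l_ac + w_align * l_align + w_norm * l_norm
+ ```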
120
+
121
+ # 3.6. Explicit Synchronization in Inference Time
122
+
123
+ Relying solely on the alignment loss enhances synchronization only indirectly. To directly improve synthesis quality and diversity, and to provide stronger theoretical guarantees, we introduce an explicit synchronization process that is mathematically equivalent to maximum total-likelihood sampling at inference time, leveraging both data sample scores and alignment scores. Since the synchronization step is time-consuming, to balance performance and efficiency we perform synchronization operations every $s$ steps ($s \ll T$), where $T$ is the total number of denoising steps, as shown in Figure 2. In practice, we take $s = 50$ and $T = 1000$ to ensure synchronization while improving inference speed, as further detailed in supp document Section B.4. For the predicted motion $\hat{x}_t$ at step $t \in [1,T]$, according to the diffusion formula [30], the next step without synchronization would be:
124
+
125
+ $$
126
+ \hat{x}_{t - 1} = \hat{\mu}(\hat{x}_t, t) + \sigma_t \epsilon \quad (\epsilon \sim \mathcal{N}(0, I)), \tag{5}
127
+ $$
128
+
129
+ where $\hat{\mu}$ is the predicted mean value, and the noise scale $\sigma_{t}$ is a predefined constant. For convenience, we denote the motion before synchronization as $\hat{x}$ and the motion after synchronization as $\hat{x}^{\prime}$, and abbreviate $\sigma_{t}$ as $\sigma$. We handle the different parts of $\hat{x}^{\prime}$ as follows:
130
+
131
+ 1. Individual Motions of Rigid Bodies. Let
132
+
133
+ $$
134
+ \begin{aligned} \hat{x}'_{o_j} ={} & \frac{\frac{2}{m - 1}\sigma^2 \bar{\lambda}}{1 + 2\sigma^2 \bar{\lambda}} \sum_{j' \neq j} \operatorname{comb}\bigl(\hat{x}_{o_{j'}}, \hat{x}_{o_j \to o_{j'}}\bigr) \\ & + \frac{1}{1 + 2\sigma^2 \bar{\lambda}} \hat{\mu}_{o_j} + \sigma' \epsilon, \end{aligned} \tag{6}
135
+ $$
136
+
137
+ where $\mathrm{comb}(x_{\mathbf{a}}, x_{\mathbf{b} \to \mathbf{a}})$ uses the individual motion of $\mathbf{a}$ and the relative motion between $\mathbf{b}$ and $\mathbf{a}$ to compute $\mathbf{b}$'s motion; its formula is in supp document Section C.1. $\overline{\lambda} = \frac{\lambda_{\mathrm{exp}}}{R} \sum_{r=1}^{R} \frac{1}{2\sigma_{t_r}^2}$, where $R = T / s = 20$ is the number of synchronization steps, $t_1, t_2, \ldots, t_R$ are the corresponding timesteps, and $\sigma_{t_1}, \sigma_{t_2}, \ldots, \sigma_{t_R}$ are the corresponding original noise scales (without synchronization). Finally, $\sigma' = \sqrt{\frac{\sigma^2}{1 + 2\sigma^2\overline{\lambda}}}$. Specifically, when $m = 1$, we do not perform synchronization for this part, and the denoising formula is identical to that without synchronization.
138
+
139
+ 2. Individual Motions of Articulated Skeletons. Let
140
+
141
+ $$
142
+ \begin{aligned} \hat{x}'_{h_i} ={} & \frac{\frac{2}{m}\sigma^2 \bar{\lambda}}{1 + 2\sigma^2 \bar{\lambda}} \sum_{j \in [1, m]} \operatorname{comb}\bigl(\hat{x}_{o_j}, \hat{x}_{h_i \to o_j}\bigr) \\ & + \frac{1}{1 + 2\sigma^2 \bar{\lambda}} \hat{\mu}_{h_i} + \sigma' \epsilon. \end{aligned} \tag{7}
143
+ $$
144
+
145
+ Definitions of $\overline{\lambda}$ and $\sigma^{\prime}$ are the same as in Eq. (6).
146
+
147
+ 3. Relative Motions. Let
148
+
149
+ $$
150
+ \begin{aligned} \hat{x}'_{o_j \to o_{j'}} &= \frac{1}{1 + 2\sigma^2 \bar{\lambda}} \hat{\mu}_{o_j \to o_{j'}} + \frac{2\sigma^2 \bar{\lambda}}{1 + 2\sigma^2 \bar{\lambda}} \operatorname{rel}(\hat{x}_{o_{j'}}, \hat{x}_{o_j}) + \sigma' \epsilon, \\ \hat{x}'_{h_i \to o_j} &= \frac{1}{1 + 2\sigma^2 \bar{\lambda}} \hat{\mu}_{h_i \to o_j} + \frac{2\sigma^2 \bar{\lambda}}{1 + 2\sigma^2 \bar{\lambda}} \operatorname{rel}(\hat{x}_{o_j}, \hat{x}_{h_i}) + \sigma' \epsilon. \end{aligned} \tag{8}
151
+ $$
152
+
153
+ Definitions of $\overline{\lambda}$ and $\sigma'$ are the same as in Eq. (6). The function $\operatorname{rel}$ is defined in Section 3.5.
154
+
155
+ After the synchronization operations derive $\hat{x}^{\prime}$, it is again used for further denoising. Between two adjacent synchronization steps, we still directly use Eq. (5) for stepwise denoising and do not perform synchronization operations. We demonstrate in supp document Section A.2 that the above equations are equivalent to maximum likelihood sampling from the newly computed Gaussian distribution based on both data sample scores and alignment scores.
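+
+ For illustration, a sketch of the Eq. (6) update for one rigid body is shown below; the tensor shapes and helper signature are assumptions, and comb itself is specified in supp document Section C.1.
+
+ ```python
+ import torch
+
+ def sync_rigid_body(mu_j, comb_terms, sigma, lam_bar):
+     """One explicit synchronization step for a rigid body o_j (Eq. (6)).
+
+     mu_j:       predicted mean for o_j's individual motion, shape (N, 7)
+     comb_terms: list of comb(x_{o_j'}, x_{o_j -> o_j'}) over all j' != j
+     sigma:      this step's original noise scale sigma_t
+     lam_bar:    the averaged weight lambda-bar from the paper
+     """
+     denom = 1.0 + 2.0 * sigma ** 2 * lam_bar
+     consensus = ((2.0 / len(comb_terms)) * sigma ** 2 * lam_bar / denom
+                  * torch.stack(comb_terms).sum(dim=0))
+     sigma_prime = (sigma ** 2 / denom) ** 0.5
+     return consensus + mu_j / denom + sigma_prime * torch.randn_like(mu_j)
+ ```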
156
+
157
+ # 4. Experiments
158
+
159
+ In this section, we first introduce some basic experimental settings (Section 4.1, 4.2), and then demonstrate the visual and quantitative results in comparison to several baselines (Section 4.3) and ablation settings (Section 4.4). We also provide in-depth analyses for the results.
160
+
161
+ # 4.1. Datasets
162
+
163
+ To examine our method's generalizability across various multi-body interaction configurations, we utilize five datasets with different interaction scenarios: TACO [48] (two hands and two objects), CORE4D [104] (two people and one object), GRAB [76] (one or two hands and one object), OAKINK2 [103] (two hands and one to three objects),
164
+
165
+ and BEHAVE [2] (one human and one object). We describe the data splits for each dataset below. Detailed dataset statistics and split sizes are shown in supp document Section D.1.
166
+
167
+ (1) TACO: We use the official split of TACO, with four testing sets, each representing different scenarios: 1) the interaction triplet (action, tool category, target object category) and the object geometries are all seen in the training set, 2) unseen object geometry, 3) unseen triplet, and 4) unseen object category and geometry.
168
+ (2) CORE4D: We divide 875 motion sequences into one training set and two testing sets, where the two testing sets represent seen and unseen object geometries, respectively. The <action, object category> pairs from testing sets are all involved in the training set.
169
+ (3) GRAB: We use an existing data split of unseen subjects from IMoS [24] and that of unseen objects from DiffH $_2$ O [13]. Please refer to the two papers for details.
170
+ (4) OAKINK2: We utilize the train, val, and test divisions stated in the TaMF task of their paper.
171
+ (5) BEHAVE: We utilize the official train/test splits.
172
+
173
+ For a fair comparison with $\mathrm{DiffH_2O}$ [13], we follow the setting of $\mathrm{DiffH_2O}$ for GRAB, where methods are required to generate hand-object manipulations after the object has been grasped. For the other four datasets, all methods need to synthesize complete multi-body interaction motion sequences. Due to space constraints, detailed results on BEHAVE are in supp document Section B.1.
174
+
175
+ # 4.2. Evaluation Metrics
176
+
177
+ To evaluate the qualities of synthesized motion sequences comprehensively, we present two types of evaluation metrics focusing on fine-grained contact consistency and general motion semantics, respectively.
178
+
179
+ (1) Contact-based metrics measure the contact plausibility of hand-object/human-object interactions and the extent of motion coordination among different bodies. We use the Contact Surface Ratio (CSR) for hand-object settings and the Contact Root Ratio (CRR) for human-object settings to denote the proportion of frames in which hand-object/human-object contact occurs. Contact is defined as the hand mesh being within $5\mathrm{mm}$ of at least one object for CSR, and the two hand roots of a human consistently being within $3\mathrm{cm}$ of at least one object for CRR. When there are multiple hands or humans, we take the average over all of them. We label frames by whether hand-object contact occurs in the ground-truth and synthesized motions, and then compute their Intersection-over-Union (IoU), denoted as CSIoU (see the sketch after this list). Besides, Interpenetration Volume (IV) and Interpenetration Depth (ID) are incorporated to assess penetration between different bodies.
180
+ (2) Motion semantics metrics evaluate high-level motion semantics and its distributions, comprising Recognition Accuracy (RA), Fréchet Inception Distance (FID), Sample
181
+
182
+ Diversity (SD), and Overall Diversity (OD). Following existing evaluations for motion synthesis [41, 42], we train a network to extract motion features and predict action labels using ground-truth motion data. For better feature semantics, the network is trained on the combination of all train, val, and test splits. RA denotes the action recognition accuracy of the network on synthesized motions. FID measures the difference between the feature distributions of generated and ground-truth motions. Following DiffH$_2$O [13], SD represents the mean Euclidean distance between multiple generated wrist trajectories in a single sample, and OD refers to the mean distance between all generated trajectories in a dataset split. To measure semantics from a perceptual perspective, we conduct user studies comparing our method with the baselines, shown in supp document Section B.2.
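+
+ As a concrete reference for the contact-label IoU, here is our reading of CSIoU as a per-frame boolean IoU; the exact per-hand aggregation may differ from the paper's implementation.
+
+ ```python
+ import numpy as np
+
+ def cs_iou(contact_gt, contact_pred):
+     """contact_*: (N,) boolean arrays, True where contact occurs."""
+     inter = np.logical_and(contact_gt, contact_pred).sum()
+     union = np.logical_or(contact_gt, contact_pred).sum()
+     return inter / union if union > 0 else 1.0
+ ```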
183
+
184
+ # 4.3. Comparison to Existing Methods
185
+
186
+ Hand-Object Interaction. For hand-object interaction motion synthesis, we compare our method to two state-of-the-art approaches, MACS [70] and DiffH $_2$ O [13]. For MACS, we first generate the motions of all objects in their object trajectory synthesis phase, and then directly use their hand motion synthesis phase to get overall results. For DiffH $_2$ O, we use the version without grasp reference input.
187
+
188
+ ![](images/b8b66305911c82213b06a9a2f62d0ece5dd0f338ef0e61bd33882469b2041edd.jpg)
189
+ Figure 3. Qualitative results from GRAB [76] dataset.
190
+
191
+ ![](images/d991af35a7b1b73de8eaf2dc694b6027af208a1cd70c68c8dd48235aaa46a7d9.jpg)
192
+ Figure 4. Qualitative results from TACO [48] dataset. "Invalid action" indicates that the poses cannot complete the operation effectively.
193
+
194
+ In Figure 3, our method features more realistic contacts and more stable grasping. As shown in Tables 1, 2, and 4, our method outperforms MACS and $\mathrm{DiffH_2O}$ by a large margin, with better CSIoU, CSR, IV, and ID. The reason is that our method features more robust alignment
195
+
196
+ <table><tr><td></td><td colspan="4">CSIoU (%, ↑)</td><td colspan="4">IV (cm3, ↓)</td><td colspan="4">FID (↓)</td><td colspan="4">RA (%, ↑)</td></tr><tr><td>Method</td><td>Test1</td><td>Test2</td><td>Test3</td><td>Test4</td><td>Test1</td><td>Test2</td><td>Test3</td><td>Test4</td><td>Test1</td><td>Test2</td><td>Test3</td><td>Test4</td><td>Test1</td><td>Test2</td><td>Test3</td><td>Test4</td></tr><tr><td>Ground-truth</td><td>100.0</td><td>100.0</td><td>100.0</td><td>100.0</td><td>4.56</td><td>3.60</td><td>4.24</td><td>3.50</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.04</td><td>84.92</td><td>89.00</td><td>75.86</td><td>65.90</td></tr><tr><td>MACS [70]</td><td>56.81</td><td>53.79</td><td>21.38</td><td>12.09</td><td>13.18</td><td>18.57</td><td>10.33</td><td>8.99</td><td>10.56</td><td>23.24</td><td>32.18</td><td>42.37</td><td>58.40</td><td>53.08</td><td>33.00</td><td>19.02</td></tr><tr><td>DiffH2O [13]</td><td>62.29</td><td>46.38</td><td>42.12</td><td>16.38</td><td>10.25</td><td>15.21</td><td>4.67</td><td>5.70</td><td>4.34</td><td>17.04</td><td>24.92</td><td>39.20</td><td>61.40</td><td>56.70</td><td>43.67</td><td>28.15</td></tr><tr><td>Ours</td><td>73.00</td><td>70.94</td><td>43.22</td><td>26.70</td><td>6.64</td><td>3.81</td><td>4.02</td><td>7.73</td><td>2.70</td><td>2.68</td><td>22.96</td><td>30.23</td><td>73.28</td><td>85.92</td><td>46.90</td><td>40.12</td></tr><tr><td>w/o all</td><td>62.96</td><td>52.38</td><td>38.02</td><td>26.39</td><td>7.95</td><td>12.02</td><td>7.05</td><td>7.67</td><td>10.63</td><td>21.87</td><td>30.17</td><td>46.38</td><td>57.39</td><td>48.05</td><td>37.13</td><td>24.92</td></tr><tr><td>w/o decompose</td><td>68.86</td><td>54.77</td><td>41.70</td><td>28.07</td><td>6.80</td><td>10.78</td><td>6.93</td><td>7.22</td><td>6.44</td><td>21.21</td><td>28.67</td><td>49.58</td><td>56.60</td><td>51.85</td><td>40.02</td><td>22.18</td></tr><tr><td>w/o Lalign, w/o exp sync</td><td>63.74</td><td>48.35</td><td>39.89</td><td>20.87</td><td>14.28</td><td>13.80</td><td>5.93</td><td>7.44</td><td>4.13</td><td>4.32</td><td>24.65</td><td>38.73</td><td>64.47</td><td>62.12</td><td>41.68</td><td>30.39</td></tr><tr><td>w/o Lalign</td><td>70.39</td><td>67.15</td><td>40.38</td><td>26.83</td><td>6.29</td><td>4.86</td><td>5.88</td><td>7.39</td><td>2.90</td><td>3.02</td><td>22.28</td><td>32.78</td><td>67.82</td><td>79.30</td><td>44.75</td><td>34.51</td></tr><tr><td>w/o exp sync</td><td>65.51</td><td>50.33</td><td>37.72</td><td>23.61</td><td>13.08</td><td>14.40</td><td>6.20</td><td>7.75</td><td>3.39</td><td>3.30</td><td>21.26</td><td>33.67</td><td>67.27</td><td>78.50</td><td>45.82</td><td>37.13</td></tr></table>
197
+
198
+ Table 1. Results on TACO [48] dataset. The best in each column is highlighted in bold.
199
+
200
+ and synchronization mechanisms, which are absent from existing methods. In the results from MACS and $\mathrm{DiffH_2O}$ in Figure 4, the brush often penetrates the plate, and the brush's pose does not keep the bristles pressed closely against the plate, so the action cannot be completed effectively. FID and RA further indicate that the motions generated by our method are more semantically realistic. This stems from the more precise object-object interactions in SyncDiff, and from our separation of motions at different frequencies, which ensures that subtle high-frequency periodic movements, crucial for identifying action types, are not overshadowed. Figure 5 demonstrates the benefits of our synchronization mechanism. Although MACS uses relative representations for some hand-object pairs to ensure a firm grasp, the lack of synchronization over the complete graphical model causes conflicts between the left hand and the knife, which should not directly interact. Figure 7 shows that even with a higher demand for coordination quality among objects, our method still addresses hand-object and object-object synchronization effectively.
201
+
202
+ ![](images/9131b07008aff3e1ef8fe90581c659bb96316f54b8ab348fdcddb0bb268ec3e6.jpg)
203
+ Figure 5. Qualitative results from TACO [48] dataset.
204
+
205
+ ![](images/7c44f76f1a591dd27bff69080e6d971b06e2882cd35b02a6d0f2d06caab5a728.jpg)
206
+ (Figure labels: "interpenetration"; "cut the plate with knife".)
208
+
209
+ Human-Object Interaction. For human-object interaction synthesis, we compare our method to CG-HOI [20] and OMOMO [42]. For OMOMO, we first use their conditional diffusion model to generate object trajectories, and then use their whole pipeline to synthesize complete multi-body HOI motions. We modify the cross-attention in CG-HOI to handle two humans, one object, and the contacts between them.
210
+
211
+ As shown in Table 3 and Figure 6, our method outperforms the baseline methods in both contact-based and semantics metrics and achieves the most realistic results. Synthesized motions from existing methods suffer from unnatural grasping poses and arm-object interpenetration, while our method mitigates these issues.
212
+
213
+ ![](images/dda1822a0d96abf089c6bef8b8347cfd4346a5ff782452aad8072be10e3345b4.jpg)
214
+ Figure 6. Qualitative comparisons on CORE4D [104] dataset.
215
+
216
+ ![](images/48ef3a8a72b803af0dfa857b787bd76deab51db504a7953fe167a907aff8dcfd.jpg)
217
+ Figure 7. Qualitative results from OAKINK2 [103] dataset. The task requires precise contact between objects: the bottle cap needs to align perfectly with the bottle mouth, with a tendency to be twisted down along a clockwise spiral.
218
+
219
+ # 4.4. Ablation Studies
220
+
221
+ We examine the effects of our three key designs (frequency-domain motion decomposition, the alignment loss $\mathcal{L}_{\mathrm{align}}$, and the explicit synchronization) separately. The results after removing each of these three components individually are shown as "w/o decompose", "w/o $\mathcal{L}_{\mathrm{align}}$", and "w/o exp sync". Two additional ablations remove both synchronization mechanisms and all three designs, denoted as "w/o $\mathcal{L}_{\mathrm{align}}$, w/o exp sync" and "w/o all", respectively. More details can be found in supp document Section C.4.
222
+
223
+ <table><tr><td></td><td>Method</td><td>Backbone</td><td>SD (m,↑)</td><td>OD (m,↑)</td><td>IV (cm3,↓)</td><td>ID (mm,↓)</td><td>CSR (%,↑)</td><td>Hand Motion RA (%,↑)</td><td>Hand-Object Motion RA (%,↑)</td></tr><tr><td rowspan="5">Unseen subject split</td><td>IMoS [24]</td><td>CVAE</td><td>0.002</td><td>0.149</td><td>7.14</td><td>11.47</td><td>5.0</td><td>57.9</td><td>58.8</td></tr><tr><td>DiffH2O [13]</td><td>Transformer</td><td>0.088</td><td>0.185</td><td>6.65</td><td>8.39</td><td>6.7</td><td>76.0</td><td>81.0</td></tr><tr><td>DiffH2O [13]</td><td>UNet</td><td>0.109</td><td>0.188</td><td>6.02</td><td>7.92</td><td>6.4</td><td>83.3</td><td>87.5</td></tr><tr><td>MACS [70]</td><td>MLP+Conv</td><td>0.059</td><td>0.164</td><td>8.29</td><td>10.12</td><td>3.9</td><td>72.9</td><td>76.4</td></tr><tr><td>Ours</td><td>Transformer</td><td>0.106</td><td>0.188</td><td>6.22</td><td>7.75</td><td>7.2</td><td>82.6</td><td>88.9</td></tr><tr><td rowspan="5">Unseen object split</td><td>IMoS [24]</td><td>CVAE</td><td>0.002</td><td>0.132</td><td>10.38</td><td>12.45</td><td>4.8</td><td>56.1</td><td>58.1</td></tr><tr><td>DiffH2O [13]</td><td>Transformer</td><td>0.133</td><td>0.185</td><td>7.99</td><td>10.87</td><td>7.3</td><td>75.0</td><td>80.3</td></tr><tr><td>DiffH2O [13]</td><td>UNet</td><td>0.134</td><td>0.179</td><td>9.03</td><td>11.39</td><td>8.6</td><td>75.5</td><td>83.7</td></tr><tr><td>MACS [70]</td><td>MLP+Conv</td><td>0.105</td><td>0.156</td><td>11.24</td><td>13.42</td><td>5.4</td><td>57.7</td><td>63.9</td></tr><tr><td>Ours</td><td>Transformer</td><td>0.148</td><td>0.192</td><td>7.07</td><td>10.67</td><td>10.5</td><td>77.4</td><td>86.5</td></tr></table>
224
+
225
+ Table 2. Comparison on GRAB [76] dataset for the post-grasping phase. Following $\mathrm{DiffH_2O}$ [13], we conduct experiments on the phase where the object has been grasped. Each column highlights the best method in red, with the second best highlighted in blue. Results of IMoS [24] and $\mathrm{DiffH_2O}$ are from the original paper of $\mathrm{DiffH_2O}$ , while MACS [70] results are obtained via our re-implementation.
226
+
227
+ <table><tr><td></td><td colspan="2">CRR(%, ↑)</td><td colspan="2">FID(↓)</td><td colspan="2">RA (%, ↑)</td></tr><tr><td>Method</td><td>Test1</td><td>Test2</td><td>Test1</td><td>Test2</td><td>Test1</td><td>Test2</td></tr><tr><td>Ground-truth</td><td>7.72</td><td>6.25</td><td>0.01</td><td>0.00</td><td>96.45</td><td>97.44</td></tr><tr><td>OMOMO [42]</td><td>5.31</td><td>5.54</td><td>13.22</td><td>14.94</td><td>68.02</td><td>65.13</td></tr><tr><td>CG-HOI [20]</td><td>5.74</td><td>5.50</td><td>12.16</td><td>15.37</td><td>70.05</td><td>66.15</td></tr><tr><td>Ours</td><td>6.15</td><td>5.78</td><td>6.45</td><td>7.25</td><td>92.89</td><td>90.26</td></tr><tr><td>w/o all</td><td>5.42</td><td>5.35</td><td>17.21</td><td>21.37</td><td>54.82</td><td>48.72</td></tr><tr><td>w/o decompose</td><td>5.70</td><td>5.46</td><td>8.42</td><td>9.54</td><td>75.13</td><td>71.28</td></tr><tr><td>w/o Lalign, w/o exp sync</td><td>4.84</td><td>4.88</td><td>7.43</td><td>8.49</td><td>82.74</td><td>74.11</td></tr><tr><td>w/o Lalign</td><td>5.38</td><td>5.25</td><td>6.74</td><td>8.31</td><td>90.36</td><td>87.69</td></tr><tr><td>w/o exp sync</td><td>5.23</td><td>5.04</td><td>7.55</td><td>7.89</td><td>80.20</td><td>78.46</td></tr></table>
228
+
229
+ Results in Tables 1, 3, and 4 show that removing any of the three components leads to varying extents of performance decline.
230
+
231
+ Removal of Decomposition. As shown in Figure 8, after removing the decomposition mechanism, whenever object interactions involve periodic relative motion, the high-frequency components are more easily neglected, which intuitively manifests as two objects remaining in an almost relatively stationary state, making it difficult to complete the action. This also results in poor performance on the semantics metrics FID and RA in Tables 1, 3, and 4. In supp document Section B.5, we provide more detailed experiments examining the necessity of decomposition for modeling high-frequency components with semantics.
232
+
233
+ ![](images/78931c2318d396ae20b69498ac0dce47f7d12ace3ad375298c7d5cd8470f5b1d.jpg)
234
+ Figure 8. Qualitative results from TACO [48] dataset. Periodic relative motions are required between the two objects. Colors fade from dark to light to indicate the passage of time. After removing the decomposition mechanism, the spatula tends to get stuck in a small area on the plate's surface, without effective relative movement.
235
+
236
+ Removal of $\mathcal{L}_{\mathrm{align}}$ or Explicit Synchronization. As shown in Figures 6 and 7, removing either of them leads
237
+
238
239
+
240
+ <table><tr><td>Method</td><td>CSIoU (%, ↑)</td><td>IV (cm3, ↓)</td><td>FID (↓)</td><td>RA (%, ↑)</td></tr><tr><td>Ground-truth</td><td>100.0</td><td>2.51</td><td>0.00</td><td>82.57</td></tr><tr><td>MACS [70]</td><td>57.52</td><td>10.52</td><td>4.96</td><td>54.91</td></tr><tr><td>DiffH2O [13]</td><td>55.50</td><td>5.59</td><td>5.18</td><td>50.48</td></tr><tr><td>Ours</td><td>72.14</td><td>4.41</td><td>2.65</td><td>74.83</td></tr><tr><td>w/o all</td><td>62.96</td><td>6.73</td><td>6.63</td><td>48.96</td></tr><tr><td>w/o decompose</td><td>68.16</td><td>4.90</td><td>4.46</td><td>55.05</td></tr><tr><td>w/o Lalign, w/o exp sync</td><td>57.59</td><td>8.94</td><td>3.82</td><td>70.54</td></tr><tr><td>w/o Lalign</td><td>67.44</td><td>5.34</td><td>3.76</td><td>70.82</td></tr><tr><td>w/o exp sync</td><td>58.05</td><td>7.66</td><td>3.58</td><td>69.16</td></tr></table>
241
+
242
+ Table 4. Results on OAKINK2 [103] dataset. The best in each column is highlighted in bold.
243
+
244
+ to unreasonable penetration or loss of contact between objects or between humans and objects, with explicit synchronization playing the more significant role. This phenomenon is also revealed in the quantitative evaluation results in Tables 1, 3, and 4. An observed phenomenon is that simply incorporating frequency decomposition (w/o $\mathcal{L}_{\mathrm{align}}$, w/o exp sync) poses a higher demand on synchronization, which manifests as worse contact-based metrics (defined in Section 4.2) than "w/o all". Simply integrating $\mathcal{L}_{\mathrm{align}}$ (w/o exp sync) cannot fully solve this issue, making explicit synchronization steps a must.
245
+
246
+ # 5. Conclusions and Discussions
247
+
248
+ This paper presents SyncDiff, a unified framework for synchronized motion synthesis of multi-body human-object interaction, which estimates both data sample scores and alignment scores and jointly optimizes sample and alignment likelihoods at inference. We also introduce a frequency-domain decomposition to better capture semantically meaningful high-frequency motions. Experiments on five datasets demonstrate that SyncDiff adapts to multiple scenarios with any number of humans, hands, and rigid objects. Comparative experiments also demonstrate that, in each specific setting, our method achieves better contact accuracy and action semantics than a range of state-of-the-art baselines. Limitations, potential solutions, extensions, and further discussion of SyncDiff are provided in supp document Section E.
249
+
250
+ # References
251
+
252
+ [1] Federica Arrigoni, Beatrice Rossi, and Andrea Fusiello. Spectral synchronization of multiple views in se(3). SIAM Journal on Imaging Sciences, 9(4):1963-1990, 2016. 3
253
+ [2] Bharat Lal Bhatnagar, Xianghui Xie, Ilya A. Petrov, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. Behave: Dataset and method for tracking human object interactions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15914-15925, 2022. 2, 6
254
+ [3] Samarth Brahmbhatt, Cusuh Ham, Charles C Kemp, and James Hays. Contactdb: Analyzing and predicting grasp contact via thermal imaging. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8709-8719, 2019.
255
+ [4] Samarth Brahmbhatt, Chengcheng Tang, Christopher D Twigg, Charles C Kemp, and James Hays. Contactpose: A dataset of grasps with object contact and hand pose. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIII 16, pages 361-378. Springer, 2020. 2
256
+ [5] Junuk Cha, Jihyeon Kim, Jae Shin Yoon, and Seungryul Baek. Text2hoi: Text-guided 3d motion generation for hand-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1577-1585, 2024. 2
257
+ [6] Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, et al. Dexycb: A benchmark for capturing hand grasping of objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9044–9053, 2021. 2
258
+ [7] Nalin Chaturvedi, Amit Sanyal, and N. H. Mcclamroch. Rigid-body attitude control. IEEE Control Systems, 31:30-51, 2011. 3
259
+ [8] Sirui Chen, Albert Wu, and C. Karen Liu. Synthesizing dexterous nonprehensile pregrasp for ungraspable objects. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, pages 1-10. ACM, 2023. 2
260
+ [9] Ti Chen, Jinjun Shan, and Hao Wen. Distributed adaptive attitude control for networked underactuated flexible spacecraft. IEEE Transactions on Aerospace and Electronic Systems, 55(1):215-225, 2019. 3
261
+ [10] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938, 2021. 2
262
+ [11] Sammy Christen, Muhammed Kocabas, Emre Aksan, Jemin Hwangbo, Jie Song, and Otmar Hilliges. D-grasp: Physically plausible dynamic grasp synthesis for handobject interactions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20577-20586, 2022. 2
263
+ [12] Sammy Christen, Wei Yang, Claudia Pérez-D'Arpino, Otmar Hilliges, Dieter Fox, and Yu-Wei Chao. Learning
264
+
265
+ human-to-robot handovers from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9654–9664, 2023. 2
266
+ [13] Sammy Christen, Shreyas Hampali, Fadime Sener, Edoardo Remelli, Tomas Hodan, Eric Sauser, Shugao Ma, and Bugra Tekin. Diffh2o: Diffusion-based synthesis of hand-object interactions from textual descriptions. In SIGGRAPH Asia 2024 Conference Papers, pages 1-11, 2024. 1, 2, 3, 6, 7, 8
267
+ [14] Peishan Cong, Ziyi Wang, Yuexin Ma, and Xiangyu Yue. Semgeomo: Dynamic contextual human motion generation with semantic and geometric guidance, 2025. 2
268
+ [15] Divyanshu Daiya, Damon Conover, and Aniket Bera. Collage: Collaborative human-agent interaction generation using hierarchical latent diffusion and language models. arXiv preprint arXiv:2409.20502, 2024. 2
269
+ [16] Abe Davis, Justin G. Chen, and Frédo Durand. Image-space modal bases for plausible manipulation of objects in video. ACM Transactions on Graphics (SIGGRAPH), 34(6):1-7, 2015. 3
270
+ [17] Myers Abraham Davis. Visual vibration analysis. PhD thesis, Massachusetts Institute of Technology, 2016. 3
271
+ [18] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 2
272
+ [19] Julien Diener, Mathieu Rodriguez, Lionel Baboud, and Lionel Reveret. Wind projection basis for real-time animation of trees. In Computer Graphics Forum, pages 533-540. Wiley Online Library, 2009. 3
273
+ [20] Christian Diller and Angela Dai. Cg-hoi: Contact-guided 3d human-object interaction generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19888-19901, 2024. 2, 3, 7, 8
274
+ [21] Zicong Fan, Omid Taheri, Dimitrios Tzionas, Muhammed Kocabas, Manuel Kaufmann, Michael J Black, and Otmar Hilliges. Arctic: A dataset for dexterous bimanual handobject manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12943-12954, 2023. 2
275
+ [22] Xuehao Gao, Yang Yang, Shaoyi Du, Yang Wu, Yebin Liu, and Guo-Jun Qi. Eigenactor: Variant body-object interaction generation evolved from invariant action basis reasoning, 2025. 2
276
+ [23] Guillermo Garcia-Hernando, Edward Johns, and Tae-Kyun Kim. Physics-based dexterous manipulations with estimated hand poses and residual reinforcement learning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9561-9568. IEEE, 2020. 2
277
+ [24] Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt, and Philipp Slusallek. Imos: Intent-driven full-body motion synthesis for human-object interactions. In Computer Graphics Forum, pages 1-12. Wiley Online Library, 2023. 2, 6, 8
278
+ [25] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018. 2
279
+
280
+ [26] Shreyas Hampali, Mahdi Rad, Markus Oberweger, and Vincent Lepetit. Honnotate: A method for 3d annotation of hand and object poses. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3196-3206, 2020. 2
281
+ [27] Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, and Michael J Black. Stochastic scene-aware motion prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11374-11384, 2021. 2
282
+ [28] Takeshi Hatanaka, Yuji Igarashi, Masayuki Fujita, and Mark W. Spong. Passivity-based pose synchronization in three dimensions. IEEE Transactions on Automatic Control, 57(2):360-375, 2012. 3
283
+ [29] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3
284
+ [30] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 2, 5
285
+ [31] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022. 2
286
+ [32] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. Advances in Neural Information Processing Systems, 35:8633-8646, 2022. 2
287
+ [33] Yuji Igarashi, Takeshi Hatanaka, Masayuki Fujita, and Mark W. Spong. Passivity-based attitude synchronization in se(3). IEEE Transactions on Control Systems Technology, 17(5):1119-1134, 2009. 3
288
+ [34] Ishant, Rongliang Wu, and Joo Hwee Lim. Controllable hand grasp generation for hoi and efficient evaluation methods, 2025. 2
289
+ [35] Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. arXiv preprint arXiv:2205.09991, 2022. 2
290
+ [36] Xin Jin, Yang Shi, Yang Tang, and Xiaotai Wu. Event-triggered attitude consensus with absolute and relative attitude measurements. Automatica, 122:109245, 2020. 3
291
+ [37] Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, and Siyu Tang. Guided motion diffusion for controllable human motion synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2151-2162, 2023. 2
292
+ [38] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 2
293
+ [39] Franziska Krebs, Andre Meixner, Isabel Patzer, and Tamim Asfour. The kit bimanual manipulation dataset. In 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids), pages 499-506. IEEE, 2021. 2
294
+ [40] Taein Kwon, Bugra Tekin, Jan Stühmer, Federica Bogo, and Marc Pollefeys. H2o: Two hands manipulating objects for first person interaction recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10138-10148, 2021. 2
295
+
296
+ [41] Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, and C Karen Liu. Controllable human-object interaction synthesis. arXiv preprint arXiv:2312.03913, 2023. 1, 2, 6
297
+ [42] Jiaman Li, Jiajun Wu, and C Karen Liu. Object motion guided human motion synthesis. ACM Transactions on Graphics (TOG), 42(6):1-11, 2023. 1, 2, 3, 6, 7, 8
298
+ [43] Zhengqi Li, Richard Tucker, Noah Snavely, and Aleksander Holynski. Generative image dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24142-24153, 2024. 3
299
+ [44] Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, and Lan Xu. Intergen: Diffusion-based multi-human motion generation under complex interactions. International Journal of Computer Vision, 132(9):3463-3483, 2024. 2
300
+ [45] Qingtao Liu, Yu Cui, Zhengnan Sun, Haoming Li, Gaofeng Li, Lin Shao, Jiming Chen, and Qi Ye. Dexrepnet: Learning dexterous robotic grasping network with geometric and spatial hand-object representations. arXiv preprint arXiv:2303.09806, 2023. 2
301
+ [46] Shaowei Liu, Yang Zhou, Jimei Yang, Saurabh Gupta, and Shenlong Wang. Contactgen: Generative contact modeling for grasp generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20609-20620, 2023. 2
302
+ [47] Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, and Li Yi. Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21013-21022, 2022. 2
303
+ [48] Yun Liu, Haolin Yang, Xu Si, Ling Liu, Zipeng Li, Yuxiang Zhang, Yebin Liu, and Li Yi. Taco: Benchmarking generalizable bimanual tool-action-object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21740-21751, 2024. 2, 5, 6, 7, 8
304
+ [49] Yen-Chen Liu and Nikhil Chopra. Controlled synchronization of heterogeneous robotic manipulators in the task space. IEEE Transactions on Robotics, 28(1):268-275, 2012. 3
305
+ [50] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11461-11471, 2022. 2
306
+ [51] Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, et al. Isaac gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470, 2021. 2
307
+ [52] Priyanka Mandikal and Kristen Grauman. Learning dexterous grasping with object-centric visual affordances. In 2021 IEEE international conference on robotics and automation (ICRA), pages 6169-6176. IEEE, 2021. 2
308
+ [53] Johan Markdahl. Synchronization on riemannian manifolds: Multiply connected implies multistable. IEEE Transactions on Automatic Control, 66(9):4311-4318, 2020. 3
309
+
310
+ [54] Johan Markdahl, Johan Thunberg, and Jorge Goncalves. High-dimensional kuramoto models on stiefel manifolds synchronize complex networks almost globally. Automatica, 113:108736, 2020. 3
311
+ [55] Ziyang Meng, Wei Ren, and Zheng You. Distributed finite-time attitude containment control for multiple rigid bodies. Automatica, 46(12):2092-2099, 2010. 3
312
+ [56] Aymen Mir, Xavier Puig, Angjoo Kanazawa, and Gerard Pons-Moll. Generating continual human motion in diverse 3d scenes. In 2024 International Conference on 3D Vision (3DV), pages 903-913. IEEE, 2024. 2
313
+ [57] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975-10985, 2019. 3
314
+ [58] Xiaogang Peng, Yiming Xie, Zizhao Wu, Varun Jampani, Deqing Sun, and Huaizu Jiang. Hoi-diff: Text-driven synthesis of 3d human-object interactions using diffusion models. arXiv preprint arXiv:2312.06553, 2023. 2
315
+ [59] Sergey Prokudin, Christoph Lassner, and Javier Romero. Efficient learning on point clouds with basis point sets. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4332-4341, 2019. 4
316
+ [60] Mengshi Qi, Zhe Zhao, and Huadong Ma. Human grasp generation for rigid and deformable objects with decomposed vq-vae, 2025. 2
317
+ [61] Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, and Xiaolong Wang. Dexmv: Imitation learning for dexterous manipulation from human videos. In European Conference on Computer Vision, pages 570-587. Springer, 2022. 2
318
+ [62] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 2, 4
319
+ [63] Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017. 2
320
+ [64] Wei Ren. Distributed leaderless consensus algorithms for networked Euler-Lagrange systems. International Journal of Control, 82:2137-2149, 2009. 3
321
+ [65] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. 2, 4
322
+ [66] Javier Romero, Dimitrios Tzionas, and Michael J. Black. Embodied hands: modeling and capturing hands and bodies together. ACM Transactions on Graphics, 36(6):1-17, 2017. 3
323
+ [67] Alain Sarlette, Rodolphe Sepulchre, and Naomi Ehrich Leonard. Autonomous rigid body attitude synchronization.
324
+
325
+ In 2007 46th IEEE Conference on Decision and Control, pages 2566-2571, 2007. 3
326
+ [68] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. 2
327
+ [69] Yonatan Shafir, Guy Tevet, Roy Kapon, and Amit H Bermano. Human motion diffusion as a generative prior. arXiv preprint arXiv:2303.01418, 2023. 2
328
+ [70] Soshi Shimada, Franziska Mueller, Jan Bednarik, Bardia Doosti, Bernd Bickel, Danhang Tang, Vladislav Golyanik, Jonathan Taylor, Christian Theobalt, and Thabo Beeler. Macs: Mass conditioned 3d hand and object motion synthesis. In 2024 International Conference on 3D Vision (3DV), pages 1082-1091. IEEE, 2024. 1, 2, 6, 7, 8
329
+ [71] Mikio Shinya and Alain Fournier. Stochastic motion—motion under the influence of wind. Computer Graphics Forum, 11(3), 1992. 3
330
+ [72] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. Advances in neural information processing systems, 28, 2015. 2
331
+ [73] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 2
332
+ [74] Jos Stam. Multi-scale stochastic modelling of complex natural phenomena. PhD thesis, 1995. 3
333
+ [75] Jos Stam. Stochastic dynamics: Simulating the effects of turbulence on flexible structures. Computer Graphics Forum, 16(3), 1997. 3
334
+ [76] Omid Taheri, Nima Ghorbani, Michael J Black, and Dimitrios Tzionas. Grab: A dataset of whole-body human grasping of objects. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IV 16, pages 581-600. Springer, 2020. 2, 5, 6, 8
335
+ [77] Omid Taheri, Vasileios Choutas, Michael J Black, and Dimitrios Tzionas. Goal: Generating 4d whole-body motion for hand-object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13263-13273, 2022. 2
336
+ [78] Mikihiro Tanaka and Kent Fujiwara. Role-aware interaction generation from textual description. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15953-15963, 2023. 2
337
+ [79] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H Bermano. Human motion diffusion model. arXiv preprint arXiv:2209.14916, 2022. 1, 2
338
+ [80] Johan Thunberg, Jorge Goncalves, and Xiaoming Hu. Consensus and formation control on se (3) for switching topologies. Automatica, 66:109-121, 2016. 3
339
+ [81] Johan Thunberg, Johan Markdahl, Florian Bernard, and Jorge Goncalves. A lifting method for analyzing distributed synchronization on the unit sphere. Automatica, 96:253-258, 2018. 3
340
+ [82] Johan Thunberg, Johan Markdahl, and Jorge Goncalves. Dynamic controllers for column synchronization of rotation
341
+
342
+ matrices: a qr-factorization approach. Automatica, 93:20-25, 2018. 3
343
+ [83] Yongqi Tian, Xueyu Sun, Haoyuan He, Linji Hao, Ning Ding, and Caigui Jiang. Towards semantic 3d hand-object interaction generation via functional text guidance, 2025. 2
344
+ [84] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ international conference on intelligent robots and systems, pages 5026-5033. IEEE, 2012. 2
345
+ [85] Weikang Wan, Haoran Geng, Yun Liu, Zikang Shan, Yaodong Yang, Li Yi, and He Wang. Unidexgrasp++: Improving dexterous grasping policy learning via geometry-aware curriculum and iterative generalist-specialist learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3891-3902, 2023. 2
346
+ [86] Hanlei Wang. Flocking of networked uncertain Euler-Lagrange systems on directed graphs. Automatica, 49(9):2774-2779, 2013. 3
347
+ [87] Hanlei Wang. Consensus of networked mechanical systems with communication delays: A unified framework. IEEE Transactions on Automatic Control, 59(6):1571-1576, 2014. 3
348
+ [88] Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, and Xiaolong Wang. Synthesizing long-term 3d human motion and interaction in 3d scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9401-9411, 2021. 2
349
+ [89] Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Scene-aware generative network for human motion synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12206-12215, 2021. 2
350
+ [90] Yilin Wang, Chuan Guo, Li Cheng, and Hai Jiang. Region-grasp: A novel task for contact region controllable hand grasp generation, 2024. 2
351
+ [91] Zifan Wang, Junyu Chen, Ziqing Chen, Pengwei Xie, Rui Chen, and Li Yi. Genh2r: Learning generalizable human-to-robot handover via scalable simulation demonstration and imitation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16362–16372, 2024. 1, 2
352
+ [92] Qianyang Wu, Ye Shi, Xiaoshui Huang, Jingyi Yu, Lan Xu, and Jingya Wang. Thor: Text to human-object interaction diffusion via relation intervention. arXiv preprint arXiv:2403.11208, 2024. 2
353
+ [93] Zhen Wu, Jiaman Li, and C Karen Liu. Human-object interaction from human-level instructions. arXiv preprint arXiv:2406.17840, 2024. 2
354
+ [94] Haiyang Xu, Yu Lei, Zeyuan Chen, Xiang Zhang, Yue Zhao, Yilin Wang, and Zhuowen Tu. Bayesian diffusion models for 3d shape reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10628-10638, 2024. 2, 3
355
+ [95] Liang Xu, Xintao Lv, Yichao Yan, Xin Jin, Shuwen Wu, Congsheng Xu, Yifan Liu, Yizhou Zhou, Fengyun Rao, Xingdong Sheng, Yunhui Liu, Wenjun Zeng, and Xiaokang Yang. Inter-x: Towards versatile human-human interaction analysis. In Proceedings of the IEEE/CVF conference
356
+
357
+ on computer vision and pattern recognition, pages 22260-22271, 2024. 2
358
+ [96] Liang Xu, Yizhou Zhou, Yichao Yan, Xin Jin, Wenhan Zhu, Fengyun Rao, Xiaokang Yang, and Wenjun Zeng. Regennet: Towards human action-reaction synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1759-1769, 2024. 2
359
+ [97] Sirui Xu, Zhengyuan Li, Yu-Xiong Wang, and Liang-Yan Gui. Interdiff: Generating 3d human-object interactions with physics-informed diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14928-14940, 2023. 2
360
+ [98] Sirui Xu, Hung Yu Ling, Yu-Xiong Wang, and Liang-Yan Gui. Intermimic: Towards universal whole-body control for physics-based human-object interactions, 2025. 2
361
+ [99] Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, and Cewu Lu. Cpf: Learning a contact potential field to model the hand-object interaction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11097-11106, 2021. 2
362
+ [100] Lixin Yang, Kailin Li, Xinyu Zhan, Fei Wu, Anran Xu, Liu Liu, and Cewu Lu. Oakink: A large-scale knowledge repository for understanding hand-object interaction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 20953-20962, 2022. 2
363
+ [101] Hongwei Yi, Justus Thies, Michael J Black, Xue Bin Peng, and Davis Rempe. Generating human interaction motions in scenes with text control. In European Conference on Computer Vision, pages 246-263. Springer, 2025. 2
364
+ [102] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. Physdiff: Physics-guided human motion diffusion model. In Proceedings of the IEEE/CVF international conference on computer vision, pages 16010-16021, 2023. 2
365
+ [103] Xinyu Zhan, Lixin Yang, Yifei Zhao, Kangrui Mao, Hanlin Xu, Zenan Lin, Kailin Li, and Cewu Lu. Oakink2: A dataset of bimanual hands-object manipulation in complex task completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 445-456, 2024. 2, 5, 7, 8
366
+ [104] Chengwen Zhang, Yun Liu, Ruofan Xing, Bingda Tang, and Li Yi. Core4d: A 4d human-object-human interaction dataset for collaborative object rearrangement. arXiv preprint arXiv:2406.19353, 2024. 2, 5, 7, 8
367
+ [105] He Zhang, Yuting Ye, Takaaki Shiratori, and Taku Komura. Manipnet: neural manipulation synthesis with a hand-object spatial representation. ACM Transactions on Graphics (ToG), 40(4):1-14, 2021. 2
368
+ [106] Hui Zhang, Sammy Christen, Zicong Fan, Luocheng Zheng, Jemin Hwangbo, Jie Song, and Otmar Hilliges. Artigrasp: Physically plausible synthesis of bi-manual dexterous grasping and articulation. In 2024 International Conference on 3D Vision (3DV), pages 235-246. IEEE, 2024. 2
369
+ [107] Hui Zhang, Sammy Christen, Zicong Fan, Otmar Hilliges, and Jie Song. Graspxl: Generating grasping motions for diverse objects at scale. In European Conference on Computer Vision, pages 386-403. Springer, 2025. 2
371
+ [108] Jiajun Zhang, Yuxiang Zhang, Liang An, Mengcheng Li, Hongwen Zhang, Zonghai Hu, and Yebin Liu. Manidext: Hand-object manipulation synthesis via continuous correspondence embeddings and residual-guided diffusion. arXiv preprint arXiv:2409.09300, 2024. 2
372
+ [109] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023. 2
373
+ [110] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022. 2
374
+ [111] Wanyue Zhang, Rishabh Dabral, Vladislav Golyanik, Vasileios Choutas, Eduardo Alvarado, Thabo Beeler, Marc Habermann, and Christian Theobalt. Bimart: A unified approach for the synthesis of 3d bimanual interaction with articulated objects. arXiv preprint arXiv:2412.05066, 2024. 2
375
+ [112] Xiaohan Zhang, Bharat Lal Bhatnagar, Sebastian Starke, Vladimir Guzov, and Gerard Pons-Moll. Couch: Towards controllable human-chair interactions. In European Conference on Computer Vision, pages 518-535. Springer, 2022. 2
376
+ [113] Yixuan Zhang, Hui Yang, Chuanchen Luo, Junran Peng, Yuxi Wang, and Zhaoxiang Zhang. Ood-hoi: Text-driven 3d whole-body human-object interactions generation beyond training domains, 2024. 2
377
+ [114] Yuhong Zhang, Jing Lin, Ailing Zeng, Guanlin Wu, Shunlin Lu, Yurong Fu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang. Motion-X++: A large-scale multimodal 3d whole-body human motion dataset, 2025. 2
378
+ [115] Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, and Siyu Tang. Compositional human-scene interaction synthesis with semantic control. In European Conference on Computer Vision, pages 311-327. Springer, 2022. 2
379
+ [116] Kaifeng Zhao, Yan Zhang, Shaofei Wang, Thabo Beeler, and Siyu Tang. Synthesizing diverse human motions in 3d indoor scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14738-14749, 2023. 2
380
+ [117] Juntian Zheng, Qingyuan Zheng, Lixing Fang, Yun Liu, and Li Yi. Cams: Canonicalized manipulation spaces for category-level functional hand-object manipulation synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 585-594, 2023. 2
381
+ [118] Lei Zhong, Yiming Xie, Varun Jampani, Deqing Sun, and Huaizu Jiang. Smoodi: Stylized motion diffusion model. In European Conference on Computer Vision, pages 405-421. Springer, 2025. 2, 3
382
+ [119] Bohan Zhou, Haoqi Yuan, Yuhui Fu, and Zongqing Lu. Learning diverse bimanual dexterous manipulation skills from human demonstrations. arXiv preprint arXiv:2410.02477, 2024. 2
383
+ [120] Keyang Zhou, Bharat Lal Bhatnagar, Jan Eric Lenssen, and Gerard Pons-Moll. Toch: Spatio-temporal object-to-hand correspondence for motion refinement. In European Conference on Computer Vision, pages 1-19. Springer, 2022. 2
386
+ [121] Yao Zou and Ziyang Meng. Velocity-free leader-follower cooperative attitude tracking of multiple rigid bodies on so(3). IEEE Transactions on Cybernetics, 49(12):4078-4089, 2019. 3
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8bad19379135e1579c208311cafef5d914bda29cb7845fba6343029b7243f030
3
+ size 661759
2025/SyncDiff_ Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Synchronization of Multiple Videos/af4605e7-9aa0-4a23-926a-33856d420d35_content_list.json ADDED
@@ -0,0 +1,1441 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "Synchronization of Multiple Videos",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 318,
8
+ 94,
9
+ 679,
10
+ 116
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Avihai Naaman\\*1,2, Ron Shapira Weber\\*1,2, and Oren Freifeld1,2,3",
17
+ "bbox": [
18
+ 236,
19
+ 142,
20
+ 759,
21
+ 161
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Department of Computer Science, Ben-Gurion University of the Negev (BGU) $^{2}$ Data Science Research Center, BGU. $^{3}$ School of Brain Sciences and Cognition, BGU.",
28
+ "bbox": [
29
+ 145,
30
+ 162,
31
+ 849,
32
+ 199
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Abstract",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 248,
42
+ 233,
43
+ 326,
44
+ 250
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Synchronizing videos captured simultaneously from multiple cameras in the same scene is often easy and typically requires only simple time shifts. However, synchronizing videos from different scenes or, more recently, generative AI videos, poses a far more complex challenge due to diverse subjects, backgrounds, and nonlinear temporal misalignment. We propose Temporal Prototype Learning (TPL), a prototype-based framework that constructs a shared, compact 1D representation from high-dimensional embeddings extracted by any of various pretrained models. TPL robustly aligns videos by learning a unified prototype sequence that anchors key action phases, thereby avoiding exhaustive pairwise matching. Our experiments show that TPL improves synchronization accuracy, efficiency, and robustness across diverse datasets, including fine-grained frame retrieval and phase classification tasks. Importantly, TPL is the first approach to mitigate synchronization issues in multiple generative AI videos depicting the same action. Our code and a new multiple video synchronization dataset are available at https://bgu-cs-vil.github.io/TPL/",
51
+ "bbox": [
52
+ 89,
53
+ 266,
54
+ 485,
55
+ 568
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1. Introduction",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 89,
65
+ 599,
66
+ 220,
67
+ 616
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Multiple Video Synchronization (MVS) of the same action is a challenging problem in computer vision, particularly in unconstrained settings. Standard solutions often rely on pairwise alignment [2, 8, 15, 32], where each pair of videos is matched in isolation. Although such methods are relatively straightforward for small-scale problems, they suffer from two major shortcomings when extended to multiple videos. The first is the high computational cost. Consider a dataset of $N$ videos, each containing $L$ frames, alongside a new video of length $L$ for synchronization or frame retrieval. In a pairwise approach, every frame must be compared against all $N \\times L$ frames in the training set, incurring an $O(N \\times L^2)$ complexity. This exhaustive nearest-neighbor (NN) search is prohibitively expensive for real-world scenarios, where both $N$ and $L$ can be large.",
74
+ "bbox": [
75
+ 89,
76
+ 627,
77
+ 482,
78
+ 852
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "The second is the lack of global consistency. Even if pairwise alignments yield accurate matches in isolation, they do not necessarily guarantee a joint alignment across the",
85
+ "bbox": [
86
+ 89,
87
+ 854,
88
+ 483,
89
+ 900
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "entire collection of videos. Repeated pairwise matches can conflict, since different pairs may learn disparate references for similar action phases. As a result, there is no unified representation of the action progression that consistently aligns all videos.",
96
+ "bbox": [
97
+ 511,
98
+ 234,
99
+ 906,
100
+ 309
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "To address these issues, we advocate a prototype-based alignment strategy that bypasses the need for a single reference video and enables the synchronization of all videos at once. We propose Temporal Prototype Learning (TPL), which learns one-dimensional 'bottleneck' signals capturing the underlying temporal structure (i.e., action prototypes) as universal anchors. By mapping each frame in every video to a shared temporal axis, TPL ensures global consistency and drastically reduces the computational cost. Synchronizing or retrieving a specific phase at time step $t$ for a new video thus amounts to referencing the $t$ -th point in the learned prototype, rather than searching through the entire dataset. Figure 1 illustrates the TPL framework, where multiple videos are mapped to the same prototype space. Our main contributions are:",
107
+ "bbox": [
108
+ 511,
109
+ 311,
110
+ 908,
111
+ 536
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "list",
117
+ "sub_type": "text",
118
+ "list_items": [
119
+ "- Prototype-Based Synchronization: We introduce a novel approach to jointly align multiple videos via a shared prototype space, overcoming the scalability and consistency challenges of pairwise methods.",
120
+ "- Diffeomorphic Multitasking Autoencoder (D-MTAE): A novel architecture for learning one-dimensional 'bottle-neck' representation from multivariate video embeddings. D-MTAE can be trained on any pretrained video feature extractor and enables fast inference and robust multiple alignment.",
121
+ "- Linear-Time Frame Retrieval: since, after alignment, semantically similar frames are mapped to the same time point, frame retrieval simply entails returning all frames at that time point.",
122
+ "- Synchronization of GenAI videos: We show that TPL can sync not only multiple real-world videos but also multiple AI-generated videos depicting the same action. To demonstrate this, we also generated and annotated (for evaluation purposes only) the first GenAI-MVS dataset."
123
+ ],
124
+ "bbox": [
125
+ 513,
126
+ 539,
127
+ 908,
128
+ 827
129
+ ],
130
+ "page_idx": 0
131
+ },
132
+ {
133
+ "type": "text",
134
+ "text": "2. Related Work",
135
+ "text_level": 1,
136
+ "bbox": [
137
+ 511,
138
+ 843,
139
+ 653,
140
+ 859
141
+ ],
142
+ "page_idx": 0
143
+ },
144
+ {
145
+ "type": "text",
146
+ "text": "Video Representation Learning. Several approaches leverage sequence-level or pairwise matching signals to",
147
+ "bbox": [
148
+ 511,
149
+ 869,
150
+ 906,
151
+ 902
152
+ ],
153
+ "page_idx": 0
154
+ },
155
+ {
156
+ "type": "header",
157
+ "text": "CVF",
158
+ "bbox": [
159
+ 106,
160
+ 2,
161
+ 181,
162
+ 42
163
+ ],
164
+ "page_idx": 0
165
+ },
166
+ {
167
+ "type": "header",
168
+ "text": "This ICCV paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.",
169
+ "bbox": [
170
+ 238,
171
+ 0,
172
+ 807,
173
+ 46
174
+ ],
175
+ "page_idx": 0
176
+ },
177
+ {
178
+ "type": "page_number",
179
+ "text": "12514",
180
+ "bbox": [
181
+ 480,
182
+ 944,
183
+ 519,
184
+ 957
185
+ ],
186
+ "page_idx": 0
187
+ },
188
+ {
189
+ "type": "image",
190
+ "img_path": "images/d48cf64447a0cc1303050bea242132d46095a750b43725cc0e4791d64ff139bd.jpg",
191
+ "image_caption": [
192
+ "The Ball Release key event",
193
+ "Figure 1. Temporal Prototype Learning (TPL) uses an 'off-the-shelf' feature extractor, denoted by $\\phi$ , to generate initial multichannel action progression sequences for videos of the same action (e.g., Ball pitch [33]). Colors indicate different (and temporally-misaligned) videos of the same action. TPL produces the joint alignment and prototypical sequence, mapping key events (e.g., Ball Release)"
194
+ ],
195
+ "image_footnote": [],
196
+ "bbox": [
197
+ 135,
198
+ 88,
199
+ 859,
200
+ 330
201
+ ],
202
+ "page_idx": 1
203
+ },
204
+ {
205
+ "type": "text",
206
+ "text": "learn representations from videos. TCC [8] focuses on local alignment across pairs of videos, while GTA [11] extends this to longer sequences via a relaxed DTW-based contrastive loss. LAV [12] introduces additional regularization to avoid trivial solutions, and VAVA [15] allows variations in action order using priors on the optimal transport matrix. CARL [2] adopts a transformer-based contrastive framework with spatial and temporal augmentations. Although these methods have demonstrated success in pairwise settings, scaling them to large collections of videos typically requires extensive nearest-neighbor (NN) searches, which is computationally demanding and memory-intensive. In parallel, large-scale pretrained image models such as DINO [1] or OpenCLIP [3] can be used for videos by extracting the [CLS] token or feature vector from each frame. Stacking these tokens over time yields a sequence of embeddings that capture both spatial semantics (from the pretrained image model) and temporal information (through the ordering of frames). We show TPL can use these embeddings for synchronizing multiple videos.",
207
+ "bbox": [
208
+ 88,
209
+ 425,
210
+ 485,
211
+ 714
212
+ ],
213
+ "page_idx": 1
214
+ },
215
+ {
216
+ "type": "text",
217
+ "text": "Time-Series Alignment. Classical time-series alignment algorithms such as Dynamic Time Warping (DTW) [22, 23] and SoftDTW [4] are widely used to find optimal, order-preserving alignments between a pair of temporal sequences. SoftDTW, in particular, offers a differentiable variant, which has enabled end-to-end training when coupled with neural network feature extractors. However, these methods have a quadratic complexity in both time and memory w.r.t. sequence length, limiting their scalability.",
218
+ "bbox": [
219
+ 89,
220
+ 763,
221
+ 485,
222
+ 902
223
+ ],
224
+ "page_idx": 1
225
+ },
226
+ {
227
+ "type": "text",
228
+ "text": "Prototype Learning and Temporal Prototypes. Prototype learning has proven effective in few-shot learning scenarios [29], where prototypical representations facilitate robust classification with minimal labeled data. In principle, a temporal prototype can serve a similar role for video alignment, essentially summarizing the action progression into a single sequence. DTW Barycenter Averaging (DBA) [19, 20] and SoftDTW barycenters (SoftDBA) [4] offer ways to compute an average sequence under their respective distance function. Diffeomorphic Temporal Alignment Net (DTAN) [16, 26, 27] learns diffeomorphic warping functions [9, 10], effectively jointly aligning all input sequences to their average. Although DTAN enables end-to-end learning of joint alignment (JA), it was not developed for aligning high-dimensional video embeddings with large variation in length. In this work, we show how TPL addresses these issues.",
229
+ "bbox": [
230
+ 511,
231
+ 425,
232
+ 908,
233
+ 667
234
+ ],
235
+ "page_idx": 1
236
+ },
237
+ {
238
+ "type": "text",
239
+ "text": "Multiple Video Synchronization: Limitations and Gaps. While pairwise alignment and small-scale joint alignment approaches have made significant progress, critical gaps remain for large-scale, multi-video synchronization. First, naively extending pairwise methods to $N$ videos often leads to $O(N \\times L^2)$ complexity in retrieval or synchronization, posing severe scalability constraints. Second, aligning pairs independently does not guarantee a globally-consistent representation across all videos. Previous multiple video synchronization (MVS) methods focus on well-behaved scenarios that mainly involve multiple cameras recording the same scene, where the misalignment can be explained by a simple translation [14, 28, 31]. However, synchronizing different scenes requires nonlinear alignment of the time axis.",
240
+ "bbox": [
241
+ 511,
242
+ 689,
243
+ 908,
244
+ 901
245
+ ],
246
+ "page_idx": 1
247
+ },
248
+ {
249
+ "type": "page_number",
250
+ "text": "12515",
251
+ "bbox": [
252
+ 480,
253
+ 944,
254
+ 517,
255
+ 955
256
+ ],
257
+ "page_idx": 1
258
+ },
259
+ {
260
+ "type": "table",
261
+ "img_path": "images/0066c9461162356cbc1e4b0e9d55ec2668c90b4747350aecd64d991503435c3f.jpg",
262
+ "table_caption": [
263
+ "Table 1. Table of notations."
264
+ ],
265
+ "table_footnote": [],
266
+ "table_body": "<table><tr><td>Symbol</td><td>Description</td></tr><tr><td>Si</td><td>i-th video.</td></tr><tr><td>st</td><td>Frame t of video Si.</td></tr><tr><td>utl= φ(st)</td><td>Per-frame embedding.</td></tr><tr><td>Ui = {ut}Lt=1</td><td>Embedded feature sequence, ∈ RC×L.</td></tr><tr><td>Ui</td><td>Reconstructed embedded sequence.</td></tr><tr><td>θi ∈ Rd</td><td>Predicted warp parameters for Ui.</td></tr><tr><td>Tθi</td><td>Parametric time-warp associated with θi.</td></tr><tr><td>Zi</td><td>Univariate representation.</td></tr><tr><td>U/ Z</td><td>Average aligned sequence.</td></tr></table>",
267
+ "bbox": [
268
+ 129,
269
+ 114,
270
+ 444,
271
+ 256
272
+ ],
273
+ "page_idx": 2
274
+ },
275
+ {
276
+ "type": "text",
277
+ "text": "In this paper, we propose a new framework, called Temporal Prototype Learning (TPL), that addresses the limitations of existing approaches by jointly aligning multiple videos in a single, shared prototype space. This design enables robust synchronization, eliminates the need for exhaustive nearest-neighbor searches, and yields a linear-time retrieval mechanism for new video sequences.",
278
+ "bbox": [
279
+ 89,
280
+ 281,
281
+ 485,
282
+ 388
283
+ ],
284
+ "page_idx": 2
285
+ },
286
+ {
287
+ "type": "text",
288
+ "text": "3. Method",
289
+ "text_level": 1,
290
+ "bbox": [
291
+ 89,
292
+ 401,
293
+ 181,
294
+ 417
295
+ ],
296
+ "page_idx": 2
297
+ },
298
+ {
299
+ "type": "text",
300
+ "text": "We propose a novel approach for the synchronization of multiple videos without a reference. Our goal is to map similar action sequences to the same time step w.r.t. the action progression. To achieve this, we introduce TPL, which involves performing simultaneous dimensionality reduction and JA in the embedded space using a novel D-MTAE (Figure 2 depicts the framework). This section is organized as follows. We first review the required preliminaries in §3.1. In §3.2, we present a detailed explanation of the TPL framework, including its modules and loss functions. In §3.3, we describe how to perform MVS and annotation transfer with TPL. Lastly, we discuss the limitations of TPL §3.4.",
301
+ "bbox": [
302
+ 89,
303
+ 428,
304
+ 483,
305
+ 609
306
+ ],
307
+ "page_idx": 2
308
+ },
309
+ {
310
+ "type": "text",
311
+ "text": "3.1. Preliminaries",
312
+ "text_level": 1,
313
+ "bbox": [
314
+ 89,
315
+ 619,
316
+ 232,
317
+ 633
318
+ ],
319
+ "page_idx": 2
320
+ },
321
+ {
322
+ "type": "text",
323
+ "text": "Notation and Setup (see Table 1). Consider $N$ videos, $(S_{i})_{i = 1}^{N}$ . Let $S_{i} = (s_{1}^{i}, s_{2}^{i}, \\ldots, s_{L}^{i})$ be a video of length $L$ , where $s_{t}^{i}$ is the $t$ -th frame. We define the per-frame embedding $u_{t}^{i} = \\phi(s_{t}^{i}) \\in \\mathbb{R}^{C}$ , where $\\phi$ is a feature extractor and $C$ is the number of channels (i.e., the embedding dimension) of the representation of the video. The embedded feature sequence is $U_{i} = \\{u_{t}^{i}\\}_{t = 1}^{L} \\in \\mathbb{R}^{C \\times L}$ . Thus, the set of video embeddings to be synchronized is $\\{U_{i}\\}_{i = 1}^{N}$ . $U_{i}$ could either be produced by applying an image-based classifier (i.e., the DINO [CLS] token [1]) to each frame or a video-based one such as CARL [2]. For each $U_{i}$ , we denote the predicted warping parameters and the corresponding time warp as $\\theta_{i} \\in \\mathbb{R}^{d}$ and $T^{\\theta_{i}}$ respectively, such that $U_{i} \\circ T^{\\theta_{i}}$ is the warped sequence and $T^{\\theta_{i}}$ belongs to a $d$ -dimensional parametric transformation family. Finally, the average of the temporally-aligned sequences is $\\widehat{U} = \\frac{1}{N} \\sum_{i = 1}^{N} U_{i} \\circ T^{\\theta_{i}}$ .",
324
+ "bbox": [
325
+ 89,
326
+ 642,
327
+ 483,
328
+ 885
329
+ ],
330
+ "page_idx": 2
331
+ },
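To make the notation above concrete, here is a minimal sketch (not from the paper) of stacking per-frame embeddings into $U_i \in \mathbb{R}^{C \times L}$; the extractor `phi` is a placeholder for any per-frame feature extractor, such as a DINO [CLS] token or a CARL frame embedding.

```python
import numpy as np

def embed_video(frames, phi):
    """Stack per-frame embeddings u_t = phi(s_t) into U in R^{C x L}.

    `frames` is a list of L frames; `phi` maps one frame to a C-dim vector.
    Both are placeholders for whichever pretrained extractor is used.
    """
    return np.stack([phi(s) for s in frames], axis=1)  # shape (C, L)

# Toy usage: 8 "frames" of random pixels, a fake 16-dim extractor.
frames = [np.random.rand(32, 32, 3) for _ in range(8)]
phi = lambda s: np.random.rand(16)
U = embed_video(frames, phi)
assert U.shape == (16, 8)  # C x L
```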
332
+ {
333
+ "type": "text",
334
+ "text": "The Joint Alignment (JA) problem can then be thought of",
335
+ "bbox": [
336
+ 109,
337
+ 885,
338
+ 483,
339
+ 900
340
+ ],
341
+ "page_idx": 2
342
+ },
343
+ {
344
+ "type": "text",
345
+ "text": "as finding the set of warping parameters between $\\{U_i\\}_{i=1}^N$ and $\\widehat{U}$ which minimize their discrepancy, $D$ (e.g., the Euclidean distance). Since $\\widehat{U}$ is unknown, the JA problem becomes:",
346
+ "bbox": [
347
+ 511,
348
+ 90,
349
+ 906,
350
+ 138
351
+ ],
352
+ "page_idx": 2
353
+ },
354
+ {
355
+ "type": "equation",
356
+ "text": "\n$$\n\\left(T ^ {\\theta_ {i} ^ {*}}\\right) _ {i = 1} ^ {N}, \\mu = \\underset {\\left(T ^ {\\theta_ {i}}\\right) _ {i = 1} ^ {N} \\in \\mathcal {T}, U} {\\arg \\min } \\sum_ {i = 1} ^ {N} D (U, U _ {i} \\circ T _ {i}) \\tag {1}\n$$\n",
357
+ "text_format": "latex",
358
+ "bbox": [
359
+ 563,
360
+ 148,
361
+ 906,
362
+ 189
363
+ ],
364
+ "page_idx": 2
365
+ },
366
+ {
367
+ "type": "text",
368
+ "text": "where $(T^{\\theta_i^*})_{i=1}^N$ and $\\mu$ denote the optimal warping parameters and average sequence, respectively, and $\\mathcal{T}$ is the transformation family (e.g., phase-shift, elastic, etc.). Partly due to the unsupervised nature of the task, a regularization term is usually added to avoid trivial solutions and/or unrealistic deformations. The problem is then reformulated as:",
369
+ "bbox": [
370
+ 511,
371
+ 199,
372
+ 906,
373
+ 291
374
+ ],
375
+ "page_idx": 2
376
+ },
377
+ {
378
+ "type": "equation",
379
+ "text": "\n$$\n\\left(T ^ {\\boldsymbol {\\theta} _ {i} ^ {*}}\\right) _ {i = 1} ^ {N}, \\mu = \\underset {\\left(T ^ {\\boldsymbol {\\theta} _ {i}}\\right) _ {i = 1} ^ {N} \\in \\mathcal {T}, U} {\\arg \\min } \\sum_ {i = 1} ^ {N} D (U, U _ {i} \\circ T _ {i}) + \\mathcal {R} \\left(T ^ {\\boldsymbol {\\theta} _ {i}}; \\lambda\\right) \\tag {2}\n$$\n",
380
+ "text_format": "latex",
381
+ "bbox": [
382
+ 522,
383
+ 299,
384
+ 906,
385
+ 357
386
+ ],
387
+ "page_idx": 2
388
+ },
389
+ {
390
+ "type": "text",
391
+ "text": "where $\\mathcal{R}(T^{\\theta_i};\\lambda)$ is the regularizer over $T^{\\theta_i}$ , and $\\lambda$ is a hyperparameter (HP) controlling the regularization strength. An important, yet often overlooked, fact is that $\\lambda$ is usually dataset specific, must be found via an expensive search, and that finding a good value requires supervision (i.e., ground-truth labels are needed to rank the performance with different values of $\\lambda$ ). To alleviate this issue, we follow a regularization-free approach [26] to JA which uses the Inverse-Consistency Averaging Error (ICAE; detailed below),",
392
+ "bbox": [
393
+ 511,
394
+ 367,
395
+ 908,
396
+ 503
397
+ ],
398
+ "page_idx": 2
399
+ },
400
+ {
401
+ "type": "text",
402
+ "text": "Diffeomorphic Temporal Alignment Nets (DTAN). DTAN [16, 26, 27] is a learning-based model designed for time series JA. Given $N$ sequences $\\{U_i\\}_{i=1}^N$ , DTAN predicts a set of continuous time-warp parameters $\\{\\theta_i\\}_{i=1}^N$ to minimize the within-class variance. This is akin to finding the average sequence. These warps are applied via CPAB transformations [9, 10] (described below). DTAN has been designed for univariate time series and was evaluated on the relatively 'well-behaved' UCR archive [5]. While DTAN could arguably be generalized to multivariate representations of videos, applying a single warp to each multivariate sequence can overlook channel-specific temporal variations that, in turn, hinder the average sequence's computation. Another limitation specific to ICAE is that the JA of variable-length multivariate data (as opposed to variable-length univariate data) usually results in a 'shrinking' effect, where the average sequence length is much shorter than the original data.",
403
+ "bbox": [
404
+ 511,
405
+ 521,
406
+ 908,
407
+ 779
408
+ ],
409
+ "page_idx": 2
410
+ },
411
+ {
412
+ "type": "text",
413
+ "text": "Our proposed TPL resolves these issues by 1) introducing a univariate \"bottleneck\" that discards channel-specific variations not shared across all sequences, and 2) setting the average sequence to match the median length of the data. This design allows for robust JA in the high-dimensional embedding space while retaining the desirable properties of DTAN. That is, end-to-end, misalignment-invariant learning of a shared temporal structure across multiple videos.",
414
+ "bbox": [
415
+ 511,
416
+ 779,
417
+ 910,
418
+ 901
419
+ ],
420
+ "page_idx": 2
421
+ },
422
+ {
423
+ "type": "page_number",
424
+ "text": "12516",
425
+ "bbox": [
426
+ 480,
427
+ 944,
428
+ 519,
429
+ 955
430
+ ],
431
+ "page_idx": 2
432
+ },
433
+ {
434
+ "type": "image",
435
+ "img_path": "images/35d06d29be54668a0189801c488b3602392dc0f99681d0d83ff77f2d353dd025.jpg",
436
+ "image_caption": [
437
+ "Figure 2. Diffeomorphic Multitasking Autoencoder (D-MTAE) for Temporal Prototype Learning, consists of: 1) $\\Psi_{\\mathrm{enc}}$ , an encoder for dimensionality reduction; 2) $\\Psi_{\\mathrm{Align}}$ [27], for joint alignment; and 3) $\\Psi_{\\mathrm{Dec}}$ , a decoder. The losses for JA and DR are $\\mathcal{L}_{\\text{ICAE}}$ and $\\mathcal{L}_{\\text{rec}}$ respectively. The feature extractor, $\\phi$ , could either be trained per dataset (e.g., CARL [2]) or a pretrained foundation model (e.g., DINO [1])."
438
+ ],
439
+ "image_footnote": [],
440
+ "bbox": [
441
+ 112,
442
+ 85,
443
+ 883,
444
+ 392
445
+ ],
446
+ "page_idx": 3
447
+ },
448
+ {
449
+ "type": "text",
450
+ "text": "CPAB Transformations. The CPAB (Continuous Piecewise Affine-Based) warp [9, 10] lies at the core of DTAN. Unlike discrete alignment approaches (e.g., DTW), a CPAB transformation is parameterized by a parameter vector $\\theta$ that defines a Continuous Piecewise Affine (CPA) velocity field, $v^{\\theta}$ , such that its integration yields a diffeomorphism (namely, a smooth, differentiable map, with a differentiable inverse), $T^{\\theta}$ . In the context of time series, this is a differentiable order-preserving time warp. This approach has three major advantages for learning-based alignment:",
451
+ "bbox": [
452
+ 88,
453
+ 470,
454
+ 482,
455
+ 622
456
+ ],
457
+ "page_idx": 3
458
+ },
459
+ {
460
+ "type": "list",
461
+ "sub_type": "text",
462
+ "list_items": [
463
+ "1. Efficiency and Accuracy: CPA velocity fields permit fast and accurate integration [9, 10], making them suitable for large-scale video data.",
464
+ "2. Closed-Form Gradients: The CPAB gradient, $\\nabla_{\\theta}T^{\\theta}$ , also admits a closed-form solution [16], which enables stable end-to-end training of neural alignment models.",
465
+ "3. Invertibility and symmetry: CPAB warps are invertible, where $(T^{\\theta})^{-1} = T^{-\\theta}$ . This is in contrast to DTW, which might produce different warping paths for DTW(X,Y) and DTW(Y,X),"
466
+ ],
467
+ "bbox": [
468
+ 89,
469
+ 623,
470
+ 482,
471
+ 773
472
+ ],
473
+ "page_idx": 3
474
+ },
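The CPAB machinery itself lives in the DIFW package; as a rough illustration of what an order-preserving, invertible time warp does to a sequence, the following sketch applies a generic monotone warp via linear interpolation. The warp functions here are toy stand-ins for $T^{\theta}$ and $T^{-\theta}$, not CPA velocity-field integrals.

```python
import numpy as np

def apply_time_warp(x, warp):
    """Resample a sequence x (length L) along a monotone warp.

    `warp` maps normalized time [0, 1] -> [0, 1] and must be increasing,
    mimicking the order-preserving property of a CPAB warp T^theta
    (a real CPAB warp is obtained by integrating a CPA velocity field).
    """
    L = len(x)
    t = np.linspace(0.0, 1.0, L)
    src = np.clip(warp(t), 0.0, 1.0) * (L - 1)   # warped sample positions
    return np.interp(src, np.arange(L), x)        # linear resampling

# Toy check of invertibility with a monotone warp and its inverse.
x = np.sin(np.linspace(0, np.pi, 50))
warp = lambda t: t**2          # stand-in for T^theta
inv = lambda t: np.sqrt(t)     # stand-in for T^{-theta}
x_back = apply_time_warp(apply_time_warp(x, warp), inv)
print(np.abs(x_back - x).max())  # small, up to interpolation error
```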
475
+ {
476
+ "type": "text",
477
+ "text": "Once a DTAN has been trained for a particular class of sequences, it can be applied directly to new data without re-solving an alignment objective from scratch, thus making the entire pipeline efficient for both training and inference.",
478
+ "bbox": [
479
+ 89,
480
+ 775,
481
+ 482,
482
+ 835
483
+ ],
484
+ "page_idx": 3
485
+ },
486
+ {
487
+ "type": "text",
488
+ "text": "3.2. Temporal Prototype Learning",
489
+ "text_level": 1,
490
+ "bbox": [
491
+ 89,
492
+ 847,
493
+ 356,
494
+ 864
495
+ ],
496
+ "page_idx": 3
497
+ },
498
+ {
499
+ "type": "text",
500
+ "text": "Architecture. Given $N$ videos depicting the same action and their high-dimensional embeddings, $\\{U_i\\}_{i=1}^N$ , we seek",
501
+ "bbox": [
502
+ 89,
503
+ 869,
504
+ 482,
505
+ 902
506
+ ],
507
+ "page_idx": 3
508
+ },
509
+ {
510
+ "type": "text",
511
+ "text": "to learn a temporal prototype, $\\widehat{U} \\in \\mathbb{R}^{C \\times L}$ where $L$ is the prototype's length and $C$ is the number of channels in the learned representation. Since $\\{U_i\\}_{i=1}^N$ are misaligned, a simple averaging will result in a distorted average sequence that represents the data poorly. Another key insight is that while at each time $t$ , $u_t^i \\in \\mathbb{R}^C$ , the $(u_t^i)_{t=1}^{L_i}$ values (where $L_i$ is the length of $S_i$ , hence also of $U_i$ ) should represent, in theory, phases in a 1D action progression. Thus, to learn temporal prototypes of action progression, a 1D representation should suffice. This is further motivated by the fact that the high-dimensional representation might hold irrelevant information, which hinders the alignment task. Taking the discussion above into consideration, we propose a simultaneous dimensionality reduction and JA to achieve a compact representation of the action and its progression.",
512
+ "bbox": [
513
+ 511,
514
+ 470,
515
+ 906,
516
+ 698
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "Specifically, we introduce a novel Diffeomorphic Multitasking Autoencoder (D-MTAE; depicted in Figure 2) designed to learn dimensionality reduction and joint alignment. D-MTAE consists of: 1) an encoder, $\\Psi_{\\mathrm{encoder}}: \\mathbb{R}^{C \\times L_i} \\to \\mathbb{R}^{L_i}$ , that maps the $C$ -dimensional embedding sequence, $U_i \\in \\mathbb{R}^{C \\times L_i}$ , into a latent 1D projection, $Z_i \\in \\mathbb{R}^{L_i}$ ; 2) an alignment module, $\\Psi_{\\mathrm{Align}}$ , that performs JA on the latent representations, $(Z_i)_{i=1}^N$ ; 3) a decoder model, $\\Psi_{\\mathrm{decoder}}: \\mathbb{R}^{L_i} \\to \\mathbb{R}^{C \\times L_i}$ , that maps the latent projection back to the original domain.",
523
+ "bbox": [
524
+ 511,
525
+ 698,
526
+ 908,
527
+ 835
528
+ ],
529
+ "page_idx": 3
530
+ },
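A minimal PyTorch-style skeleton of the three D-MTAE modules follows; it is a sketch, not the paper's configuration. The real model uses 3-layer TCNs and an InceptionTime alignment network predicting CPAB warp parameters, whereas here plain Conv1d stacks and a pooled linear head stand in, and the predicted theta is returned without applying any warp.

```python
import torch
import torch.nn as nn

class DMTAESketch(nn.Module):
    """Schematic D-MTAE: encoder (C x L -> 1 x L), alignment head, decoder (1 x L -> C x L)."""

    def __init__(self, C, d_theta=16):
        super().__init__()
        self.encoder = nn.Sequential(                 # Psi_enc: dimensionality reduction
            nn.Conv1d(C, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 1, 3, padding=1),
        )
        self.align = nn.Sequential(                   # Psi_align: Z -> theta (warp params)
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(8, d_theta),
        )
        self.decoder = nn.Sequential(                 # Psi_dec: back to the embedding space
            nn.Conv1d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, C, 3, padding=1),
        )

    def forward(self, U):                             # U: (N, C, L)
        Z = self.encoder(U)                           # (N, 1, L) univariate latent signal
        theta = self.align(Z)                         # (N, d_theta) predicted warp parameters
        U_rec = self.decoder(Z)                       # (N, C, L) reconstruction
        return Z, theta, U_rec

model = DMTAESketch(C=384)
Z, theta, U_rec = model(torch.randn(4, 384, 120))
print(Z.shape, theta.shape, U_rec.shape)
```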
531
+ {
532
+ "type": "text",
533
+ "text": "Latent Representation Alignment Loss. The encoder, $\\Psi_{\\mathrm{encoder}}$ , is a Temporal Convolutional Network (TCN) that maps each $U_{i}$ to a univariate latent sequence $Z_{i} \\in \\mathbb{R}^{L_{i}}$ .",
534
+ "bbox": [
535
+ 511,
536
+ 854,
537
+ 908,
538
+ 902
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "page_number",
544
+ "text": "12517",
545
+ "bbox": [
546
+ 480,
547
+ 944,
548
+ 517,
549
+ 955
550
+ ],
551
+ "page_idx": 3
552
+ },
553
+ {
554
+ "type": "text",
555
+ "text": "The alignment module $\\Psi_{\\mathrm{Align}}$ predicts warping parameters $\\{\\theta_i\\}_{i = 1}^N$ to produce time-warped latent signals $\\widehat{Z}_i = Z_i\\circ T^{\\theta_i}$ . We seek a shared prototype $\\widehat{Z}\\in \\mathbb{R}^{L}$ that captures the common temporal progression across all videos. Building on the Inverse Consistency Averaging Error (ICAE) [26], which enables JA without explicit warp regularization, we minimize",
556
+ "bbox": [
557
+ 89,
558
+ 90,
559
+ 483,
560
+ 186
561
+ ],
562
+ "page_idx": 4
563
+ },
564
+ {
565
+ "type": "equation",
566
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {I C A E}} = \\left. \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\right\\| \\widehat {Z} \\circ T ^ {- \\theta_ {i}} - Z _ {i} \\Big \\| _ {\\ell_ {2}} ^ {2}. \\tag {3}\n$$\n",
567
+ "text_format": "latex",
568
+ "bbox": [
569
+ 166,
570
+ 195,
571
+ 483,
572
+ 236
573
+ ],
574
+ "page_idx": 4
575
+ },
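A sketch of Eq. (3) under a schematic monotone warp: `resample` and the per-sequence inverse warps are placeholders for composing with CPAB's $T^{-\theta_i}$, and all sequences share one length here for simplicity (the paper fixes the prototype length to the median video length).

```python
import numpy as np

def resample(x, warp):
    """Apply a monotone warp in normalized time via linear interpolation
    (schematic stand-in for composition with a CPAB warp)."""
    L = len(x)
    t = np.clip(warp(np.linspace(0, 1, L)), 0, 1) * (L - 1)
    return np.interp(t, np.arange(L), x)

def icae_loss(Z_hat, Zs, inverse_warps):
    """Eq. (3): warp the prototype back with T^{-theta_i} and compare to Z_i."""
    return np.mean([np.sum((resample(Z_hat, inv) - Z) ** 2)
                    for Z, inv in zip(Zs, inverse_warps)])

# Toy usage with small phase shifts standing in for T^{-theta_i}.
Zs = [np.sin(np.linspace(0, np.pi, 50) + 0.1 * i) for i in range(3)]
Z_hat = np.mean(Zs, axis=0)
invs = [lambda t, a=0.02 * i: t + a for i in range(3)]
print(icae_loss(Z_hat, Zs, invs))
```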
576
+ {
577
+ "type": "text",
578
+ "text": "Since videos can vary greatly in length, we fix the length of $\\widehat{Z}$ to be the median of all video lengths to prevent \"collapse\" of the prototype (observed empirically when video lengths differ significantly). This approach robustly maintains an appropriate temporal scale in the aligned representation.",
579
+ "bbox": [
580
+ 89,
581
+ 244,
582
+ 485,
583
+ 320
584
+ ],
585
+ "page_idx": 4
586
+ },
587
+ {
588
+ "type": "text",
589
+ "text": "Misalignment-Invariant Reconstruction Loss. Ensuring that $\\widehat{Z}$ accurately reflects the data's true progression requires preventing trivial solutions (e.g., collapsing each $\\widetilde{Z}_i$ to a single repeated scalar). To address this, we include a decoder $\\Psi_{\\mathrm{decoder}}$ that reconstructs the original embeddings from the aligned latents, yielding $\\widetilde{U}_i = \\Psi_{\\mathrm{decoder}}(\\widetilde{Z}_i)$ . We then apply the inverse warp $T^{-\\theta_i}$ to $\\widetilde{U}_i$ and measure the discrepancy from the original embeddings $U_i$ :",
590
+ "bbox": [
591
+ 89,
592
+ 337,
593
+ 483,
594
+ 459
595
+ ],
596
+ "page_idx": 4
597
+ },
598
+ {
599
+ "type": "equation",
600
+ "text": "\n$$\n\\mathcal {L} _ {\\text {r e c}} = \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\left\\| U _ {i} - \\widetilde {U} _ {i} \\circ T ^ {- \\boldsymbol {\\theta} _ {i}} \\right\\| _ {\\ell_ {2}} ^ {2}. \\tag {4}\n$$\n",
601
+ "text_format": "latex",
602
+ "bbox": [
603
+ 176,
604
+ 467,
605
+ 483,
606
+ 507
607
+ ],
608
+ "page_idx": 4
609
+ },
610
+ {
611
+ "type": "text",
612
+ "text": "This misalignment-invariant reconstruction encourages the prototype to capture meaningful temporal structure, as it must remain consistent when warped back to each video's original timeline.",
613
+ "bbox": [
614
+ 89,
615
+ 516,
616
+ 483,
617
+ 575
618
+ ],
619
+ "page_idx": 4
620
+ },
621
+ {
622
+ "type": "text",
623
+ "text": "Overall loss: The overall loss function is obtained by combining the JA loss (Equation 3) and reconstruction loss (Equation 4)",
624
+ "bbox": [
625
+ 89,
626
+ 575,
627
+ 483,
628
+ 621
629
+ ],
630
+ "page_idx": 4
631
+ },
632
+ {
633
+ "type": "equation",
634
+ "text": "\n$$\n\\mathcal {L} _ {\\mathrm {T P L}} = \\lambda_ {t} \\mathcal {L} _ {\\mathrm {I C A E}} + \\mathcal {L} _ {\\mathrm {r e c}} \\tag {5}\n$$\n",
635
+ "text_format": "latex",
636
+ "bbox": [
637
+ 205,
638
+ 631,
639
+ 482,
640
+ 646
641
+ ],
642
+ "page_idx": 4
643
+ },
644
+ {
645
+ "type": "text",
646
+ "text": "where $\\lambda_{t}$ controls the 'annealing' of the alignment loss. This allows for faster and reliable convergence for the simultaneous learning of reconstruction and alignment. It is defined as",
647
+ "bbox": [
648
+ 89,
649
+ 656,
650
+ 483,
651
+ 702
652
+ ],
653
+ "page_idx": 4
654
+ },
655
+ {
656
+ "type": "equation",
657
+ "text": "\n$$\n\\lambda_ {t} = \\frac {1}{1 + e ^ {- \\alpha (t - t _ {0})}} \\tag {6}\n$$\n",
658
+ "text_format": "latex",
659
+ "bbox": [
660
+ 220,
661
+ 710,
662
+ 483,
663
+ 741
664
+ ],
665
+ "page_idx": 4
666
+ },
667
+ {
668
+ "type": "text",
669
+ "text": "where $\\alpha$ is a scaling factor (fixed at 2 for all experiments), $t$ is the current training epoch, and $t_0$ is the epoch at which $\\lambda_t$ reaches 1 and the annealing stops (set to $\\frac{N_{\\mathrm{epo}}}{2}$ ).",
670
+ "bbox": [
671
+ 89,
672
+ 747,
673
+ 482,
674
+ 795
675
+ ],
676
+ "page_idx": 4
677
+ },
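Eqs. (5)-(6) translate into a few lines; this is a minimal sketch of the schedule and the combined loss, assuming scalar loss values are already computed elsewhere.

```python
import math

def lambda_t(t, t0, alpha=2.0):
    """Eq. (6): sigmoid annealing of the alignment-loss weight.
    alpha is fixed to 2 in the paper; t0 is half the training epochs."""
    return 1.0 / (1.0 + math.exp(-alpha * (t - t0)))

def tpl_loss(l_icae, l_rec, epoch, n_epochs):
    """Eq. (5): L_TPL = lambda_t * L_ICAE + L_rec, with t0 = n_epochs / 2."""
    return lambda_t(epoch, n_epochs / 2) * l_icae + l_rec

# Early epochs emphasize reconstruction; alignment ramps up over training.
for epoch in (0, 25, 50, 75, 100):
    print(epoch, round(lambda_t(epoch, t0=50), 3))
print(tpl_loss(1.0, 0.5, epoch=10, n_epochs=100))
```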
678
+ {
679
+ "type": "text",
680
+ "text": "The D-MTAE is trained simultaneously in an end-to-end fashion. We used PyTorch [18] for all of our experiments. The DIFW package [16] was used for the CPAB [9, 10] implementation. Both $\\Psi_{\\mathrm{encoder}}$ and $\\Psi_{\\mathrm{decoder}}$ are 3-layer TCNs. For $\\Psi_{\\mathrm{Align}}$ we follow [26] and use InceptionTime [13]. For a detailed description of the training procedure and hyperparameters, see our supplementary material (SupMat).",
681
+ "bbox": [
682
+ 89,
683
+ 795,
684
+ 483,
685
+ 901
686
+ ],
687
+ "page_idx": 4
688
+ },
689
+ {
690
+ "type": "image",
691
+ "img_path": "images/eca544f028e8c70499b0dd230938862ec1a131566c9d77b49c62a24df64f5320.jpg",
692
+ "image_caption": [],
693
+ "image_footnote": [],
694
+ "bbox": [
695
+ 552,
696
+ 87,
697
+ 849,
698
+ 239
699
+ ],
700
+ "page_idx": 4
701
+ },
702
+ {
703
+ "type": "image",
704
+ "img_path": "images/5b8fd25bb6c00a3abf67065adc6689f6890a2a3336e802fad3c3ba6b7f376d6c.jpg",
705
+ "image_caption": [
706
+ "(a)Baseball swing learned 1D Latent representation.",
707
+ "(b) After synchronization.",
708
+ "Figure 3. Univariate representations learned by TPL for 20 videos depicting a Baseball swing colored by the phase labels, before (top) and after Synchronization (bottom)."
709
+ ],
710
+ "image_footnote": [],
711
+ "bbox": [
712
+ 552,
713
+ 258,
714
+ 846,
715
+ 411
716
+ ],
717
+ "page_idx": 4
718
+ },
719
+ {
720
+ "type": "text",
721
+ "text": "3.3. Multiple Video Synchronization",
722
+ "text_level": 1,
723
+ "bbox": [
724
+ 513,
725
+ 508,
726
+ 794,
727
+ 525
728
+ ],
729
+ "page_idx": 4
730
+ },
731
+ {
732
+ "type": "text",
733
+ "text": "Aligning new videos is achieved by first predicting the warping parameters for the latent representations, $(\\pmb{\\theta}_i)_{i=1}^N$ , and applying them to the original videos; i.e., $(S_i \\circ T^{\\pmb{\\theta}_i})_{i=1}^N$ . The temporal prototype is defined as the average of the representation of the aligned sequences:",
734
+ "bbox": [
735
+ 511,
736
+ 530,
737
+ 906,
738
+ 608
739
+ ],
740
+ "page_idx": 4
741
+ },
742
+ {
743
+ "type": "equation",
744
+ "text": "\n$$\n\\widehat {U} \\triangleq \\frac {1}{N} \\sum_ {i = 1} U _ {i} \\circ T ^ {\\theta_ {i}}. \\tag {7}\n$$\n",
745
+ "text_format": "latex",
746
+ "bbox": [
747
+ 627,
748
+ 617,
749
+ 906,
750
+ 642
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "text",
756
+ "text": "Once the temporal prototypes are computed, we can transfer dense, frame-level, annotations from training to test data. This is achieved by first annotating the prototypes and then transferring their annotations to the test videos using temporal alignment. Formally, let $(A_{i})_{i = 1}^{N}$ be the dense annotations of the input videos (i.e., $A_{i}$ is a sequence of length $L_{i}$ where $A_{i}[t]$ is the frame label at time step $t$ ). To annotate the temporal prototype, we take the mode (i.e., the most frequent label) of the aligned annotations at time step $t$ , $(A_{i} \\circ T^{\\theta_{i}})[t]$ . The prototype labels, $\\widehat{A}$ , for each time step are defined as",
757
+ "bbox": [
758
+ 511,
759
+ 664,
760
+ 908,
761
+ 815
762
+ ],
763
+ "page_idx": 4
764
+ },
765
+ {
766
+ "type": "equation",
767
+ "text": "\n$$\n\\widehat {A} [ t ] \\triangleq \\operatorname {m o d e} (\\mathcal {A} [ t ]) \\tag {8}\n$$\n",
768
+ "text_format": "latex",
769
+ "bbox": [
770
+ 640,
771
+ 824,
772
+ 906,
773
+ 842
774
+ ],
775
+ "page_idx": 4
776
+ },
777
+ {
778
+ "type": "text",
779
+ "text": "where $\\mathcal{A}[t] = ((A_i\\circ T^{\\theta_i})[t])_{i = 1}^{N}$ are all labels at time step $t$ after alignment. New videos are annotated by aligning them to their corresponding class prototype and using the matching",
780
+ "bbox": [
781
+ 511,
782
+ 854,
783
+ 906,
784
+ 901
785
+ ],
786
+ "page_idx": 4
787
+ },
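Eqs. (7)-(8) amount to an average and a per-time-step mode; a minimal sketch follows, assuming the aligned embeddings and aligned annotations have already been produced by the TPL warps.

```python
import numpy as np

def prototype(U_aligned):
    """Eq. (7): the temporal prototype is the average of the aligned
    embedding sequences (an (N, C, L) array after warping)."""
    return np.mean(U_aligned, axis=0)

def prototype_labels(A_aligned):
    """Eq. (8): per-time-step mode over aligned annotations A_i o T^{theta_i}.
    `A_aligned` is an (N, L) array of nonnegative integer phase labels."""
    A = np.asarray(A_aligned)
    return np.array([np.bincount(A[:, t]).argmax() for t in range(A.shape[1])])

# Toy usage: three aligned videos mostly agree on the phase at each step.
A = np.array([[0, 0, 1, 1, 2],
              [0, 1, 1, 1, 2],
              [0, 0, 1, 2, 2]])
print(prototype_labels(A))  # [0 0 1 1 2]
```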
788
+ {
789
+ "type": "page_number",
790
+ "text": "12518",
791
+ "bbox": [
792
+ 480,
793
+ 944,
794
+ 517,
795
+ 955
796
+ ],
797
+ "page_idx": 4
798
+ },
799
+ {
800
+ "type": "image",
801
+ "img_path": "images/e3ce8f708cd58c0764d6ff3baea2bae36a82eda2c5af9449a871116b92eddf0b.jpg",
802
+ "image_caption": [],
803
+ "image_footnote": [],
804
+ "bbox": [
805
+ 98,
806
+ 89,
807
+ 496,
808
+ 223
809
+ ],
810
+ "page_idx": 5
811
+ },
812
+ {
813
+ "type": "image",
814
+ "img_path": "images/5fa0b911d288616bd6390ffb6e9179c28f1bd67ed5ff211df4f5e618f3d8b91e.jpg",
815
+ "image_caption": [
816
+ "Figure 4. Examples from our GenerativeAI Multiple Video Synchronization (GenAI-MVS) dataset, showing seven equally spaced frames before (top) and after (bottom) synchronization. The first video (left) depicts a \"monkey doing dips,\" and the second video (right) shows a \"bear performing a deadlift.\" We highlight mismatches in the original videos in red, and TPL matching in green. In both cases, alignment via TPL successfully synchronizes the key phases of the action progression."
817
+ ],
818
+ "image_footnote": [],
819
+ "bbox": [
820
+ 98,
821
+ 224,
822
+ 496,
823
+ 357
824
+ ],
825
+ "page_idx": 5
826
+ },
827
+ {
828
+ "type": "image",
829
+ "img_path": "images/ce59870285ac1b5b304ac1c5d68763afe49fb656cfbd7ab3bae7f4391d4a9a63.jpg",
830
+ "image_caption": [],
831
+ "image_footnote": [],
832
+ "bbox": [
833
+ 500,
834
+ 90,
835
+ 898,
836
+ 222
837
+ ],
838
+ "page_idx": 5
839
+ },
840
+ {
841
+ "type": "image",
842
+ "img_path": "images/21404b7ca095514f35af9a488be9057ab130093addb730939a4943cde63848d0.jpg",
843
+ "image_caption": [],
844
+ "image_footnote": [],
845
+ "bbox": [
846
+ 500,
847
+ 224,
848
+ 898,
849
+ 357
850
+ ],
851
+ "page_idx": 5
852
+ },
853
+ {
854
+ "type": "text",
855
+ "text": "frame labels. Figure 3 shows the the 1D representations colored by the ground-truth annotations of a 20 Baseball swing videos [33] before and after synchronization.",
856
+ "bbox": [
857
+ 89,
858
+ 450,
859
+ 482,
860
+ 496
861
+ ],
862
+ "page_idx": 5
863
+ },
864
+ {
865
+ "type": "text",
866
+ "text": "3.4. Limitations",
867
+ "text_level": 1,
868
+ "bbox": [
869
+ 89,
870
+ 506,
871
+ 217,
872
+ 520
873
+ ],
874
+ "page_idx": 5
875
+ },
876
+ {
877
+ "type": "text",
878
+ "text": "TPL's effectiveness is intricately tied to the quality of the initial features, i.e., the initial embeddings used. Should these embeddings be of poor quality or fail to adequately represent the data, the resulting outcomes may be suboptimal.",
879
+ "bbox": [
880
+ 89,
881
+ 527,
882
+ 483,
883
+ 588
884
+ ],
885
+ "page_idx": 5
886
+ },
887
+ {
888
+ "type": "text",
889
+ "text": "4. Results",
890
+ "text_level": 1,
891
+ "bbox": [
892
+ 89,
893
+ 602,
894
+ 176,
895
+ 617
896
+ ],
897
+ "page_idx": 5
898
+ },
899
+ {
900
+ "type": "text",
901
+ "text": "In this section, we present a series of experiments designed to demonstrate the effectiveness of TPL for multiple video synchronization (MVS).",
902
+ "bbox": [
903
+ 89,
904
+ 627,
905
+ 483,
906
+ 672
907
+ ],
908
+ "page_idx": 5
909
+ },
910
+ {
911
+ "type": "text",
912
+ "text": "4.1. Datasets",
913
+ "text_level": 1,
914
+ "bbox": [
915
+ 89,
916
+ 681,
917
+ 192,
918
+ 696
919
+ ],
920
+ "page_idx": 5
921
+ },
922
+ {
923
+ "type": "text",
924
+ "text": "We evaluate TPL on the following datasets:",
925
+ "bbox": [
926
+ 89,
927
+ 704,
928
+ 377,
929
+ 718
930
+ ],
931
+ "page_idx": 5
932
+ },
933
+ {
934
+ "type": "list",
935
+ "sub_type": "text",
936
+ "list_items": [
937
+ "1. Pouring [24]: A standard benchmark consisting of 84 videos of people pouring liquids into glasses.",
938
+ "2. Penn Action [33]: This dataset contains 2326 videos of 15 different actions performed in the wild, varying in camera angles, lighting, action duration, backgrounds, and subjects, with phase-level annotations produced by [8].",
939
+ "3. Internet Video Dataset [7]: A smaller dataset, similar to Penn Action, comprising 124 videos of 20 actions. We annotate the phases in the same manner as [8].",
940
+ "4. GenAI Multiple Video Synchronization Dataset: We introduce a first-of-its-kind collection of AI-generated videos using K1ingAI for the task of MVS (GenAI-MVS)."
941
+ ],
942
+ "bbox": [
943
+ 89,
944
+ 719,
945
+ 483,
946
+ 900
947
+ ],
948
+ "page_idx": 5
949
+ },
950
+ {
951
+ "type": "text",
952
+ "text": "For each action, a text prompt is composed, and an initial image is generated using ChatGPT. The image and prompt are then used as input to K1ing AI to generate a video of the action. Multiple videos of the same action are generated in this manner, resulting in natural variation in both visual appearance and temporal execution. The dataset contains 5 classes and 82 hand-picked videos curated for MVS, each accompanied by phase progression annotations (see SupMat for more details).",
953
+ "bbox": [
954
+ 531,
955
+ 450,
956
+ 906,
957
+ 588
958
+ ],
959
+ "page_idx": 5
960
+ },
961
+ {
962
+ "type": "text",
963
+ "text": "4.2. Evaluation Metrics",
964
+ "text_level": 1,
965
+ "bbox": [
966
+ 513,
967
+ 602,
968
+ 696,
969
+ 617
970
+ ],
971
+ "page_idx": 5
972
+ },
973
+ {
974
+ "type": "text",
975
+ "text": "As stated in [6], existing benchmarks often rely on proxy tasks such as phase classification by a linear classifier or Kendall's Tau for phase progression [8]. These metrics had been shown to be affected by spurious correlations between the positional encoding of the model (e.g., CARL [2]) and the phase labels. To evaluate alignment directly, we follow [6] and introduce two metrics: Cycle-Back Consistency (CBC), which measures how well phase labels are preserved when warping videos to the prototype and then un-warping them back, and Phase Label Propagation (PLP), which measures alignment quality by transferring phase labels from a train-set prototype to test videos. We still report phase classification and Kendall's Tau for completeness, but CBC and PLP offer a clearer measure of real-world alignment performance:",
976
+ "bbox": [
977
+ 511,
978
+ 626,
979
+ 906,
980
+ 837
981
+ ],
982
+ "page_idx": 5
983
+ },
984
+ {
985
+ "type": "text",
986
+ "text": "- Cycle-Back Consistency (CBC). Measures how well the prototype maintains phase information. The videos are warped to the prototype and label it according to their annotation. The prototype is then unwarped back to each",
987
+ "bbox": [
988
+ 511,
989
+ 839,
990
+ 908,
991
+ 901
992
+ ],
993
+ "page_idx": 5
994
+ },
995
+ {
996
+ "type": "page_number",
997
+ "text": "12519",
998
+ "bbox": [
999
+ 480,
1000
+ 944,
1001
+ 519,
1002
+ 955
1003
+ ],
1004
+ "page_idx": 5
1005
+ },
1006
+ {
1007
+ "type": "table",
1008
+ "img_path": "images/756c2e6bae45cfdd526119b06bb12557724c4df236a99cbb5027e1b8a5b5639d.jpg",
1009
+ "table_caption": [
1010
+ "Table 2. Comparison of different features and alignment methods on Penn Action, Internet Videos, and Gen AI. We report the alignment objective to minimize (Obj.), Cycle-Back Consistency (CBC), Phase Label Propagation (PLP), and total runtime (Time) in seconds."
1011
+ ],
1012
+ "table_footnote": [],
1013
+ "table_body": "<table><tr><td rowspan=\"2\">Features</td><td rowspan=\"2\">Method</td><td rowspan=\"2\">Obj.</td><td colspan=\"3\">Penn Action</td><td colspan=\"3\">Internet Videos</td><td colspan=\"3\">GenAI-MVS</td></tr><tr><td>CBC</td><td>PLP</td><td>Time</td><td>CBC</td><td>PLP</td><td>Time</td><td>CBC</td><td>PLP</td><td>Time</td></tr><tr><td rowspan=\"2\">Baseline</td><td>Euc.</td><td>Euc.</td><td>0.621</td><td>0.607</td><td>0.65</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>DTAN</td><td>WCSS</td><td>0.42</td><td>0.415</td><td>588</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td rowspan=\"5\">CARL + dataset training</td><td>DTAN</td><td>WCSS + Reg.</td><td>0.647</td><td>0.625</td><td>710</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>DTAN</td><td>ICAE</td><td>0.773</td><td>0.765</td><td>579</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>DBA</td><td>DTW</td><td>0.947</td><td>0.925</td><td>2345</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SoftDTW</td><td>SoftDTW</td><td>0.944</td><td>0.926</td><td>978</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>TPL (ours)</td><td>\\( \\mathcal{L}_{\\text{TPL}} \\)</td><td>0.962</td><td>0.939</td><td>482</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td rowspan=\"6\">DINO-ViT (&#x27;off-the-shelf&#x27;)</td><td>DTAN</td><td>WCSS</td><td>0.418</td><td>0.419</td><td>599</td><td>0.712</td><td>0.63</td><td>215</td><td>0.759</td><td>0.768</td><td>105</td></tr><tr><td>DTAN</td><td>WCSS + Reg.</td><td>0.591</td><td>0.604</td><td>707</td><td>0.764</td><td>0.647</td><td>224</td><td>0.762</td><td>0.752</td><td>110</td></tr><tr><td>DTAN</td><td>ICAE</td><td>0.572</td><td>0.578</td><td>639</td><td>0.874</td><td>0.683</td><td>232</td><td>0.833</td><td>0.740</td><td>114</td></tr><tr><td>DBA</td><td>DTW</td><td>0.756</td><td>0.773</td><td>5415</td><td>0.882</td><td>0.871</td><td>25</td><td>0.886</td><td>0.906</td><td>62</td></tr><tr><td>SoftDTW</td><td>SoftDTW</td><td>0.750</td><td>0.777</td><td>3010</td><td>0.883</td><td>0.872</td><td>53</td><td>0.890</td><td>0.908</td><td>58</td></tr><tr><td>TPL (ours)</td><td>\\( \\mathcal{L}_{\\text{TPL}} \\)</td><td>0.788</td><td>0.803</td><td>534</td><td>0.912</td><td>0.907</td><td>112</td><td>0.953</td><td>0.933</td><td>108</td></tr><tr><td rowspan=\"6\">OpenCLIP (&#x27;off-the-shelf&#x27;)</td><td>DTAN</td><td>WCSS</td><td>0.418</td><td>0.419</td><td>629</td><td>0.792</td><td>0.7</td><td>223</td><td>0.834</td><td>0.803</td><td>104</td></tr><tr><td>DTAN</td><td>WCSS + Reg.</td><td>0.636</td><td>0.615</td><td>736</td><td>0.83</td><td>0.742</td><td>229</td><td>0.754</td><td>0.748</td><td>112</td></tr><tr><td>DTAN</td><td>ICAE</td><td>0.704</td><td>0.688</td><td>680</td><td>0.858</td><td>0.742</td><td>237</td><td>0.535</td><td>0.614</td><td>110</td></tr><tr><td>DBA</td><td>DTW</td><td>0.807</td><td>0.808</td><td>4692</td><td>0.885</td><td>0.842</td><td>33</td><td>0.918</td><td>0.929</td><td>63</td></tr><tr><td>SoftDBA</td><td>SoftDTW</td><td>0.859</td><td>0.831</td><td>1882</td><td>0.901</td><td>0.833</td><td>34</td><td>0.926</td><td>0.921</td><td>68</td></tr><tr><td>TPL (ours)</td><td>\\( \\mathcal{L}_{\\text{TPL}} 
\\)</td><td>0.873</td><td>0.857</td><td>587</td><td>0.916</td><td>0.902</td><td>146</td><td>0.942</td><td>0.946</td><td>115</td></tr></table>",
1014
+ "bbox": [
1015
+ 145,
1016
+ 128,
1017
+ 854,
1018
+ 376
1019
+ ],
1020
+ "page_idx": 6
1021
+ },
1022
+ {
1023
+ "type": "text",
1024
+ "text": "video, and the phase labels are compared. A higher CBC indicates more accurate and robust synchronization.",
1025
+ "bbox": [
1026
+ 102,
1027
+ 400,
1028
+ 480,
1029
+ 430
1030
+ ],
1031
+ "page_idx": 6
1032
+ },
1033
+ {
1034
+ "type": "list",
1035
+ "sub_type": "text",
1036
+ "list_items": [
1037
+ "- Phase Label Propagation (PLP). Assesses alignment by transferring phase labels from a prototype to each test video. The better the alignment, the more accurately these labels will map onto the correct frames in the test video. PLP thus serves as a direct measure of alignment quality.",
1038
+ "- Phase Classification Accuracy. Assess the embedding quality by training a linear classifier on the per-frame embedding to predict the phase labels.",
1039
+ "- Kendall's Tau. A rank-correlation metric that evaluates the chronological order of phases across videos."
1040
+ ],
1041
+ "bbox": [
1042
+ 89,
1043
+ 431,
1044
+ 482,
1045
+ 582
1046
+ ],
1047
+ "page_idx": 6
1048
+ },
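To make the PLP metric concrete, here is a minimal sketch of propagating prototype labels through a warp and scoring frame-level agreement; the monotone `warp` and the nearest-neighbor resampling are schematic stand-ins for the TPL warp, not the paper's exact evaluation code.

```python
import numpy as np

def propagate_labels(proto_labels, warp, out_len):
    """Nearest-neighbor label transfer from the prototype timeline to a
    test video's timeline through a monotone warp."""
    t = np.clip(warp(np.linspace(0, 1, out_len)), 0, 1) * (len(proto_labels) - 1)
    return proto_labels[np.round(t).astype(int)]

def plp_accuracy(proto_labels, test_labels, warp):
    """PLP: fraction of test frames whose propagated phase label matches
    the ground-truth annotation."""
    pred = propagate_labels(proto_labels, warp, len(test_labels))
    return np.mean(pred == test_labels)

# Toy usage: a 3-phase prototype propagated through the identity warp.
proto = np.repeat([0, 1, 2], 20)
test = np.repeat([0, 1, 2], 20)
print(plp_accuracy(proto, test, lambda t: t))  # 1.0 for a perfect alignment
```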
1049
+ {
1050
+ "type": "text",
1051
+ "text": "4.3. Comparison with Multiple Sequence Alignment (MSA) Methods",
1052
+ "text_level": 1,
1053
+ "bbox": [
1054
+ 89,
1055
+ 590,
1056
+ 482,
1057
+ 622
1058
+ ],
1059
+ "page_idx": 6
1060
+ },
1061
+ {
1062
+ "type": "text",
1063
+ "text": "To evaluate our method's performance w.r.t. existing approaches, we align sets of videos depicting the same action using TPL and compare the results against 4 representative MSA methods: Euclidean baseline (Euc.), where we zero-pad all videos to have the same length (according to the longest one) and compute metrics only on each video's valid regions. DBA [19], SoftDBA [4], and DTAN [27]. DBA and SoftDBA are optimization-based methods set to minimize the DTW and SoftDTW from the average sequence, respectively. For SoftDTW, we report the best results among $\\gamma \\in [0.01, 0.1, 1]$ . DTAN is a learning-based method that predicts CPAB [9] warps to minimize the JA loss. We evaluate DTAN with three losses: Within-Class Sum of Squares (WCSS), WCSS + Regularization (WCSS+Reg.), and the current state-of-the-art in time series averaging, DTAN+ICAE [26]. All DTAN models were trained using the closed-form CPAB gradient [16]. We evaluate frame-level embedding from three feature extractors: 1) CARL [2], a video transformer that requires per-dataset",
1064
+ "bbox": [
1065
+ 89,
1066
+ 628,
1067
+ 485,
1068
+ 901
1069
+ ],
1070
+ "page_idx": 6
1071
+ },
1072
+ {
1073
+ "type": "text",
1074
+ "text": "training, 2) DINO-ViT-v2 [17], a pre-train image transformer where we use the per-frame CLS token as the embedding, and 3) OpenCLIP [3], an open-source, more recent variation of CLIP [21]. We note that the available video foundation models (e.g., VideoMAE [30]) do not produce a per-frame embedding vector and were therefore excluded from this evaluation. We report CBC, LPL, and total runtime (training and inference time) on the Penn [33], Internet Videos [7], and GenAI-MVS datasets.",
1075
+ "bbox": [
1076
+ 511,
1077
+ 400,
1078
+ 906,
1079
+ 535
1080
+ ],
1081
+ "page_idx": 6
1082
+ },
1083
+ {
1084
+ "type": "text",
1085
+ "text": "The results are presented in Table 2. We have found that internet videos [7] and GenAI-MVS did not have enough data to train CARL properly and are thus omitted. We observe that TPL significantly outperforms all DTAN variants over all datasets and feature extractors. As discussed in § 3.1, current DTAN formulations are ill-equipped to handle the real-world video embeddings. TPL also outperforms DBA and SoftDBA across all benchmarks. While the margin in performance is less significant compared with DTAN, TPL total runtime is 10 times faster than DBA and 4 - 5 than SoftDBA on the largest dataset, Penn Action [33]. GenAI-MSV results are further discussed in § 4.6.",
1086
+ "bbox": [
1087
+ 511,
1088
+ 537,
1089
+ 908,
1090
+ 717
1091
+ ],
1092
+ "page_idx": 6
1093
+ },
1094
+ {
1095
+ "type": "text",
1096
+ "text": "4.4. Prototype-aligned Features",
1097
+ "text_level": 1,
1098
+ "bbox": [
1099
+ 511,
1100
+ 727,
1101
+ 758,
1102
+ 743
1103
+ ],
1104
+ "page_idx": 6
1105
+ },
1106
+ {
1107
+ "type": "text",
1108
+ "text": "To determine whether TPL prototypes capture meaningful phase progression, we evaluate phase classification accuracy and Kendall's Tau rank correlation on the videos after they have been aligned to the common prototype. By warping each video to the TPL prototype, we test whether these aligned representations provide more discriminative features for recognizing action phases. We conduct these experiments on both Penn Action [33] and Pouring [25] datasets. We compare the prototyped-aligned features to standard benchmarks in video representation learning: TCC [8], GTA [11]",
1109
+ "bbox": [
1110
+ 511,
1111
+ 750,
1112
+ 908,
1113
+ 902
1114
+ ],
1115
+ "page_idx": 6
1116
+ },
1117
+ {
1118
+ "type": "page_number",
1119
+ "text": "12520",
1120
+ "bbox": [
1121
+ 480,
1122
+ 945,
1123
+ 519,
1124
+ 955
1125
+ ],
1126
+ "page_idx": 6
1127
+ },
1128
+ {
1129
+ "type": "text",
1130
+ "text": ", LAV [12], VAVA [15], VSP [32], and CARL[2]. We report the results from their respective papers. The results, presented in Table 3, indicates that TPL-aligned representations are on-par with VSP and CARL, the two strongest baselines. As mentioned in § 4.2, these metrics are not ideal for assessing alignment quality. However, these findings indicate that the synchronized embeddings retain their temporal information after alignment.",
1131
+ "bbox": [
1132
+ 89,
1133
+ 90,
1134
+ 486,
1135
+ 212
1136
+ ],
1137
+ "page_idx": 7
1138
+ },
1139
+ {
1140
+ "type": "table",
1141
+ "img_path": "images/4e3f3d2ff6a059995c747c26ecc37e2bdec1e7895a5b038e923e0dfa91a23b9c.jpg",
1142
+ "table_caption": [
1143
+ "Table 3. Phase classification accuracy (Acc.) & Kendall's Tau ( $\\tau$ ). Positional embedding is indicated (Pos. Emb.)."
1144
+ ],
1145
+ "table_footnote": [],
1146
+ "table_body": "<table><tr><td rowspan=\"2\">Pos. Emb.</td><td rowspan=\"2\">Method</td><td colspan=\"2\">Penn Action</td><td colspan=\"2\">Pouring</td></tr><tr><td>Acc.</td><td>τ</td><td>Acc.</td><td>τ</td></tr><tr><td>X</td><td>TCC [8]</td><td>74.39</td><td>0.623</td><td>86.14</td><td>0.670</td></tr><tr><td>X</td><td>GTA [11]</td><td>78.90</td><td>0.654</td><td>85.16</td><td>0.750</td></tr><tr><td>X</td><td>LAV [12]</td><td>78.68</td><td>0.805</td><td>92.84</td><td>0.856</td></tr><tr><td>X</td><td>VAVA [15]</td><td>84.48</td><td>0.805</td><td>92.84</td><td>0.875</td></tr><tr><td>✓</td><td>VSP [32]</td><td>93.12</td><td>0.986</td><td>93.85</td><td>0.990</td></tr><tr><td>✓</td><td>CARL [2]</td><td>93.07</td><td>0.985</td><td>93.73</td><td>0.992</td></tr><tr><td>✓</td><td>TPL (ours)</td><td>93.31</td><td>0.990</td><td>93.88</td><td>0.993</td></tr></table>",
1147
+ "bbox": [
1148
+ 133,
1149
+ 262,
1150
+ 439,
1151
+ 377
1152
+ ],
1153
+ "page_idx": 7
1154
+ },
1155
+ {
1156
+ "type": "text",
1157
+ "text": "4.5. Frame Retrieval Efficiency",
1158
+ "text_level": 1,
1159
+ "bbox": [
1160
+ 89,
1161
+ 396,
1162
+ 333,
1163
+ 412
1164
+ ],
1165
+ "page_idx": 7
1166
+ },
1167
+ {
1168
+ "type": "text",
1169
+ "text": "TPL's MVS facilitates faster frame retrieval than standard KNN frame-retrieval approach, where each frame from each test video is compared to all frames in all videos in the train set. This implies $O(N_{\\mathrm{train}}N_{\\mathrm{test}}L^2)$ (assuming fixed-length $L$ for simplicity). In contrast, video synchronization allows this process to be linear in $L$ . This advantage arises because TPL establishes a single temporal reference for the action progression, allowing for direct frame lookup at each time step. For evaluation, we perform 1-NN frame retrieval on Penn using CARL's embedding and report the phase classification accuracy and runtime (including inference time for TPL). As shown in Table 4, performing frame retrieval only between synchronized frames is 125 times faster than a full KNN search (0.24 [sec] and 30 [sec], respectively).",
1170
+ "bbox": [
1171
+ 88,
1172
+ 417,
1173
+ 485,
1174
+ 630
1175
+ ],
1176
+ "page_idx": 7
1177
+ },
1178
+ {
1179
+ "type": "text",
1180
+ "text": "4.6. Generalizing to Generative AI Videos",
1181
+ "text_level": 1,
1182
+ "bbox": [
1183
+ 89,
1184
+ 637,
1185
+ 415,
1186
+ 652
1187
+ ],
1188
+ "page_idx": 7
1189
+ },
1190
+ {
1191
+ "type": "text",
1192
+ "text": "Beyond real-world footage, we also study the effectiveness of TPL on synthetic videos generated via a combination of ChatGPT and KlingAI. For each action category, we compose a detailed text prompt and generate an initial reference image using ChatGPT. This image and prompt are then used as input to KlingAI to produce a video illustrating the target action. Finally, we annotate the phase progression in each video similarly to [8]. A key challenge in creating this dataset was identifying videos suitable for MVS. Current video generators often produce truncated action progressions, omit essential phases, or generate clips that do not depict the intended action. After filtering out problematic samples, we retained a diverse set of videos that still pose realistic alignment challenges.",
1193
+ "bbox": [
1194
+ 89,
1195
+ 659,
1196
+ 485,
1197
+ 869
1198
+ ],
1199
+ "page_idx": 7
1200
+ },
1201
+ {
1202
+ "type": "text",
1203
+ "text": "An example of AI-generated video synchronization via TPL is shown in Figure 4. We display seven equally spaced",
1204
+ "bbox": [
1205
+ 89,
1206
+ 869,
1207
+ 483,
1208
+ 902
1209
+ ],
1210
+ "page_idx": 7
1211
+ },
1212
+ {
1213
+ "type": "table",
1214
+ "img_path": "images/f07a44b4ff3d75b87a664f92ed4ff1a39112ef767cdc48ec52d29c39451edb6d.jpg",
1215
+ "table_caption": [
1216
+ "Table 4. Unsynchronized vs. synchronized Nearest-neighbor frame retrieval comparison on Penn Action."
1217
+ ],
1218
+ "table_footnote": [],
1219
+ "table_body": "<table><tr><td>Method</td><td>Complexity</td><td>Time (sec)</td></tr><tr><td>Unsynchronized</td><td>O(NtrainNtestL2)</td><td>30.1</td></tr><tr><td>synchronized (Ours)</td><td>O(NtrainNtestL)</td><td>0.24</td></tr></table>",
1220
+ "bbox": [
1221
+ 547,
1222
+ 128,
1223
+ 870,
1224
+ 184
1225
+ ],
1226
+ "page_idx": 7
1227
+ },
1228
+ {
1229
+ "type": "table",
1230
+ "img_path": "images/e302ddbd6ceee18d4aa15139976af553070471316fe1545515aef008e235861f.jpg",
1231
+ "table_caption": [
1232
+ "Table 5. Ablation study evaluation on Penn Action."
1233
+ ],
1234
+ "table_footnote": [],
1235
+ "table_body": "<table><tr><td>Condition</td><td>CBC</td><td>PLP</td></tr><tr><td>Baseline (Euclidean)</td><td>64.6%</td><td>63.5%</td></tr><tr><td>No 1D Bottleneck</td><td>80.4%</td><td>81.5%</td></tr><tr><td>Encoder Only</td><td>56.5%</td><td>51.3%</td></tr><tr><td>+ Decoder, Standard ICAE</td><td>97.3%</td><td>96.7%</td></tr><tr><td>+ ICAE with Median Length (TPL)</td><td>100%</td><td>100%</td></tr></table>",
1236
+ "bbox": [
1237
+ 552,
1238
+ 220,
1239
+ 870,
1240
+ 309
1241
+ ],
1242
+ "page_idx": 7
1243
+ },
1244
+ {
1245
+ "type": "text",
1246
+ "text": "frames taken from the original (top) and synchronized (bottom) sequences. For instance, in the \"Bear deadlift\" example, TPL successfully aligns key motion phases, including the moment the bear begins lifting and reaches the upright position, demonstrating improved temporal coherence. We also report the CBC and PLP for the MSA methods and report them in Table 2. TPL achieves higher CBC and Phase Label Propagation PLP than traditional Soft/DBA and DTAN. These results highlight the ability of TPL to extend beyond conventional, human-recorded video sources to the emerging domain of AI-generated content, providing a robust solution for synchronizing multiple generative clips depicting the same action.",
1247
+ "bbox": [
1248
+ 511,
1249
+ 332,
1250
+ 908,
1251
+ 529
1252
+ ],
1253
+ "page_idx": 7
1254
+ },
1255
+ {
1256
+ "type": "text",
1257
+ "text": "4.7. Ablation Study",
1258
+ "text_level": 1,
1259
+ "bbox": [
1260
+ 511,
1261
+ 537,
1262
+ 666,
1263
+ 554
1264
+ ],
1265
+ "page_idx": 7
1266
+ },
1267
+ {
1268
+ "type": "text",
1269
+ "text": "Table 5 shows the ablation study for Penn Action using the Euclidean distance as a baseline. Only using the alignment network (without the 1D bottleneck) yields improvement in both CBC and PLP. However, introducing only the encoder diminishes the performance significantly, indicating the importance of reconstruction for stable training, as seen by the significant improvement in results when using the decoder. Finally, enforcing a median-length prototype gives the full TPL framework that achieves the best overall results.",
1270
+ "bbox": [
1271
+ 511,
1272
+ 559,
1273
+ 908,
1274
+ 696
1275
+ ],
1276
+ "page_idx": 7
1277
+ },
1278
+ {
1279
+ "type": "text",
1280
+ "text": "5. Conclusion",
1281
+ "text_level": 1,
1282
+ "bbox": [
1283
+ 511,
1284
+ 709,
1285
+ 633,
1286
+ 724
1287
+ ],
1288
+ "page_idx": 7
1289
+ },
1290
+ {
1291
+ "type": "text",
1292
+ "text": "We introduced Temporal Prototype Learning (TPL), a novel framework for synchronizing multiple videos from different scenes without relying on a reference by simultaneously reducing high-dimensional embeddings to a univariate representation. TPL outperforms existing alignment methods on a range of real-world datasets, while also generalizing effectively to AI-generated content exhibiting diverse visual styles and timing variations. Moreover, its prototype-based alignment yields faster frame retrieval and requires fewer pairwise comparisons, making TPL well-suited for large-scale video analytics.",
1293
+ "bbox": [
1294
+ 511,
1295
+ 734,
1296
+ 908,
1297
+ 902
1298
+ ],
1299
+ "page_idx": 7
1300
+ },
1301
+ {
1302
+ "type": "page_number",
1303
+ "text": "12521",
1304
+ "bbox": [
1305
+ 480,
1306
+ 944,
1307
+ 517,
1308
+ 955
1309
+ ],
1310
+ "page_idx": 7
1311
+ },
1312
+ {
1313
+ "type": "text",
1314
+ "text": "Acknowledgments",
1315
+ "text_level": 1,
1316
+ "bbox": [
1317
+ 91,
1318
+ 90,
1319
+ 246,
1320
+ 107
1321
+ ],
1322
+ "page_idx": 8
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "text": "This work was supported by the Lynn and William Frankel Center at BGU CS, by the Israeli Council for Higher Education via the BGU Data Science Research Center, and by Israel Science Foundation Personal Grant #360/21. R.S.W.'s work was supported by the Kreitman School of Advanced Graduate Studies.",
1327
+ "bbox": [
1328
+ 89,
1329
+ 114,
1330
+ 485,
1331
+ 205
1332
+ ],
1333
+ "page_idx": 8
1334
+ },
1335
+ {
1336
+ "type": "text",
1337
+ "text": "References",
1338
+ "text_level": 1,
1339
+ "bbox": [
1340
+ 91,
1341
+ 217,
1342
+ 186,
1343
+ 233
1344
+ ],
1345
+ "page_idx": 8
1346
+ },
1347
+ {
1348
+ "type": "list",
1349
+ "sub_type": "ref_text",
1350
+ "list_items": [
1351
+ "[1] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650-9660, 2021. 2, 3, 4",
1352
+ "[2] Minghao Chen, Fangyun Wei, Chong Li, and Deng Cai. Frameworkwise action representations for long videos via sequence contrastive learning. In CVPR, 2022. 1, 2, 3, 4, 6, 7, 8",
1353
+ "[3] Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2818-2829, 2023. 2, 7",
1354
+ "[4] Marco Cuturi and Mathieu Blondel. Soft-dtw: a differentiable loss function for time-series. arXiv preprint arXiv:1703.01541, 2017. 2, 7",
1355
+ "[5] Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. The ucr time series archive. IEEE/CAA Journal of Automatica Sinica, 2019. 3",
1356
+ "[6] Ishan Rajendrakumar Dave, Fabian Caba Heilbron, Mubarak Shah, and Simon Jenni. Sync from the sea: retrieving alignable videos from large-scale datasets. In European Conference on Computer Vision, pages 371-388. Springer, 2024. 6",
1357
+ "[7] Junting Dong, Qing Shuai, Yuanqing Zhang, Xian Liu, Xiaowei Zhou, and Hujun Bao. Motion capture from internet videos. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 210-227. Springer, 2020. 6, 7",
1358
+ "[8] Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. Temporal cycle-consistency learning. In CVPR, 2019. 1, 2, 6, 7, 8",
1359
+ "[9] Oren Freifeld, Søren Hauberg, Kayhan Batmanghelich, and John W. Fisher III. Highly-expressive spaces of well-behaved transformations: Keeping it simple. In ICCV, 2015. 2, 3, 4, 5, 7",
1360
+ "[10] Oren Freifeld, Søren Hauberg, Kayhan Batmanghelich, and John W. Fisher III. Transformations based on continuous piecewise-affine velocity fields. IEEE TPAMI, 2017. 2, 3, 4, 5",
1361
+ "[11] Isma Hadji, Konstantinos G Derpanis, and Allan D Jepson. Representation learning via global temporal alignment and cycle-consistency. In CVPR, 2021. 2, 7, 8",
1362
+ "[12] Sanjay Haresh, Sateesh Kumar, Huseyin Coskun, Shahram N Syed, Andrey Konin, Zeeshan Zia, and Quoc-Huy Tran. Learning by aligning videos in time. In CVPR, 2021. 2, 8"
1363
+ ],
1364
+ "bbox": [
1365
+ 93,
1366
+ 243,
1367
+ 485,
1368
+ 900
1369
+ ],
1370
+ "page_idx": 8
1371
+ },
1372
+ {
1373
+ "type": "list",
1374
+ "sub_type": "ref_text",
1375
+ "list_items": [
1376
+ "[13] Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan Weber, Geoffrey I Webb, Lhassane Idoumghar, Pierre-Alain Muller, and François Petitjean. Inceptiontime: Finding alexnet for time series classification. Data Mining and Knowledge Discovery, 2020. 5",
1377
+ "[14] Junwei Liang, Poyao Huang, Jia Chen, and Alexander Hauptmann. Synchronization for multi-perspective videos in the wild. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1592-1596. IEEE, 2017. 2",
1378
+ "[15] Weizhe Liu, Bugra Tekin, Huseyin Coskun, Vibhav Vineet, Pascal Fua, and Marc Pollefeys. Learning to align sequential actions in the wild. In CVPR, 2022. 1, 2, 8",
1379
+ "[16] Inigo Martinez, Elisabeth Viles, and Igor G Olaizola. Closed-form diffeomorphic transformations for time series alignment. In ICML. PMLR, 2022. 2, 3, 4, 5, 7",
1380
+ "[17] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 7",
1381
+ "[18] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. NeurIPS, 2019. 5",
1382
+ "[19] François Petitjean, Alain Ketterlin, and Pierre Gançarski. A global averaging method for dynamic time warping, with applications to clustering. Pattern Recognition, 2011. 2, 7",
1383
+ "[20] François Petitjean, Germain Forestier, Geoffrey I Webb, Ann E Nicholson, Yanping Chen, and Eamonn Keogh. Dynamic time warping averaging of time series allows faster and more accurate classification. In IEEE ICDM, 2014. 2",
1384
+ "[21] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. 7",
1385
+ "[22] H. Sakoe. Dynamic-programming approach to continuous speech recognition. The International Congress of Acoustics, 1971. 2",
1386
+ "[23] H. Sakoe and S. Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE TASSP, 1978. 2",
1387
+ "[24] Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. CoRR, abs/1704.06888, 2017. 6",
1388
+ "[25] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In ICRA. IEEE, 2018. 7",
1389
+ "[26] Ron Shapira Weber and Oren Freifeld. Regularization-free diffeomorphic temporal alignment nets. In ICML. PMLR, 2023. 2, 3, 5, 7"
1390
+ ],
1391
+ "bbox": [
1392
+ 516,
1393
+ 92,
1394
+ 906,
1395
+ 898
1396
+ ],
1397
+ "page_idx": 8
1398
+ },
1399
+ {
1400
+ "type": "page_number",
1401
+ "text": "12522",
1402
+ "bbox": [
1403
+ 480,
1404
+ 944,
1405
+ 519,
1406
+ 955
1407
+ ],
1408
+ "page_idx": 8
1409
+ },
1410
+ {
1411
+ "type": "list",
1412
+ "sub_type": "ref_text",
1413
+ "list_items": [
1414
+ "[27] Ron Shapira Weber, Matan Eyal, Nicki Skafte Detlefsen, Oren Shriki, and Oren Freifeld. Diffeomorphic temporal alignment nets. In NeurIPS, 2019. 2, 3, 4, 7",
1415
+ "[28] Prarthana Shrstha, Mauro Barbieri, and Hans Weda. Synchronization of multi-camera video recordings based on audio. In Proceedings of the 15th ACM international conference on Multimedia, pages 545-548, 2007. 2",
1416
+ "[29] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. NeurIPS, 2017. 2",
1417
+ "[30] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. Advances in neural information processing systems, 35:10078-10093, 2022. 7",
1418
+ "[31] Oliver Wang, Christopher Schroers, Henning Zimmer, Markus Gross, and Alexander Sorkine-Hornung. Videosnapping: Interactive synchronization of multiple videos. ACM Transactions on Graphics (TOG), 33(4):1-10, 2014. 2",
1419
+ "[32] Heng Zhang, Daqing Liu, Qi Zheng, and Bing Su. Modeling video as stochastic processes for fine-grained video representation learning. In CVPR, 2023. 1, 8",
1420
+ "[33] Weiyu Zhang, Menglong Zhu, and Konstantinos G Derpanis. From actemes to action: A strongly-supervised representation for detailed action understanding. In ICCV, 2013. 2, 6, 7"
1421
+ ],
1422
+ "bbox": [
1423
+ 91,
1424
+ 92,
1425
+ 482,
1426
+ 417
1427
+ ],
1428
+ "page_idx": 9
1429
+ },
1430
+ {
1431
+ "type": "page_number",
1432
+ "text": "12523",
1433
+ "bbox": [
1434
+ 480,
1435
+ 945,
1436
+ 517,
1437
+ 955
1438
+ ],
1439
+ "page_idx": 9
1440
+ }
1441
+ ]